
Error "E1017" Is Reported During Model Training Because a Non-Matching PyTorch or Python Version Is Used

2025/11/17



Issue Information

Issue source: Official
Product category: Model training
Product subcategory: PyTorch
Keyword: E1017

Symptom

  • Error screenshot

  • Error text (the tensor-creation style recommended in the first warning is sketched after this list)
    [rank0]:[W1017 19:12:55.484886190 compiler_depend.ts:65] Warning: Warning: The torch.npu.*DtypeTensor constructors are no longer recommended. It's best to use methods such as torch.tensor(data, dtype=*, device='npu') to create tensors.
    Warning: Default value of `set_to_none` in torch.nn.Module.zero_grad() is set as False for combine grad, which is True since torch 2.0.
    number of params: 28288354
    number of GFLOPs: 4.49440512
    All checkpoints founded in output/test/swin_tiny_patch4_window7_224/default: []
    no checkpoint found in output/test/swin_tiny_patch4_window7_224/default, ignoring auto resume
    Start training
    Save internal 10
    [rank0]:[W1017 19:13:04.658394720 compiler_depend.ts:67] Warning: npu::npu_format_cast: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is ...
    E1017 19:13:16.561000 3317037 site-packages/torch/distributed/elastic/multiprocessing/api.py:869] failed (exitcode: -11) local_rank: 0 (pid: 3319032) of binary:
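For reference, the tensor-creation style recommended by the first warning in the log looks like the minimal sketch below. It is not part of the original error report and assumes torch_npu is installed and an NPU device is available.

    # Minimal sketch of the tensor creation recommended by the warning above,
    # in place of the deprecated torch.npu.*DtypeTensor constructors.
    # Assumes torch_npu is installed and an NPU device is available.
    import torch
    import torch_npu  # registers the 'npu' device type with PyTorch

    x = torch.tensor([1.0, 2.0, 3.0], dtype=torch.float32, device='npu')
    print(x.device, x.dtype)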

Cause Analysis

The "E1017" error during model training is caused by using a PyTorch or Python version that has not been adapted for the model. This leads to functional errors, and training cannot run properly.

Solution

Prepare the environment with the PyTorch, Python, and other matching versions recommended in each model's usage instructions, so that the model functions correctly and its performance and accuracy meet expectations.
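As a quick check before training, the sketch below prints the Python, PyTorch, and torch_npu versions that are actually installed so they can be compared against the versions recommended in the model's usage instructions. The snippet is illustrative only; the recommended version numbers depend on the specific model and are not listed here.

    # Minimal sketch: print the installed versions for comparison with the
    # versions recommended in the model's usage instructions.
    import sys
    import torch

    print("Python   :", sys.version.split()[0])
    print("PyTorch  :", torch.__version__)

    try:
        import torch_npu  # Ascend NPU adapter for PyTorch (assumed installed)
        print("torch_npu:", getattr(torch_npu, "__version__", "unknown"))
    except ImportError:
        print("torch_npu is not installed")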
