Description
Hello, I tried to run it on the Huawei Ascend 910B NPU, but an error occurred. Do you have any plans to support running the ProPainter model on the NPU?
The error log is as follows:
```
[root@cf6d1ec4a275 ProPainter]# python inference_propainter.py --video /root/inputs/afda4476-3574-49b4-a7c3-9ee44ff3117a-running_car.mp4 --mask /root/inputs/afda4476-3574-49b4-a7c3-9ee44ff3117a-mask_square.png --width 640 --height 360 --ref_stride 10 --neighbor_length 10 --subvideo_length 80
/usr/local/lib64/python3.11/site-packages/torch_npu/contrib/transfer_to_npu.py:295: ImportWarning:
*************************************************************************************************************
The torch.Tensor.cuda and torch.nn.Module.cuda are replaced with torch.Tensor.npu and torch.nn.Module.npu now..
The torch.cuda.DoubleTensor is replaced with torch.npu.FloatTensor cause the double type is not supported now..
The backend in torch.distributed.init_process_group set to hccl now..
The torch.cuda.* and torch.cuda.amp.* are replaced with torch.npu.* and torch.npu.amp.* now..
The device parameters have been replaced with npu in the function below:
torch.logspace, torch.randint, torch.hann_window, torch.rand, torch.full_like, torch.ones_like, torch.rand_like, torch.randperm, torch.arange, torch.frombuffer, torch.normal, torch._empty_per_channel_affine_quantized, torch.empty_strided, torch.empty_like, torch.scalar_tensor, torch.tril_indices, torch.bartlett_window, torch.ones, torch.sparse_coo_tensor, torch.randn, torch.kaiser_window, torch.tensor, torch.triu_indices, torch.as_tensor, torch.zeros, torch.randint_like, torch.full, torch.eye, torch._sparse_csr_tensor_unsafe, torch.empty, torch._sparse_coo_tensor_unsafe, torch.blackman_window, torch.zeros_like, torch.range, torch.sparse_csr_tensor, torch.randn_like, torch.from_file, torch._cudnn_init_dropout_state, torch._empty_affine_quantized, torch.linspace, torch.hamming_window, torch.empty_quantized, torch._pin_memory, torch.autocast, torch.load, torch.Generator, torch.set_default_device, torch.Tensor.new_empty, torch.Tensor.new_empty_strided, torch.Tensor.new_full, torch.Tensor.new_ones, torch.Tensor.new_tensor, torch.Tensor.new_zeros, torch.Tensor.to, torch.Tensor.pin_memory, torch.nn.Module.to, torch.nn.Module.to_empty
*************************************************************************************************************
warnings.warn(msg, ImportWarning)
/usr/local/lib64/python3.11/site-packages/torch_npu/contrib/transfer_to_npu.py:250: RuntimeWarning: torch.jit.script and torch.jit.script_method will be disabled by transfer_to_npu, which currently does not support them, if you need to enable them, please do not use transfer_to_npu.
warnings.warn(msg, RuntimeWarning)
Traceback (most recent call last):
File "/app/ProPainter/inference_propainter.py", line 278, in
fix_raft = RAFT_bi(ckpt_path, device)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/ProPainter/model/modules/flow_comp_raft.py", line 31, in init
self.fix_raft = initialize_RAFT(model_path, device=device)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/ProPainter/model/modules/flow_comp_raft.py", line 18, in initialize_RAFT
model = torch.nn.DataParallel(RAFT(args))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib64/python3.11/site-packages/torch/nn/parallel/data_parallel.py", line 159, in init
_check_balance(self.device_ids)
File "/usr/local/lib64/python3.11/site-packages/torch/nn/parallel/data_parallel.py", line 39, in _check_balance
if warn_imbalance(lambda props: props.multi_processor_count):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib64/python3.11/site-packages/torch/nn/parallel/data_parallel.py", line 29, in warn_imbalance
values = [get_prop(props) for props in dev_props]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib64/python3.11/site-packages/torch/nn/parallel/data_parallel.py", line 29, in
values = [get_prop(props) for props in dev_props]
^^^^^^^^^^^^^^^
File "/usr/local/lib64/python3.11/site-packages/torch/nn/parallel/data_parallel.py", line 39, in
if warn_imbalance(lambda props: props.multi_processor_count):
^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'torch_npu._C._NPUDeviceProperties' object has no attribute 'multi_processor_count'
[ERROR] 2025-05-22-17:44:49 (PID:695110, Device:0, RankID:-1) ERR99999 UNKNOWN application exception
```
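
The crash comes from `torch.nn.DataParallel`, whose device-balance check reads `multi_processor_count`, a property that `torch_npu` device properties do not expose. Below is a minimal, hypothetical workaround sketch (not an official fix) for `model/modules/flow_comp_raft.py`: it loads the RAFT weights onto a single device without the `DataParallel` wrapper. The import path, the `args` object, and the assumption that the checkpoint keys carry a `module.` prefix all come from my reading of the repo and may need adjusting.

```python
# Hypothetical workaround sketch -- not the project's official fix.
# Assumes the RAFT checkpoint was saved from a DataParallel wrapper
# (keys prefixed with "module.") and that only a single NPU is used,
# so the DataParallel wrapper and its multi_processor_count check can be skipped.
import torch
from RAFT import RAFT  # assumed import path inside the ProPainter repo


def initialize_RAFT_single_device(model_path, device, args):
    """Load RAFT onto one device without torch.nn.DataParallel."""
    model = RAFT(args)
    state_dict = torch.load(model_path, map_location='cpu')

    # Strip the "module." prefix that DataParallel adds when a wrapped
    # model is saved, so the keys match the bare RAFT module.
    state_dict = {
        (k[len('module.'):] if k.startswith('module.') else k): v
        for k, v in state_dict.items()
    }

    model.load_state_dict(state_dict)
    model.to(device)
    model.eval()
    return model
```

Even if this avoids the `AttributeError` above, other CUDA-specific paths in ProPainter may still need changes for `torch_npu`, so full NPU support from the maintainers would be appreciated.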