
UserWarning: The operator 'aten::_foreach_lerp_.Scalar' is not currently supported on the DML backend #604


Closed
a36624705 opened this issue Jul 8, 2024 · 3 comments


@a36624705

I encountered some warnings while using the optimizers and don't know how to resolve them.

Adam:

```
xxx\Lib\site-packages\torch\optim\adam.py:522: UserWarning: The operator 'aten::_foreach_lerp_.Scalar' is not currently supported on the DML backend and will fall back to run on the CPU. This may have performance implications. (Triggered internally at C:\__w\1\s\pytorch-directml-plugin\torch_directml\csrc\dml\dml_cpu_fallback.cpp:17.)
  torch._foreach_lerp_(device_exp_avgs, device_grads, 1 - beta1)
```

SGD:

```
xxx\Lib\site-packages\torch\optim\sgd.py:360: UserWarning: The operator 'aten::_foreach_add_.List' is not currently supported on the DML backend and will fall back to run on the CPU. This may have performance implications. (Triggered internally at C:\__w\1\s\pytorch-directml-plugin\torch_directml\csrc\dml\dml_cpu_fallback.cpp:17.)
  torch._foreach_add_(device_params, device_grads, alpha=-lr)
```
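A workaround that may avoid these fallbacks entirely (a suggestion, not something confirmed in this thread) is to disable the multi-tensor "foreach" implementation, since both warnings come from `aten::_foreach_*` kernels. A minimal sketch, assuming a working torch-directml install:

```python
import torch
import torch_directml  # provides the DirectML device

device = torch_directml.device()
model = torch.nn.Linear(8, 2).to(device)  # hypothetical toy model

# foreach=False selects the single-tensor optimizer path, which avoids the
# unsupported aten::_foreach_* ops that trigger the CPU fallback warnings.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, foreach=False)
```

The same `foreach=False` flag exists on `torch.optim.SGD`.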

@trevortai

Are you using the latest torch-directml? That may be the problem.
Try reverting to:

```
pip install torch-directml==0.2.0.dev230426
```
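After reinstalling, a quick sanity check (a minimal sketch, assuming a DirectML-capable GPU) confirms the package imports and can run a tensor op on the DML device:

```python
import torch
import torch_directml

print(torch_directml.device_count())  # number of available DirectML devices
device = torch_directml.device()      # default DML device

x = torch.ones(3, device=device)      # allocate directly on DML
print(x + 1)                          # runs the add on the DML backend
```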

@a36624705 (Author)

Yeah, I resolved this warning by downgrading torch-directml from version 0.2.2 to version 0.2.0.

@ianujv4231

Sir, I got this error:

```
NotImplementedError: The operator 'aten::_foreach_add.List' is not currently implemented for the MPS device. If you want this op to be added in priority during the prototype phase of this feature, please comment on pytorch/pytorch#77764. As a temporary fix, you can set the environment variable PYTORCH_ENABLE_MPS_FALLBACK=1 to use the CPU as a fallback for this op. WARNING: this will be slower than running natively on MPS.
```

I tried `pip install torch-directml==0.2.0`, but that gives another error:

```
ERROR: Could not find a version that satisfies the requirement torch-directml==0.2.0 (from versions: none)
ERROR: No matching distribution found for torch-directml==0.2.0
Note: you may need to restart the kernel to use updated packages.
```

Please help me resolve it.
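The error above comes from the MPS (Apple Silicon) backend, not DirectML, so torch-directml publishes no build for that platform, which is why pip finds no matching distribution. A minimal sketch of the temporary fix the error message itself suggests, assuming an Apple Silicon machine:

```python
import os

# Must be set before torch is imported (or exported in the launching shell).
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"

import torch

device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")
model = torch.nn.Linear(8, 2).to(device)  # hypothetical toy model

# With the fallback enabled, unsupported aten::_foreach_* ops run on the CPU
# instead of raising NotImplementedError (slower, but it works).
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
```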
