Replies: 2 comments 1 reply
Nope, you need a GPU unfortunately.
There is a possibility to check whether MPS (Apple Metal, i.e. M1 GPU support) is available via torch.backends.mps.is_available() and to convert the .cuda() calls into .to(torch.device('mps')), however... Generally speaking it should be doable anyway (but I am not a Python dev or an experienced ML dev at all, so it would take me too much time) by replacing them with the MPS-specific calls. Unfortunately the MPS backend is not always as friendly as CUDA; with CUDA, PyTorch's simple .cuda() calls just work.

You also need to make sure to update requirements.txt from numba==0.55.1 to 0.55.2, as Apple M1 is only supported from 0.55.2 (without that the build fails, see numba/numba#7951). If someone experienced could pick this up... M1 machines/GPUs are very strong and can yield better results than a Tesla T4 etc.
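For anyone who wants to try this, here is a minimal sketch of the device-selection idea described above. It assumes a recent PyTorch build that ships the MPS backend; the model and tensor are placeholders for illustration, not the project's actual code, and the numba pin in requirements.txt still has to be bumped by hand.

```python
import torch

# Pick the best available device: CUDA first, then MPS (Apple Silicon), then CPU.
if torch.cuda.is_available():
    device = torch.device("cuda")
elif torch.backends.mps.is_available():
    device = torch.device("mps")
else:
    device = torch.device("cpu")

# Placeholder model and input, just to show the pattern.
model = torch.nn.Linear(16, 4)
x = torch.randn(8, 16)

# Instead of model.cuda() / x.cuda(), move both to the selected device.
model = model.to(device)
x = x.to(device)

out = model(x)
print(out.device)
```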