
Commit b9ca0de

FlorianVal and Florian valade authored
check if cuda is available before calling nvidia-smi (#3872)
Co-authored-by: Florian valade <[email protected]>
1 parent 75983a5 commit b9ca0de

File tree

1 file changed, +1 -1 lines changed


src/accelerate/launchers.py

Lines changed: 1 addition & 1 deletion
```diff
@@ -149,7 +149,7 @@ def train(*args):
         launcher = PrepareForLaunch(function, distributed_type="XLA")
         print("Launching a training on TPU cores.")
         xmp.spawn(launcher, args=args, start_method="fork")
-    elif in_colab and get_gpu_info()[1] < 2:
+    elif in_colab and (not torch.cuda.is_available() or get_gpu_info()[1] < 2):
         # No need for a distributed launch otherwise as it's either CPU or one GPU.
         if torch.cuda.is_available():
             print("Launching training on one GPU.")
```
