Chronos-2: Add option to disable DataParallel #434
Conversation
```python
if disable_data_parallel:
    # This is a hack to disable the default `transformers` behavior of using DataParallel
    training_args._n_gpu = 1
    assert training_args.n_gpu == 1  # Ensure that the hack worked
```
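For context, `transformers` wraps the model in `torch.nn.DataParallel` whenever `n_gpu` is greater than 1, which is why forcing `_n_gpu` down to 1 is enough to opt out. A simplified, version-dependent paraphrase of that wrapping step (not the actual `Trainer` source):

```python
from torch import nn

def wrap_model_like_trainer(model: nn.Module, n_gpu: int) -> nn.Module:
    # Simplified paraphrase of the multi-GPU branch in transformers.Trainer
    # (exact logic varies by version): when more than one GPU is reported,
    # the model is wrapped in DataParallel. Forcing n_gpu to 1 skips this.
    if n_gpu > 1:
        model = nn.DataParallel(model)
    return model
```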
There are some branches where this is set to 0 (e.g. on the CPU)
https://github.com/huggingface/transformers/blob/40dc11cd3eb4126652aa41ef8272525affd4a636/src/transformers/training_args.py#L1778
Are we sure we don't break it? Should we instead set either
```python
training_args._n_gpu = min(1, training_args._n_gpu)
```

or

```python
if disable_data_parallel and torch.cuda.device_count() > 1:
    training_args._n_gpu = min(1, training_args._n_gpu)
```
I added a `not use_cpu` guard. Do you think that's good?
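For reference, a minimal sketch of what the guarded version might look like, assuming `training_args` is a `transformers.TrainingArguments` instance with the `use_cpu` field (the exact code in the PR may differ):

```python
from transformers import TrainingArguments

def maybe_disable_data_parallel(training_args: TrainingArguments,
                                disable_data_parallel: bool) -> None:
    # Skip the override on CPU-only runs, where n_gpu is already 0 and should
    # stay that way; otherwise force the reported GPU count to 1 so that
    # transformers never wraps the model in DataParallel.
    if disable_data_parallel and not training_args.use_cpu:
        training_args._n_gpu = 1
        assert training_args.n_gpu == 1  # Ensure that the hack worked
```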
Looks good to me
Issue #, if available:
Description of changes:
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.