Labels
Cross Platform, Duplicate, Enhancement, High Priority, Optimizers, x64 CPU
Description
Feature request
Hi, thanks for the library! It would be great if the optimizers could run on the CPU. For example, I would like to try adamw_8bit to full-finetune an 8B model on a 24 GB GPU (RTX 4090). With DeepSpeed offload, GPU memory is fine, but the CPU memory requirement is still very large, partly because the offloaded path uses standard AdamW, which needs 8B params × 8 bytes = 64 GB for the optimizer state alone.
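For concreteness, a quick back-of-the-envelope sketch of the state sizes (illustrative arithmetic only; it assumes AdamW keeps two fp32 states per parameter and the 8-bit variant keeps two 1-byte states, ignoring quantization metadata):

```python
# Rough optimizer-state memory for an 8B-parameter model.
params = 8e9

# AdamW keeps exp_avg and exp_avg_sq per parameter.
fp32_adamw_bytes = params * 2 * 4  # two fp32 (4-byte) states
int8_adamw_bytes = params * 2 * 1  # two 8-bit (1-byte) states

print(f"fp32 AdamW state:  {fp32_adamw_bytes / 1e9:.0f} GB")  # ~64 GB
print(f"8-bit AdamW state: {int8_adamw_bytes / 1e9:.0f} GB")  # ~16 GB
```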
This package provides the super helpful adamw_8bit, so I would appreciate it if it could be used in the setup above, hopefully reducing the optimizer state from 64 GB to 8B params × 2 bytes = 16 GB.
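Roughly what I have in mind, as a minimal sketch rather than working code: `bnb.optim.AdamW8bit` is the existing optimizer class, but today it requires CUDA tensors, so the CPU placement below is exactly what this request would enable (the tiny `nn.Linear` stands in for a DeepSpeed-offloaded 8B model):

```python
import torch
import bitsandbytes as bnb

# Stand-in for an 8B model whose weights DeepSpeed has offloaded to CPU RAM.
model = torch.nn.Linear(4096, 4096).to("cpu")

# The 8-bit optimizer state would live in CPU RAM (~16 GB instead of ~64 GB).
optimizer = bnb.optim.AdamW8bit(model.parameters(), lr=1e-5)

loss = model(torch.randn(2, 4096)).sum()
loss.backward()
optimizer.step()  # fails today on CPU tensors; the request is to support this
```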
Motivation
(see above)
Your contribution
Yes