Add dense_npu.sh for Megatron LoRA training on Huawei NPU #6976
Conversation
Summary of Changes

Hello @vx120, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request introduces a new example script designed to streamline Megatron LoRA training on Huawei NPU hardware. The script provides a comprehensive configuration for fine-tuning the Qwen2.5-7B-Instruct model.
Code Review
This pull request introduces a new script, dense_npu.sh, to support Megatron LoRA training on Huawei NPU. This is a valuable addition for expanding hardware compatibility. However, I've identified a critical typo in an environment variable that needs immediate correction. Additionally, there are notable differences in model loading arguments and training hyperparameters (batch sizes, fusion flags) compared to the existing CUDA script (dense.sh). These differences should be clarified or justified to ensure the script functions correctly and efficiently on NPU platforms.
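For reference, a minimal sketch of the corrected line (assuming ASCEND_RT_VISIBLE_DEVICES is the intended variable, which is the standard Ascend runtime device-selection environment variable):

```sh
# The submitted script fuses two variable names together; the intended
# setting is presumably the Ascend counterpart of CUDA_VISIBLE_DEVICES:
NPROC_PER_NODE=2 \
ASCEND_RT_VISIBLE_DEVICES=0,1 \
megatron sft \
    ...
```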
examples/megatron/lora/dense_npu.sh (outdated)

```sh
NPROC_PER_NODE=2 \
ASCEND_RT_VISIBLE_DEVICESVISIBLE_DEVICES=0,1 \
megatron sft \
    --model Qwen/Qwen2.5-7B-Instruct \
```
The --model argument is used here, whereas dense.sh and the LoRA-Training.md documentation suggest using --load with a pre-converted mcore model (e.g., Qwen2.5-7B-Instruct-mcore).
Could you clarify if --model Qwen/Qwen2.5-7B-Instruct implicitly handles the conversion to mcore format for NPU, or if an explicit conversion step and --load argument are intended, similar to the CUDA example? Inconsistency here might lead to unexpected model loading behavior or failures.
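For comparison, here is a hedged sketch of the conversion-then-load flow that dense.sh and LoRA-Training.md describe (the swift export flags below follow the Megatron-SWIFT docs; treat them as assumptions if those docs have moved on, and the output directory name is illustrative):

```sh
# One-time conversion of the HF checkpoint to mcore format:
swift export \
    --model Qwen/Qwen2.5-7B-Instruct \
    --to_mcore true \
    --torch_dtype bfloat16 \
    --output_dir Qwen2.5-7B-Instruct-mcore

# Training then loads the converted checkpoint explicitly:
megatron sft \
    --load Qwen2.5-7B-Instruct-mcore \
    ...
```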
examples/megatron/lora/dense_npu.sh (outdated)

```sh
    --micro_batch_size 1 \
    --global_batch_size 2 \
```
The --micro_batch_size is set to 1 and --global_batch_size to 2. In contrast, dense.sh uses 16 for both. This is a significant change in batching strategy.
Could you explain the rationale behind these smaller batch sizes for NPU training? This could impact training stability, convergence, and overall throughput.
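To make the comparison concrete, the standard Megatron batching arithmetic is sketched below (assuming pure data parallelism, i.e. no tensor or pipeline parallelism):

```sh
# grad_accum_steps = global_batch_size / (micro_batch_size * data_parallel_size)
#
# This PR (NPROC_PER_NODE=2 -> DP=2):
#   2 / (1 * 2) = 1     # no accumulation; effective batch of 2 samples
#
# dense.sh (micro 16, global 16; consistent only if DP=1):
#   16 / (16 * 1) = 1   # effective batch of 16 samples
```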
examples/megatron/lora/dense_npu.sh (outdated)

```sh
    --no_gradient_accumulation_fusion true \
    --no_masked_softmax_fusion true \
```
The flags --no_gradient_accumulation_fusion true and --no_masked_softmax_fusion true are newly introduced in this NPU script. Disabling these fusion optimizations can potentially affect performance.
Is there a specific reason or known compatibility issue on NPU that necessitates disabling these fusions? Providing context for these choices would be beneficial.
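For context, a brief sketch of what these switches appear to map to upstream (assuming ms-swift forwards them to Megatron-LM's argument parser, as its other Megatron flags do):

```sh
# Upstream Megatron-LM equivalents (store_false actions):
#   --no-gradient-accumulation-fusion   # disable the fused weight-gradient kernel
#   --no-masked-softmax-fusion          # disable the fused masked-softmax kernel
#
# Both fusions depend on custom CUDA kernels, so disabling them on Ascend NPUs
# is plausibly a kernel-availability workaround rather than a performance choice.
```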
Hello, thanks for the PR. Please place it in this folder: https://github.com/modelscope/ms-swift/tree/main/examples/ascend/train

done

Big thanks to @tongtong0613 for providing key help. This couldn't have been done without him.
PR type

PR information

Add an example script for Megatron LoRA training on Huawei NPU.