Various small fixes for Megatron-FSDP. #2346
base: main
Conversation
d8050db to 91698d8
Signed-off-by: Cory Ye <[email protected]>
Signed-off-by: Cory Ye <[email protected]>
…ead of HFSDP. Signed-off-by: Cory Ye <[email protected]>
91698d8 to b8f5682
/ok to test b8f5682
```python
# FIXME(@cspades): Currently not used gradient_reduce_preprocessing()?
expert_gradient_scaling_factor = (
    self.dist_index.get_dp_group(is_expert_parallel=True).size()
    / self.dist_index.get_dp_group().size()
)
```
Will the torch 2.9 check affect the behavior of `get_dp_group().size()`? Do we need to update the logic here?
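For context on the quoted hunk: the scaling factor is simply the ratio of the expert-parallel data-parallel group size to the full data-parallel group size. Below is a minimal sketch of that arithmetic; the function name and plain-int arguments are illustrative and not the actual `FSDPDistributedIndex` API.

```python
# Minimal sketch of the ratio computed in the hunk above.
# Assumption: group sizes are passed in as plain ints; the helper name
# `expert_gradient_scaling_factor` is illustrative, not Megatron-FSDP's API.
def expert_gradient_scaling_factor(expert_dp_size: int, dp_size: int) -> float:
    # Expert (MoE) parameters are typically reduced over a smaller
    # data-parallel group than dense parameters, so their gradients are
    # rescaled by the ratio of the two group sizes.
    return expert_dp_size / dp_size


# Example: an expert DP group of 2 ranks inside a full DP group of 8 ranks.
assert expert_gradient_scaling_factor(2, 8) == 0.25
```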
```python
if (dp_outer_dim is None) ^ (hybrid_fsdp_group is None):
    # XOR - HSDP requires both or neither of dp_outer_dim and hybrid_fsdp_group
    # to be specified, so if XOR then raise an error.
if _outer_fsdp_sharding and hybrid_fsdp_group is None:
```
Do we need to change this line? My understanding is that we only need to handle the new ValidateError introduced by PyTorch 2.9 and perform the check before calling `device_mesh._flatten`.
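To make the difference between the two conditions concrete, here is a hypothetical sketch that places them side by side; the function name is made up for illustration and the parameter names are taken from the quoted hunk, not from Megatron-FSDP's actual signature.

```python
# Hypothetical sketch of the two checks discussed above; not the actual
# Megatron-FSDP code. Parameter names mirror the quoted hunk.
def validate_hsdp_args(dp_outer_dim, hybrid_fsdp_group, outer_fsdp_sharding):
    if (dp_outer_dim is None) ^ (hybrid_fsdp_group is None):
        # Old XOR check: HSDP requires both or neither of dp_outer_dim
        # and hybrid_fsdp_group to be specified.
        raise ValueError("Specify dp_outer_dim and hybrid_fsdp_group together.")
    if outer_fsdp_sharding and hybrid_fsdp_group is None:
        # Narrower check: the hybrid FSDP group is only mandatory when
        # optimizer state is sharded over the outer DP dimension.
        raise ValueError("hybrid_fsdp_group is required for DP-Outer sharding.")
```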
What does this PR do?

- Fixes the `hybrid_fsdp_group` requirement when using HSDP without optimizer state sharding.
- Avoids calling `FSDPDistributedIndex.get_dp_group()` when not necessary, because it refers to `FSDPDistributedIndex.hybrid_fsdp_group` even when `FSDPDistributedIndex.hsdp_outer_dp_shard=False` (such as during HSDP with DP-Replicate), and we don't need this group to compute the DP size for FSDP or HSDP. All other instances of `get_dp_group()` happen when `FSDPDistributedIndex.hybrid_fsdp_group` exists due to DP-Outer optimizer sharding in HFSDP.
- Fixes the `DeviceMesh._flatten` validation error when running `gather_uneven_dtensor_to_full_tensor` on a 1D sharding mesh. This unblocks BioNeMo unit tests that rely on `gather_uneven_dtensor_to_full_tensor` to test DCP checkpointing functionality. (A sketch of this guard follows this list.)
- Fixes a unit test (`tests/unit_tests/distributed/fsdp/test_mfsdp_fully_shard.py::TestMegatronFsdpFullyShard::test_fully_shard`) due to not asserting failure on every rank.
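Regarding the `DeviceMesh._flatten` fix, a minimal sketch of the kind of guard described above is shown below. The helper name and the flattened dim name are illustrative (not Megatron-FSDP's actual code), and `DeviceMesh._flatten` is a private PyTorch API.

```python
# Illustrative guard: skip DeviceMesh._flatten on a 1D sharding mesh.
# Assumptions: the helper name and the "dp_gather" dim name are made up for
# this sketch; per the PR description, recent PyTorch versions reject
# flattening a 1D mesh with a validation error, and it is unnecessary anyway.
from torch.distributed.device_mesh import DeviceMesh


def flatten_for_gather(mesh: DeviceMesh) -> DeviceMesh:
    """Return a 1D mesh to all-gather uneven DTensor shards over."""
    if mesh.ndim == 1:
        # Already one-dimensional: use the mesh directly instead of
        # calling the (now stricter) DeviceMesh._flatten.
        return mesh
    # Multi-dimensional (e.g. HSDP) mesh: collapse its dims into one.
    return mesh._flatten(mesh_dim_name="dp_gather")
```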
Contribution process

```mermaid
flowchart LR
    A[Pre-checks] --> B[PR Tests]
    subgraph Code Review/Approval
        C1[Expert Review] --> C2[Final Review]
    end
    B --> C1
    C2 --> D[Merge]
```

Pre-checks
Code review
The following process is enforced via the CODEOWNERS file for changes into `megatron/core`. For changes outside of `megatron/core`, it is up to the PR author whether or not to tag the Final Reviewer team.

For MRs into `main` branch
(Step 1): Add PR label `Expert Review`

(Step 2): Collect the expert reviewers' reviews

- Add the `Expert Review` label when your PR is ready for review.
- Final Review might get declined if these requirements are not fulfilled.
(Step 3): Final Review

- Add the `Final Review` label

(Optional Step 4): Cherry-pick into release branch
If this PR also needs to be merged into `core_r*` release branches, after this PR has been merged, select `Cherry-pick` to open a new PR into the release branch.

For MRs into `dev` branch
The proposed review process for the `dev` branch is under active discussion. MRs are mergeable after one approval by either `[email protected]` or `[email protected]`.

Merging your PR
Any member of `core-adlr` and `core-nemo` will be able to merge your PR.