# Various small fixes for Megatron-FSDP. #2346
Base branch: `main`
## Conversation
Force-pushed from d8050db to 91698d8, then from 91698d8 to b8f5682.
/ok to test b8f5682
Signed-off-by: Cory Ye <[email protected]>
```diff
-    process_group = device_mesh._flatten().get_group()
+    # Check if the fully-flattened mesh exists first.
+    full_flattened_mesh_dim_name = "_".join(device_mesh.mesh_dim_names)
+    if full_flattened_mesh_dim_name in get_mesh_names(device_mesh):
+        # Retrieve the existing flattened DeviceMesh ProcessGroup.
+        process_group = device_mesh[full_flattened_mesh_dim_name].get_group()
+    else:
+        # Create the _-separated flattened DeviceMesh ProcessGroup.
+        process_group = device_mesh._flatten().get_group()
```
@shjwudp Everything related to (2) is fixed in the 3-4 lines of code above ^^^

Before, we would immediately `_flatten()`. Going into Torch 2.10 or 2.11, PyTorch will not allow us to create a new DeviceMesh that matches the flattened name of an existing DeviceMesh. So I use our helper function `get_mesh_names()`, which checks for sub- and flattened dimensions, and if an existing flattened mesh is found, we just use that mesh.

I believe this still potentially has loopholes: if the user creates a flattened DeviceMesh dimension with the same name but a different topology than our desired mesh, their mesh will be used instead. I don't see a way to fix this fundamental issue (though adding a warning message may be a good idea), so it will be the user's responsibility to have reasonably named flattened meshes, i.e. `dp_cp` ~ the flattening of the `dp` and `cp` dims, which is the default behavior (i.e. `"_".join([<mesh dims to flatten>])`) of `device_mesh._flatten()`.

Another thing to note is that the `DTensor.device_mesh` here is a child/sub-mesh of the Megatron-FSDP root mesh. In future Torch versions, the flattened mesh will be a member of the DeviceMesh used to flatten, so there will be a lower chance of issues: the user will likely call `root_mesh._flatten()` but not `root_mesh[("dp_shard", "dp_outer")]._flatten()`, so we will be less likely to accidentally pick up the user's original DeviceMesh.
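For reference, here is a minimal standalone sketch of the reuse-or-flatten logic, assuming the script is launched with `torchrun --nproc_per_node=4`; `get_mesh_names` is stubbed here as a stand-in for the Megatron-FSDP helper mentioned above (the real helper also reports flattened and sub-mesh names), and `flattened_process_group` is a hypothetical function name:

```python
# Sketch only: mirrors the patch above, not the exact Megatron-FSDP code.
import torch.distributed as dist
from torch.distributed.device_mesh import init_device_mesh


def get_mesh_names(mesh):
    # Hypothetical stand-in for the Megatron-FSDP helper referenced above;
    # the real helper also tracks flattened/sub-mesh dimension names.
    return mesh.mesh_dim_names or ()


def flattened_process_group(device_mesh):
    # DeviceMesh._flatten() names the flattened dim "_".join(mesh_dim_names)
    # by default, so check for that name before flattening again.
    flat_name = "_".join(device_mesh.mesh_dim_names)
    if flat_name in get_mesh_names(device_mesh):
        # Reuse the existing flattened mesh instead of re-flattening, which
        # newer Torch versions reject as a duplicate mesh dimension name.
        return device_mesh[flat_name].get_group()
    # Otherwise, create the flattened mesh (private PyTorch API).
    return device_mesh._flatten().get_group()


if __name__ == "__main__":
    # e.g. torchrun --nproc_per_node=4 flatten_sketch.py
    mesh = init_device_mesh("cpu", (2, 2), mesh_dim_names=("dp_shard", "dp_outer"))
    group = flattened_process_group(mesh)
    print(dist.get_rank(), dist.get_world_size(group))
```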
## What does this PR do?
1. Remove the `hybrid_fsdp_group` requirement when using HSDP without optimizer state sharding. Avoid `FSDPDistributedIndex.get_dp_group()` when not necessary, because it refers to `FSDPDistributedIndex.hybrid_fsdp_group` even when `FSDPDistributedIndex.hsdp_outer_dp_shard=False` (such as during HSDP with DP-Replicate), and we don't need this group to compute the DP size for FSDP or HSDP. All other instances of `get_dp_group()` happen when `FSDPDistributedIndex.hybrid_fsdp_group` exists due to DP-Outer optimizer sharding in HFSDP.
2. Fix the `DeviceMesh._flatten` validation error when running `gather_uneven_dtensor_to_full_tensor` on a 1D sharding mesh. This unblocks BioNeMo unit tests that rely on `gather_uneven_dtensor_to_full_tensor` to test DCP checkpointing functionality.
3. Fix a unit test (`tests/unit_tests/distributed/fsdp/test_mfsdp_fully_shard.py::TestMegatronFsdpFullyShard::test_fully_shard`) that did not assert failure on every rank (the general pattern is sketched below).
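As a side note on item 3, the sketch below illustrates the general pattern of surfacing a failure on every rank rather than only on the rank where the check fails, which otherwise can hang or leave some ranks silently green in multi-process tests. It is only an illustration, not the actual change in this PR; `assert_on_all_ranks` is a hypothetical helper, and the example assumes a gloo/CPU process group (move the flag tensor to CUDA for NCCL).

```python
import torch
import torch.distributed as dist


def assert_on_all_ranks(local_ok: bool, message: str = "check failed on some rank"):
    # Hypothetical helper: reduce a success flag across ranks with MIN so that
    # a failure on any rank causes the assertion to fail on every rank.
    flag = torch.tensor([1 if local_ok else 0], dtype=torch.int32)
    dist.all_reduce(flag, op=dist.ReduceOp.MIN)
    assert flag.item() == 1, message


# Usage inside a torchrun-launched test body, e.g.:
#   assert_on_all_ranks(torch.isfinite(loss).all().item(), "non-finite loss")
```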
## Contribution process

```mermaid
flowchart LR
    A[Pre-checks] --> B[PR Tests]
    subgraph Code Review/Approval
        C1[Expert Review] --> C2[Final Review]
    end
    B --> C1
    C2 --> D[Merge]
```

### Pre-checks
- Add the appropriate Milestone if this PR targets a versioned release (e.g., Core 0.8).

### Code review
The following process is enforced via the CODEOWNERS file for changes into `megatron/core`. For changes outside of `megatron/core`, it is up to the PR author whether or not to tag the Final Reviewer team.

#### For MRs into `main` branch
(Step 1): Add the PR label `Expert Review`.

(Step 2): Collect the expert reviewers' reviews.

- Add the `Expert Review` label when your PR is ready for review.
- Final Review might get declined if these requirements are not fulfilled.

(Step 3): Final Review

- Add the `Final Review` label.

(Optional Step 4): Cherry-pick into release branch
If this PR also needs to be merged into `core_r*` release branches, then after this PR has been merged, select `Cherry-pick` to open a new PR into the release branch.

#### For MRs into `dev` branch
The proposed review process for the `dev` branch is under active discussion. MRs are mergeable after one approval by either `[email protected]` or `[email protected]`.

#### Merging your PR
Any member of `core-adlr` and `core-nemo` will be able to merge your PR.