
Conversation

@cspades
Member

@cspades cspades commented Nov 21, 2025

What does this PR do?

  • Remove the hybrid_fsdp_group requirement when using HSDP without optimizer state sharding.
    • The main WAR is to avoid calling FSDPDistributedIndex.get_dp_group() when it is not necessary, because it refers to FSDPDistributedIndex.hybrid_fsdp_group even when FSDPDistributedIndex.hsdp_outer_dp_shard=False (e.g. HSDP with DP-Replicate), and this group is not needed to compute the DP size for FSDP or HSDP. All other call sites of get_dp_group() execute only when FSDPDistributedIndex.hybrid_fsdp_group exists due to DP-Outer optimizer sharding in HFSDP. (See the first sketch after this list.)
  • Fix a recent Torch 2.9 DeviceMesh._flatten validation error when running gather_uneven_dtensor_to_full_tensor on a 1D sharding mesh. This unblocks BioNeMo unit tests that rely on gather_uneven_dtensor_to_full_tensor to test DCP checkpointing functionality.
FAILED tests/test_distributed_checkpointing.py::test_final_model_save_mfsdp - RuntimeError: ("dp already exists for submesh of the DeviceMesh((dp=1, tp=1), device: 'cuda', stride: ...
  • Fix a hanging gradient unit test (tests/unit_tests/distributed/fsdp/test_mfsdp_fully_shard.py::TestMegatronFsdpFullyShard::test_fully_shard) caused by not asserting the failure on every rank, which can leave the non-failing ranks blocked. (See the second sketch after this list.)
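A minimal sketch of the DP-size workaround described above, assuming a simplified stand-in for FSDPDistributedIndex. Only get_dp_group(), hybrid_fsdp_group, and hsdp_outer_dp_shard come from this PR's description; the device_mesh attribute and the "dp_shard" dimension name are illustrative placeholders, not the actual Megatron-FSDP layout:

```python
import torch.distributed as dist

def dp_world_size(dist_index) -> int:
    """Compute the DP size without touching hybrid_fsdp_group unless required."""
    if dist_index.hsdp_outer_dp_shard:
        # DP-Outer optimizer sharding (HFSDP): hybrid_fsdp_group exists, so
        # get_dp_group() can legitimately refer to it.
        return dist.get_world_size(group=dist_index.get_dp_group())
    # FSDP, or HSDP with DP-Replicate: derive the DP size from the DP sub-mesh
    # instead, so hybrid_fsdp_group never needs to be constructed.
    return dist_index.device_mesh["dp_shard"].size()  # illustrative attribute/dim name
```

And a minimal sketch of the all-rank assertion pattern behind the hanging-test fix; this is the generic torch.distributed idiom, not the exact test code. Every rank contributes its local result to an all_reduce, so all ranks fail (or pass) together instead of one rank failing while the others block:

```python
import torch
import torch.distributed as dist

def assert_on_all_ranks(local_ok: bool, device: torch.device) -> None:
    # 1 on any rank that observed a failure, 0 otherwise.
    failed = torch.tensor([0 if local_ok else 1], device=device, dtype=torch.int32)
    # MAX-reduce so a single failing rank flips the flag everywhere.
    dist.all_reduce(failed, op=dist.ReduceOp.MAX)
    assert failed.item() == 0, "Test failed on at least one rank."
```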

⚠️ For major changes (either in lines of code or in impact), please make sure to first share and discuss a design doc with the team.

Contribution process

```mermaid
flowchart LR
    A[Pre-checks] --> B[PR Tests]
    subgraph Code Review/Approval
        C1[Expert Review] --> C2[Final Review]
    end
    B --> C1
    C2 --> D[Merge]
```

Pre-checks

  • I want this PR in a versioned release and have added the appropriate Milestone (e.g., Core 0.8)
  • I have added relevant unit tests
  • I have added relevant functional tests
  • I have added proper typing to my code (see the Typing guidelines)
  • I have added relevant documentation
  • I have run the autoformatter.sh on my PR

Code review

The following process is enforced via the CODEOWNERS file for changes into megatron/core. For changes outside of megatron/core, it is up to the PR author whether or not to tag the Final Reviewer team.

For MRs into `main` branch

(Step 1): Add PR label Expert Review

(Step 2): Collect the expert reviewers' reviews

  1. Attach the Expert Review label when your PR is ready for review.
  2. GitHub auto-assigns expert reviewers based on your changes. They will get notified and pick up your PR soon.

⚠️ Only proceed to the next step once all reviewers have approved, merge conflicts are resolved, and the CI is passing.
Final Review might get declined if these requirements are not fulfilled.

(Step 3): Final Review

  1. Add Final Review label
  2. GitHub auto-assigns final reviewers based on your changes. They will get notified and pick up your PR soon.

(Optional Step 4): Cherry-pick into release branch

If this PR also needs to be merged into core_r* release branches, select Cherry-pick after it has been merged to open a new PR into the release branch.

For MRs into `dev` branch

The proposed review process for the `dev` branch is under active discussion.

MRs are mergeable after one approval by either [email protected] or [email protected].

Merging your PR

Any member of core-adlr and core-nemo will be able to merge your PR.

@cspades cspades self-assigned this Nov 21, 2025
@cspades cspades requested review from a team as code owners November 21, 2025 03:04
@cspades cspades added the Expert Review label Nov 21, 2025
@copy-pr-bot

copy-pr-bot bot commented Nov 21, 2025

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@cspades
Member Author

cspades commented Nov 21, 2025

/ok to test b8f5682

Comment on lines -275 to +284
```diff
-    process_group = device_mesh._flatten().get_group()
+    # Check if the fully-flattened mesh exists first.
+    full_flattened_mesh_dim_name = "_".join(device_mesh.mesh_dim_names)
+    if full_flattened_mesh_dim_name in get_mesh_names(device_mesh):
+        # Retrieve the existing flattened DeviceMesh ProcessGroup.
+        process_group = device_mesh[full_flattened_mesh_dim_name].get_group()
+    else:
+        # Create the _-separated flattened DeviceMesh ProcessGroup.
+        process_group = device_mesh._flatten().get_group()
```
Member Author


@shjwudp Everything related to (2) is fixed in 3-4 lines of code here^^^

Before, we just immediately called _flatten(). Going into Torch 2.10 or 2.11, Torch will not allow us to create a new DeviceMesh that matches the flattened name of an existing DeviceMesh. So I use our helper function get_mesh_names(), which checks for sub- and flattened dimensions, and if an existing flattened mesh is found, we just use that mesh.

I believe this still has potential loopholes. If the user creates a flattened DeviceMesh dimension with the same name but a different topology than our desired mesh, then the user's mesh will be used instead. I don't see a way to fix this fundamental issue (though adding a warning message may be a good idea), so it will be the user's responsibility to give flattened meshes reasonable names, i.e. dp_cp ~ the flattening of the dp and cp dims, which is the default naming behavior (i.e. "_".join([<mesh dims to flatten>])) of device_mesh._flatten().
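For reference, here is an illustrative sketch of that default naming convention; the 2x2 shape and the "dp"/"cp" dimension names are examples only, and it assumes torch.distributed is already initialized with 4 CUDA ranks:

```python
from torch.distributed.device_mesh import init_device_mesh

# Example 2x2 mesh; _flatten() is a private Torch API.
mesh = init_device_mesh("cuda", (2, 2), mesh_dim_names=("dp", "cp"))
expected_name = "_".join(mesh.mesh_dim_names)   # -> "dp_cp"
flat_mesh = mesh._flatten()                     # flattened mesh named "dp_cp" by default
process_group = flat_mesh.get_group()           # ProcessGroup spanning all 4 ranks
```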

Another thing to note is that the DTensor.device_mesh here is a child/sub-mesh of the Megatron-FSDP root mesh. In future Torch versions, the flattened mesh will be a member of the DeviceMesh it was flattened from, so there will be a lower chance of issues: the user will likely call root_mesh._flatten() but not root_mesh[("dp_shard", "dp_outer")]._flatten(), so we will be less likely to accidentally use the user's original DeviceMesh!
