
Conversation

@cspades (Member) commented Nov 21, 2025

What does this PR do?

  • Remove the hybrid_fsdp_group requirement when using HSDP without optimizer state sharding.
    • Main workaround (WAR): avoid calling FSDPDistributedIndex.get_dp_group() when it is not necessary, because it refers to FSDPDistributedIndex.hybrid_fsdp_group even when FSDPDistributedIndex.hsdp_outer_dp_shard=False (e.g. HSDP with DP-Replicate), and this group is not needed to compute the DP size for FSDP or HSDP. All remaining uses of get_dp_group() occur when FSDPDistributedIndex.hybrid_fsdp_group exists due to DP-Outer optimizer sharding in HFSDP.
  • Fix a recent Torch 2.9 DeviceMesh._flatten validation error raised when running gather_uneven_dtensor_to_full_tensor on a 1D sharding mesh. This unblocks BioNeMo unit tests that rely on gather_uneven_dtensor_to_full_tensor to exercise DCP checkpointing functionality (a guard sketch follows this list).
FAILED tests/test_distributed_checkpointing.py::test_final_model_save_mfsdp - RuntimeError: ("dp already exists for submesh of the DeviceMesh((dp=1, tp=1), device: 'cuda', stride: ...
  • Fix a hanging gradient unit test (tests/unit_tests/distributed/fsdp/test_mfsdp_fully_shard.py::TestMegatronFsdpFullyShard::test_fully_shard) caused by not asserting the failure on every rank (see the assertion sketch below).
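
For the DeviceMesh._flatten error above, here is a minimal, hypothetical sketch (not the merged fix) of guarding the flatten call so that a 1D mesh, or a mesh that already exposes the target dim name, is not re-flattened. The helper name `maybe_flatten_dp` and the `"dp"` default are illustrative only.

```python
# Hypothetical sketch, not the actual patch: skip the private DeviceMesh._flatten
# call when it would trip Torch 2.9's validation ("dp already exists for submesh ...").
from torch.distributed.device_mesh import DeviceMesh


def maybe_flatten_dp(mesh: DeviceMesh, dp_name: str = "dp") -> DeviceMesh:
    existing_names = mesh.mesh_dim_names or ()
    if mesh.ndim == 1 or dp_name in existing_names:
        # A 1D sharding mesh (or one that already has the `dp` dim) can be used
        # directly; re-flattening it is what raises in Torch 2.9.
        return mesh
    # Private PyTorch API, referenced here only because the error above names it.
    return mesh._flatten(dp_name)
```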
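
And for the hanging test, a minimal sketch of the assert-on-every-rank pattern: each rank contributes a pass/fail flag to an all-reduce so that all ranks fail (or pass) together, instead of one rank raising while the others block in a later collective. The helper name is illustrative.

```python
# Illustrative helper, assuming torch.distributed is already initialized.
import torch
import torch.distributed as dist


def assert_on_every_rank(local_ok: bool, device: torch.device) -> None:
    failures = torch.tensor([0 if local_ok else 1], device=device)
    dist.all_reduce(failures, op=dist.ReduceOp.SUM)  # count failing ranks globally
    assert failures.item() == 0, f"{int(failures.item())} rank(s) failed the check"
```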

⚠️ For major changes (either in lines of code or in their impact), please make sure to first share and discuss a design doc with the team.

Contribution process

flowchart LR
    A[Pre-checks] --> B[PR Tests]
    subgraph Code Review/Approval
        C1[Expert Review] --> C2[Final Review]
    end
    B --> C1
    C2 --> D[Merge]

Pre-checks

  • I want this PR in a versioned release and have added the appropriate Milestone (e.g., Core 0.8)
  • I have added relevant unit tests
  • I have added relevant functional tests
  • I have added proper typing to my code (see Typing guidelines)
  • I have added relevant documentation
  • I have run the autoformatter.sh on my PR

Code review

The following process is enforced via the CODEOWNERS file for changes into megatron/core. For changes outside of megatron/core, it is up to the PR author whether or not to tag the Final Reviewer team.

For MRs into `main` branch

(Step 1): Add PR label Expert Review

(Step 2): Collect the expert reviewers' reviews

  1. Attach the Expert Review label when your PR is ready for review.
  2. GitHub auto-assigns expert reviewers based on your changes. They will get notified and pick up your PR soon.

⚠️ Only proceed to the next step once all reviewers have approved, merge conflicts are resolved, and the CI is passing.
Final Review might get declined if these requirements are not fulfilled.

(Step 3): Final Review

  1. Add Final Review label
  2. GitHub auto-assigns final reviewers based on your changes. They will get notified and pick up your PR soon.

(Optional Step 4): Cherry-pick into release branch

If this PR also needs to be merged into core_r* release branches, after this PR has been merged, select Cherry-pick to open a new PR into the release branch.

For MRs into `dev` branch

The proposed review process for the `dev` branch is under active discussion.

MRs are mergeable after one approval by either the core-adlr or the core-nemo team.

Merging your PR

Any member of core-adlr or core-nemo will be able to merge your PR.

@cspades cspades self-assigned this Nov 21, 2025
@cspades cspades requested review from a team as code owners November 21, 2025 03:04
@cspades cspades added the Expert Review label Nov 21, 2025
@copy-pr-bot (bot) commented Nov 21, 2025

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@cspades (Member, Author) commented Nov 21, 2025

/ok to test b8f5682

# FIXME(@cspades): Currently not used gradient_reduce_preprocessing()?
expert_gradient_scaling_factor = (
    self.dist_index.get_dp_group(is_expert_parallel=True).size()
    / self.dist_index.get_dp_group().size()
@shjwudp (Contributor) commented Nov 21, 2025


Will the torch 2.9 check affect the behavior of get_dp_group().size()? Do we need to update the logic here?
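
For reference, the factor in the quoted hunk is just the ratio of the expert data-parallel world size to the full data-parallel world size; a minimal sketch with plain torch.distributed process groups (not the FSDPDistributedIndex API) makes that explicit:

```python
# Illustration only: the value depends solely on the two group sizes,
# however those groups are obtained.
import torch.distributed as dist


def expert_gradient_scaling_factor(
    expert_dp_group: dist.ProcessGroup, dp_group: dist.ProcessGroup
) -> float:
    return dist.get_world_size(group=expert_dp_group) / dist.get_world_size(group=dp_group)
```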

if (dp_outer_dim is None) ^ (hybrid_fsdp_group is None):
    # XOR - HSDP requires both or neither of dp_outer_dim and hybrid_fsdp_group
    # to be specified, so if XOR then raise an error.
if _outer_fsdp_sharding and hybrid_fsdp_group is None:
@shjwudp (Contributor) commented Nov 21, 2025


Do we need to change this line? My understanding is that we only need to handle the new validation error introduced by PyTorch 2.9 and perform the check before calling device_mesh._flatten.
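
For context, a hypothetical sketch of the relaxed validation described in the PR summary: hybrid_fsdp_group is required only when the outer DP dimension is actually sharded (DP-Outer optimizer sharding), while HSDP with DP-Replicate proceeds without it. Argument names mirror the quoted hunk; this is not the merged code.

```python
# Sketch only; not the merged code.
def validate_hybrid_fsdp_args(hybrid_fsdp_group, outer_fsdp_sharding: bool) -> None:
    if outer_fsdp_sharding and hybrid_fsdp_group is None:
        raise ValueError(
            "hybrid_fsdp_group must be provided when sharding the optimizer "
            "state across the outer data-parallel (DP-Outer) dimension."
        )
```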
