
Conversation

@HaochenYuan

@HaochenYuan HaochenYuan commented Nov 4, 2025

What does this PR do ?

Related issue: 1982
This PR removes the calculation of padding token in aux loss.
PR to the main branch: #2142
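
As context, a minimal sketch of what excluding padding tokens from a switch-style aux loss can look like (illustrative only, not this PR's actual implementation; the function name and signature are hypothetical):

```python
import torch


def aux_loss_without_padding(probs, top1_indices, padding_mask, num_experts):
    """Illustrative sketch. probs: [num_tokens, num_experts] router probabilities;
    top1_indices: [num_tokens] chosen expert per token;
    padding_mask: [num_tokens] boolean, True for valid (non-padding) tokens."""
    probs = probs[padding_mask]                # drop padding tokens
    top1_indices = top1_indices[padding_mask]
    num_valid = max(int(padding_mask.sum()), 1)
    # f_i: fraction of valid tokens routed to expert i
    tokens_per_expert = torch.bincount(top1_indices, minlength=num_experts).float()
    f = tokens_per_expert / num_valid
    # P_i: mean router probability for expert i, over valid tokens only
    p = probs.mean(dim=0)
    return num_experts * torch.sum(f * p)
```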

⚠️ For major changes (either in lines of code or in their impact), please make sure to first share and discuss a design doc with the team.

Contribution process

flowchart LR
    A[Pre-checks] --> B[PR Tests]
    subgraph Code Review/Approval
        C1[Expert Review] --> C2[Final Review]
    end
    B --> C1
    C2 --> D[Merge]

Pre-checks

  • I want this PR in a versioned release and have added the appropriate Milestone (e.g., Core 0.8)
  • I have added relevant unit tests
  • I have added relevant functional tests
  • I have added proper typing to my code Typing guidelines
  • I have added relevant documentation
  • I have run the autoformatter.sh on my PR

Code review

The following process is enforced via the CODEOWNERS file for changes into megatron/core. For changes outside of megatron/core, it is up to the PR author whether or not to tag the Final Reviewer team.

For MRs into `main` branch

(Step 1): Add PR label Expert Review

(Step 2): Collect the expert reviewers reviews

  1. Attach the Expert Review label when your PR is ready for review.
  2. GitHub auto-assigns expert reviewers based on your changes. They will get notified and pick up your PR soon.

⚠️ Only proceed to the next step once all reviewers have approved, merge conflicts are resolved, and the CI is passing.
Final Review might get declined if these requirements are not fulfilled.

(Step 3): Final Review

  1. Add Final Review label
  2. GitHub auto-assigns final reviewers based on your changes. They will get notified and pick up your PR soon.

(Optional Step 4): Cherry-pick into release branch

If this PR also needs to be merged into core_r* release branches, after this PR has been merged, select Cherry-pick to open a new PR into the release branch.

For MRs into `dev` branch

The proposed review process for the `dev` branch is under active discussion.

MRs are mergeable after one approval by either [email protected] or [email protected].

Merging your PR

Any member of core-adlr and core-nemo will be able to merge your PR.

@HaochenYuan HaochenYuan requested review from a team as code owners November 4, 2025 03:00
@copy-pr-bot

copy-pr-bot bot commented Nov 4, 2025

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@HaochenYuan HaochenYuan added the dev branch and module: moe labels Nov 4, 2025
@Victarry
Contributor

Victarry commented Nov 4, 2025

Could you please add a UT to megatron-lm/tests/unit_tests/transformer/moe/test_routers.py and megatron-lm/tests/unit_tests/transformer/moe/test_aux_loss.py?
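
A minimal, self-contained sketch of the kind of check such a test could make (the toy_router below is a stand-in for illustration, not megatron-lm's actual router):

```python
import torch


def toy_router(hidden, weight):
    """Stand-in router for illustration: linear projection + softmax over experts."""
    return torch.softmax(hidden @ weight, dim=-1)


def test_router_probs_unaffected_by_padding_tokens():
    torch.manual_seed(0)
    weight = torch.randn(16, 4)                            # [hidden_size, num_experts]
    hidden = torch.randn(8, 16)                            # [num_tokens, hidden_size]
    padding_mask = torch.tensor([True] * 6 + [False] * 2)  # True = valid token

    probs_all = toy_router(hidden, weight)
    probs_valid_only = toy_router(hidden[padding_mask], weight)

    # Per-token probs should be identical whether padding tokens are present
    # or stripped beforehand; only the aux loss should ignore them.
    torch.testing.assert_close(probs_all[padding_mask], probs_valid_only)
```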

@HaochenYuan
Author

Could you please add a UT to megatron-lm/tests/unit_tests/transformer/moe/test_routers.py and megatron-lm/tests/unit_tests/transformer/moe/test_aux_loss.py?

Done

@Victarry
Contributor

Victarry commented Nov 4, 2025

/ok to test 96b0d00

@Victarry Victarry added this to the Core 0.16 milestone Nov 4, 2025
@yanring
Contributor

yanring commented Nov 5, 2025

Please submit a mirror PR to main as well

@HaochenYuan
Author

Please submit a mirror PR to main as well

Done

@HaochenYuan HaochenYuan force-pushed the dev branch 2 times, most recently from 1a58ae5 to a389f33 on November 6, 2025 10:21
@BestJuly
Contributor

BestJuly commented Nov 7, 2025

/ok to test 7a34303

@HaochenYuan
Author

/ok to test f1b4e84

Comment on lines 445 to +448
*,
inference_params: Optional[BaseInferenceContext] = None,
loss_mask: Optional[Tensor] = None,
padding_mask: Optional[Tensor] = None,
Contributor

TODO: add comments about the meaning of values in padding_mask
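
One way the TODO might be addressed is a short docstring entry such as the following (a sketch only, assuming the True-for-valid convention documented elsewhere in this PR):

```python
# Sketch of a possible docstring entry for the new keyword argument.
"""
padding_mask (torch.Tensor, optional): Boolean mask of shape [num_tokens].
    True marks a valid (non-padding) token; False marks a padding token,
    which is excluded from the MoE aux-loss computation. Defaults to None,
    meaning all tokens are treated as valid.
"""
```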

)

# Verify that probs for valid tokens are similar
torch.testing.assert_close(probs_valid_part, probs_without_mask, rtol=1e-3, atol=1e-3)
Contributor

check with torch.equal
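
If the masked and unmasked paths are expected to produce bit-identical probabilities for valid tokens, the tolerance-based check could be tightened to an exact comparison, e.g. (variable names taken from the quoted test):

```python
# Exact comparison, per the reviewer's suggestion; only appropriate if the two
# code paths are expected to be bit-identical rather than merely close.
assert torch.equal(probs_valid_part, probs_without_mask)
```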

Comment on lines +162 to +164
padding_mask (torch.Tensor, optional): Boolean mask indicating non-padding tokens.
Shape in [num_tokens]. True for valid tokens,
False for padding tokens. Defaults to None.
Contributor

About the convention of using True or False to mark the padded token,
could you take a look at other frameworks like PyTorch and transformers for the typical choice?

I see the attention mask uses False for valid attention in TransformerEngine:
https://github.com/NVIDIA/TransformerEngine/tree/main?tab=readme-ov-file#v17-padding-mask-definition-for-pytorch
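
For reference, the two conventions differ only by a logical negation; a minimal sketch (illustrative only, not code from this PR):

```python
import torch

# This PR's convention: True marks a valid (non-padding) token.
padding_mask = torch.tensor([True, True, True, False, False])

# TransformerEngine-style padding mask: True marks a position to mask out
# (i.e., padding), so converting between the two is a single negation.
te_style_mask = ~padding_mask
```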

Comment on lines +664 to +696
if self.is_moe_layer:
# For MoE with checkpointing, we need a wrapper to pass padding_mask
def mlp_forward_with_padding(hidden_states):
return self.mlp(hidden_states, padding_mask=padding_mask_for_moe)

mlp_output_with_bias = te_checkpoint(
mlp_forward_with_padding,
False,
tensor_parallel.random.get_cuda_rng_tracker,
self.pg_collection.tp,
pre_mlp_layernorm_output,
)
else:
mlp_output_with_bias = te_checkpoint(
self.mlp,
False,
tensor_parallel.random.get_cuda_rng_tracker,
self.pg_collection.tp,
pre_mlp_layernorm_output,
)
else:
mlp_output_with_bias = tensor_parallel.checkpoint(
self.mlp, False, pre_mlp_layernorm_output
)
if self.is_moe_layer:
# For MoE with checkpointing, we need a wrapper to pass padding_mask
def mlp_forward_with_padding(hidden_states):
return self.mlp(hidden_states, padding_mask=padding_mask_for_moe)

mlp_output_with_bias = tensor_parallel.checkpoint(
mlp_forward_with_padding, False, pre_mlp_layernorm_output
)
else:
mlp_output_with_bias = tensor_parallel.checkpoint(
self.mlp, False, pre_mlp_layernorm_output
)
Contributor

Seems a little duplicated.
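
One possible way to reduce the duplication, sketched under the assumption that the surrounding control flow stays as in the diff (the use_te_checkpoint flag below is a stand-in for the original outer condition):

```python
# Sketch only: build the MLP callable once, then pick the checkpointing path.
if self.is_moe_layer:
    # Wrapper so padding_mask still reaches the MoE layer through checkpointing.
    def mlp_forward(hidden_states):
        return self.mlp(hidden_states, padding_mask=padding_mask_for_moe)
else:
    mlp_forward = self.mlp

if use_te_checkpoint:  # stand-in for the original outer if/else condition
    mlp_output_with_bias = te_checkpoint(
        mlp_forward,
        False,
        tensor_parallel.random.get_cuda_rng_tracker,
        self.pg_collection.tp,
        pre_mlp_layernorm_output,
    )
else:
    mlp_output_with_bias = tensor_parallel.checkpoint(
        mlp_forward, False, pre_mlp_layernorm_output
    )
```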
