[Dev] Remove calculation of padding token in moe routing loss #2121
base: dev
Conversation
Could you please add a UT to the

Done

/ok to test 96b0d00

Please submit a mirror PR to main as well

Done

Force-pushed from 1a58ae5 to a389f33

/ok to test 7a34303

/ok to test f1b4e84
    *,
    inference_params: Optional[BaseInferenceContext] = None,
    loss_mask: Optional[Tensor] = None,
    padding_mask: Optional[Tensor] = None,
TODO: add comments about the meaning of values in padding_mask
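For illustration only (not part of the PR), a mask with the documented meaning could be derived from the input token IDs; `input_ids` and `pad_token_id` below are assumed names:

```python
import torch

# Hypothetical example of building a padding_mask with the convention used in this PR:
# True marks a real token, False marks a padding token.
pad_token_id = 0                                   # assumed pad id
input_ids = torch.tensor([101, 2054, 2003, 0, 0])  # made-up token ids, last two are padding
padding_mask = input_ids != pad_token_id           # tensor([True, True, True, False, False])
```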
    # Verify that probs for valid tokens are similar
    torch.testing.assert_close(probs_valid_part, probs_without_mask, rtol=1e-3, atol=1e-3)
Check with torch.equal instead.
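A minimal sketch of this suggestion, reusing the tensor names from the snippet above; whether exact element-wise equality actually holds depends on the implementation, so this is only illustrative:

```python
# Exact element-wise comparison instead of tolerance-based assert_close
assert torch.equal(probs_valid_part, probs_without_mask)
```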
    padding_mask (torch.Tensor, optional): Boolean mask indicating non-padding tokens.
        Shape in [num_tokens]. True for valid tokens,
        False for padding tokens. Defaults to None.
About the convention of using True or False to mark the padded token, could you take a look at other frameworks like PyTorch and transformers for the typical choice? I see the attention mask uses False for valid attention in TransformerEngine:
https://github.com/NVIDIA/TransformerEngine/tree/main?tab=readme-ov-file#v17-padding-mask-definition-for-pytorch
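For comparison, a sketch of the conventions in the libraries mentioned above (values are made up; only the True/False semantics matter):

```python
import torch

# PyTorch nn.MultiheadAttention: key_padding_mask uses True for positions that ARE padding.
key_padding_mask = torch.tensor([[False, False, True]])  # last position is padding

# Hugging Face transformers: attention_mask uses 1 for valid tokens and 0 for padding.
attention_mask = torch.tensor([[1, 1, 0]])

# This PR's padding_mask: True for valid tokens, False for padding
# (i.e., the logical inverse of PyTorch's key_padding_mask convention).
padding_mask = torch.tensor([True, True, False])
```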
        if self.is_moe_layer:
            # For MoE with checkpointing, we need a wrapper to pass padding_mask
            def mlp_forward_with_padding(hidden_states):
                return self.mlp(hidden_states, padding_mask=padding_mask_for_moe)

            mlp_output_with_bias = te_checkpoint(
                mlp_forward_with_padding,
                False,
                tensor_parallel.random.get_cuda_rng_tracker,
                self.pg_collection.tp,
                pre_mlp_layernorm_output,
            )
        else:
            mlp_output_with_bias = te_checkpoint(
                self.mlp,
                False,
                tensor_parallel.random.get_cuda_rng_tracker,
                self.pg_collection.tp,
                pre_mlp_layernorm_output,
            )
    else:
        if self.is_moe_layer:
            # For MoE with checkpointing, we need a wrapper to pass padding_mask
            def mlp_forward_with_padding(hidden_states):
                return self.mlp(hidden_states, padding_mask=padding_mask_for_moe)

            mlp_output_with_bias = tensor_parallel.checkpoint(
                mlp_forward_with_padding, False, pre_mlp_layernorm_output
            )
        else:
            mlp_output_with_bias = tensor_parallel.checkpoint(
                self.mlp, False, pre_mlp_layernorm_output
            )
The MoE wrapper handling seems a little duplicated between the two checkpoint paths.
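One possible way to reduce the duplication, shown only as a sketch (not the PR's actual change): pick the MLP callable once, then branch on the checkpoint implementation. `use_te_checkpoint` is an assumed flag standing in for the surrounding condition.

```python
# Choose the MLP callable once; the wrapper forwards padding_mask through the checkpoint API.
if self.is_moe_layer:
    def mlp_forward(hidden_states):
        return self.mlp(hidden_states, padding_mask=padding_mask_for_moe)
else:
    mlp_forward = self.mlp

if use_te_checkpoint:  # assumed flag standing in for the original branch condition
    mlp_output_with_bias = te_checkpoint(
        mlp_forward,
        False,
        tensor_parallel.random.get_cuda_rng_tracker,
        self.pg_collection.tp,
        pre_mlp_layernorm_output,
    )
else:
    mlp_output_with_bias = tensor_parallel.checkpoint(
        mlp_forward, False, pre_mlp_layernorm_output
    )
```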
What does this PR do?
Related issue: #1982
This PR removes padding tokens from the calculation of the MoE auxiliary (routing) loss.
PR to the main branch: #2142
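As a rough illustration of the idea (not the PR's actual implementation), padding tokens can be excluded from the router statistics before the load-balancing auxiliary loss is computed; the function name and exact loss form below are assumptions:

```python
import torch

def load_balancing_loss(probs, routing_map, padding_mask=None, aux_loss_coeff=1e-2):
    """Switch-style aux loss over non-padding tokens only (illustrative sketch).

    probs:        [num_tokens, num_experts] router probabilities after softmax
    routing_map:  [num_tokens, num_experts] boolean top-k expert selection
    padding_mask: [num_tokens] True for valid tokens, False for padding
    """
    if padding_mask is not None:
        probs = probs[padding_mask]              # drop padded tokens from the probabilities
        routing_map = routing_map[padding_mask]  # and from the per-expert token counts
    num_tokens, num_experts = probs.shape
    topk = routing_map.sum(dim=-1).float().mean()            # average experts chosen per token
    frac_tokens_per_expert = routing_map.float().sum(dim=0) / (num_tokens * topk)
    mean_prob_per_expert = probs.mean(dim=0)
    loss = num_experts * torch.sum(mean_prob_per_expert * frac_tokens_per_expert)
    return aux_loss_coeff * loss
```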
Contribution process
flowchart LR
    A[Pre-checks] --> B[PR Tests]
    subgraph Code Review/Approval
        C1[Expert Review] --> C2[Final Review]
    end
    B --> C1
    C2 --> D[Merge]

Pre-checks

Code review
The following process is enforced via the CODEOWNERS file for changes into megatron/core. For changes outside of megatron/core, it is up to the PR author whether or not to tag the Final Reviewer team.

For MRs into `main` branch
(Step 1): Add the Expert Review PR label
(Step 2): Collect the expert reviewers' reviews
Add the Expert Review label when your PR is ready for review. Final Review might get declined if these requirements are not fulfilled.
(Step 3): Final Review
Add the Final Review label.
(Optional Step 4): Cherry-pick into release branch
If this PR also needs to be merged into core_r* release branches, select Cherry-pick after this PR has been merged to open a new PR into the release branch.

For MRs into `dev` branch
The proposed review process for the `dev` branch is under active discussion. MRs are mergeable after one approval by either [email protected] or [email protected].

Merging your PR
Any member of core-adlr and core-nemo will be able to merge your PR.