Explicitly zero out padding token activations for dynamic inference #2008
base: main
Conversation
Signed-off-by: Keshav Santhanam <[email protected]>
}
"0": {
    "input_prompt": "Time travel to 2008, and go to a bar or a club or one of the myriad disco-basements on the Lower East Side that does not quite know which of those it is. Dance awkwardly in a room full of other glittered-up nerds, and wait for something to happen, buoyed on the feeling that this is the big swollen heart of life, that this is New York like the movies.",
    "generated_text": " And that this is the place where you can be yourself, and be yourself in the most beautiful way possible. And that this is the place where you",
Why is this changing?
Sorry, I missed this. The outputs were previously incorrect due to the influence of the padding tokens.
deepakn94 left a comment
Seems ok to me, but why are we zeroing out outputs in three specific places (decoder_input in GPTModel and MambaModel, and core_attn_out in Attention)?
These are the places where the hidden states for padding tokens enter as zero but exit as nonzero. This is problematic because these non-zero padding values can corrupt amax calculations.
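For illustration, here is a minimal PyTorch sketch of the kind of masking being described: zeroing activations at padding positions so they cannot feed into downstream amax statistics. The function name and tensor layout are hypothetical, not the PR's actual code.

```python
import torch

def zero_out_padding(activations: torch.Tensor, padding_mask: torch.Tensor) -> torch.Tensor:
    """Zero activations at padding positions (illustrative sketch).

    activations:  [num_tokens, hidden_size]  (assumed layout, not the PR's).
    padding_mask: [num_tokens] bool tensor, True where the token is padding.
    """
    # Broadcast the mask over the hidden dimension and overwrite padded rows
    # with zeros so they cannot contribute to amax-based scaling factors.
    return activations.masked_fill(padding_mask.unsqueeze(-1), 0.0)
```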
Signed-off-by: Keshav Santhanam <[email protected]>
/ok to test 39794d9
JRD971000 left a comment
Discussed over Slack, LGTM, thanks!
What does this PR do?
Explicitly zeroes out padding token activations for dynamic inference. This is necessary to ensure that padding tokens do not influence quantization scaling factors.
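As a rough illustration of why this matters (toy numbers, not taken from the PR): amax-based quantization scales are derived from the maximum absolute activation value, so stale nonzero values sitting in padding slots can inflate the scale and cost precision for the real tokens.

```python
import torch

# Toy activations for a padded batch: two real tokens plus two padding slots
# holding stale nonzero values (all numbers are made up for illustration).
acts = torch.tensor([[0.50, -0.25],   # real token
                     [0.75,  0.10],   # real token
                     [3.00, -4.00],   # padding slot with stale values
                     [2.50,  5.00]])  # padding slot with stale values
padding_mask = torch.tensor([False, False, True, True])

# amax computed over everything, padding included, inflates the scale.
amax_with_padding = acts.abs().max()                              # 5.00

# Zeroing the padded rows first ties the scale to real activations only.
acts_zeroed = acts.masked_fill(padding_mask.unsqueeze(-1), 0.0)
amax_without_padding = acts_zeroed.abs().max()                    # 0.75

print(amax_with_padding.item(), amax_without_padding.item())
```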
Contribution process
Pre-checks → PR Tests → Code Review/Approval (Expert Review → Final Review) → Merge

Pre-checks

Code review
The following process is enforced via the CODEOWNERS file for changes into megatron/core. For changes outside of megatron/core, it is up to the PR author whether or not to tag the Final Reviewer team.

For MRs into `main` branch
(Step 1): Add the Expert Review PR label
(Step 2): Collect the expert reviewers' reviews
Attach the Expert Review label when your PR is ready for review. Final Review might get declined if these requirements are not fulfilled.
(Step 3): Final Review
Add the Final Review label.
(Optional Step 4): Cherry-pick into release branch
If this PR also needs to be merged into core_r* release branches, after this PR has been merged, select Cherry-pick to open a new PR into the release branch.

For MRs into `dev` branch
The proposed review process for the `dev` branch is under active discussion. MRs are mergeable after one approval by either [email protected] or [email protected].

Merging your PR
Any member of core-adlr and core-nemo will be able to merge your PR.