[MoE] Improvement of shared expert overlap, support shared expert overlap for FlexDispatcher #2207
base: main
Conversation
1. Add shared expert overlap for FlexDispatcher.
2. Add a stream wait for cases where CUDA_DEVICE_MAX_CONNECTIONS > 1, to prevent the shared expert GEMM from being launched too early.
3. Change the fc1 location of the shared experts in the A2A dispatcher for better overlap.
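For context, here is a minimal sketch of the stream-wait pattern described in item 2. The names `shared_expert` and `hidden_states` are illustrative stand-ins, not the PR's actual code:

```python
import torch

# Hypothetical stand-ins for the shared expert MLP and the input tokens.
shared_expert = torch.nn.Linear(1024, 1024).cuda()
hidden_states = torch.randn(4096, 1024, device="cuda")

side_stream = torch.cuda.Stream()  # side stream for the shared expert GEMM

# ... the dispatch all-to-all would be enqueued on the current stream here ...

# With CUDA_DEVICE_MAX_CONNECTIONS > 1 the GPU can pull work from several
# streams concurrently, so a GEMM queued on the side stream could start
# before the dispatch kernels. wait_stream() orders the side stream after
# everything already enqueued on the current stream, without blocking the host.
side_stream.wait_stream(torch.cuda.current_stream())
with torch.cuda.stream(side_stream):
    shared_output = shared_expert(hidden_states)

# Re-synchronize before the default stream consumes shared_output.
torch.cuda.current_stream().wait_stream(side_stream)
```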
@fanshiqing can you please take a look at this MR?
```diff
-        if self.stream is None:
-            self.stream = torch.cuda.Stream()
+        if SharedExpertMLP.stream is None:
```
Yikes, isn't this a ClassVar now? Will we never need 2 different streams for the same class?
Yeah, it's intended. Two shared experts will never overlap with each other.
Moreover, if we created a new stream for each instance, PyTorch might run out of streams in its stream pool and reuse existing streams. This could cause interference and unwanted behavior.
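A minimal sketch of the class-level stream pattern being discussed; the forward body is illustrative only:

```python
import torch

class SharedExpertMLP(torch.nn.Module):
    # Class attribute: one stream shared by every instance. torch.cuda.Stream()
    # draws from a fixed-size per-device pool, so creating one stream per
    # instance can hand the same underlying CUDA stream to different
    # instances, which is the interference the reply above warns about.
    stream = None

    def forward(self, hidden_states):
        if SharedExpertMLP.stream is None:
            SharedExpertMLP.stream = torch.cuda.Stream()
        # Order the side stream after work already queued on the current stream.
        SharedExpertMLP.stream.wait_stream(torch.cuda.current_stream())
        with torch.cuda.stream(SharedExpertMLP.stream):
            output = hidden_states * 2  # placeholder for the shared expert GEMMs
        # Re-synchronize before the default stream consumes the output.
        torch.cuda.current_stream().wait_stream(SharedExpertMLP.stream)
        return output
```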
/ok to test 3e308ac
```python
            group=group,
        )
    if use_nccl_stream:
        handle = torch.distributed.all_to_all_single(
```
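For reference, a hypothetical sketch of how an asynchronous `all_to_all_single` returns a handle that allows communication to overlap with compute; tensor names and sizes are illustrative, not the PR's code:

```python
import torch
import torch.distributed as dist

# Assumes a NCCL process group has already been initialized.
world_size = dist.get_world_size()
inp = torch.randn(world_size * 8, 1024, device="cuda")
out = torch.empty_like(inp)

# async_op=True enqueues the NCCL kernel and returns a handle immediately,
# so independent work (e.g. the shared expert GEMM) can run while the
# tokens are in flight.
handle = dist.all_to_all_single(out, inp, async_op=True)
# ... overlapped compute goes here ...
handle.wait()  # block until the exchanged tokens are actually needed
```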
What is the difference between the if and else paths?
What does this PR do?
Design doc: https://docs.google.com/document/d/1whtnUiw1hpfdkjFss_g5P8fyIT9xBA5XJvmdklFes48/edit?usp=sharing
Changelog:
Contribution process
```mermaid
flowchart LR
    A[Pre-checks] --> B[PR Tests]
    subgraph CR["Code Review/Approval"]
        C1[Expert Review] --> C2[Final Review]
    end
    B --> C1
    C2 --> D[Merge]
```
Pre-checks
Core 0.8)
Code review
The following process is enforced via the CODEOWNERS file for changes into `megatron/core`. For changes outside of `megatron/core`, it is up to the PR author whether or not to tag the Final Reviewer team.
For MRs into `main` branch
(Step 1): Add PR label
Add the `Expert Review` label.
(Step 2): Collect the expert reviewers' reviews
Attach the `Expert Review` label when your PR is ready for review. Final Review might get declined if these requirements are not fulfilled.
(Step 3): Final Review
Attach the `Final Review` label.
(Optional Step 4): Cherry-pick into release branch
If this PR also needs to be merged into `core_r*` release branches, after this PR has been merged, select `Cherry-pick` to open a new PR into the release branch.
For MRs into `dev` branch
The proposed review process for the `dev` branch is under active discussion. MRs are mergeable after one approval by either `[email protected]` or `[email protected]`.
Merging your PR
Any member of `core-adlr` and `core-nemo` will be able to merge your PR.