[Deepseek] Router collapse on deepseek training loop #1246

@xuanzhang816

Description

I noticed that as training progresses, most tokens are sometimes routed to the same expert. It would be worthwhile to add expert bias handling to avoid this situation.

For instance, on a smaller setup with topk=1, bs=1, seqLen=8, if I print out the topk_idx returned by the router, I see patterns like:

```
[XXX] layer:  1 topk_idx: [30, 30, 30, 30, 30, 30, 30, 30]
[XXX] layer:  2 topk_idx: [19, 19, 19, 19, 19, 19, 19, 19]
[XXX] layer:  3 topk_idx: [9, 9, 9, 9, 9, 9, 9, 9]
[XXX] layer:  4 topk_idx: [43, 43, 43, 43, 43, 43, 43, 43]
[XXX] layer:  5 topk_idx: [56, 56, 56, 56, 56, 56, 56, 56]
[XXX] layer:  6 topk_idx: [18, 18, 18, 18, 18, 18, 18, 18]
[XXX] layer:  7 topk_idx: [33, 33, 33, 33, 33, 33, 33, 33]
```
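For reference, a minimal sketch of what expert bias handling could look like, along the lines of the auxiliary-loss-free load balancing in the DeepSeek-V3 paper: a per-expert bias is added to the router scores for top-k selection only (the gating weights still use the unbiased scores), and the bias is nudged after each step toward the under-loaded experts. The `BiasedRouter` class, its method names, and the `bias_update_speed` value are illustrative assumptions, not torchtitan's existing API:

```python
import torch

class BiasedRouter(torch.nn.Module):
    def __init__(self, dim: int, num_experts: int, top_k: int,
                 bias_update_speed: float = 1e-3):
        super().__init__()
        self.gate = torch.nn.Linear(dim, num_experts, bias=False)
        self.top_k = top_k
        self.bias_update_speed = bias_update_speed
        # Per-expert bias, updated manually each step (not by the optimizer).
        self.register_buffer("expert_bias", torch.zeros(num_experts))

    def forward(self, x: torch.Tensor):
        # x: [num_tokens, dim]
        scores = self.gate(x).sigmoid()                    # [tokens, experts]
        # The bias influences *which* experts are selected...
        _, topk_idx = (scores + self.expert_bias).topk(self.top_k, dim=-1)
        # ...but the gating weights still come from the unbiased scores.
        topk_scores = scores.gather(-1, topk_idx)
        if self.training:
            self._update_bias(topk_idx)
        return topk_idx, topk_scores

    @torch.no_grad()
    def _update_bias(self, topk_idx: torch.Tensor):
        # Tokens received by each expert in this step.
        load = torch.bincount(
            topk_idx.flatten(), minlength=self.expert_bias.numel()
        ).float()
        # Lower the bias of overloaded experts and raise it for underloaded
        # ones, so a collapsed router (all tokens -> one expert) self-corrects.
        self.expert_bias += self.bias_update_speed * (load.mean() - load).sign()
```

In the collapsed example above, the per-layer load would be maximally skewed, so the hot expert's bias would be pushed down every step until the top-k selection spreads out again.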

cc: @lessw2020

Metadata

Labels: enhancement (New feature or request)
