
Conversation

@zhuzilin
Collaborator

Motivation

This PR offloads the Mamba pool within the hybrid model so that we can correctly do RL on models like Qwen3Next.

Thank you for your time reviewing this PR :)

Modifications

Accuracy Tests

Benchmarking and Profiling

Checklist

@gemini-code-assist
Contributor

Summary of Changes

Hello @zhuzilin, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request focuses on enhancing memory management for hybrid linear attention models, particularly to facilitate Reinforcement Learning (RL) with models such as Qwen3Next. It achieves this by integrating a memory saver mechanism directly into the Mamba pool's initialization process and ensuring that memory saving configurations are consistently applied throughout the memory allocation components.
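The pattern described above can be sketched as follows. This is a minimal, self-contained illustration, not the actual SGLang code: `MemorySaverAdapter`, `region()`, and `MambaPool` here are hypothetical stand-ins for the torch-memory-saver integration the PR describes, where buffers allocated inside a saver region can later have their device memory released and restored.

```python
from contextlib import contextmanager

class MemorySaverAdapter:
    """Hypothetical stand-in for a torch-memory-saver style adapter.

    In the real integration, CUDA allocations made inside `region()` go
    through a pausable allocator; here we simply record the buffers so the
    control flow is visible.
    """
    def __init__(self, enable: bool):
        self.enable = enable
        self.tracked = []

    @contextmanager
    def region(self):
        # Real adapter: redirect allocations to an offloadable pool.
        # Sketch: just a no-op scope around the allocation.
        yield self

    def track(self, buf):
        if self.enable:
            self.tracked.append(buf)
        return buf

class MambaPool:
    def __init__(self, size: int, memory_saver: MemorySaverAdapter):
        # Allocate the Mamba state buffers inside the memory-saver region
        # so they can be offloaded between RL rollout and training phases.
        with memory_saver.region():
            self.states = memory_saver.track([0.0] * size)

saver = MemorySaverAdapter(enable=True)
pool = MambaPool(size=4, memory_saver=saver)
print(len(saver.tracked))  # → 1
```

The point of the sketch is only that the pool's allocation happens inside the saver's scope, which is what makes the buffers eligible for offloading later.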

Highlights

  • Mamba Pool Memory Offloading: The initialization of the Mamba pool is now explicitly managed by a memory saver adapter, allowing for more efficient memory utilization, especially for hybrid models.
  • Memory Saver Propagation: The enable_memory_saver flag is now correctly passed down to the ReqToTokenPool and its underlying components (TokenToKVPool, MLATokenToKVPool), ensuring consistent application of memory saving configurations across the memory management system.
  • Server Argument Integration: The enable_memory_saver setting from the server arguments is now utilized during the memory pool initialization in the model runner, providing dynamic control over memory saving features based on server configuration.
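The flag propagation described in the highlights can be sketched like this. The class and parameter names below mirror the ones mentioned above, but the bodies are illustrative only, assuming the flag is threaded explicitly through constructors rather than read from global state:

```python
class TokenToKVPool:
    def __init__(self, size: int, enable_memory_saver: bool):
        self.enable_memory_saver = enable_memory_saver

class ReqToTokenPool:
    def __init__(self, size: int, enable_memory_saver: bool):
        # Forward the server-arg flag to the underlying KV pool so every
        # layer of the memory system agrees on whether offloading is active.
        self.enable_memory_saver = enable_memory_saver
        self.kv_pool = TokenToKVPool(size, enable_memory_saver=enable_memory_saver)

# In the model runner, the value would come from server_args.enable_memory_saver.
pool = ReqToTokenPool(size=8, enable_memory_saver=True)
print(pool.kv_pool.enable_memory_saver)  # → True
```

Passing the flag down explicitly keeps the configuration consistent: a sub-pool can never silently disagree with the setting the server was launched with.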


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request correctly enables memory offloading for hybrid linear attention models. The changes are well-targeted and effectively integrate torch-memory-saver for both the Mamba pool and the full attention KV pool within the hybrid model. By wrapping the Mamba pool allocation in a memory saver region and propagating the enable_memory_saver flag to the HybridLinearKVPool, the PR ensures that memory-intensive components can be offloaded, which is crucial for running large models in memory-constrained environments like reinforcement learning setups. The implementation is clean and follows existing patterns in the codebase. I have no further comments.
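Why this matters for RL setups, sketched in miniature: a pausable pool lets the runtime release the inference state's device memory during the training phase and re-materialize it before the next rollout. The `pause`/`resume` names below are hypothetical, modeled on the offload behavior the review describes:

```python
class PausablePool:
    """Toy model of an offloadable KV/Mamba pool in an RL loop."""
    def __init__(self):
        self.resident = True  # state currently occupies device memory

    def pause(self):
        # Real system: release the CUDA memory backing the pool.
        self.resident = False

    def resume(self):
        # Real system: re-allocate and restore the pool's buffers.
        self.resident = True

pool = PausablePool()
pool.pause()   # free GPU memory so the trainer can use it
pool.resume()  # bring the pool back for the next rollout
print(pool.resident)  # → True
```

Without wrapping both the Mamba pool and the full-attention KV pool this way, the unoffloadable component would dominate GPU memory and defeat the purpose of pausing the rest.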

@fzyzcjy fzyzcjy self-assigned this Nov 15, 2025
@fzyzcjy
Collaborator

fzyzcjy commented Nov 16, 2025

maybe need to fix this

image

@fzyzcjy fzyzcjy merged commit 9509c4c into sgl-project:main Nov 16, 2025
91 of 120 checks passed
