
Add Cache For XGrammar Backend #933


Closed
wants to merge 4 commits

Conversation

flyinglandlord (Collaborator)

No description provided.


@gemini-code-assist (bot) left a comment


Summary of Changes

Hello @flyinglandlord, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly optimizes the XGrammarBackend by implementing a caching mechanism for compiled grammars and JSON schemas. It also introduces a new Triton-based kernel for highly efficient, batch-level application of token bitmasks, leading to improved performance for grammar-guided generation.

Highlights

  • Caching XGrammar Grammars: Introduced a caching mechanism within XgrammarBackend to store and reuse compiled xgrammar grammars and JSON schemas, reducing redundant compilation overhead for repeated requests (a cache sketch follows this list).
  • XGrammarGrammar Wrapper: Created a new XGrammarGrammar class (lines 33-129) to encapsulate xgrammar.GrammarMatcher logic and state, providing a consistent interface for managing grammar state, token acceptance, and facilitating the new caching strategy.
  • Optimized Vocab Masking: Transitioned from per-request bitmask allocation to a batch-level vocab_mask (lines 146, 279-282), which is now allocated once per batch and filled row by row for each request, enabling more efficient processing (see the mask-flow sketch after this list).
  • Triton-Accelerated Bitmask Application: Added a new Triton kernel (apply_token_bitmask_inplace_triton in bit_mask_ops.py) to efficiently apply the batch-level token bitmask to logits on CUDA devices (lines 187, 233), significantly speeding up the masking process (a kernel sketch follows this list).
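
For illustration, a minimal sketch of a compile cache of this shape. `xgr.GrammarCompiler`, `compile_grammar`, and `compile_json_schema` are real xgrammar APIs, but the class name, keying, and eviction policy below are assumptions, not this PR's exact code (note that the review below flags a bug in the actual JSON schema cache key):

```python
import xgrammar as xgr


class CachedGrammarCompiler:
    """Hypothetical wrapper: memoizes compiled grammars keyed by their source text."""

    def __init__(self, compiler: xgr.GrammarCompiler, max_size: int = 128):
        self.compiler = compiler
        self.max_size = max_size
        self._cache: dict[str, xgr.CompiledGrammar] = {}

    def _get(self, key: str, compile_fn) -> xgr.CompiledGrammar:
        if key not in self._cache:
            if len(self._cache) >= self.max_size:
                # naive FIFO eviction; the PR's policy may differ
                self._cache.pop(next(iter(self._cache)))
            self._cache[key] = compile_fn()
        return self._cache[key]

    def compile_grammar(self, ebnf: str) -> xgr.CompiledGrammar:
        return self._get("grammar:" + ebnf, lambda: self.compiler.compile_grammar(ebnf))

    def compile_json_schema(self, schema: str) -> xgr.CompiledGrammar:
        # key on the schema string itself so distinct schemas never collide
        return self._get("json:" + schema, lambda: self.compiler.compile_json_schema(schema))
```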
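The batch-level mask flow can be sketched roughly as follows. `allocate_token_bitmask`, `fill_next_token_bitmask`, and `apply_token_bitmask_inplace` are xgrammar's public helpers; the request and attribute names (`run_reqs`, `sampling_param.xgrammar_matcher`) are taken from the diffs in this review, and the wrapper function itself is hypothetical:

```python
import torch
import xgrammar as xgr


def apply_batch_vocab_mask(run_reqs, logits: torch.Tensor, vocab_size: int) -> None:
    # Allocated once per batch: int32 tensor of shape
    # (batch_size, ceil(vocab_size / 32)), one packed bitmask row per request.
    vocab_mask = xgr.allocate_token_bitmask(len(run_reqs), vocab_size)

    for i, run_obj in enumerate(run_reqs):
        sp = run_obj.sampling_param
        if sp.guided_grammar is not None or sp.guided_json is not None:
            # Each matcher fills only its own row of the shared bitmask.
            sp.xgrammar_matcher.fill_next_token_bitmask(vocab_mask, index=i)

    # One batched in-place application instead of a copy per request.
    xgr.apply_token_bitmask_inplace(logits, vocab_mask.to(logits.device))
```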
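The kernel body itself is not shown on this page, but the technique is standard: each int32 word of the bitmask packs 32 tokens, a cleared bit means the token is banned, and banned logits are set to -inf in place. A minimal sketch of such a kernel, assuming contiguous tensors; this is not the PR's bit_mask_ops.py implementation:

```python
import torch
import triton
import triton.language as tl


@triton.jit
def _apply_bitmask_kernel(logits_ptr, bitmask_ptr, vocab_size, bitmask_words, BLOCK: tl.constexpr):
    pid_b = tl.program_id(0)  # batch row
    pid_v = tl.program_id(1)  # block of vocab positions
    offs = pid_v * BLOCK + tl.arange(0, BLOCK)
    in_vocab = offs < vocab_size

    # Each int32 packs 32 tokens; bit == 0 means "banned".
    word = tl.load(bitmask_ptr + pid_b * bitmask_words + offs // 32, mask=in_vocab, other=0)
    allowed = (word >> (offs % 32)) & 1

    logit = tl.load(logits_ptr + pid_b * vocab_size + offs, mask=in_vocab, other=0.0)
    logit = tl.where(allowed == 0, float("-inf"), logit)
    tl.store(logits_ptr + pid_b * vocab_size + offs, logit, mask=in_vocab)


def apply_token_bitmask_inplace_triton(logits: torch.Tensor, bitmask: torch.Tensor) -> None:
    batch, vocab = logits.shape
    BLOCK = 1024
    grid = (batch, triton.cdiv(vocab, BLOCK))
    _apply_bitmask_kernel[grid](logits, bitmask, vocab, bitmask.shape[1], BLOCK=BLOCK)
```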
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e. a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.

  • Code Review (/gemini review): Performs a code review for the current pull request in its current state.
  • Pull Request Summary (/gemini summary): Provides a summary of the current pull request in its current state.
  • Comment (@gemini-code-assist): Responds in comments when explicitly tagged, both in issue comments and review comments.
  • Help (/gemini help): Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, and Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check it and use code with caution.


@gemini-code-assist (bot) left a comment


Code Review

This pull request introduces a caching mechanism for compiled XGrammar objects and a new XGrammarGrammar class. Key areas for review include a bug in the JSON schema cache key, the handling of CPU-side vocabulary mask application, and opportunities for code deduplication and adding docstrings.

Comment on lines 176 to 184
if not all_has_no_constraint:
    for i, run_obj in enumerate(run_reqs):
        if (
            run_obj.sampling_param.guided_grammar is not None
            or run_obj.sampling_param.guided_json is not None
        ):
            if first_grammar is None:
                first_grammar = run_obj.sampling_param.guided_grammar or run_obj.sampling_param.guided_json
            self._mask_req_out_token(i, run_obj, logits[i])


Severity: medium

The logic for finding first_grammar and iterating through run_reqs is duplicated in the prefill block (lines 219-227). Consider refactoring this into a helper method to reduce redundancy.
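One possible shape for that helper, as a sketch: the method name is hypothetical, while _mask_req_out_token and the sampling_param fields come from this diff. It would live on the backend class and be called from both the decode and prefill paths:

```python
from typing import List, Optional


def _mask_constrained_reqs(self, run_reqs: List["InferReq"], logits) -> Optional[str]:
    """Apply grammar masks for every constrained request; return the first grammar seen."""
    first_grammar = None
    for i, run_obj in enumerate(run_reqs):
        sp = run_obj.sampling_param
        if sp.guided_grammar is not None or sp.guided_json is not None:
            if first_grammar is None:
                first_grammar = sp.guided_grammar or sp.guided_json
            self._mask_req_out_token(i, run_obj, logits[i])
    return first_grammar
```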

@@ -137,22 +270,45 @@ def _mask_req_out_token(self, i, run_obj: InferReq, logits):
     if run_obj.get_chuncked_input_token_len() == run_obj.get_cur_total_len():
         sample_params = run_obj.sampling_param
         if sample_params.guided_grammar is not None or sample_params.guided_json is not None:
-            sample_params.xgrammar_matcher.fill_next_token_bitmask(self.xgrammar_token_bitmask)
-            xgr.apply_token_bitmask_inplace(logits, self.xgrammar_token_bitmask.to(logits.device))
+            sample_params.xgrammar_matcher.fill_vocab_mask(self.vocab_mask, i)
     return

 def _init_req_xgrammer_matcher_infos(self, run_reqs: List[InferReq]):


Severity: medium

Typo: _init_req_xgrammer_matcher_infos should be _init_req_xgrammar_matcher_infos (grammar instead of grammer).

Suggested change
-def _init_req_xgrammer_matcher_infos(self, run_reqs: List[InferReq]):
+def _init_req_xgrammar_matcher_infos(self, run_reqs: List[InferReq]):

MAX_ROLLBACK_TOKENS = 200


class XGrammarGrammar:


Severity: medium

Add docstrings to the XGrammarGrammar class and its methods to improve code readability.
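For example, a docstring along these lines; the wording is inferred from this PR's summary, not from the actual implementation:

```python
class XGrammarGrammar:
    """Wraps an xgrammar.GrammarMatcher together with its per-request state.

    Accepts generated tokens and advances the matcher, keeping up to
    MAX_ROLLBACK_TOKENS of history so decoding can roll back, and fills
    one row of the batch-level vocab mask for the next decoding step.
    """
```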

…l/impl_for_xgrammar_mode.py

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>