Add Agent Evaluation skill for accuracy benchmarking #1132
Conversation
📝 Walkthrough

Adds a new NeMo Evaluator Launcher (NEL) evaluation skill with end-to-end workflow docs, reference guides for model-card research, multi-node deployments, quantization-aware benchmarks, two evaluation test specs, and minor markdownlint configuration updates.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User as User
    participant Skill as "Evaluation Skill (SKILL.md)"
    participant NEL as "nel CLI"
    participant Repo as "Model Repo / HF"
    participant SLURM as SLURM
    participant MLflow as MLflow
    User->>Skill: provide answers (mode, backend, auto-export, model type, benchmarks)
    Skill->>NEL: run `nel skills build-config` (generate base config)
    Skill->>Repo: inspect model card / read `hf_quant_config.json` or `config.json`
    Repo-->>Skill: return model metadata, quantization flags
    Skill->>NEL: inject deployment.extra_args (quant flags), add benchmark recommendations
    User->>Skill: confirm tasks / adjust SLURM & MLflow tags
    Skill->>SLURM: submit via `nel run` (dry-run/test/full)
    Skill->>MLflow: export results (if enabled)
    NEL->>Skill: report status/logs
    Skill->>User: present monitoring instructions and results
```
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~12 minutes

🚥 Pre-merge checks: ✅ Passed checks (4 passed)
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.claude/skills/evaluation/SKILL.md:
- Around line 269-275: Update the SKILL.md snippet that documents
NEMO_EVALUATOR_TRUST_PRE_CMD to include a clear security warning: note that
setting NEMO_EVALUATOR_TRUST_PRE_CMD=1 enables execution of pre_cmd and post_cmd
which run arbitrary shell commands with the evaluator's privileges, instruct
users to review pre_cmd content, only trust configs from known sources, and be
cautious when using nemo_skills.* self-deployment tasks; reference the
environment variable name (NEMO_EVALUATOR_TRUST_PRE_CMD) and the config keys
(pre_cmd, post_cmd, nemo_skills.*) so readers can find and audit them.
- Around line 100-116: Update the documentation in SKILL.md to remove the
incorrect per-algorithm vLLM flag mapping and instead state that if
hf_quant_config.json exists (read quantization.quant_algo), vLLM uses a single
unified flag --quantization modelopt which auto-detects NVFP4, W4A8_AWQ, FP8,
etc.; replace the table and related lines with a concise statement: "If
hf_quant_config.json exists, vLLM auto-detects the quantization format and you
should pass --quantization modelopt (no format-specific flags required)."
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 697ac2e8-f1a5-41d4-9870-adb1ff730091
📒 Files selected for processing (3)

- .claude/skills/evaluation/SKILL.md
- .claude/skills/evaluation/evals/nemotron3-nano-bf16-reasoning.json
- .markdownlint-cli2.yaml
.claude/skills/evaluation/SKILL.md (Outdated)
**Auto-detect ModelOpt quantization format** (checkpoint paths only):

Check for `hf_quant_config.json` in the checkpoint directory:

```bash
cat <checkpoint_path>/hf_quant_config.json 2>/dev/null
```

If found, read `quantization.quant_algo` and set the correct vLLM/SGLang quantization flag in `deployment.extra_args`:

| `quant_algo` | Flag to add |
|-------------|-------------|
| `FP8` | `--quantization modelopt` |
| `W4A8_AWQ` | `--quantization modelopt` |
| `NVFP4`, `NVFP4_AWQ` | `--quantization modelopt_fp4` |

If no `hf_quant_config.json`, the checkpoint is unquantized — no flag needed.
🧩 Analysis chain
🏁 Script executed:
```bash
#!/bin/bash
# Check for existing ModelOpt quantization format references in the codebase
echo "=== Searching for hf_quant_config.json references ==="
rg -n 'hf_quant_config' -C3
echo -e "\n=== Searching for quantization.quant_algo references ==="
rg -n 'quant_algo' -C3
echo -e "\n=== Searching for FP8, W4A8_AWQ, NVFP4 algorithm references ==="
rg -n 'FP8|W4A8_AWQ|NVFP4' -C2
```

Repository: NVIDIA/Model-Optimizer
Length of output: 50379
🌐 Web query:
What are the supported quantization formats in vLLM for ModelOpt quantized models? Specifically, does vLLM support --quantization modelopt and --quantization modelopt_fp4 flags?
💡 Result:
vLLM supports ModelOpt quantized models via the --quantization modelopt flag. This includes formats like FP8 (ModelOpt FP8), NVFP4 (a 4-bit floating point format from NVIDIA Model Optimizer), MXFP8, and mixed precision (e.g., FP8 dense + NVFP4 MoE). There is no specific --quantization modelopt_fp4 flag; FP4 support (NVFP4) is handled under the unified modelopt quantization method, auto-detected from the model's quantization config (e.g., hf_quant_config.json or quantization_config in config.json). For CLI usage, specify --quantization modelopt when loading ModelOpt-quantized checkpoints from Hugging Face, as shown in examples like nvidia/Llama-3.1-8B-Instruct-FP8.
Citations:
- 1: https://docs.vllm.ai/en/latest/features/quantization/
- 2: https://docs.vllm.ai/en/latest/api/vllm/model_executor/layers/quantization/modelopt.html
- 3: https://docs.vllm.ai/en/stable/api/vllm/model_executor/layers/quantization/modelopt/
- 4: https://docs.vllm.ai/en/v0.9.0/features/quantization/modelopt.html
- 5: https://docs.vllm.ai/projects/llm-compressor/en/0.8.0/examples/quantization_w4a4_fp4/
- 6: https://docs.vllm.ai/en/stable/features/quantization/modelopt.html
Incorrect vLLM flag mapping for ModelOpt quantization auto-detection.

The JSON path `quantization.quant_algo` is correct and the algorithm names (FP8, W4A8_AWQ, NVFP4, NVFP4_AWQ) are confirmed in the ModelOpt codebase. However, the proposed flag mapping is incorrect:

- `--quantization modelopt_fp4` does not exist in vLLM. vLLM provides a single unified flag: `--quantization modelopt`, which auto-detects the quantization format from the model's quantization config (either `hf_quant_config.json` or `config.json`'s `quantization_config` field).
- NVFP4 is auto-detected, not mapped to a separate flag. vLLM automatically recognizes NVFP4, W4A8_AWQ, FP8, and other formats when `quant_algo` is present in the quantization config.

Remove the flag mapping table and replace with: "If `hf_quant_config.json` exists, vLLM auto-detects the quantization format and applies `--quantization modelopt` automatically. No additional format-specific flags are needed."
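The corrected detection rule can be sketched as a small helper. This is hypothetical illustration code, not part of NEL or ModelOpt; the function name and return convention are assumptions:

```python
import json
from pathlib import Path


def modelopt_quantization_flags(checkpoint_dir: str) -> list[str]:
    """Return vLLM extra args for a (possibly) ModelOpt-quantized checkpoint.

    Per the review finding: vLLM exposes one unified flag, --quantization
    modelopt, and auto-detects FP8/NVFP4/W4A8_AWQ from the checkpoint's
    quantization config, so no per-format flags are emitted here.
    """
    ckpt = Path(checkpoint_dir)

    # Preferred marker: hf_quant_config.json written by ModelOpt export.
    if (ckpt / "hf_quant_config.json").is_file():
        return ["--quantization", "modelopt"]

    # Fallback marker: config.json may embed a quantization_config section.
    config_path = ckpt / "config.json"
    if config_path.is_file():
        config = json.loads(config_path.read_text())
        quant = config.get("quantization_config", {})
        if quant.get("quant_method") == "modelopt":
            return ["--quantization", "modelopt"]

    # Neither marker found: treat the checkpoint as unquantized.
    return []
```

Whatever value `quant_algo` holds (FP8, NVFP4, W4A8_AWQ, ...), the returned flag is the same; only the presence of a ModelOpt quantization config matters.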
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.claude/skills/evaluation/SKILL.md around lines 100 - 116, Update the
documentation in SKILL.md to remove the incorrect per-algorithm vLLM flag
mapping and instead state that if hf_quant_config.json exists (read
quantization.quant_algo), vLLM uses a single unified flag --quantization
modelopt which auto-detects NVFP4, W4A8_AWQ, FP8, etc.; replace the table and
related lines with a concise statement: "If hf_quant_config.json exists, vLLM
auto-detects the quantization format and you should pass --quantization modelopt
(no format-specific flags required)."
```bash
# If using pre_cmd or post_cmd:
export NEMO_EVALUATOR_TRUST_PRE_CMD=1

# If using nemo_skills.* tasks with self-deployment:
export DUMMY_API_KEY=dummy
```
Consider security implications of `NEMO_EVALUATOR_TRUST_PRE_CMD`.

Line 271 sets `NEMO_EVALUATOR_TRUST_PRE_CMD=1` to enable `pre_cmd` execution. Since `pre_cmd` can run arbitrary shell commands (including downloads via curl as shown in line 149), this environment variable effectively disables a security safeguard.

While necessary for the workflow, consider adding a security note warning users to:

- Review `pre_cmd` content in configs before running
- Only trust configs from known sources
- Understand that `pre_cmd` runs with the same privileges as NEL
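One way to make that review step concrete is a small audit helper. This is a hypothetical sketch — `find_trusted_commands` and the dict-shaped config are assumptions, not NEL API — that lists every `pre_cmd`/`post_cmd` in a parsed config before you export `NEMO_EVALUATOR_TRUST_PRE_CMD=1`:

```python
def find_trusted_commands(config: dict, path: str = "") -> list[tuple[str, str]]:
    """Recursively collect (key path, command) pairs for pre_cmd/post_cmd.

    Only walks nested dicts (dict-shaped configs); print the results and
    confirm the source is trusted before enabling trusted command execution.
    """
    found = []
    for key, value in config.items():
        location = f"{path}.{key}" if path else key
        if key in ("pre_cmd", "post_cmd") and isinstance(value, str):
            found.append((location, value))
        elif isinstance(value, dict):
            found.extend(find_trusted_commands(value, location))
    return found
```

Running it over a loaded config surfaces exactly which shell commands would execute with your privileges, which is the audit the review asks for.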
🛡️ Proposed security notice

**Important**: Export required environment variables based on your config. If any tokens or keys are missing (e.g. `HF_TOKEN`, `NGC_API_KEY`, `api_key_name` from the config), ask the user to put them in a `.env` file in the project root so you can run `set -a && source .env && set +a` (or equivalent) before executing `nel run` commands.

```diff
-# If using pre_cmd or post_cmd:
+# If using pre_cmd or post_cmd (review commands first - they execute with your privileges):
 export NEMO_EVALUATOR_TRUST_PRE_CMD=1
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```bash
# If using pre_cmd or post_cmd:
export NEMO_EVALUATOR_TRUST_PRE_CMD=1

# If using nemo_skills.* tasks with self-deployment:
export DUMMY_API_KEY=dummy
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.claude/skills/evaluation/SKILL.md around lines 269 - 275, Update the
SKILL.md snippet that documents NEMO_EVALUATOR_TRUST_PRE_CMD to include a clear
security warning: note that setting NEMO_EVALUATOR_TRUST_PRE_CMD=1 enables
execution of pre_cmd and post_cmd which run arbitrary shell commands with the
evaluator's privileges, instruct users to review pre_cmd content, only trust
configs from known sources, and be cautious when using nemo_skills.*
self-deployment tasks; reference the environment variable name
(NEMO_EVALUATOR_TRUST_PRE_CMD) and the config keys (pre_cmd, post_cmd,
nemo_skills.*) so readers can find and audit them.
Codecov Report

✅ All modified and coverable lines are covered by tests.

Additional details and impacted files

```
@@           Coverage Diff            @@
##             main    #1132    +/-  ##
========================================
- Coverage   70.20%   70.18%   -0.03%
========================================
  Files         230      230
  Lines       26080    26080
========================================
- Hits        18310    18304       -6
- Misses       7770     7776       +6
```

☔ View full report in Codecov by Sentry.
♻️ Duplicate comments (2)
.claude/skills/evaluation/SKILL.md (2)
194-199: ⚠️ Potential issue | 🟠 Major

Add explicit security warning for `NEMO_EVALUATOR_TRUST_PRE_CMD`.

Line 195 enables trusted command execution, but the snippet lacks a caution that `pre_cmd`/`post_cmd` run arbitrary shell commands with evaluator privileges. Please add an explicit warning in this section.

Proposed doc fix

```diff
-# If using pre_cmd or post_cmd:
+# If using pre_cmd or post_cmd:
+# WARNING: NEMO_EVALUATOR_TRUST_PRE_CMD=1 allows `pre_cmd`/`post_cmd` execution
+# from config with your current privileges. Review `pre_cmd`, `post_cmd`, and
+# `nemo_skills.*` task settings, and only run trusted configs.
 export NEMO_EVALUATOR_TRUST_PRE_CMD=1
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In @.claude/skills/evaluation/SKILL.md around lines 194 - 199, add a clear security warning near the NEMO_EVALUATOR_TRUST_PRE_CMD export: explain that setting NEMO_EVALUATOR_TRUST_PRE_CMD=1 allows the evaluator to run arbitrary shell commands via pre_cmd and post_cmd with evaluator privileges, warn against enabling it in untrusted environments or on production hosts, and recommend alternatives (avoid enabling, run in isolated sandbox/container, or validate commands) while referencing the environment variable name NEMO_EVALUATOR_TRUST_PRE_CMD and the pre_cmd/post_cmd hooks so readers know exactly which settings are risky.
108-115: ⚠️ Potential issue | 🔴 Critical

Incorrect ModelOpt flag mapping for NVFP4 in vLLM docs path.

Line 114 maps `NVFP4`/`NVFP4_AWQ` to `--quantization modelopt_fp4`, which is likely invalid; this should use the unified ModelOpt quantization flag flow instead. This can misconfigure deployments at runtime.

Proposed doc fix

```diff
-| `NVFP4`, `NVFP4_AWQ` | `--quantization modelopt_fp4` |
-| Other values | Try `--quantization modelopt`; consult vLLM/SGLang docs if unsure |
+| `NVFP4`, `NVFP4_AWQ` | `--quantization modelopt` |
+| Other values | Use `--quantization modelopt`; verify backend support in docs if unsure |
```

Web query: What are the valid vLLM CLI values for `--quantization` when loading NVIDIA ModelOpt-quantized checkpoints, and is `modelopt_fp4` a supported value?

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In @.claude/skills/evaluation/SKILL.md around lines 108 - 115, the doc incorrectly maps NVFP4/NVFP4_AWQ to the unsupported flag `--quantization modelopt_fp4`; update the mapping logic so that when reading quantization.quant_algo you add the unified ModelOpt flag (e.g., `--quantization modelopt`) into deployment.extra_args for NVFP4/NVFP4_AWQ instead of `modelopt_fp4`, and add a note clarifying that vLLM/SGLang use `modelopt` for NVIDIA ModelOpt formats and to consult vLLM docs for any newer flag names; ensure the table rows referencing `NVFP4`, `NVFP4_AWQ`, `modelopt_fp4` are replaced with `--quantization modelopt` and adjust any code/comments that conditionally look for `modelopt_fp4`.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In @.claude/skills/evaluation/SKILL.md:
- Around line 194-199: Add a clear security warning near the
NEMO_EVALUATOR_TRUST_PRE_CMD export: explain that setting
NEMO_EVALUATOR_TRUST_PRE_CMD=1 allows evaluator to run arbitrary shell commands
via pre_cmd and post_cmd with evaluator privileges, warn against enabling it in
untrusted environments or on production hosts, and recommend alternatives (avoid
enabling, run in isolated sandbox/container, or validate commands) while
referencing the environment variable name NEMO_EVALUATOR_TRUST_PRE_CMD and the
pre_cmd/post_cmd hooks so readers know exactly which settings are risky.
- Around line 108-115: The doc incorrectly maps NVFP4/NVFP4_AWQ to the
unsupported flag `--quantization modelopt_fp4`; update the mapping logic so that
when reading quantization.quant_algo you add the unified ModelOpt flag (e.g.,
`--quantization modelopt`) into deployment.extra_args for NVFP4/NVFP4_AWQ
instead of `modelopt_fp4`, and add a note clarifying that vLLM/SGLang use
`modelopt` for NVIDIA ModelOpt formats and to consult vLLM docs for any newer
flag names; ensure the table rows referencing `NVFP4`, `NVFP4_AWQ`,
`modelopt_fp4` are replaced with `--quantization modelopt` and adjust any
code/comments that conditionally look for `modelopt_fp4`.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 626f8ed2-3b5d-4a8f-b618-c68007312945
📒 Files selected for processing (15)
- .claude/skills/evaluation/SKILL.md
- .claude/skills/evaluation/evals/base-model-local-execution.json
- .claude/skills/evaluation/evals/external-deployment-eval.json
- .claude/skills/evaluation/evals/interceptor-configuration.json
- .claude/skills/evaluation/evals/multi-node-evaluation.json
- .claude/skills/evaluation/evals/nel-not-installed.json
- .claude/skills/evaluation/evals/nemotron3-nano-bf16-reasoning.json
- .claude/skills/evaluation/evals/nvfp4-auto-detect-quantization.json
- .claude/skills/evaluation/evals/quantized-checkpoint-local-vllm.json
- .claude/skills/evaluation/evals/reasoning-model-sglang.json
- .claude/skills/evaluation/evals/safety-multilingual-benchmarks.json
- .claude/skills/evaluation/evals/wandb-export-code-benchmarks.json
- .claude/skills/evaluation/evals/workspace-reuse-from-ptq.json
- .claude/skills/evaluation/references/model-card-research.md
- .claude/skills/evaluation/references/multi-node.md
✅ Files skipped from review due to trivial changes (13)
- .claude/skills/evaluation/evals/nel-not-installed.json
- .claude/skills/evaluation/evals/interceptor-configuration.json
- .claude/skills/evaluation/evals/external-deployment-eval.json
- .claude/skills/evaluation/evals/reasoning-model-sglang.json
- .claude/skills/evaluation/evals/wandb-export-code-benchmarks.json
- .claude/skills/evaluation/evals/nvfp4-auto-detect-quantization.json
- .claude/skills/evaluation/evals/workspace-reuse-from-ptq.json
- .claude/skills/evaluation/evals/base-model-local-execution.json
- .claude/skills/evaluation/evals/safety-multilingual-benchmarks.json
- .claude/skills/evaluation/evals/multi-node-evaluation.json
- .claude/skills/evaluation/evals/quantized-checkpoint-local-vllm.json
- .claude/skills/evaluation/references/model-card-research.md
- .claude/skills/evaluation/evals/nemotron3-nano-bf16-reasoning.json
Edwardf0t1 left a comment:
Left a few comments - I think overall it's in a good shape and aligned well with the design we discussed 👍
.claude/skills/evaluation/SKILL.md
Outdated
| @@ -0,0 +1,310 @@ | |||
| --- | |||
| name: evaluation | |||
| description: Evaluate accuracy of quantized or unquantized LLMs using NeMo Evaluator Launcher (NEL). Use when user says "evaluate model", "benchmark accuracy", "run MMLU", "evaluate quantized model", "accuracy drop", "run nel", or needs to measure how quantization affects model quality. Handles model deployment, config generation, and evaluation execution. | |||
Similar to my comment in the deployment skills PR, we can add some negative triggers as well.
If no `hf_quant_config.json`, also check `config.json` for a `quantization_config` section with `quant_method: "modelopt"`. If neither is found, the checkpoint is unquantized — no flag needed.

**Quantization-aware benchmark defaults:**
I think we can consider extracting quantization benchmarks, including the benchmark sensitivity ranking and recommended sets, to a reference file, e.g., `references/quantization-benchmarks.md`, so it can be reused by the compare-results skill later.
The reason to have a compare-results skill is that evaluation is about configuring and running NEL, while compare-results is about interpreting and acting on results from multiple runs.
Added references/quantization-benchmarks.md.
When you have all the answers, run the script to build the base config:

```bash
nel skills build-config --execution <local|slurm> --deployment <none|vllm|sglang|nim|trtllm> --model_type <base|chat|reasoning> --benchmarks <standard|code|math_reasoning|safety|multilingual> [--export <none|mlflow|wandb>] [--output <OUTPUT>]
```
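To guard against the category drift the reviewer warns about below, a wrapper could validate the answers against the documented choice sets before shelling out. A sketch only — `build_config_command` and the `CHOICES` table are assumptions mirroring the snippet above, not part of NEL:

```python
import shlex

# Choice sets as documented in the snippet above; if NEL's build-config CLI
# changes its categories, these must be updated in lockstep.
CHOICES = {
    "--execution": {"local", "slurm"},
    "--deployment": {"none", "vllm", "sglang", "nim", "trtllm"},
    "--model_type": {"base", "chat", "reasoning"},
    "--benchmarks": {"standard", "code", "math_reasoning", "safety", "multilingual"},
    "--export": {"none", "mlflow", "wandb"},
}


def build_config_command(**answers: str) -> str:
    """Assemble an `nel skills build-config` invocation, rejecting stale choices."""
    parts = ["nel", "skills", "build-config"]
    for flag, allowed in CHOICES.items():
        key = flag.lstrip("-")
        if key in answers:
            value = answers[key]
            if value not in allowed:
                raise ValueError(f"{flag}: {value!r} not in {sorted(allowed)}")
            parts += [flag, value]
    return shlex.join(parts)
```

A stale category then fails loudly at config-build time instead of producing a silently wrong `nel` invocation.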
I think we need to verify benchmark categories to match NEL's build-config CLI - If NEL's categories changed, we need to update accordingly.
Force-pushed from 68fe89e to 0a4a607.
♻️ Duplicate comments (1)

.claude/skills/evaluation/SKILL.md (1)

110-116: ⚠️ Potential issue | 🔴 Critical

CRITICAL: Incorrect vLLM quantization flag mapping persists.

This issue was previously flagged but remains unresolved. The table mapping `quant_algo` to vLLM flags is incorrect: `--quantization modelopt_fp4` does not exist in vLLM. vLLM provides only `--quantization modelopt`, which auto-detects all ModelOpt formats (FP8, W4A8_AWQ, NVFP4, NVFP4_AWQ) from `hf_quant_config.json` or `config.json`.

Remove the per-algorithm flag mapping. Replace lines 108-116 with a single statement: "If `hf_quant_config.json` or `config.json` contains `quant_method: "modelopt"`, add `--quantization modelopt` to `deployment.extra_args`. vLLM auto-detects the specific format (FP8, NVFP4, W4A8_AWQ, etc.)."

🔧 Proposed fix

````diff
-**Auto-detect ModelOpt quantization format** (checkpoint paths only):
-
-Check for `hf_quant_config.json` in the checkpoint directory:
-
-```bash
-cat <checkpoint_path>/hf_quant_config.json 2>/dev/null
-```
-
-If found, read `quantization.quant_algo` and set the correct vLLM/SGLang quantization flag in `deployment.extra_args`:
-
-| `quant_algo` | Flag to add |
-|-------------|-------------|
-| `FP8` | `--quantization modelopt` |
-| `W4A8_AWQ` | `--quantization modelopt` |
-| `NVFP4`, `NVFP4_AWQ` | `--quantization modelopt_fp4` |
-| Other values | Try `--quantization modelopt`; consult vLLM/SGLang docs if unsure |
-
-If no `hf_quant_config.json`, also check `config.json` for a `quantization_config` section with `quant_method: "modelopt"`. If neither is found, the checkpoint is unquantized — no flag needed.
+**Auto-detect ModelOpt quantization format** (checkpoint paths only):
+
+Check for `hf_quant_config.json` in the checkpoint directory:
+
+```bash
+cat <checkpoint_path>/hf_quant_config.json 2>/dev/null
+```
+
+If found (or if `config.json` contains `quantization_config` with `quant_method: "modelopt"`), add `--quantization modelopt` to `deployment.extra_args`. vLLM auto-detects the specific quantization format (FP8, W4A8_AWQ, NVFP4, NVFP4_AWQ, etc.) from the config. No format-specific flags are needed.
+
+If neither config is found, the checkpoint is unquantized — no quantization flag needed.
````

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In @.claude/skills/evaluation/SKILL.md around lines 110 - 116, replace the per-algorithm mapping logic that reads quantization.quant_algo and emits algorithm-specific flags with a single, deterministic check: if hf_quant_config.json exists in the checkpoint directory or config.json contains quantization_config with quant_method: "modelopt", then add "--quantization modelopt" to deployment.extra_args (vLLM will auto-detect FP8/NVFP4/W4A8_AWQ formats); otherwise emit no quantization flag. Update any references to quantization.quant_algo, the table mapping entries, and the logic that would add "--quantization modelopt_fp4" so they instead perform the single modelopt presence check and set deployment.extra_args accordingly.
🧹 Nitpick comments (3)

.claude/skills/evaluation/SKILL.md (3)

3-3: Consider adding negative triggers for skill activation.

The description includes positive triggers ("evaluate model", "benchmark accuracy", etc.) that indicate when to use this skill. Consider adding negative triggers to clarify when NOT to use it, e.g., "Do not use for: deploying models (use deployment skill), training/fine-tuning, or inference-only workflows."

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In @.claude/skills/evaluation/SKILL.md at line 3, update the description entry for this skill to include negative triggers clarifying when not to use it: append a "Do not use for" clause after the existing trigger phrases (the description line containing "Evaluate accuracy of quantized or unquantized LLMs..." and the positive triggers like "evaluate model", "benchmark accuracy", "run MMLU", "evaluate quantized model", "accuracy drop", "run nel"). List concise exclusions such as "Do not use for: deploying models (use deployment skill), training/fine-tuning, inference-only workflows, or model serving/ops tasks" so the skill activation is unambiguous.
229-289: Consider moving monitoring section to a standalone skill.

The monitoring section (lines 229-289) provides detailed `nel status`, `nel logs`, and SSH log inspection guidance. This could be extracted to a reusable `run-and-monitor` skill that handles progress tracking, log inspection, and failure diagnosis for both evaluation and deployment workflows.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In @.claude/skills/evaluation/SKILL.md around lines 229 - 289, extract the "Monitoring Progress" section (the block starting with "Monitoring Progress" that documents `nel status`, `nel logs`, `nel info`, and SSH log inspection) into a new standalone skill named e.g. "run-and-monitor" so it can be reused by other skills; create a new SKILL.md for that skill containing the full procedures (check job status, stream logs, SSH inspection, grep search) and update the original .claude/skills/evaluation/SKILL.md by removing the extracted block and adding a short cross-reference or link to the new "run-and-monitor" skill; ensure references to the commands `nel status`, `nel logs`, and `nel info` remain identical so examples and searchability are preserved.
119-128: Consider extracting quantization benchmarks to a reference file.

The quantization-aware benchmark recommendations (sensitivity ranking, recommended sets) are valuable knowledge that could be reused by other skills (e.g., a future `compare-results` skill). Consider extracting this to `references/quantization-benchmarks.md` for consistency and reusability.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In @.claude/skills/evaluation/SKILL.md around lines 119 - 128, extract the "Quantization-aware benchmark defaults" section (the list referencing MMLU, GSM8K, ARC-Challenge, HumanEval, Winogrande, IFEval) into a reusable reference document named quantization-benchmarks.md and replace the full block in SKILL.md with a short pointer to that reference; ensure the skill logic still presents the recommendations to the user and asks which to include, and if the user already specified benchmarks it preserves their choice while mentioning any missed accuracy-sensitive benchmarks (MMLU, GSM8K, ARC-Challenge, HumanEval, Winogrande) in the prompt.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In @.claude/skills/evaluation/SKILL.md:
- Around line 110-116: Replace the per-algorithm mapping logic that reads
quantization.quant_algo and emits algorithm-specific flags with a single,
deterministic check: if hf_quant_config.json exists in the checkpoint directory
or config.json contains quantization_config with quant_method: "modelopt", then
add "--quantization modelopt" to deployment.extra_args (vLLM will auto-detect
FP8/NVFP4/W4A8_AWQ formats); otherwise emit no quantization flag. Update any
references to quantization.quant_algo, the table mapping entries, and the logic
that would add "--quantization modelopt_fp4" so they instead perform the single
modelopt presence check and set deployment.extra_args accordingly.
---
Nitpick comments:
In @.claude/skills/evaluation/SKILL.md:
- Line 3: Update the SKILL.md description entry for this skill to include
negative triggers clarifying when not to use it: append a "Do not use for"
clause after the existing trigger phrases (the description line containing
"Evaluate accuracy of quantized or unquantized LLMs..." and the positive
triggers like "evaluate model", "benchmark accuracy", "run MMLU", "evaluate
quantized model", "accuracy drop", "run nel"). List concise exclusions such as
"Do not use for: deploying models (use deployment skill), training/fine-tuning,
inference-only workflows, or model serving/ops tasks" so the skill activation is
unambiguous.
- Around line 229-289: Extract the "Monitoring Progress" section (the block
starting with "Monitoring Progress" that documents `nel status`, `nel logs`,
`nel info`, and SSH log inspection) into a new standalone skill named e.g.
"run-and-monitor" so it can be reused by other skills; create a new SKILL.md for
that skill containing the full procedures (check job status, stream logs, SSH
inspection, grep search) and update the original
.claude/skills/evaluation/SKILL.md by removing the extracted block and adding a
short cross-reference or link to the new "run-and-monitor" skill; ensure
references to the commands `nel status`, `nel logs`, and `nel info` remain
identical so examples and searchability are preserved.
- Around line 119-128: Extract the "Quantization-aware benchmark defaults"
section (the list referencing MMLU, GSM8K, ARC-Challenge, HumanEval, Winogrande,
IFEval) into a reusable reference document named quantization-benchmarks.md and
replace the full block in SKILL.md with a short pointer to that reference;
ensure the skill logic still presents the recommendations to the user and asks
which to include, and if the user already specified benchmarks it preserves
their choice while mentioning any missed accuracy-sensitive benchmarks (MMLU,
GSM8K, ARC-Challenge, HumanEval, Winogrande) in the prompt.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 506c0f6a-d3ea-4b11-9c74-7262cfb545e6
📒 Files selected for processing (6)
- .claude/skills/evaluation/SKILL.md
- .claude/skills/evaluation/references/model-card-research.md
- .claude/skills/evaluation/references/multi-node.md
- .claude/skills/evaluation/tests/nemotron3-nano-bf16-reasoning.json
- .claude/skills/evaluation/tests/quantized-checkpoint-local-vllm.json
- .markdownlint-cli2.yaml
✅ Files skipped from review due to trivial changes (4)
- .markdownlint-cli2.yaml
- .claude/skills/evaluation/tests/quantized-checkpoint-local-vllm.json
- .claude/skills/evaluation/references/model-card-research.md
- .claude/skills/evaluation/tests/nemotron3-nano-bf16-reasoning.json
Add a Claude Code skill for evaluating LLM accuracy using NeMo Evaluator Launcher (NEL). Based on the upstream nel-assistant skill (commit f1fa073) with ModelOpt-specific additions:

- Auto-detect ModelOpt quantization format from hf_quant_config.json and set the correct vLLM/SGLang --quantization flag
- Quantization-aware benchmark defaults (recommend MMLU, GSM8K, ARC-Challenge for quantized models)
- Workspace management for multi-user environments (Step 0)
- Disable MD036/MD029 markdownlint rules for upstream NEL formatting

The skill guides users through NEL config generation, model card research, and evaluation execution (local and SLURM).

Signed-off-by: Kai Xu <kaix@nvidia.com>
Force-pushed from 0a4a607 to 27a9aa1.
cjluo-nv left a comment:
Summary: Adds a new Claude Code skill (evaluation/) that guides users through LLM accuracy evaluation using NeMo Evaluator Launcher (NEL), with ModelOpt-specific quantization auto-detection and benchmark recommendations.
Issues Found:
- **[Correctness] Missing dependency not verifiable** — `skills/common/workspace-management.md` does not exist on `main` or any visible branch (SKILL.md:19)

  The skill references `skills/common/workspace-management.md` when `MODELOPT_WORKSPACE_ROOT` is set, but the dependency PR #1107 that introduces common files has no corresponding branch (`kaix/agent-common` doesn't exist). If #1107 isn't merged first, this skill will fail at Step 0. Either:
  - Merge #1107 before this PR, or
  - Remove the workspace step until the dependency lands, or
  - Include `workspace-management.md` in this PR.

- **[Correctness] PR description says `evals/` directory but actual path is `tests/`** (PR description vs actual files)

  The description's directory tree shows `evals/nemotron3-nano-bf16-reasoning.json`, but the actual files are under `tests/`. The deployment skill on `kaix/agent-deployment` also uses `tests/`. Minor inconsistency in the description only — the actual files are correct.

- **[Readability] Numbered list restart issue in Step 2 benchmarks** (SKILL.md:71-77)

  The benchmark choices use `1.` through `5.`, but the overall Step 2 questions also use numbered list items. At line 71, the benchmarks list starts with `1.` immediately after `5. Benchmarks:`, creating a confusing nested numbered list that may render ambiguously in some markdown parsers. Consider using bullet points or lettered sub-items for the benchmark categories.

- **[Readability] `quant_algo` fallback guidance is vague** (SKILL.md:119)

  "Other values | Try `--quantization modelopt`; consult vLLM/SGLang docs if unsure" — this puts the AI agent in a position of guessing. Consider either enumerating known values exhaustively or providing a more actionable fallback (e.g., warn the user and ask them to specify).

- **[Tests] Test specs are comprehensive but untestable in CI** (`tests/*.json`)

  The test JSON files describe expected interactive behaviors (web search, user questions, SSH log inspection) that can't be validated by any automated test. The PR says "N/A (skill evals provided separately)" — what is the plan for validating these? Are they run manually, or is there a skill-eval harness? Clarification would be helpful.

- **[Readability] SKILL.md line 81 refers to `--benchmarks` options but uses different casing than the list** (SKILL.md:77 vs 85)

  The user-facing list shows "Math & Reasoning" but the CLI flag uses `math_reasoning`. The note at line 87 partially addresses this, but consider adding the CLI value next to each user-facing label (e.g., "3. Math & Reasoning (`math_reasoning`)") to reduce ambiguity.

- **[Correctness] Markdown lint rule disabling rationale is narrow** (`.markdownlint-cli2.yaml`)

  MD029 and MD036 are disabled globally with comments saying "upstream NEL skill uses..." — but these rules affect the entire repo. If only the evaluation skill triggers these, consider using a per-directory `.markdownlint-cli2.yaml` override in `.claude/skills/evaluation/` instead of weakening global linting.
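A scoped override of the kind suggested could look like the sketch below. It assumes markdownlint-cli2's nearest-config-file resolution (a `.markdownlint-cli2.yaml` in a subdirectory applies to files under that directory) and that only these two rules need relaxing:

```yaml
# .claude/skills/evaluation/.markdownlint-cli2.yaml
# Scoped override: relax MD029/MD036 only for the vendored NEL skill docs,
# leaving the repo-wide markdownlint configuration untouched.
config:
  MD029: false  # ordered-list item prefix style used by the upstream skill
  MD036: false  # emphasis used in place of headings by the upstream skill
```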
Suggestions:

- The `quantization-benchmarks.md` reference is well-structured and useful. Consider whether similar sensitivity data could be auto-generated from ModelOpt's own benchmark results in future.
- The `model-card-research.md` checklist is thorough. The `curl` over `wget` tip and `--no-cache-dir` for pip are good operational details.
- The deployment skill on `kaix/agent-deployment` uses `scripts/` for shell scripts — this skill has no scripts, which is fine, but worth noting the consistent pattern of `references/` + `tests/` across skills.
Overall Assessment: This is a well-structured documentation-only PR. The skill workflow is comprehensive and the ModelOpt-specific additions (quant auto-detection, benchmark recommendations) add clear value. The main concern is the unresolved dependency on PR #1107 for common files — the skill will break at Step 0 if that PR doesn't land first. The global markdownlint changes should ideally be scoped more narrowly. No Python code changes, so correctness risk is low.
♻️ Duplicate comments (1)

`.claude/skills/evaluation/SKILL.md` (1)

**Lines 102-119: ⚠️ Potential issue | 🔴 Critical: Remove incorrect vLLM flag mapping table.**

Lines 112-117 document a per-algorithm flag mapping that includes `--quantization modelopt_fp4`, which does not exist in vLLM. According to the past review comment, web search results (citing vLLM docs), and context snippets from ModelOpt's codebase:

- vLLM provides a single unified flag: `--quantization modelopt`
- This flag auto-detects the quantization format (FP8, NVFP4, W4A8_AWQ, etc.) from the model's `hf_quant_config.json` or `config.json`
- There is no separate `--quantization modelopt_fp4` flag

The current documentation will cause the AI agent to generate invalid vLLM commands that will fail at runtime.
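The unified detection behavior described above can be sketched in Python. This is an illustrative helper only (the function name and logic are this sketch's, not NEL or vLLM API); it follows the checkpoint-file convention the review describes, where vLLM/SGLang themselves read the same files and auto-detect the concrete format:

```python
import json
from pathlib import Path


def modelopt_quant_flag(checkpoint_dir: str):
    """Return the serving flag for a ModelOpt checkpoint, or None if unquantized.

    A single unified flag suffices for every algorithm, because the serving
    engine auto-detects the concrete format (FP8, NVFP4, W4A8_AWQ, ...).
    """
    ckpt = Path(checkpoint_dir)

    # Preferred source: hf_quant_config.json written by ModelOpt export.
    hf_quant = ckpt / "hf_quant_config.json"
    if hf_quant.is_file():
        cfg = json.loads(hf_quant.read_text())
        if cfg.get("quantization", {}).get("quant_algo"):
            return "--quantization modelopt"

    # Fallback: quantization_config embedded in config.json.
    config = ckpt / "config.json"
    if config.is_file():
        cfg = json.loads(config.read_text())
        if cfg.get("quantization_config", {}).get("quant_method") == "modelopt":
            return "--quantization modelopt"

    return None  # unquantized checkpoint: no flag needed
```

Whatever string this returns would be appended to `deployment.extra_args` in the generated NEL config; a `None` result means the flag is simply omitted.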
**🔧 Proposed fix**

Replace lines 102-119 with:

````diff
 **Auto-detect ModelOpt quantization format** (checkpoint paths only): Check for `hf_quant_config.json` in the checkpoint directory:

 ```bash
 cat <checkpoint_path>/hf_quant_config.json 2>/dev/null
 ```

-If found, read `quantization.quant_algo` and set the correct vLLM/SGLang quantization flag in `deployment.extra_args`:
-
-| `quant_algo` | Flag to add |
-|---|---|
-| FP8 | `--quantization modelopt` |
-| W4A8_AWQ | `--quantization modelopt` |
-| NVFP4, NVFP4_AWQ | `--quantization modelopt_fp4` |
-| Other values | Try `--quantization modelopt`; consult vLLM/SGLang docs if unsure |
-
-If no `hf_quant_config.json`, also check `config.json` for a `quantization_config` section with `quant_method: "modelopt"`. If neither is found, the checkpoint is unquantized — no flag needed.
+If found, vLLM/SGLang will auto-detect the quantization format (FP8, NVFP4, W4A8_AWQ, etc.) from `quantization.quant_algo`. Add to `deployment.extra_args`:
+
+```yaml
+deployment:
+  extra_args: "--quantization modelopt"
+```
+
+If no `hf_quant_config.json`, also check `config.json` for a `quantization_config` section with `quant_method: "modelopt"`. If found, use the same `--quantization modelopt` flag. If neither is found, the checkpoint is unquantized — no flag needed.
````

This aligns with the past review feedback and vLLM's unified ModelOpt quantization support.

**🤖 Prompt for AI Agents**

Verify each finding against the current code and only fix it if needed.
In `.claude/skills/evaluation/SKILL.md` around lines 102-119: the documentation incorrectly maps specific ModelOpt algorithms to multiple vLLM flags (e.g., `--quantization modelopt_fp4`). Instead, update the section that inspects `hf_quant_config.json` / `quantization.quant_algo` (and the fallback `config.json` → `quantization_config.quant_method`) to instruct adding a single unified flag in `deployment.extra_args`: `--quantization modelopt`, since vLLM/SGLang auto-detects the concrete format from the checkpoint. Remove the per-algorithm table and replace it with guidance to use `--quantization modelopt` when `quant_method: "modelopt"` or `quantization.quant_algo` is present, and otherwise omit the flag.
**ℹ️ Review info**

- **Configuration used**: `.coderabbit.yaml` (review profile: CHILL, plan: Pro)
- **Commits**: reviewing files that changed from the base of the PR and between 0a4a607de5360e4e848e9a691fcc090cc11b8e49 and 27a9aa12ac71d8a86b8f2fce5d8e7e07d57e691b
- **Files selected for processing (7)**:
  - `.claude/skills/evaluation/SKILL.md`
  - `.claude/skills/evaluation/references/model-card-research.md`
  - `.claude/skills/evaluation/references/multi-node.md`
  - `.claude/skills/evaluation/references/quantization-benchmarks.md`
  - `.claude/skills/evaluation/tests/nemotron3-nano-bf16-reasoning.json`
  - `.claude/skills/evaluation/tests/quantized-checkpoint-local-vllm.json`
  - `.markdownlint-cli2.yaml`
- **Files skipped from review due to trivial changes (4)**: `.markdownlint-cli2.yaml`, `.claude/skills/evaluation/references/quantization-benchmarks.md`, `.claude/skills/evaluation/references/model-card-research.md`, `.claude/skills/evaluation/tests/nemotron3-nano-bf16-reasoning.json`
- **Files skipped from review as similar to previous changes (1)**: `.claude/skills/evaluation/tests/quantized-checkpoint-local-vllm.json`
Edwardf0t1
left a comment
Discussed offline on all comments - LGTM
Why does it depend on `remote_exec.sh`?
What does this PR do?
Type of change: New feature
Add a Claude Code skill for evaluating LLM accuracy using NeMo Evaluator Launcher (NEL). Based on the upstream nel-assistant skill with ModelOpt-specific additions:

- Auto-detect ModelOpt quantization from `hf_quant_config.json` (with `config.json` fallback) and set the correct vLLM/SGLang `--quantization` flag
- `references/` guides for on-demand loading

**Skill structure**
The skill guides users through: NEL installation check → config generation via `nel skills build-config` → model card research → parameter tuning → task selection → multi-node setup → interceptors → execution with dry-run/test/full modes.

Depends on: #1107 (common files: `remote_exec.sh`, `workspace-management.md`, `environment-setup.md`)

**Testing**
Invoke in Claude Code:
Before your PR is "Ready for review"
CONTRIBUTING.md: ✅ (NEL skill attributed in frontmatter)

🤖 Generated with Claude Code
Summary by CodeRabbit