Commit ed8c171
jiant committed: add tests in different gpus
Signed-off-by: jiant <[email protected]>
1 parent 537efd7, commit ed8c171

File tree: 10 files changed, +29 -6 lines

tests/integration/defs/accuracy/references/gsm8k.yaml

Lines changed: 2 additions & 2 deletions

@@ -139,10 +139,10 @@ Qwen3/Qwen3-235B-A22B:
 Qwen3/Qwen3-Next-80B-A3B-Thinking:
   - accuracy: 81.577
 Qwen3/Qwen3-Next-80B-A3B-Instruct:
-  - accuracy: 84.42
+  - accuracy: 83.36
   - quant_algo: NVFP4
     kv_cache_quant_algo: FP8
-    accuracy: 84.32
+    accuracy: 80.84
 moonshotai/Kimi-K2-Instruct:
   - quant_algo: FP8_BLOCK_SCALES
     accuracy: 94.84

tests/integration/defs/accuracy/references/mmlu.yaml

Lines changed: 2 additions & 2 deletions

@@ -243,10 +243,10 @@ Qwen3/Qwen3-235B-A22B:
 Qwen3/Qwen3-Next-80B-A3B-Thinking:
   - accuracy: 86
 Qwen3/Qwen3-Next-80B-A3B-Instruct:
-  - accuracy: 85.58
+  - accuracy: 86.03
   - quant_algo: NVFP4
     kv_cache_quant_algo: FP8
-    accuracy: 85
+    accuracy: 85.08
 moonshotai/Kimi-K2-Instruct:
   - quant_algo: FP8_BLOCK_SCALES
     accuracy: 87.65
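For context, these reference files map a model name to a list of accuracy entries: an entry with no quant_algo is the unquantized baseline, and quantized variants carry their own thresholds. A minimal sketch of how such a lookup could work (the helper name and the dict literal below are invented for illustration; the real logic lives in the accuracy harness, not here):

```python
# Hypothetical sketch of an accuracy-reference lookup for entries shaped
# like the gsm8k.yaml hunk above. Names here are invented for illustration.

def pick_reference(entries, quant_algo=None, kv_cache_quant_algo=None):
    """Return the accuracy threshold whose quant settings match the run."""
    for entry in entries:
        if entry.get("quant_algo") != quant_algo:
            continue
        if entry.get("kv_cache_quant_algo") != kv_cache_quant_algo:
            continue
        return entry["accuracy"]
    raise KeyError("no reference entry matches this quant config")

# Mirrors the updated Qwen3-Next-80B-A3B-Instruct block from gsm8k.yaml.
references = {
    "Qwen3/Qwen3-Next-80B-A3B-Instruct": [
        {"accuracy": 83.36},
        {"quant_algo": "NVFP4", "kv_cache_quant_algo": "FP8",
         "accuracy": 80.84},
    ],
}

entries = references["Qwen3/Qwen3-Next-80B-A3B-Instruct"]
print(pick_reference(entries))                  # baseline -> 83.36
print(pick_reference(entries, "NVFP4", "FP8"))  # NVFP4 + FP8 KV -> 80.84
```

Under this reading, the commit lowers the NVFP4 + FP8-KV GSM8K threshold from 84.32 to 80.84 and raises the MMLU one from 85 to 85.08 to match measured multi-GPU results.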

tests/integration/defs/accuracy/test_llm_api_pytorch.py

Lines changed: 0 additions & 2 deletions

@@ -4157,7 +4157,6 @@ class TestQwen3NextInstruct(LlmapiAccuracyTestHarness):
     def test_bf16_4gpu(self, tp_size, pp_size, ep_size, cuda_graph,
                        overlap_scheduler):
         model_path = f"{self.MODEL_PATH}/Qwen3-Next-80B-A3B-Instruct"
-        model_path = "Qwen/Qwen3-Next-80B-A3B-Instruct"
         kv_cache_config = KvCacheConfig(free_gpu_memory_fraction=0.6,
                                         enable_block_reuse=False)
         pytorch_config = dict(disable_overlap_scheduler=not overlap_scheduler,
@@ -4189,7 +4188,6 @@ def test_bf16_4gpu(self, tp_size, pp_size, ep_size, cuda_graph,
     def test_nvfp4(self, moe_backend, tp_size, pp_size, ep_size, cuda_graph,
                    overlap_scheduler):
         model_path = f"{self.MODEL_PATH}/qwen3-next-80b-instruct-nvfp4-ptq-fp8kv"
-        model_path = "/home/scratch.didow_sw_1/models/qwen3-next-80b-instruct-nvfp4-ptq-fp8kv"

         kv_cache_config = KvCacheConfig(free_gpu_memory_fraction=0.6,
                                         enable_block_reuse=False)
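Both hunks remove the same kind of leftover: a hard-coded debug path assigned immediately after the intended MODEL_PATH-based one, so the later assignment silently won and the tests ignored the configured model directory. A standalone illustration of the pattern (the function names and the "/models" base below are hypothetical, not the test file itself):

```python
# Illustration of the leftover-override bug the hunks above remove:
# a second assignment left in from debugging shadows the intended path.

MODEL_PATH = "/models"  # hypothetical base directory

def resolve_model_path():
    model_path = f"{MODEL_PATH}/Qwen3-Next-80B-A3B-Instruct"
    # Leftover debug override -- the bug: MODEL_PATH is now ignored.
    model_path = "Qwen/Qwen3-Next-80B-A3B-Instruct"
    return model_path

def resolve_model_path_fixed():
    # After the commit, only the MODEL_PATH-based path remains.
    return f"{MODEL_PATH}/Qwen3-Next-80B-A3B-Instruct"

print(resolve_model_path())        # Qwen/Qwen3-Next-80B-A3B-Instruct
print(resolve_model_path_fixed())  # /models/Qwen3-Next-80B-A3B-Instruct
```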

tests/integration/test_lists/test-db/l0_b200.yml

Lines changed: 5 additions & 0 deletions

@@ -54,6 +54,11 @@ l0_b200:
   - accuracy/test_llm_api_pytorch.py::TestQwen3_30B_A3B::test_w4a8_mxfp4[mxfp8-latency-TRTLLM]
   - accuracy/test_llm_api_pytorch.py::TestQwen3_30B_A3B::test_w4a8_mxfp4[mxfp8-latency-CUTLASS]
   - accuracy/test_llm_api_pytorch.py::TestQwen3_30B_A3B::test_w4a16_mxfp4[latency-TRTLLM]
+  - accuracy/test_llm_api_pytorch.py::TestQwen3NextInstruct::test_nvfp4[tp1-cutlass]
+  - accuracy/test_llm_api_pytorch.py::TestQwen3NextInstruct::test_nvfp4[tp4ep1-cutlass]
+  - accuracy/test_llm_api_pytorch.py::TestQwen3NextInstruct::test_nvfp4[tp4ep4-cutlass]
+  - accuracy/test_llm_api_pytorch.py::TestQwen3NextInstruct::test_nvfp4[no_cuda_graph_overlap-cutlass]
+  - accuracy/test_llm_api_pytorch.py::TestQwen3NextInstruct::test_nvfp4[tp4ep4-trtllm]
   - disaggregated/test_workers.py::test_workers_kv_cache_aware_router_eviction[TinyLlama-1.1B-Chat-v1.0] # nvbugs 5300551
   - test_e2e.py::test_ptp_quickstart_advanced[Llama3.1-8B-NVFP4-nvfp4-quantized/Meta-Llama-3.1-8B]
   - test_e2e.py::test_ptp_quickstart_advanced[Llama3.1-8B-FP8-llama-3.1-model/Llama-3.1-8B-Instruct-FP8]

tests/integration/test_lists/test-db/l0_dgx_b200.yml

Lines changed: 6 additions & 0 deletions

@@ -42,6 +42,12 @@ l0_dgx_b200:
   - accuracy/test_llm_api_pytorch.py::TestQwen3_30B_A3B::test_nvfp4[dep4_latency_moe_cutlass-torch_compile=False]
   - accuracy/test_llm_api_pytorch.py::TestQwen3_30B_A3B::test_nvfp4[dep4_latency_moe_cutlass-torch_compile=True] ISOLATION
   - accuracy/test_llm_api_pytorch.py::TestQwen3NextThinking::test_auto_dtype[tp4ep4]
+  - accuracy/test_llm_api_pytorch.py::TestQwen3NextInstruct::test_bf16_4gpu[tp4ep4_cudagraph_overlap]
+  - accuracy/test_llm_api_pytorch.py::TestQwen3NextInstruct::test_nvfp4[tp1-cutlass]
+  - accuracy/test_llm_api_pytorch.py::TestQwen3NextInstruct::test_nvfp4[tp4ep1-cutlass]
+  - accuracy/test_llm_api_pytorch.py::TestQwen3NextInstruct::test_nvfp4[tp4ep4-cutlass]
+  - accuracy/test_llm_api_pytorch.py::TestQwen3NextInstruct::test_nvfp4[no_cuda_graph_overlap-cutlass]
+  - accuracy/test_llm_api_pytorch.py::TestQwen3NextInstruct::test_nvfp4[tp4ep4-trtllm]
   - disaggregated/test_disaggregated.py::test_disaggregated_deepseek_v3_lite_fp8_ucx[DeepSeek-V3-Lite-fp8]
   - accuracy/test_llm_api_pytorch.py::TestGPTOSS::test_w4_4gpus[tp4-trtllm-auto]
   - accuracy/test_llm_api_pytorch.py::TestGPTOSS::test_w4_4gpus[ep4-cutlass-auto]

tests/integration/test_lists/test-db/l0_dgx_b300.yml

Lines changed: 5 additions & 0 deletions

@@ -58,6 +58,11 @@ l0_dgx_b300:
   - accuracy/test_llm_api_pytorch.py::TestQwen3_30B_A3B::test_nvfp4[dep4_latency_moe_trtllm-torch_compile=False]
   - accuracy/test_llm_api_pytorch.py::TestQwen3_30B_A3B::test_nvfp4[dep4_latency_moe_cutlass-torch_compile=True]
   - accuracy/test_disaggregated_serving.py::TestQwen3_30B_A3B::test_mixed_ctx_gen_model[ctxpp2gentp2]
+  - accuracy/test_llm_api_pytorch.py::TestQwen3NextInstruct::test_nvfp4[tp1-cutlass]
+  - accuracy/test_llm_api_pytorch.py::TestQwen3NextInstruct::test_nvfp4[tp4ep1-cutlass]
+  - accuracy/test_llm_api_pytorch.py::TestQwen3NextInstruct::test_nvfp4[tp4ep4-cutlass]
+  - accuracy/test_llm_api_pytorch.py::TestQwen3NextInstruct::test_nvfp4[no_cuda_graph_overlap-cutlass]
+  - accuracy/test_llm_api_pytorch.py::TestQwen3NextInstruct::test_nvfp4[tp4ep4-trtllm]
   - disaggregated/test_disaggregated.py::test_disaggregated_deepseek_v3_lite_fp8_ucx[DeepSeek-V3-Lite-fp8]
   - accuracy/test_llm_api_pytorch.py::TestGPTOSS::test_w4_4gpus[tp4-cutlass-fp8]
   - accuracy/test_llm_api_pytorch.py::TestGPTOSS::test_w4_4gpus[tp4-cutlass-auto]

tests/integration/test_lists/test-db/l0_dgx_h100.yml

Lines changed: 1 addition & 0 deletions

@@ -93,6 +93,7 @@ l0_dgx_h100:
   - accuracy/test_disaggregated_serving.py::TestLlama3_1_8BInstruct::test_ctx_pp_gen_tp_asymmetric[MMLU-gen_tp=2-ctx_pp=2]
   - accuracy/test_disaggregated_serving.py::TestLlama3_1_8BInstruct::test_multi_instance[GSM8K]
   - accuracy/test_disaggregated_serving.py::TestLlama3_1_8BInstruct::test_multi_instance[MMLU]
+  - accuracy/test_llm_api_pytorch.py::TestQwen3NextInstruct::test_bf16_4gpu[tp4ep4_cudagraph_overlap]
   - disaggregated/test_auto_scaling.py::test_service_discovery[etcd-round_robin]
   - disaggregated/test_auto_scaling.py::test_service_discovery[etcd-load_balancing]
   - disaggregated/test_auto_scaling.py::test_worker_restart[etcd-round_robin]

tests/integration/test_lists/test-db/l0_dgx_h200.yml

Lines changed: 1 addition & 0 deletions

@@ -33,6 +33,7 @@ l0_dgx_h200:
   - accuracy/test_disaggregated_serving.py::TestLlama3_1_8BInstruct::test_ctx_pp_gen_tp_asymmetric[MMLU-gen_tp=2-ctx_pp=4]
   - accuracy/test_disaggregated_serving.py::TestGPTOSS::test_auto_dtype[False]
   - accuracy/test_disaggregated_serving.py::TestGPTOSS::test_auto_dtype[True]
+  - accuracy/test_llm_api_pytorch.py::TestQwen3NextInstruct::test_bf16_4gpu[tp4ep4_cudagraph_overlap]
   - disaggregated/test_disaggregated.py::test_disaggregated_ctxtp2pp2_gentp2pp2[TinyLlama-1.1B-Chat-v1.0]
   - disaggregated/test_disaggregated.py::test_disaggregated_ctxpp4_genpp4[TinyLlama-1.1B-Chat-v1.0]
   - unittest/llmapi/test_llm_pytorch.py::test_nemotron_nas_lora

tests/integration/test_lists/test-db/l0_gb200_multi_gpus.yml

Lines changed: 5 additions & 0 deletions

@@ -43,6 +43,11 @@ l0_gb200_multi_gpus:
   - accuracy/test_llm_api_pytorch.py::TestGPTOSS::test_w4_4gpus[tp4-trtllm-auto]
   - accuracy/test_llm_api_pytorch.py::TestGPTOSS::test_w4_4gpus[dp4-cutlass-auto]
   - accuracy/test_llm_api_pytorch.py::TestGPTOSS::test_w4_4gpus_online_eplb[fp8]
+  - accuracy/test_llm_api_pytorch.py::TestQwen3NextInstruct::test_nvfp4[tp1-cutlass]
+  - accuracy/test_llm_api_pytorch.py::TestQwen3NextInstruct::test_nvfp4[tp4ep1-cutlass]
+  - accuracy/test_llm_api_pytorch.py::TestQwen3NextInstruct::test_nvfp4[tp4ep4-cutlass]
+  - accuracy/test_llm_api_pytorch.py::TestQwen3NextInstruct::test_nvfp4[no_cuda_graph_overlap-cutlass]
+  - accuracy/test_llm_api_pytorch.py::TestQwen3NextInstruct::test_nvfp4[tp4ep4-trtllm]
 - condition:
     ranges:
       system_gpu_count:

tests/integration/test_lists/test-db/l0_rtx_pro_6000.yml

Lines changed: 2 additions & 0 deletions

@@ -43,6 +43,8 @@ l0_rtx_pro_6000:
   - accuracy/test_llm_api_pytorch.py::TestGPTOSS::test_w4_1gpu[True-True-cutlass-auto]
   - accuracy/test_llm_api_pytorch.py::TestPhi4MM::test_fp4
   - accuracy/test_llm_api_pytorch.py::TestPhi4MM::test_fp8
+  - accuracy/test_llm_api_pytorch.py::TestQwen3NextInstruct::test_nvfp4[tp1-cutlass]
+  - accuracy/test_llm_api_pytorch.py::TestQwen3NextInstruct::test_nvfp4[tp4ep4-trtllm]

 - condition:
     ranges:
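Every test-db entry added in this commit is a pytest node ID: a file path, an optional test class, a test name, and a parametrize ID in brackets. A small sketch of splitting such an entry into its parts (the regex and helper below are illustrative, not part of the test infrastructure):

```python
# Hypothetical parser for test-db entries shaped like the ones above,
# e.g. "path/file.py::TestClass::test_name[param-id]".
import re

NODE_ID = re.compile(
    r"^(?P<file>[^:]+)::(?:(?P<cls>[^:\[]+)::)?(?P<test>[^\[]+)"
    r"(?:\[(?P<params>[^\]]+)\])?$"
)

def parse_node_id(entry):
    """Split a pytest node ID into file, class, test, and parametrize ID."""
    m = NODE_ID.match(entry)
    if not m:
        raise ValueError(f"not a pytest node ID: {entry}")
    return m.groupdict()

parts = parse_node_id(
    "accuracy/test_llm_api_pytorch.py::TestQwen3NextInstruct"
    "::test_nvfp4[tp4ep4-trtllm]"
)
print(parts["cls"], parts["test"], parts["params"])
# TestQwen3NextInstruct test_nvfp4 tp4ep4-trtllm
```

The bracketed IDs (tp1, tp4ep1, tp4ep4, no_cuda_graph_overlap; cutlass vs. trtllm) are how the same test_nvfp4 body is fanned out across parallelism layouts and MoE backends on each GPU platform's list.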
