Conversation

@hidefromkgb
Contributor

@hidefromkgb hidefromkgb commented Nov 19, 2025

This PR is intended to test whether clipping B, M, and N helps reduce the overall Matmul LLM CI time.
Addresses MFDNN-14287.
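For illustration only, clipping a matmul problem's dimensions could look like the minimal sketch below. The `clamp_dims` helper and the 512 cap are hypothetical placeholders, not values taken from this PR:

```python
# Hypothetical sketch: cap the matmul problem dimensions B, M, N so that
# oversized LLM-derived test cases run faster in CI, leaving K untouched.
CAP = 512  # assumed cap; the actual limits used by the PR are not shown here

def clamp_dims(b: int, m: int, n: int, cap: int = CAP) -> tuple[int, int, int]:
    """Clip each of B, M, N to at most `cap`."""
    return (min(b, cap), min(m, cap), min(n, cap))

print(clamp_dims(1, 2048, 4096))  # batch stays, large M/N are clipped
```

The idea is that runtime scales with the product of the dimensions, so capping the largest ones shrinks CI time while still exercising the same kernels.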

@hidefromkgb hidefromkgb requested a review from a team as a code owner November 19, 2025 21:44
@github-actions github-actions bot added the component:tests Codeowner: @oneapi-src/onednn-arch label Nov 19, 2025
@hidefromkgb hidefromkgb force-pushed the aguskov/matmul_llm_clamp branch from 6cde1ec to 4a5ac56 Compare November 20, 2025 19:27
@hidefromkgb hidefromkgb force-pushed the aguskov/matmul_llm_clamp branch from 4a5ac56 to c59e081 Compare November 21, 2025 00:09
@hidefromkgb
Contributor Author

make test
set test_scope=NIGHTLY
disable test_device_cpu
disable benchdnn_all
enable benchdnn_matmul
enable arch_gpu_xe-hpc
enable arch_gpu_xe-hpg-atsm
enable arch_gpu_xe-hpg-dg2
enable arch_gpu_xe-lp
enable arch_gpu_xe-lpg
enable arch_gpu_xe-lpg+
enable arch_gpu_xe2-hpg-bmg
enable arch_gpu_xe2-lpg
enable arch_gpu_xe3-lpg

@kealan-barbieri
Contributor

would it be possible to maintain a complete set for optional use outside nightly runs?

@hidefromkgb
Contributor Author

> would it be possible to maintain a complete set for optional use outside nightly runs?

option_set_fwks_llm_gpu is not an independent set, as it's generated from KPIs/gpu/common/LLM/*.log files using gen_benchdnn.sh (see onednn-perf-report) — which is why I don't believe it's worth keeping the original set.

