First of all, thanks for the great work that's already been done!
My experience running the benchmark script `scripts/bench_llm.py` differs from what the master README implies; see the results below.
The run also took far longer than 15 minutes (about 1.5 hours). It looks like the GPU is idle most of the time during the benchmarks (although the model's memory stays allocated), with only light work being processed on the CPU.
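To back up the "GPU mostly idle" observation, here is a minimal sketch of how utilization can be sampled from a second terminal while the benchmark runs. It assumes `nvidia-smi` is on the PATH inside WSL (it was in my setup); `gpu_util_samples` is just an illustrative helper, not part of the repo.

```python
import shutil
import subprocess
import time

def gpu_util_samples(n=3, interval_s=1.0):
    """Sample GPU utilization (%) n times via nvidia-smi.

    Returns a list of integer percentages, or [] if nvidia-smi
    is not available on this machine.
    """
    if shutil.which("nvidia-smi") is None:
        return []  # no NVIDIA driver tools visible from this environment
    samples = []
    for i in range(n):
        out = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=utilization.gpu",
             "--format=csv,noheader,nounits"],
            text=True,
        )
        samples.append(int(out.strip().splitlines()[0]))
        if i < n - 1:
            time.sleep(interval_s)
    return samples

if __name__ == "__main__":
    print(gpu_util_samples(n=5))
```

During my runs, the sampled utilization stayed low for long stretches even though the model's memory remained allocated.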
Hardware:
Nvidia RTX 3090
AMD 9800X3D
64 GB DDR5
Software:
Windows 11; NVIDIA-SMI 590.57, Driver Version: 591.86, CUDA Version: 13.1
WSL 2, Ubuntu 20.04
CUDA compilation tools, release 12.4, V12.4.131
Any idea what could be the issue?
```
[bench] ==== HumanEval (n=10) ====
[01/10] n_tok= 84 AR= 78.85 DFlash= 121.90 AL= 8.53
[02/10] n_tok= 138 AR= 68.65 DFlash= 117.67 AL= 8.00
[03/10] n_tok= 134 AR=109.38 DFlash= 116.37 AL= 8.00
[04/10] n_tok= 120 AR= 96.39 DFlash= 116.16 AL= 8.26
[05/10] n_tok= 172 AR= 77.54 DFlash= 126.24 AL= 8.83
[06/10] n_tok= 118 AR= 55.74 DFlash= 91.89 AL= 6.24
[07/10] n_tok= 51 AR= 57.20 DFlash= 104.43 AL= 6.92
[08/10] n_tok= 141 AR= 69.42 DFlash= 105.30 AL= 6.92
[09/10] n_tok= 125 AR= 74.19 DFlash= 120.03 AL= 8.26
[10/10] n_tok= 95 AR= 44.79 DFlash= 83.09 AL= 5.57
HumanEval mean: AR=73.21 DFlash=110.31 AL=7.55 1.51x
[bench] ==== GSM8K (n=10) ====
[01/10] n_tok= 45 AR= 48.03 DFlash= 110.71 AL= 7.53
[02/10] n_tok= 111 AR= 54.80 DFlash= 83.57 AL= 5.82
[03/10] n_tok= 49 AR= 61.76 DFlash= 75.50 AL= 5.22
[04/10] n_tok= 70 AR= 56.97 DFlash= 84.73 AL= 5.69
[05/10] n_tok= 102 AR= 52.72 DFlash= 77.22 AL= 5.22
[06/10] n_tok= 118 AR= 58.10 DFlash= 93.15 AL= 6.40
[07/10] n_tok= 113 AR= 39.98 DFlash= 70.05 AL= 5.04
[08/10] n_tok= 50 AR= 58.00 DFlash= 108.16 AL= 7.31
[09/10] n_tok= 43 AR= 56.33 DFlash= 76.73 AL= 5.45
[10/10] n_tok= 96 AR= 51.01 DFlash= 101.21 AL= 7.31
GSM8K mean: AR=53.77 DFlash=88.10 AL=6.10 1.64x
[bench] ==== Math500 (n=10) ====
[01/10] n_tok= 257 AR= 45.28 DFlash= 66.33 AL= 4.57
[02/10] n_tok= 53 AR= 88.35 DFlash= 132.47 AL= 9.14
[03/10] n_tok= 40 AR= 72.42 DFlash= 108.07 AL= 7.76
[04/10] n_tok= 50 AR= 61.64 DFlash= 104.34 AL= 6.92
[05/10] n_tok= 117 AR= 75.65 DFlash= 113.57 AL= 7.76
[06/10] n_tok= 76 AR= 61.95 DFlash= 89.59 AL= 5.95
[07/10] n_tok= 43 AR= 74.30 DFlash= 122.81 AL= 8.53
[08/10] n_tok= 79 AR= 56.98 DFlash= 88.43 AL= 5.95
[09/10] n_tok= 52 AR= 67.48 DFlash= 96.15 AL= 6.40
[10/10] n_tok= 57 AR= 62.19 DFlash= 109.76 AL= 7.31
Math500 mean: AR=66.62 DFlash=103.15 AL=7.03 1.55x
[bench] === SUMMARY ===
Task AR DFlash AL Speedup
HumanEval 73.21 110.31 7.55 1.51x
GSM8K 53.77 88.10 6.10 1.64x
Math500 66.62 103.15 7.03 1.55x
```