vLLM for gfx906
===================
This is a modified version of vLLM that works with (and only with) AMD gfx906
GPUs such as the Radeon VII / Radeon Pro VII / Instinct MI50 / Instinct MI60.
This fork does two things:
1. Makes vLLM run on gfx906 GPUs.
2. Optimizes the quantization GEMV kernels for gfx906 GPUs.
RUN WITH DOCKER
-------------------
Please install ROCm 6.3 first; only the kernel-mode driver is required. Refer to
AMD's official documentation.
```
docker pull nalanzeyu/vllm-gfx906
docker run -it --rm --shm-size=2g --device=/dev/kfd --device=/dev/dri \
--group-add video -p 8000:8000 -v <YOUR_MODEL_PATH>:/model \
nalanzeyu/vllm-gfx906 vllm serve /model
```
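Once the container is up, `vllm serve` exposes vLLM's OpenAI-compatible API on
port 8000. A minimal smoke test, assuming the defaults above (the served model
name defaults to the path `/model`):
```
# Query the OpenAI-compatible completions endpoint exposed by `vllm serve`.
curl http://localhost:8000/v1/completions \
    -H "Content-Type: application/json" \
    -d '{"model": "/model", "prompt": "Hello, my name is", "max_tokens": 32}'
```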
NOTES
-------------------
GPTQ and AWQ are the first-choice quantization formats.
GGUF is functional; q4_1 or q8_0 quants are recommended (see the sketch below).
K-quants (e.g. q4_K / q6_K) should work. I-quants (e.g. IQ4 / IQ3) have not been
tested yet.
Quantized MoE models are not expected to work.
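For GGUF, upstream vLLM generally expects the path to a single `.gguf` file
rather than a model directory. A minimal sketch, with an illustrative file name:
```
# Serve a single-file GGUF quant (the file name below is only an example).
docker run -it --rm --shm-size=2g --device=/dev/kfd --device=/dev/dri \
    --group-add video -p 8000:8000 -v <YOUR_MODEL_PATH>:/model \
    nalanzeyu/vllm-gfx906 vllm serve /model/model-q4_1.gguf
```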
NEWS
-------------------
2025-07-08:
Update vLLM to 0.9.2
Starting with this version, the V1 engine is the default. Startup takes longer
than with V0.
Automatic Prefix Caching is off by default due to performance issues that still
need investigation.
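As a possible workaround (upstream vLLM switches, not verified on this fork),
you can fall back to the V0 engine or explicitly re-enable prefix caching:
```
# Fall back to the V0 engine (upstream vLLM environment variable; untested here).
VLLM_USE_V1=0 vllm serve /model

# Or re-enable Automatic Prefix Caching on V1 (may hit the performance issue above).
vllm serve /model --enable-prefix-caching
```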
2025-06-10:
I made some optimizations to the GPTQ and AWQ kernels. Single-batch requests are
now about 5% faster, and batch sizes between 8 and 32 are about 30%-50% faster.
Update vLLM to 0.9.1
2025-05-27:
Added support for AWQ quantization on gfx906 without Triton. It uses the same
kernel as GPTQ, so AWQ performance should be on par with GPTQ.
Update vLLM to 0.9.0
2025-05-02:
Update vLLM to 0.8.5
Upstream vLLM 0.8.5 has many issues on the ROCm platform that have already been
fixed in its main branch; I cherry-picked those fixes.
I also fixed the garbled-output issue for GPTQ desc_act=True models.
2025-04-29:
I have fixed the GGUF batched-request performance issue. It's now usable, but
still not as fast as GPTQ.
I also added some autotune configs to `triton_flash_attention.py` by increasing
`num_stages`.
2025-04-28:
Update ROCm to 6.3
Update Torch to 2.7.0
Update Triton to 3.3.0
2025-04-22:
I have fixed the GPTQ Int4/Int8 GEMV kernel by changing the dot-product
accumulator type from FP16 to FP32 to avoid overflow. Thanks to the fdot2
intrinsic introduced with Vega 12/20, using FP32 accumulators remains fast while
guaranteeing no overflow.
2025-04-21:
Update vLLM to 0.8.4
2025-04-20:
I changed the reconstruct threshold in the GPTQ GEMM kernel. This is a temporary
fix for Qwen2 GPTQ models outputting an endless stream of "!!!!!!!!!!!!!!!!!!!".
2025-04-19:
I attempted to optimize AWQ by adding `@triton.autotune` to `triton_awq.py`.
This improved performance by about 50%, but it's still very slow on gfx906 GPUs.
2025-04-01:
Optimized the GEMV kernel for GGUF q4_1 and q8_0 quantization, achieving a
10%-20% performance improvement.
BUILD
-------------------
Please install ROCm 6.3 first. You need to install both the kernel-mode driver
and the ROCm packages. Refer to AMD's official documentation.
You MUST install triton-gfx906 v3.3.0+gfx906 first; see:
https://github.com/nlzy/triton-gfx906/tree/v3.3.0+gfx906
```
cd vllm-gfx906
python3 -m venv vllmenv
source vllmenv/bin/activate
pip3 install 'torch==2.7' torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm6.3
pip3 install -r requirements/rocm-build.txt
pip3 install -r requirements/rocm.txt
pip3 install --no-build-isolation .
```
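As an optional sanity check (not part of the official build steps), confirm that
ROCm sees the gfx906 GPU and that the freshly built package imports:
```
# Confirm the gfx906 GPU is visible to ROCm.
rocminfo | grep gfx906

# Confirm the freshly built vLLM imports and print its version.
python3 -c "import vllm; print(vllm.__version__)"
```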
CREDITS
-------------------
https://github.com/Said-Akbar/vllm-rocm