Fast RL training with Quantized Rollouts (Blog)
What is FlashRL? • Quick Start • Usage Guide • Examples • Road Map • Citation
FlashRL patches the inference backend to generate RL rollouts in INT8 or FP8, with accurate rollout logprobs.
Figure 1. Left: AIME accuracy of Qwen2.5-32B DAPO training with INT8 and BF16 rollout-generation precision using the vLLM engine. Right: Training throughput (updates per hour) in DAPO training (vLLM + BF16 FSDP).
pip install flash-llm-rl  # must be installed on all nodes for multi-node training
(Optional) There are two ways to verify the FlashRL installation: 1) set `FLASHRL_LOGGING_LEVEL` to `DEBUG` and compare the logs with the provided ones; 2) for more details / debugging, follow the Tutorial.
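For instance, option 1 can be run as below (a minimal sketch using the logging variables documented later in this README; any FlashRL-patched run works):
export FLASHRL_LOGGING_LEVEL=DEBUG                 # verbose FlashRL logging
export FLASHRL_LOGGING_FILE=$HOME/flashrl.log      # optionally persist the log for comparison
bash verl/examples/ppo_trainer/run_qwen2.5-32b.sh  # any FlashRL-patched training script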
FlashRL is implemented in a plug-and-play manner, using the environment variable `FLASHRL_CONFIG` to control the quantization precision. Note that, due to quantization overhead, quantized rollouts are only recommended for the combination of a large model (i.e., 14B+, preferably 32B+) and long CoT generation (e.g., DAPO training rather than GSM8K training).
# for single-node job
export FLASHRL_CONFIG=fp8
bash verl/examples/ppo_trainer/run_qwen2.5-32b.sh
# alternatively, for multi-node jobs via `ray submit`, fp8 online quantization will be turned on via
# > echo " FLASHRL_CONFIG: 'fp8'" | tee -a verl/trainer/runtime_env.yaml # add `FLASHRL_CONFIG: 'fp8'` to runtime env
# > bash verl/recipe/dapo/run_dapo_qwen2.5_32b.sh # this can be any script
Setting the config to `bf16` extracts the precise logprobs used in sampling, without rollout quantization. This is useful for applying Truncated Importance Sampling (TIS).
# for single-node job
export FLASHRL_CONFIG=bf16
bash verl/examples/ppo_trainer/run_qwen2.5-32b.sh
# alternatively, for multi-node jobs via `ray submit`, the logprob-only patch will be turned on via
# > echo " FLASHRL_CONFIG: 'bf16'" | tee -a verl/trainer/runtime_env.yaml # add `FLASHRL_CONFIG: 'bf16'` to runtime env
# > bash verl/recipe/dapo/run_dapo_qwen2.5_32b.sh # this can be any script
FlashRL has three major functionalities: profiling, a configure helper, and a patcher.
This step is not needed for the native `fp8` online quantization supported by vLLM or for the logprob-only `bf16` path; it is only needed for `int8` or `fp8_channel` quantization. Specifically, profiling compares a `bf16` model with a quantized model to decide how online quantization should be performed for an updated model. Below are examples for `Qwen/Qwen2.5-32B` and `Qwen/Qwen2.5-0.5B-Instruct`. The quantized model can be any `w8a8`/`fp8` model produced by llm-compressor. Note that Red Hat AI provides various quantized models, which can be used here as in the 0.5B-Instruct example.
# for `Qwen/Qwen2.5-32B`
flashrl profile -m Qwen/Qwen2.5-32B -qm LiyuanLucasLiu/Qwen2.5-32B-quantized.w8a8 -o ${PROFILE_PATH:-"$HOME/profile.32b.pt"} --fn int8
# for `Qwen/Qwen2.5-0.5B-Instruct`
flashrl profile -m Qwen/Qwen2.5-0.5B-Instruct -qm RedHatAI/Qwen2.5-0.5B-Instruct-quantized.w8a8 -o ${PROFILE_PATH:-"$HOME/profile.0_5b.pt"} --fn int8
This step is not needed for the native `fp8` online quantization supported by vLLM or for the logprob-only `bf16` path; it is only needed for `int8` or `fp8_channel` quantization. Specifically, the configure helper creates a yaml file for the patcher to use. Below are examples for `Qwen/Qwen2.5-32B` and `Qwen/Qwen2.5-0.5B-Instruct`.
# for `Qwen/Qwen2.5-32B`
flashrl setup -m LiyuanLucasLiu/Qwen2.5-32B-quantized.w8a8 -p $HOME/profile.32b.pt --fn int8 -o ${CONFIG_PATH:-"$HOME/.flashrl_config.32b.yaml"}
# for `Qwen/Qwen2.5-0.5B-Instruct`
flashrl setup -m RedHatAI/Qwen2.5-0.5B-Instruct-quantized.w8a8 -p $HOME/profile.0_5b.pt --fn int8 -o ${CONFIG_PATH:-"$HOME/.flashrl_config.0_5b.yaml"}
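Once `flashrl setup` has produced the config yaml, pointing `FLASHRL_CONFIG` at it enables the quantized rollout path (a minimal sketch; the 32B PPO script from the Quick Start is reused here for illustration):
export FLASHRL_CONFIG=${CONFIG_PATH:-"$HOME/.flashrl_config.32b.yaml"}  # yaml produced by `flashrl setup`
bash verl/examples/ppo_trainer/run_qwen2.5-32b.sh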
(Optional) To reduce the gap between rollout generation and gradient computation, FlashRL can conduct rollout generation in 16-bit and 8-bit precision in a hybrid manner across DP workers. Specifically, running the command below appends a second config to the existing config yaml.
flashrl setup -a --fn bf16 -o ${CONFIG_PATH:-"$HOME/.flashrl_config.0_5b.yaml"}
Then, when FlashRL loads the appended config yaml file, it forces vLLM to conduct `bf16` generation on half of the DP workers and 8-bit generation on the other half. Further appending two more `bf16` profiles shifts the ratio to `bf16` generation on 3/4 of the DP workers and 8-bit generation on 1/4, as sketched below.
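For example (a hedged sketch, assuming each `-a` invocation appends one more `bf16` entry to the same yaml):
flashrl setup -a --fn bf16 -o ${CONFIG_PATH:-"$HOME/.flashrl_config.0_5b.yaml"}  # 2 bf16 entries + 1 int8 entry
flashrl setup -a --fn bf16 -o ${CONFIG_PATH:-"$HOME/.flashrl_config.0_5b.yaml"}  # 3 bf16 entries + 1 int8 -> 3/4 bf16 workers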
The patcher checks these environment variables and operates accordingly. The supported environment variables are listed below.
| Environment Variable | Usage |
|---|---|
| `FLASHRL_CONFIG` | applies the patcher if configured; supports `bf16`, `fp8`, local profile paths (e.g., `$HOME/.flashrl_config.32b.yaml`), and uploaded profiles (e.g., `LiyuanLucasLiu/Qwen2.5-0.5B-Instruct-quantized.w8a8-RedHatAI/flashrl_config.yaml`) |
| `FLASHRL_LMHEAD_FP32` | if set to `1`, forces vLLM to conduct the LM-head compute in fp32 |
| `FLASHRL_LOGGING_LEVEL` | set to `DEBUG` to turn on verbose logging for FlashRL functions |
| `FLASHRL_LOGGING_FILE` | if set, also saves the log to a file |
| `FLASHRL_TEST_RELOAD` | functionality provided to test the FlashRL install; check this guide for more details |
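Putting these together, a typical invocation might look like the following (a hedged sketch; the uploaded profile and example script are taken from the tables in this README):
export FLASHRL_CONFIG=LiyuanLucasLiu/Qwen2.5-0.5B-Instruct-quantized.w8a8-RedHatAI/flashrl_config.yaml
export FLASHRL_LMHEAD_FP32=1                   # keep the LM-head compute in full precision
export FLASHRL_LOGGING_LEVEL=DEBUG             # verbose logging
export FLASHRL_LOGGING_FILE=$HOME/flashrl.log  # also write the log to a file
bash recipe/flash_rl/gsm8k_qwen0_5b_int8.sh flash-int8-TIS-2 2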
| Run Detail | Script | Command | Log |
|---|---|---|---|
| INT8 Rollout for Qwen2.5-0.5B-Instruct on GSM8K | Script | `bash recipe/flash_rl/gsm8k_qwen0_5b_int8.sh flash-int8-TIS-2 2` | Wandb Log |
| INT8 Rollout for Qwen2.5-32B-Instruct on DAPO | Script | `bash recipe/flash_rl/dapo_qwen32b_int8.sh flash-int8-TIS-8 8` | Wandb Log |
| FP8 Rollout for Qwen2.5-0.5B-Instruct on GSM8K | Script | `bash recipe/flash_rl/gsm8k_qwen0_5b_fp8.sh flash-fp8-TIS-2 2` | Wandb Log |
| FP8 Rollout for Qwen2.5-32B-Instruct on DAPO | Script | `bash recipe/flash_rl/dapo_qwen32b_fp8.sh flash-fp8-TIS-8 8` | In Progress |
Below are the environment combinations that we have tested.
| Image | CUDA | Ray | vLLM | verl | flash-rl | GSM8K 8bit example | DAPO INT8 example |
|---|---|---|---|---|---|---|---|
| `hiyouga/verl:ngc-th2.6.0-cu126-vllm0.8.3-flashinfer0.2.2-cxx11abi0` | 12.6 | 2.43.0 | 0.8.3 | flash-rl | 1.0.1 | ✅ Tested | ✅ Tested |
| `hiyouga/verl:ngc-th2.6.0-cu126-vllm0.8.4-flashinfer0.2.2-cxx11abi0` | 12.6 | 2.43.0 | 0.8.4 | flash-rl | 1.0.1 | ✅ Tested | |
| `hiyouga/verl:ngc-th2.7.0-cu12.6-vllm0.9.1` | 12.6 | 2.43.0 | 0.9.1 | flash-rl-vllm0.9.1 | 1.0.2 | ✅ Tested | |
| `hiyouga/verl:ngc-th2.7.1-cu12.6-vllm0.10.0` | 12.6 | 2.48.0 | 0.10.0 | flash-rl-vllm0.9.1 | 1.0.2 | ✅ Tested | |
We're working on several improvements to Flash-RL:
- Support for Other RL Toolkits: Currently Flash-RL only supports VeRL; we are working on rolling out support for other packages like OpenRLHF
- Support for Other LLM Inference Toolkits: Currently Flash-RL only supports vLLM; we are working on rolling out support for other toolkits like SGLang
- Further Throughput Optimization: We are working on implementing efficient GPU kernels to accelerate online quantization
If you find our work useful, please cite us:
@misc{yao2025offpolicy,
title = {Your Efficient RL Framework Secretly Brings You Off-Policy RL Training},
url = {https://fengyao.notion.site/off-policy-rl},
author = {Yao, Feng and Liu, Liyuan and Zhang, Dinghuai and Dong, Chengyu and Shang, Jingbo and Gao, Jianfeng},
journal = {Feng Yao's Notion},
year = {2025},
month = aug,
}
@misc{yao2025flashrl,
title = {FlashRL: 8Bit Rollouts, Full Power RL},
url = {https://fengyao.notion.site/flash-rl},
author = {Liu, Liyuan and Yao, Feng and Zhang, Dinghuai and Dong, Chengyu and Shang, Jingbo and Gao, Jianfeng},
journal = {Feng Yao's Notion},
year = {2025},
month = aug,
}
If you have any questions about the code or the blog, feel free to reach out to Liyuan Liu and Feng Yao.