This repo ports RadialAttention to ComfyUI native workflows. If you're using kijai's ComfyUI-WanVideoWrapper rather than native workflows, use its WanVideoSetRadialAttention node instead of this repo; you still need to install the pip packages below.
Supported models: Wan 2.1 14B, Wan 2.2 14B, Wan 2.2 5B, Wan 2.2 Animate, and HuMo, in both T2V and I2V modes.
Note that RadialAttention gives no speedup when generating only a single-frame image.
- Install SpargeAttention
- git clone ComfyUI-RadialAttn into your `ComfyUI/custom_nodes/`
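The two steps above might look like this in a shell. The repository URLs and the build-from-source step for SpargeAttention are assumptions; follow each project's own README if it differs:

```shell
# Install SpargeAttention (assumed to build from source; URL is an assumption)
git clone https://github.com/thu-ml/SpargeAttn.git
cd SpargeAttn
pip install -e .
cd ..

# Clone this repo into ComfyUI's custom_nodes directory (URL is an assumption)
cd ComfyUI/custom_nodes/
git clone https://github.com/woct0rdho/ComfyUI-RadialAttn.git
```

Restart ComfyUI after installing so the new nodes are registered.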
It's also recommended to install SageAttention and pass --use-sage-attention when starting ComfyUI. Wherever RadialAttention is not applicable, SageAttention will be used as the fallback.
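For example, a launch might look like this (the pip package name and the path to ComfyUI's main.py are assumptions; adapt them to your setup):

```shell
# Install SageAttention (package name is an assumption; check its README)
pip install sageattention

# Start ComfyUI with SageAttention as the default attention backend
python main.py --use-sage-attention
```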
Just connect your model to the PatchRadialAttn node. An example workflow is included for Wan 2.2 14B I2V + GGUF + LightX2V LoRA + RadialAttention + torch.compile.
Disabling RadialAttention on the first layer (dense_block = 1), the first timestep (dense_timestep = 1), and the last timestep (last_dense_timestep = 1) is believed to improve quality.
Don't enable torch.compile blindly. Start by disabling the TorchCompileModel node and running the workflow; only once you're sure the workflow runs correctly but isn't fast enough should you enable TorchCompileModel.