Building fuser project
1. Install PyTorch with CUDA support (either build it from source or install it via a pip wheel).
2. Clone the Fuser code to your local machine: `git clone --recursive https://github.com/NVIDIA/Fuser.git`
3. Install the required pip modules: `pip install -r requirements.txt`
4. Build NvFuser with `pip install -v --no-build-isolation python` (see the sketch after this list)
   - Use the `-e` or `--editable` flag for an editable install.
   - Use environment flags starting with `NVFUSER_BUILD_` to configure `pip install`. See Comprehensive List of NVFUSER_BUILD environment variables.
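Putting those steps together, here is a minimal sketch of a standalone build from a clean checkout, assuming PyTorch with CUDA support is already installed. The commented `NVFUSER_BUILD_<OPTION>` line is only a placeholder for whichever build variable you want to set; see the linked list for the real names.

```bash
# Clone the repository together with its submodules
git clone --recursive https://github.com/NVIDIA/Fuser.git
cd Fuser

# Install the pip modules the build requires
pip install -r requirements.txt

# Build and install NvFuser
pip install -v --no-build-isolation python

# Alternatively, do an editable (develop-mode) install
# pip install -v --no-build-isolation -e python

# Build options are configured through NVFUSER_BUILD_* environment variables,
# e.g. (placeholder name, see the linked list for actual variables):
# NVFUSER_BUILD_<OPTION>=<value> pip install -v --no-build-isolation python
```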
After finishing the standalone build steps above, you can run `python setup.py bdist_wheel` to build a pip wheel, which can be distributed and used on top of a pip-installed PyTorch wheel package.
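For example (the `dist/` output directory is the setuptools default, and the `import nvfuser` smoke test is an assumption about the installed package name, not something this page states):

```bash
# Build a distributable wheel
python setup.py bdist_wheel

# Install the wheel on top of an existing pip-installed PyTorch
pip install dist/*.whl

# Basic smoke test of the installed package
python -c "import nvfuser"
```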
A few notes:
- Build against upstream pip package - if you need to work with the upstream PyTorch distribution from https://pytorch.org/, you need to make sure your nvfuser is built against a PyTorch library with the same CXX ABI flag; otherwise you will see undefined symbols. The safest bet is simply to build against the upstream pip package directly!
- Specify additional pip packages - currently the upstream PyTorch pip package can run on a system with no CUDA installation, and there is a complicated story about how we link against libnvrtc. Long story short, you need to declare the proper nvrtc package as a required dependency. This can be done by specifying `-install_requires=...` in the setup.py script, i.e. `python setup.py bdist_wheel --no-test --no-benchmark -install_requires=nvidia-cuda-nvrtc-cu12` (note that tests & benchmarks are also skipped during the build, since those are not packaged into the pip wheel either).
- Patch nvfuser binary for pip installation - after the nvfuser pip package is installed on your local machine, we need to swap the nvfuser library targets with what upstream PyTorch ships. Simply running `patch-nvfuser` (which is defined as a wheel entry_point) is enough. A sketch of the whole flow follows these notes.
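Putting the notes above together, a rough end-to-end sketch for building against the upstream PyTorch pip wheel could look like the following. The ABI check via `torch.compiled_with_cxx11_abi()` and the `dist/` wheel path are assumptions of this sketch rather than something stated on this page.

```bash
# Confirm which CXX ABI the installed upstream PyTorch was built with,
# so the nvfuser build matches it and avoids undefined-symbol errors
python -c "import torch; print(torch.compiled_with_cxx11_abi())"

# Build the wheel: skip tests/benchmarks (they are not packaged anyway) and
# declare the nvrtc runtime package as a dependency for CUDA-less systems
python setup.py bdist_wheel --no-test --no-benchmark -install_requires=nvidia-cuda-nvrtc-cu12

# Install the resulting wheel
pip install dist/*.whl

# Swap the nvfuser library targets with what upstream PyTorch ships;
# patch-nvfuser is an entry point defined by the wheel
patch-nvfuser
```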