A collection of step-by-step playbooks for setting up AI/ML workloads on NVIDIA DGX Spark devices, which are built on the NVIDIA Blackwell architecture.
These playbooks provide detailed instructions for:
- Installing and configuring popular AI frameworks
- Running inference with optimized models
- Setting up development environments
- Connecting and managing your DGX Spark device
Each playbook includes prerequisites, step-by-step instructions, troubleshooting guidance, and example code.
Available playbooks:
- ComfyUI
- Set Up Local Network Access
- Connect Two Sparks
- CUDA-X Data Science
- DGX Dashboard
- FLUX.1 Dreambooth LoRA Fine-tuning
- Optimized JAX
- LLaMA Factory
- Build and Deploy a Multi-Agent Chatbot
- Multi-modal Inference
- NCCL for Two Sparks
- Fine-tune with NeMo
- NIM on Spark
- NVFP4 Quantization
- Ollama
- Open WebUI with Ollama
- Fine-tune with PyTorch
- RAG Application in AI Workbench
- SGLang Inference Server
- Speculative Decoding
- Set up Tailscale on Your Spark
- TRT LLM for Inference
- Text to Knowledge Graph
- Unsloth on DGX Spark
- Vibe Coding in VS Code
- Install and Use vLLM for Inference
- VS Code
- Build a Video Search and Summarization (VSS) Agent
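Several of the inference playbooks above (Ollama, vLLM, SGLang, TRT LLM) end with a model served behind an OpenAI-compatible HTTP API. As a rough illustration of what those playbooks lead to, here is a minimal client sketch using only the Python standard library; the endpoint URL and model name are placeholders you would replace with whatever the specific playbook's server exposes:

```python
import json
from urllib.request import Request, urlopen

# Placeholder endpoint: vLLM's default is port 8000, but check the
# playbook you followed for the actual host, port, and model name.
BASE_URL = "http://localhost:8000/v1/chat/completions"

def build_chat_request(model: str, prompt: str) -> Request:
    """Build (but do not send) an OpenAI-style chat completion request."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 64,
    }
    return Request(
        BASE_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = build_chat_request("my-local-model", "Hello from DGX Spark!")
```

Once a server from one of the playbooks is running, `urlopen(req)` sends the request and returns the JSON completion.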
Additional resources:
- Documentation: https://www.nvidia.com/en-us/products/workstations/dgx-spark/
- Developer Forum: https://forums.developer.nvidia.com/c/accelerated-computing/dgx-spark-gb10
- Terms of Service: https://assets.ngc.nvidia.com/products/api-catalog/legal/NVIDIA%20API%20Trial%20Terms%20of%20Service.pdf
See:
- LICENSE for licensing information.
- LICENSE-3rd-party for third-party licensing information.
