A lightweight, efficient tool for managing AI model storage. By combining a centralized vault with logical mapping, it solves common pain points: redundant model downloads, root-partition overflow, and complex Docker volume configurations.
The script initializes your storage based on the following professional hierarchy:
```
/mnt/models (Vault Root)
├── base/
│   ├── llm/        # Language Models (Llama 3, Qwen, etc.)
│   ├── diffusion/  # Image Models (SDXL, Flux, etc.)
│   └── vlm/        # Vision Language Models
├── lora/           # LoRA / Adapter weights
├── tensorrt/       # Optimized FP4/FP8 engines for NVIDIA NIM
└── cache/          # Unified HuggingFace/ModelScope cache
```
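The hierarchy above can be sketched with plain `mkdir -p`. This is only an illustration of the structure, not the script itself; `VAULT` is a stand-in root so the sketch can run anywhere (the real script targets `/mnt/models`), and the symlink path is hypothetical:

```shell
# Illustrative sketch: recreate the vault hierarchy with mkdir -p.
# VAULT is a demo root so this runs without touching /mnt.
VAULT="${VAULT:-/tmp/models-vault-demo}"

mkdir -p \
  "$VAULT"/base/llm \
  "$VAULT"/base/diffusion \
  "$VAULT"/base/vlm \
  "$VAULT"/lora \
  "$VAULT"/tensorrt \
  "$VAULT"/cache

# The "logical mapping" idea: tools keep their expected path while the
# data lives in the vault, e.g. via a symlink (target path is a demo).
ln -sfn "$VAULT/cache" /tmp/hf-cache-demo
```

Because everything under the vault is plain directories and symlinks, moving the vault to a bigger disk later only requires updating the links.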
Clone or download `imaphelper.sh` and make it executable:

```shell
chmod +x imaphelper.sh
```
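Once the vault exists, the unified cache can be wired up through the standard HuggingFace and ModelScope environment variables. The exact subdirectory names below are an assumption for illustration, not something the script mandates:

```shell
# Route HuggingFace and ModelScope downloads into the unified vault cache.
# HF_HOME and MODELSCOPE_CACHE are the libraries' standard variables;
# the chosen subdirectories are assumptions for this example.
export HF_HOME=/mnt/models/cache/huggingface
export MODELSCOPE_CACHE=/mnt/models/cache/modelscope
```

Putting these exports in your shell profile (or a container's environment) means every tool on the machine shares one download cache instead of each keeping its own copy.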