project_root/
│
├─ main.py # entry script (calls the training code)
├─ custom_dataloaders.py # defines `mydataloader`, `get_train_transforms`
├─ utils.py # defines `to_tensor`, `bin_dice`, `SegmentationLoss`
│
└─ data_interpolated/ # your NIfTI data
├─ patient_001_img.nii
├─ patient_001_mask.nii
├─ patient_002_img.nii
├─ patient_002_mask.nii
└─ ...
Use Python 3.9+ (3.10 is fine).

You can use either conda or plain venv. Example with conda:

```bash
conda create -n monai_seg python=3.10
conda activate monai_seg
```

Then install the dependencies:

```bash
# Core (adjust CUDA version if needed)
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu124

# MONAI + NIfTI
pip install "monai[nibabel]"

# NumPy
pip install numpy
```

If `pip install "monai[nibabel]"` fails, you can do:

```bash
pip install monai nibabel
```

Summary of required packages:
- torch (PyTorch)
- monai
- numpy
- nibabel (if you load NIfTI in `custom_dataloaders`)
- (Optional) `tqdm` or others for nicer progress, but not required by this script
GPU is optional:
- If you have CUDA installed and a GPU, PyTorch will use it automatically (`device = "cuda"`).
- If not, it will fall back to CPU.
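
The fallback described above is the standard PyTorch device-selection pattern, roughly:

```python
import torch

# Prefer the GPU when CUDA is available, otherwise run on CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

# Any model is moved the same way (a toy layer here, not the real UNet):
model = torch.nn.Linear(4, 2).to(device)
```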
From the project_root folder (where main.py lives), run:
```bash
python main.py --mode_type monai_unet
```

If you would like to use data augmentation:

```bash
python main.py --mode_type monai_unet --augmentation
```

By default, it will:
- Look for data in `./data_interpolated`
- Train for 100 epochs
- Use batch size 2
- Use MONAI 3D UNet with `in_channels = 1`, `out_channels = 9`
- Save checkpoints and logs under `./checkpoint/monai_unet/`
You can override arguments from the command line. For example:

```bash
python main.py \
    --img_path ./data_interpolated \
    --checkpoint ./checkpoints_demo \
    --epoch 50 \
    --batch_size 1 \
    --lr 0.0005 \
    --augmentation
```
- `--img_path`: Folder containing your `*_img.nii` and `*_mask.nii` pairs.
- `--checkpoint`: Where to save `best_model.pth` and `loss_log.txt`.
- `--epoch`: Number of training epochs.
- `--batch_size`: Batch size (adjust based on GPU memory).
- `--lr`: Learning rate.
- `--augmentation`: Turn ON data augmentation (MONAI transforms).
- `--iters_per_epoch`: Max number of training batches per epoch.
- `--test_batches`: Number of batches for validation.
- `--save_epoch`: Save the best model every N epochs (if mean Dice improved).
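
If you need to extend the CLI, the flags above map onto an `argparse` definition along these lines. This is a sketch, not the actual parser in `main.py`: the `--epoch` and `--batch_size` defaults mirror the documented behavior, while the defaults for `--lr`, `--iters_per_epoch`, `--test_batches`, and `--save_epoch` are placeholders.

```python
import argparse

def build_parser():
    p = argparse.ArgumentParser(description="Train a MONAI 3D UNet on NIfTI pairs")
    p.add_argument("--mode_type", default="monai_unet")
    p.add_argument("--img_path", default="./data_interpolated")
    p.add_argument("--checkpoint", default="./checkpoint/monai_unet/")
    p.add_argument("--epoch", type=int, default=100)        # documented default
    p.add_argument("--batch_size", type=int, default=2)     # documented default
    p.add_argument("--lr", type=float, default=1e-3)        # placeholder default
    p.add_argument("--augmentation", action="store_true")   # off unless passed
    p.add_argument("--iters_per_epoch", type=int, default=None)  # placeholder
    p.add_argument("--test_batches", type=int, default=None)     # placeholder
    p.add_argument("--save_epoch", type=int, default=1)          # placeholder
    return p

# Example: parse a subset of flags, leaving the rest at their defaults.
args = build_parser().parse_args(["--epoch", "50", "--augmentation"])
```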