yanwenCi/OrganSegmentation

Project structure

project_root/
│
├─ main.py                    # entry script (calls the training code)
├─ custom_dataloaders.py      # defines `mydataloader`, `get_train_transforms`
├─ utils.py                   # defines `to_tensor`, `bin_dice`, `SegmentationLoss`
│
└─ data_interpolated/         # your NIfTI data
    ├─ patient_001_img.nii
    ├─ patient_001_mask.nii
    ├─ patient_002_img.nii
    ├─ patient_002_mask.nii
    └─ ...
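The actual pairing of images and masks happens inside custom_dataloaders.py; as a minimal sketch of the naming convention above, a hypothetical helper (`pair_cases` is not part of the repo) could match each `*_img.nii` file with its `*_mask.nii` counterpart like this:

```python
from pathlib import Path

def pair_cases(data_dir):
    """Pair each *_img.nii file with its *_mask.nii counterpart."""
    pairs = []
    for img in sorted(Path(data_dir).glob("*_img.nii")):
        mask = img.with_name(img.name.replace("_img.nii", "_mask.nii"))
        if mask.exists():  # skip images that have no matching mask
            pairs.append((img, mask))
    return pairs
```

Cases without a matching mask file are silently skipped here; the real loader may instead raise an error.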

Environment

Use Python 3.9+ (3.10 is fine).

You can use either conda or plain venv. Example with conda:

conda create -n monai_seg python=3.10
conda activate monai_seg

Then install the dependencies:

# Core (adjust CUDA version if needed)
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu124

# MONAI + NIfTI
pip install "monai[nibabel]"

# NumPy
pip install numpy

If installing monai[nibabel] fails, install the two packages separately:

pip install monai nibabel

Summary of required packages:

  • torch (PyTorch)
  • monai
  • numpy
  • nibabel (if you load NIfTI in custom_dataloaders)
  • tqdm (optional, for nicer progress bars; not required by this script)
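To verify the installation, a small sanity check (not part of the repo) can report which of the required packages are missing. It uses importlib.util.find_spec, so it runs even when the packages are not installed:

```python
import importlib.util

REQUIRED = ["torch", "monai", "numpy", "nibabel"]

def missing_packages(names=REQUIRED):
    """Return the subset of names that cannot be imported."""
    return [n for n in names if importlib.util.find_spec(n) is None]

if __name__ == "__main__":
    missing = missing_packages()
    if missing:
        print("Missing packages:", ", ".join(missing))
    else:
        print("All required packages found.")
```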

GPU is optional:

  • If you have CUDA installed and a GPU, PyTorch will use it automatically (device = "cuda").
  • If not, it will fall back to CPU.
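The device selection described above presumably boils down to a check like the following sketch (the extra ImportError branch is added here so the snippet also runs where PyTorch is absent; the actual script likely assumes torch is installed):

```python
def pick_device():
    """Prefer CUDA when PyTorch sees a GPU; otherwise fall back to CPU."""
    try:
        import torch
        return "cuda" if torch.cuda.is_available() else "cpu"
    except ImportError:
        # PyTorch not installed: training cannot run, but default to CPU.
        return "cpu"
```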

How to run

From the project_root folder (where main.py lives), run:

python main.py --mode_type monai_unet

If you would like to use data augmentation:

python main.py --mode_type monai_unet --augmentation

By default, it will:

  • Look for data in ./data_interpolated
  • Train for 100 epochs
  • Use batch size 2
  • Use MONAI 3D UNet with in_channels = 1, out_channels = 9
  • Save checkpoints and logs under ./checkpoint/monai_unet/
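Putting the flag names and defaults from this README together, the command-line interface can be sketched with argparse as below. Only the flags and defaults stated in this document are grounded; the learning-rate default is a placeholder, and the real parser in main.py may differ:

```python
import argparse

def build_parser():
    """CLI sketch matching the flags and defaults described in this README."""
    p = argparse.ArgumentParser(description="MONAI 3D UNet organ segmentation")
    p.add_argument("--mode_type", default="monai_unet")
    p.add_argument("--img_path", default="./data_interpolated")
    p.add_argument("--checkpoint", default="./checkpoint/monai_unet/")
    p.add_argument("--epoch", type=int, default=100)
    p.add_argument("--batch_size", type=int, default=2)
    p.add_argument("--lr", type=float, default=1e-3)  # placeholder default
    p.add_argument("--augmentation", action="store_true")
    return p
```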

Custom arguments

You can override arguments from the command line. For example:

python main.py \
  --img_path ./data_interpolated \
  --checkpoint ./checkpoints_demo \
  --epoch 50 \
  --batch_size 1 \
  --lr 0.0005 \
  --augmentation

Important flags

  • --img_path: Folder containing your *_img.nii and *_mask.nii pairs.
  • --checkpoint: Where to save best_model.pth and loss_log.txt.
  • --epoch: Number of training epochs.
  • --batch_size: Batch size (adjust based on GPU memory).
  • --lr: Learning rate.
  • --augmentation: Turn on data augmentation (MONAI transforms).
  • --iters_per_epoch: Maximum number of training batches per epoch.
  • --test_batches: Number of batches used for validation.
  • --save_epoch: Check every N epochs and save the best model if the mean Dice improved.
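For intuition on the Dice score used to select the best model, here is a pure-Python sketch of a binary Dice coefficient in the spirit of the `bin_dice` helper that utils.py defines (the real implementation presumably operates on PyTorch tensors; the epsilon guards against division by zero on empty masks):

```python
def bin_dice(pred, target, eps=1e-6):
    """Dice coefficient for two flat binary masks (sequences of 0/1)."""
    intersection = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return (2.0 * intersection + eps) / (total + eps)
```

A perfect overlap yields 1.0; no overlap yields (approximately) 0.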
