An evolutionary algorithm-based training system for YOLO object detection models. This project automatically optimises both model architecture and training parameters to achieve the best trade-off between accuracy and inference speed.
- Evolutionary optimisation of YOLO model architecture and hyperparameters
- Hardware-aware training optimised for both NVIDIA (CUDA) and AMD (ROCm) GPUs
- Automatic resource management to prevent VRAM exhaustion
- Comprehensive checkpointing system for resuming training
- Population-based approach with configurable selection, crossover, and mutation
- Built on top of the TrainingAutomation framework for YOLO model training
- Python 3.8 or higher
- PyTorch 1.7 or higher
- GPU with CUDA or ROCm support (optional but recommended)
- Clone the repository:

        git clone git@github.com:SethBennett2523/EvolutionaryTrainingManager.git
        cd EvolutionaryTrainingManager

- Install dependencies:

        pip install -r requirements.txt

- Update the configuration in `config.yaml` to match your environment.
Run the training manager with:

    python main.py --config config.yaml

Command-line options:

- `--resume`: Resume training from the last checkpoint
- `--output-dir OUTPUT_DIR`: Specify the output directory
- `--generations N`: Set the maximum number of generations
- `--population N`: Set the population size
- `--device {cuda,rocm,cpu,auto}`: Specify the device to use
- `--debug`: Enable debug logging
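The options above map naturally onto Python's standard `argparse` module. The sketch below is illustrative only, not the actual contents of `main.py`; the defaults shown are assumptions:

```python
import argparse

def build_parser():
    # Mirrors the CLI options listed above; defaults are assumptions.
    p = argparse.ArgumentParser(description="Evolutionary YOLO training manager")
    p.add_argument("--config", default="config.yaml", help="Path to the configuration file")
    p.add_argument("--resume", action="store_true", help="Resume from the last checkpoint")
    p.add_argument("--output-dir", dest="output_dir", help="Output directory")
    p.add_argument("--generations", type=int, help="Maximum number of generations")
    p.add_argument("--population", type=int, help="Population size")
    p.add_argument("--device", choices=["cuda", "rocm", "cpu", "auto"], default="auto",
                   help="Device to use")
    p.add_argument("--debug", action="store_true", help="Enable debug logging")
    return p

# Example invocation, parsed from an explicit argument list:
args = build_parser().parse_args(["--population", "12", "--device", "rocm", "--resume"])
```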
The main configuration file (config.yaml) includes settings for:
- Evolution parameters (population size, mutation rate, etc.)
- Model parameters (base model type)
- Hardware settings (device, memory threshold)
- Hyperparameter ranges for evolution
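To make the four setting groups concrete, a `config.yaml` with this shape would cover them. The key names and values below are illustrative assumptions, not the project's actual schema:

```yaml
# Illustrative shape only; consult the shipped config.yaml for the real keys.
evolution:
  population_size: 10
  generations: 20
  mutation_rate: 0.2
model:
  base_model: yolov8n        # assumed base model identifier
hardware:
  device: auto               # cuda | rocm | cpu | auto
  memory_threshold: 0.9      # fraction of VRAM to use before scaling back
hyperparameters:
  lr: [0.0001, 0.01]         # [min, max] range the evolution samples from
  momentum: [0.8, 0.99]
```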
- The algorithm initialises a population of YOLO models with random hyperparameters and architecture settings
- Each model is trained on the dataset and evaluated for accuracy (mAP) and speed (inference time)
- The fittest models are selected as parents; crossover and mutation then produce the next generation, and the cycle repeats until the configured generation limit is reached
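The evolutionary loop can be sketched in a few lines of plain Python. Everything here is a stand-in: the hyperparameter ranges are invented, and `fitness` is a toy proxy for the real train-and-evaluate step (which would train a YOLO model and score its mAP against its inference time):

```python
import random

# Hypothetical hyperparameter ranges; the real ones live in config.yaml.
RANGES = {"lr": (1e-4, 1e-2), "momentum": (0.8, 0.99), "depth_mult": (0.33, 1.0)}

def random_genome():
    """Sample one candidate's hyperparameters uniformly from the ranges."""
    return {k: random.uniform(lo, hi) for k, (lo, hi) in RANGES.items()}

def fitness(genome):
    """Toy stand-in for train-and-evaluate: reward accuracy, penalise latency."""
    map50 = 1.0 - abs(genome["lr"] - 3e-3) * 100   # toy accuracy proxy
    latency = genome["depth_mult"] * 10.0          # toy inference time proxy (ms)
    return map50 - 0.01 * latency                  # accuracy/speed trade-off

def crossover(a, b):
    """Uniform crossover: each gene is taken from one parent at random."""
    return {k: random.choice((a[k], b[k])) for k in a}

def mutate(genome, rate=0.2):
    """Resample each gene within its range with probability `rate`."""
    return {k: (random.uniform(*RANGES[k]) if random.random() < rate else v)
            for k, v in genome.items()}

def evolve(pop_size=8, generations=5):
    pop = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)        # selection: keep the fittest half
        parents = pop[: pop_size // 2]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

The real system replaces `fitness` with full training runs, which is why checkpointing and VRAM management matter: a single generation can take hours of GPU time.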