This project implements a self-learning Snake Game AI using Deep Q-Learning, a reinforcement learning technique. The AI learns to play the game through trial and error by interacting with the environment, receiving rewards, and improving over time.
- Train an agent using Deep Q-Learning (DQN)
- Real-time game environment using Pygame
- Neural network powered by PyTorch
- Save/load models (`model.pth`)
- Fully customizable for further research or improvements
- Tracks performance metrics (score, mean score, record)
- State Representation: an 11-dimensional vector encoding:
  - Danger straight ahead / to the right / to the left
  - Current direction
  - Food location relative to the snake
- Action Space: `[straight, right, left]` turns relative to the current direction
- Reward Mechanism:
  - +10 for eating food
  - -10 for dying
  - Small penalty each frame to encourage faster solutions
- Neural Network (a minimal sketch follows this list):
  - Input: 11-dimensional state
  - Output: Q-values for each possible action
  - Loss: mean squared error
  - Optimizer: Adam
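The actual architecture lives in `model.py`; as a rough, hedged sketch (the hidden size, class name, helper names, discount factor, and learning rate below are assumptions, not the project's code), a network and a single DQN update step matching the bullets above could look like this:

```python
import torch
import torch.nn as nn
import torch.optim as optim

class QNet(nn.Module):
    """Hypothetical Q-network: 11-dim state in, one Q-value per action (3 actions) out."""
    def __init__(self, state_dim=11, hidden_dim=256, n_actions=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, n_actions),
        )

    def forward(self, x):
        return self.net(x)

def train_step(model, optimizer, state, action, reward, next_state, done, gamma=0.9):
    """One Q-learning update: pull Q(state, action) toward reward + gamma * max_a Q(next_state, a)."""
    q_pred = model(state)                        # predicted Q-values for the current state
    target = q_pred.clone().detach()
    with torch.no_grad():
        q_next = model(next_state).max().item()  # best predicted value of the next state
    # Bellman target; drop the future term when the episode ended on this step
    target[action] = reward if done else reward + gamma * q_next
    loss = nn.functional.mse_loss(q_pred, target)  # mean squared error loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage sketch: model = QNet(); optimizer = optim.Adam(model.parameters(), lr=1e-3)
```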
.
├── main.py # Training loop
├── model.py # Neural network and saving logic
├── agent.py # Reinforcement learning agent
├── snake_game.py # Pygame-based snake game environment
├── model/ # Saved models (auto-created)
│ └── model.pth
└── README.md
git clone https://github.com/ahmedyar7/Snake-Game-AI.git
cd Snake-Game-AI
pip install pygame torch numpy matplotlib
python main.py
Modify `main.py` to render the game in real time (turn off plotting if needed).
The training loop uses matplotlib to plot the score over time:
- 📊 Score per game
- 📉 Mean score
- 🏅 High score record
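The plotting details live in the training code; a minimal illustrative helper (the function and argument names here are assumptions) that redraws these three curves after each game might look like:

```python
import matplotlib.pyplot as plt

def plot_progress(scores, mean_scores):
    """Illustrative helper: redraw score, mean score, and record after each game."""
    plt.clf()
    plt.title("Training progress")
    plt.xlabel("Game")
    plt.ylabel("Score")
    plt.plot(scores, label="Score per game")
    plt.plot(mean_scores, label="Mean score")
    if scores:
        plt.axhline(max(scores), linestyle="--", label="Record")
    plt.legend(loc="upper left")
    plt.pause(0.1)  # short pause so the figure refreshes during training
```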
Tip: You can stop training early and it will save the best model automatically.
Trained models are saved to `./model/model.pth`.
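The project's own saving logic is in `model.py`; a minimal sketch of what such a save typically involves (the helper name below is illustrative, not the project's API):

```python
import os
import torch

def save_model(model, folder="./model", filename="model.pth"):
    """Illustrative save helper: create the folder if needed, then store the state dict."""
    os.makedirs(folder, exist_ok=True)  # the "auto-created" folder from the tree above
    torch.save(model.state_dict(), os.path.join(folder, filename))
```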
To load a saved model, add logic in `agent.py` or `main.py` to load the state dict, for example:
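This sketch reuses the placeholder `QNet` class from earlier; import whatever class `model.py` actually defines and match its constructor:

```python
import torch
from model import QNet  # placeholder name: use the real class exported by model.py

model = QNet()                                # constructor arguments must match the saved network
state_dict = torch.load("./model/model.pth")  # path from the section above
model.load_state_dict(state_dict)
model.eval()                                  # inference mode: play without further training
```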