A powerful Django-based web application that uses the Qwen3-VL AI model to detect and recognize license plates in images, with advanced OCR capabilities.
Quick Start • Demo • Documentation • API Reference • Docker
Try the live demo of Open LPR at: https://rest-openlpr.computedsynergy.com/
Experience the license plate recognition system in action without any installation required!
- AI-Powered Detection: Uses the qwen3-vl-4b-instruct vision-language model for accurate license plate recognition
- Advanced OCR Integration: Extracts text from detected license plates with confidence scores
- Bounding Box Visualization: Draws colored boxes around detected plates and OCR text
- Drag & Drop Upload: Modern, user-friendly file upload interface
- Permanent Storage: All uploaded and processed images are saved permanently
- Side-by-Side Comparison: View original and processed images together
- Search & Filter: Browse and search through processing history
- Responsive Design: Works on desktop, tablet, and mobile devices
- Docker Support: Easy deployment with Docker and Docker Compose
- REST API: Full API for programmatic access
The quickest way to get started is with Docker using one of the LlamaCpp compose files, which include everything needed for local inference without requiring any external API endpoints.
For users with AMD GPUs that support Vulkan:
# Clone the repository
git clone https://github.com/faisalthaheem/open-lpr.git
cd open-lpr
# Create environment file from template
cp .env.llamacpp.example .env.llamacpp
# Edit the environment file with your settings
nano .env.llamacpp
# Create necessary directories
mkdir -p model_files model_files_cache container-data container-media staticfiles
# Start the application with AMD Vulkan GPU support
docker-compose -f docker-compose-llamacpp-amd-vulcan.yml up -d
# Check the logs to ensure everything is running correctly
docker-compose -f docker-compose-llamacpp-amd-vulcan.yml logs -f

For users without compatible GPUs or for testing purposes:
# Clone the repository
git clone https://github.com/faisalthaheem/open-lpr.git
cd open-lpr
# Create environment file from template
cp .env.llamacpp.example .env.llamacpp
# Edit the environment file with your settings
nano .env.llamacpp
# Create necessary directories
mkdir -p model_files model_files_cache container-data container-media staticfiles
# Start the application with CPU support
docker-compose -f docker-compose-llamacpp-cpu.yml up -d
# Check the logs to ensure everything is running correctly
docker-compose -f docker-compose-llamacpp-cpu.yml logs -f

For users who want to use an external OpenAI-compatible API endpoint:
# Clone the repository
git clone https://github.com/faisalthaheem/open-lpr.git
cd open-lpr
# Create environment file from template
cp .env.example .env
# Edit the environment file with your API settings
nano .env
# Create necessary directories
mkdir -p container-data container-media staticfiles
# Start the application
docker-compose up -d
# Check the logs to ensure everything is running correctly
docker-compose logs -f

This project provides multiple Docker Compose files for different deployment scenarios:
- docker-compose-llamacpp-amd-vulcan.yml
  - Purpose: Full local deployment with AMD GPU acceleration using Vulkan
  - Services: OpenLPR + LlamaCpp server + optional Nginx
  - Prerequisites:
    - AMD GPU with Vulkan support
    - ROCm drivers installed
    - Sufficient GPU memory (8GB+ recommended)
  - Performance: Fastest inference with GPU acceleration
  - Use Case: Production deployment with AMD hardware
- docker-compose-llamacpp-cpu.yml
  - Purpose: Full local deployment using CPU for inference
  - Services: OpenLPR + LlamaCpp server + optional Nginx
  - Prerequisites:
    - Sufficient RAM (16GB+ recommended)
    - Multi-core CPU for better performance
  - Performance: Slower but universal compatibility
  - Use Case: Testing, development, or hardware without GPU support
- docker-compose.yml
  - Purpose: OpenLPR deployment with an external API endpoint
  - Services: OpenLPR only
  - Prerequisites:
    - Access to an OpenAI-compatible API endpoint
    - Valid API credentials
  - Performance: Depends on external API
  - Use Case: When using cloud-based AI services or existing inference infrastructure
For development or custom deployments:
1. Prerequisites
   - Python 3.8+
   - pip package manager
   - Qwen3-VL API access
2. Clone the repository
   git clone https://github.com/faisalthaheem/open-lpr.git
   cd open-lpr
3. Create a virtual environment
   python -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate
4. Install dependencies
   pip install -r requirements.txt
5. Configure environment variables
   cp .env.example .env
   # Edit .env with your settings
6. Set up the database
   python manage.py makemigrations
   python manage.py migrate
7. Create a superuser (optional)
   python manage.py createsuperuser
8. Run the development server
   python manage.py runserver
9. Access the application
   Open http://127.0.0.1:8000 in your browser
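Once the server is running, a quick smoke test is to hit the health endpoint (listed with the other endpoints in the API section below). A minimal sketch with requests; the response body shape may vary by version, so only the status code is checked:

```python
# Quick smoke test against the running dev server (sketch; assumes the
# /health/ endpoint listed in the API section below).
import requests

resp = requests.get("http://127.0.0.1:8000/health/", timeout=5)
print(resp.status_code)  # expect 200 when the app is up
print(resp.text)         # response body shape may vary by version
```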
For local development (running Django directly):
Create a .env file based on .env.example:
# Django Settings
SECRET_KEY=your-secret-key-here
DEBUG=True
ALLOWED_HOSTS=localhost,127.0.0.1
# Qwen3-VL API Configuration
QWEN_API_KEY=your-qwen-api-key
QWEN_BASE_URL=https://your-open-api-compatible-endpoint.com/v1
QWEN_MODEL=qwen3-vl-4b-instruct
# File Upload Settings
UPLOAD_FILE_MAX_SIZE=10485760 # 10MB
MAX_BATCH_SIZE=10

For local LlamaCpp inference deployment:
Create a .env.llamacpp file based on .env.llamacpp.example:
# HuggingFace Token
HF_TOKEN=hf_your_huggingface_token_here
# Model Configuration
MODEL_REPO=unsloth/Qwen3-VL-4B-Instruct-GGUF
MODEL_FILE=Qwen3-VL-4B-Instruct-Q5_K_M.gguf
MMPROJ_URL=https://huggingface.co/unsloth/Qwen3-VL-4B-Instruct-GGUF/resolve/main/mmproj-BF16.gguf
# Django Settings
SECRET_KEY=your-secret-key-here
DEBUG=False
ALLOWED_HOSTS=localhost,127.0.0.1,0.0.0.0
# File Upload Settings
UPLOAD_FILE_MAX_SIZE=10485760 # 10MB
MAX_BATCH_SIZE=10
# Database Configuration
DATABASE_PATH=/app/data/db.sqlite3
# Optional: Superuser creation
DJANGO_SUPERUSER_USERNAME=admin
DJANGO_SUPERUSER_EMAIL=[email protected]
DJANGO_SUPERUSER_PASSWORD=your-secure-password

For detailed LlamaCpp deployment instructions, see README-llamacpp.md.
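Whichever configuration you use, the application ultimately talks to an OpenAI-compatible chat-completions endpoint (the bundled LlamaCpp server exposes one locally). A minimal sketch of such a call with the openai Python package and the QWEN_* variables above — illustrative only; the project's actual client lives in lpr_app/services/qwen_client.py:

```python
# Illustrative call to an OpenAI-compatible vision endpoint using the
# QWEN_* variables above (sketch; not the project's actual client code).
import base64
import os

from openai import OpenAI  # pip install openai

client = OpenAI(
    api_key=os.environ["QWEN_API_KEY"],
    base_url=os.environ["QWEN_BASE_URL"],
)

with open("license_plate.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model=os.environ.get("QWEN_MODEL", "qwen3-vl-4b-instruct"),
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Locate all license plates and read their text."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```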
- Drag & Drop: Simply drag an image file onto the upload area
- Click to Browse: Click the upload area to select a file
- File Validation:
- Supported formats: JPEG, PNG, BMP
- Maximum size: 10MB
- Processing: Click "Analyze License Plates" to start detection
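The same format and size limits apply to uploads sent through the REST API, so it can help to mirror them client-side before POSTing. A minimal sketch using the limits above (the helper name is illustrative):

```python
# Client-side pre-check mirroring the documented upload limits
# (JPEG/PNG/BMP, max 10MB); validate_upload is a hypothetical helper.
from pathlib import Path

ALLOWED_SUFFIXES = {".jpg", ".jpeg", ".png", ".bmp"}
MAX_SIZE_BYTES = 10 * 1024 * 1024  # matches UPLOAD_FILE_MAX_SIZE=10485760

def validate_upload(path: str) -> None:
    p = Path(path)
    if p.suffix.lower() not in ALLOWED_SUFFIXES:
        raise ValueError(f"Unsupported format: {p.suffix}")
    if p.stat().st_size > MAX_SIZE_BYTES:
        raise ValueError("File exceeds the 10MB upload limit")

validate_upload("license_plate.jpg")
```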
After processing, you'll see:
- Detection Summary: Number of plates and OCR texts found
- Image Comparison: Side-by-side view of original and processed images
- Detection Details:
- License plate coordinates and confidence
- OCR text results with confidence scores
- Bounding box coordinates for all detections
- Download Options: Download both original and processed images
Access the "History" page to:
- Search: Filter by filename
- Date Range: Filter by upload date
- Status Filter: View by processing status
- Pagination: Navigate through large numbers of uploads
- GET / - Home page with upload form
- POST /upload/ - Upload and process image
- GET /result/<int:image_id>/ - View processing results for a specific image
- GET /images/ - Browse image history with search and filtering
- GET /image/<int:image_id>/ - View detailed information about a specific image
- POST /progress/ - Check processing status (AJAX endpoint)
- GET /download/<int:image_id>/<str:image_type>/ - Download original or processed images
- GET /health/ - API health check endpoint
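For example, once an image has been uploaded you can fetch the processed output programmatically. A minimal sketch with requests, assuming "original" and "processed" are the valid image_type values (inferred from the endpoint description above, not confirmed):

```python
# Download the processed version of an uploaded image (sketch; the
# image_type values "original"/"processed" are inferred, not confirmed).
import requests

image_id = 123  # hypothetical ID from a previous upload
url = f"http://localhost:8000/download/{image_id}/processed/"

resp = requests.get(url, timeout=10)
resp.raise_for_status()
with open(f"processed_{image_id}.jpg", "wb") as f:
    f.write(resp.content)
```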
- POST /api/v1/ocr/ - Upload an image and receive OCR results synchronously
The LPR REST API returns JSON in this format:
{
  "success": true,
  "image_id": 123,
  "filename": "example.jpg",
  "processing_time_ms": 2450,
  "results": {
    "detections": [
      {
        "plate_id": "plate1",
        "plate": {
          "confidence": 0.85,
          "coordinates": {
            "x1": 100,
            "y1": 200,
            "x2": 250,
            "y2": 250
          }
        },
        "ocr": [
          {
            "text": "ABC123",
            "confidence": 0.92,
            "coordinates": {
              "x1": 105,
              "y1": 210,
              "x2": 245,
              "y2": 240
            }
          }
        ]
      }
    ]
  },
  "summary": {
    "total_plates": 1,
    "total_ocr_texts": 1
  },
  "processing_timestamp": "2023-12-07T15:30:45.123456"
}

If processing fails, the API returns an error response instead:

{
  "success": false,
  "error": "No image file provided",
  "error_code": "MISSING_IMAGE"
}

Example Python client:

import requests
# API endpoint
url = "http://localhost:8000/api/v1/ocr/"
# Image file to upload
image_path = "license_plate.jpg"
# Read and upload the image
with open(image_path, 'rb') as f:
    files = {'image': f}
    response = requests.post(url, files=files)

# Check response
if response.status_code == 200:
    result = response.json()
    if result['success']:
        print(f"Found {result['summary']['total_plates']} license plates")
        for detection in result['results']['detections']:
            for ocr in detection['ocr']:
                print(f"License plate text: {ocr['text']} (confidence: {ocr['confidence']:.2f})")
    else:
        print(f"Processing failed: {result['error']}")
else:
    print(f"HTTP Error: {response.status_code}")
    print(response.text)

# Upload image and get OCR results
curl -X POST \
  -F "image=@license_plate.jpg" \
  http://localhost:8000/api/v1/ocr/

The project includes automated Docker image building and publishing to GitHub Container Registry (ghcr.io).
The Docker image is automatically built and published to GitHub Container Registry when code is pushed to the main branch or when tags are created.
# Pull the latest image
docker pull ghcr.io/faisalthaheem/open-lpr:latest
# Pull a specific version
docker pull ghcr.io/faisalthaheem/open-lpr:v1.0.0

This project provides multiple Docker Compose files for different deployment scenarios. For detailed deployment instructions, see the Quick Start section and the Docker Deployment Guide.
# For AMD GPU with Vulkan support
docker-compose -f docker-compose-llamacpp-amd-vulcan.yml up -d
# For CPU-only deployment
docker-compose -f docker-compose-llamacpp-cpu.yml up -d
# For external API endpoint
docker-compose up -d

For comprehensive deployment instructions, including production configurations, see DOCKER_DEPLOYMENT.md.
The project includes a GitHub Actions workflow (.github/workflows/docker-publish.yml) that:
- Triggers on:
  - Push to main/master branch
  - Creation of version tags (v*)
  - Pull requests to main/master
- Builds the Docker image for multiple architectures:
  - linux/amd64
  - linux/arm64
- Publishes to GitHub Container Registry with tags:
  - Branch name (e.g., main)
  - Semantic version tags (e.g., v1.0.0, v1.0, v1)
  - latest tag for the main branch
- Generates an SBOM (Software Bill of Materials) for security scanning
open-lpr/
├── manage.py                      # Django management script
├── requirements.txt               # Python dependencies
├── .env.example                   # Environment variables template
├── .env                           # Environment variables (create from .env.example)
├── .env.llamacpp.example          # LlamaCpp environment variables template
├── .env.llamacpp                  # LlamaCpp environment variables (create from .env.llamacpp.example)
├── .gitignore                     # Git ignore file
├── .dockerignore                  # Docker ignore file
├── API_DOCUMENTATION.md           # Detailed REST API documentation
├── README_API.md                  # REST API implementation summary
├── README-llamacpp.md             # LlamaCpp deployment guide
├── DOCKER_DEPLOYMENT.md           # Docker deployment guide
├── test_api.py                    # API testing script
├── test_setup.py                  # Test setup utilities
├── test-llamacpp-integration.py   # LlamaCpp integration test script
├── docker-compose.yml             # Standard Docker Compose configuration
├── docker-compose-llamacpp-cpu.yml        # CPU-based LlamaCpp Docker Compose
├── docker-compose-llamacpp-amd-vulcan.yml # AMD Vulkan GPU LlamaCpp Docker Compose
├── docker-entrypoint.sh           # Docker entrypoint script
├── Dockerfile                     # Docker image definition
├── start-llamacpp-cpu.sh          # LlamaCpp CPU startup script
├── lpr_project/                   # Django project settings
│   ├── __init__.py
│   ├── settings.py                # Django configuration
│   ├── urls.py                    # Project URL patterns
│   └── wsgi.py                    # WSGI configuration
├── lpr_app/                       # Main application
│   ├── __init__.py
│   ├── admin.py                   # Django admin configuration
│   ├── apps.py                    # Django app configuration
│   ├── models.py                  # Database models
│   ├── views.py                   # View functions and API endpoints
│   ├── urls.py                    # App URL patterns
│   ├── forms.py                   # Django forms
│   ├── services/                  # Business logic
│   │   ├── __init__.py
│   │   ├── qwen_client.py         # Qwen3-VL API client
│   │   ├── image_processor.py     # Image processing utilities
│   │   └── bbox_visualizer.py     # Bounding box visualization
│   ├── management/                # Django management commands
│   │   ├── __init__.py
│   │   └── commands/
│   │       ├── __init__.py
│   │       └── setup_project.py
│   ├── static/                    # Static files
│   └── migrations/                # Database migrations
│       ├── __init__.py
│       └── 0001_initial.py
├── media/                         # Uploaded images
│   ├── uploads/                   # Original images
│   └── processed/                 # Processed images
├── container-data/                # Docker container data persistence
├── container-media/               # Docker container media persistence
├── staticfiles/                   # Collected static files
├── templates/                     # HTML templates
│   ├── base.html                  # Base template
│   └── lpr_app/                   # App-specific templates
│       ├── base.html
│       ├── image_detail.html
│       ├── image_list.html
│       ├── results.html
│       └── upload.html
├── docs/                          # Documentation
│   ├── LLAMACPP_RESOURCES.md      # LlamaCpp and ROCm resources
│   ├── open-lpr-index.png
│   ├── open-lpr-detection-result.png
│   ├── open-lpr-detection-details.png
│   └── open-lpr-processed-image.png
├── nginx/                         # Nginx configuration
│   └── nginx.conf                 # Nginx reverse proxy configuration
├── logs/                          # Application logs
└── .github/                       # GitHub workflows
    └── workflows/                 # CI/CD configurations
Use the provided test script to verify API functionality:
# Test with default image locations
python test_api.py
# Test with specific image
python test_api.py /path/to/your/image.jpg

# Run Django tests
python manage.py test
# Run with coverage
pip install coverage
coverage run --source='.' manage.py test
coverage report

# Create new migrations
python manage.py makemigrations lpr_app
# Apply migrations
python manage.py migrate

# Collect static files for production
python manage.py collectstatic --noinput

- Set DEBUG=False in .env
- Configure ALLOWED_HOSTS with your domain
- Set up a production database (PostgreSQL recommended)
- Configure static file serving (nginx/AWS S3)
- Set up media file serving (nginx/AWS S3)
- Use HTTPS with an SSL certificate
- Development: SQLite database, DEBUG=True
- Staging: PostgreSQL, DEBUG=False, limited hosts
- Production: PostgreSQL, DEBUG=False, HTTPS required
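Staging and production both call for PostgreSQL. A minimal sketch of the corresponding DATABASES block for lpr_project/settings.py, assuming psycopg2-binary is installed; the environment variable names here are illustrative, not the project's actual settings:

```python
# Illustrative PostgreSQL configuration for settings.py (sketch only;
# requires psycopg2-binary; POSTGRES_* variable names are hypothetical).
import os

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": os.environ.get("POSTGRES_DB", "openlpr"),
        "USER": os.environ.get("POSTGRES_USER", "openlpr"),
        "PASSWORD": os.environ["POSTGRES_PASSWORD"],
        "HOST": os.environ.get("POSTGRES_HOST", "localhost"),
        "PORT": os.environ.get("POSTGRES_PORT", "5432"),
    }
}
```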
- API Connection Failed
  - Check QWEN_API_KEY in .env
  - Verify QWEN_BASE_URL is accessible
  - Check network connectivity
- Image Upload Failed
  - Verify file format (JPEG/PNG/BMP only)
  - Check file size (< 10MB)
  - Ensure media directory permissions
- Processing Errors
  - Check Django logs: tail -f django.log
  - Verify API response format
  - Check image processing dependencies
- Static Files Not Loading
  - Run python manage.py collectstatic
  - Check STATIC_URL in settings
  - Verify web server static file configuration
Application logs are written to:
- Development: Console and django.log
- Production: Configured logging destination

Log levels:
- INFO: General application flow
- ERROR: API failures and processing errors
- DEBUG: Detailed debugging information
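A Django LOGGING setting matching this scheme might look like the following sketch (illustrative; the project's actual configuration may differ):

```python
# Sketch of a Django LOGGING setting that writes to the console and django.log
# (illustrative only; not the project's exact configuration).
LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "handlers": {
        "console": {"class": "logging.StreamHandler"},
        "file": {
            "class": "logging.FileHandler",
            "filename": "django.log",
        },
    },
    "root": {
        "handlers": ["console", "file"],
        "level": "INFO",  # raise to DEBUG for detailed debugging output
    },
}
```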
We welcome contributions! Please follow these guidelines:
1. Fork the repository
2. Create a feature branch (git checkout -b feature/amazing-feature)
3. Make your changes
4. Add tests if applicable
5. Ensure all tests pass (python manage.py test)
6. Commit your changes (git commit -m 'Add some amazing feature')
7. Push to the branch (git push origin feature/amazing-feature)
8. Open a Pull Request
- Follow PEP 8 for Python code
- Use meaningful variable and function names
- Add docstrings to functions and classes
- Keep commits small and focused
When reporting issues, please include:
- Detailed description of the problem
- Steps to reproduce
- Expected vs. actual behavior
- Environment details (OS, Python version, etc.)
- Relevant logs or error messages
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
For issues and questions:
- Check the troubleshooting section
- Review application logs
- Create an issue with detailed information
- Include error messages and steps to reproduce
- Qwen3-VL for the powerful vision-language model
- Django for the robust web framework
- Bootstrap for the responsive UI components
- All contributors who help improve this project
For specialized deployment scenarios and additional resources:
- LlamaCpp and ROCm Resources - Important URLs for local LlamaCpp deployment
- README-llamacpp.md - Local inference with LlamaCpp server
- Docker Deployment Guide - Comprehensive Docker deployment instructions
- API Documentation - Complete REST API reference
Made with ❤️ by the Open LPR Team