
🚗 OPEN LPR - License Plate Recognition System


A Django-based web application that uses the Qwen3-VL vision-language model to detect and recognize license plates in images, with advanced OCR capabilities.

Quick Start • Demo • Documentation • API Reference • Docker

🚀 Live Demo

Try the live demo of Open LPR at: https://rest-openlpr.computedsynergy.com/

Experience the license plate recognition system in action without any installation required!

🌟 Visual Showcase

Main Interface

Open LPR Main Interface

Detection Results

Detection Results

Detection Details

Detection Details

Processed Image

Processed Image with Bounding Boxes

✨ Features

  • 🤖 AI-Powered Detection: Uses the qwen3-vl-4b-instruct vision-language model for accurate license plate recognition
  • 🔍 Advanced OCR Integration: Extracts text from detected license plates with confidence scores
  • 🎯 Bounding Box Visualization: Draws colored boxes around detected plates and OCR text
  • 📤 Drag & Drop Upload: Modern, user-friendly file upload interface
  • 💾 Permanent Storage: All uploaded and processed images are saved permanently
  • 🔄 Side-by-Side Comparison: View original and processed images together
  • 🔎 Search & Filter: Browse and search through processing history
  • 📱 Responsive Design: Works on desktop, tablet, and mobile devices
  • 🐳 Docker Support: Easy deployment with Docker and Docker Compose
  • 🔌 REST API: Full API for programmatic access

πŸ› οΈ Technology Stack

Backend AI Model Frontend Database Deployment
Django Qwen3-VL Bootstrap SQLite Docker
Python OpenAI API HTML5 PostgreSQL GitHub Actions

🚀 Quick Start

Docker Deployment (Recommended)

The quickest way to get started is with Docker using one of the LlamaCpp compose files, which include everything needed for local inference without requiring any external API endpoints.

Option 1: AMD Vulkan GPU Version (Fastest Local Inference)

For users with AMD GPUs that support Vulkan:

# Clone the repository
git clone https://github.com/faisalthaheem/open-lpr.git
cd open-lpr

# Create environment file from template
cp .env.llamacpp.example .env.llamacpp

# Edit the environment file with your settings
nano .env.llamacpp

# Create necessary directories
mkdir -p model_files model_files_cache container-data container-media staticfiles

# Start the application with AMD Vulkan GPU support
docker-compose -f docker-compose-llamacpp-amd-vulcan.yml up -d

# Check the logs to ensure everything is running correctly
docker-compose -f docker-compose-llamacpp-amd-vulcan.yml logs -f

Option 2: CPU Version (Universal Compatibility)

For users without compatible GPUs or for testing purposes:

# Clone the repository
git clone https://github.com/faisalthaheem/open-lpr.git
cd open-lpr

# Create environment file from template
cp .env.llamacpp.example .env.llamacpp

# Edit the environment file with your settings
nano .env.llamacpp

# Create necessary directories
mkdir -p model_files model_files_cache container-data container-media staticfiles

# Start the application with CPU support
docker-compose -f docker-compose-llamacpp-cpu.yml up -d

# Check the logs to ensure everything is running correctly
docker-compose -f docker-compose-llamacpp-cpu.yml logs -f

Option 3: Standard Docker (External API)

For users who want to use an external OpenAI-compatible API endpoint:

# Clone the repository
git clone https://github.com/faisalthaheem/open-lpr.git
cd open-lpr

# Create environment file from template
cp .env.example .env

# Edit the environment file with your API settings
nano .env

# Create necessary directories
mkdir -p container-data container-media staticfiles

# Start the application
docker-compose up -d

# Check the logs to ensure everything is running correctly
docker-compose logs -f

Docker Compose Files

This project provides multiple Docker Compose files for different deployment scenarios:

LlamaCpp Compose Files (Recommended for Quick Start)

  1. docker-compose-llamacpp-amd-vulcan.yml

    • Purpose: Full local deployment with AMD GPU acceleration using Vulkan
    • Services: OpenLPR + LlamaCpp server + optional Nginx
    • Prerequisites:
      • AMD GPU with Vulkan support
      • ROCm drivers installed
      • Sufficient GPU memory (8GB+ recommended)
    • Performance: Fastest inference with GPU acceleration
    • Use Case: Production deployment with AMD hardware
  2. docker-compose-llamacpp-cpu.yml

    • Purpose: Full local deployment using CPU for inference
    • Services: OpenLPR + LlamaCpp server + optional Nginx
    • Prerequisites:
      • Sufficient RAM (16GB+ recommended)
      • Multi-core CPU for better performance
    • Performance: Slower but universal compatibility
    • Use Case: Testing, development, or hardware without GPU support

Standard Compose File

  1. docker-compose.yml
    • Purpose: OpenLPR deployment with external API endpoint
    • Services: OpenLPR only
    • Prerequisites:
      • Access to an OpenAI-compatible API endpoint
      • Valid API credentials
    • Performance: Depends on external API
    • Use Case: When using cloud-based AI services or existing inference infrastructure

Manual Installation

For development or custom deployments:

  1. Prerequisites

    • Python 3.8+
    • pip package manager
    • Qwen3-VL API access
  2. Clone the repository

    git clone https://github.com/faisalthaheem/open-lpr.git
    cd open-lpr
  3. Create virtual environment

    python -m venv venv
    source venv/bin/activate  # On Windows: venv\Scripts\activate
  4. Install dependencies

    pip install -r requirements.txt
  5. Configure environment variables

    cp .env.example .env
    # Edit .env with your settings
  6. Set up database

    python manage.py makemigrations
    python manage.py migrate
  7. Create superuser (optional)

    python manage.py createsuperuser
  8. Run development server

    python manage.py runserver
  9. Access the application: open http://127.0.0.1:8000 in your browser

βš™οΈ Configuration

Development Environment

For local development (running Django directly):

Create a .env file based on .env.example:

# Django Settings
SECRET_KEY=your-secret-key-here
DEBUG=True
ALLOWED_HOSTS=localhost,127.0.0.1

# Qwen3-VL API Configuration
QWEN_API_KEY=your-qwen-api-key
QWEN_BASE_URL=https://your-open-api-compatible-endpoint.com/v1
QWEN_MODEL=qwen3-vl-4b-instruct

# File Upload Settings
UPLOAD_FILE_MAX_SIZE=10485760  # 10MB
MAX_BATCH_SIZE=10

Docker Environment with LlamaCpp

For local LlamaCpp inference deployment:

Create a .env.llamacpp file based on .env.llamacpp.example:

# HuggingFace Token
HF_TOKEN=hf_your_huggingface_token_here

# Model Configuration
MODEL_REPO=unsloth/Qwen3-VL-4B-Instruct-GGUF
MODEL_FILE=Qwen3-VL-4B-Instruct-Q5_K_M.gguf
MMPROJ_URL=https://huggingface.co/unsloth/Qwen3-VL-4B-Instruct-GGUF/resolve/main/mmproj-BF16.gguf

# Django Settings
SECRET_KEY=your-secret-key-here
DEBUG=False
ALLOWED_HOSTS=localhost,127.0.0.1,0.0.0.0

# File Upload Settings
UPLOAD_FILE_MAX_SIZE=10485760  # 10MB
MAX_BATCH_SIZE=10

# Database Configuration
DATABASE_PATH=/app/data/db.sqlite3

# Optional: Superuser creation
DJANGO_SUPERUSER_USERNAME=admin
DJANGO_SUPERUSER_EMAIL=admin@example.com
DJANGO_SUPERUSER_PASSWORD=your-secure-password

For detailed LlamaCpp deployment instructions, see README-llamacpp.md.

📖 Usage

Uploading Images

  1. Drag & Drop: Simply drag an image file onto the upload area
  2. Click to Browse: Click the upload area to select a file
  3. File Validation:
    • Supported formats: JPEG, PNG, BMP
    • Maximum size: 10MB
  4. Processing: Click "Analyze License Plates" to start detection
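The validation rules above (JPEG/PNG/BMP only, 10 MB limit) can also be checked client-side before an upload is attempted. A minimal sketch using only the Python standard library; `validate_upload` is an illustrative helper, not part of the project's code:

```python
import os

# Limits mirroring the documented upload rules
# (UPLOAD_FILE_MAX_SIZE=10485760, i.e. 10 * 1024 * 1024 bytes)
ALLOWED_EXTENSIONS = {".jpg", ".jpeg", ".png", ".bmp"}
MAX_UPLOAD_BYTES = 10 * 1024 * 1024

def validate_upload(path: str) -> tuple[bool, str]:
    """Check extension and size before sending a file to the server."""
    ext = os.path.splitext(path)[1].lower()
    if ext not in ALLOWED_EXTENSIONS:
        return False, f"Unsupported format: {ext or '(none)'}"
    size = os.path.getsize(path)
    if size > MAX_UPLOAD_BYTES:
        return False, f"File too large: {size} bytes (limit {MAX_UPLOAD_BYTES})"
    return True, "ok"
```

Rejecting oversized or mis-typed files locally avoids a round trip that the server would refuse anyway.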

Viewing Results

After processing, you'll see:

  • Detection Summary: Number of plates and OCR texts found
  • Image Comparison: Side-by-side view of original and processed images
  • Detection Details:
    • License plate coordinates and confidence
    • OCR text results with confidence scores
    • Bounding box coordinates for all detections
  • Download Options: Download both original and processed images

Browsing History

Access the "History" page to:

  • Search: Filter by filename
  • Date Range: Filter by upload date
  • Status Filter: View by processing status
  • Pagination: Navigate through large numbers of uploads
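These filters can also be driven programmatically by appending query parameters to the history URL. A sketch under stated assumptions: the parameter names (`search`, `status`, etc.) are hypothetical, so check `lpr_app/views.py` for the names the view actually reads:

```python
from urllib.parse import urlencode

def build_history_url(base: str = "http://localhost:8000/images/", **filters) -> str:
    """Append non-empty filter values as a query string on the history page URL."""
    params = {key: value for key, value in filters.items() if value}
    return f"{base}?{urlencode(params)}" if params else base
```

For example, `build_history_url(search="plate", status="completed")` yields a URL filtered by both filename and processing status.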

🔌 API Endpoints

Web Endpoints

  • GET / - Home page with upload form
  • POST /upload/ - Upload and process image
  • GET /result/<int:image_id>/ - View processing results for a specific image
  • GET /images/ - Browse image history with search and filtering
  • GET /image/<int:image_id>/ - View detailed information about a specific image
  • POST /progress/ - Check processing status (AJAX endpoint)
  • GET /download/<int:image_id>/<str:image_type>/ - Download original or processed images
  • GET /health/ - API health check endpoint

REST API Endpoints

  • POST /api/v1/ocr/ - Upload an image and receive OCR results synchronously

Response Format

REST API Response Format

The LPR REST API returns JSON in this format:

{
    "success": true,
    "image_id": 123,
    "filename": "example.jpg",
    "processing_time_ms": 2450,
    "results": {
        "detections": [
            {
                "plate_id": "plate1",
                "plate": {
                    "confidence": 0.85,
                    "coordinates": {
                        "x1": 100,
                        "y1": 200,
                        "x2": 250,
                        "y2": 250
                    }
                },
                "ocr": [
                    {
                        "text": "ABC123",
                        "confidence": 0.92,
                        "coordinates": {
                            "x1": 105,
                            "y1": 210,
                            "x2": 245,
                            "y2": 240
                        }
                    }
                ]
            }
        ]
    },
    "summary": {
        "total_plates": 1,
        "total_ocr_texts": 1
    },
    "processing_timestamp": "2023-12-07T15:30:45.123456"
}
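A response in this shape is straightforward to flatten into (text, confidence) pairs. A minimal sketch using only the fields documented above; `extract_plate_texts` is an illustrative helper, not part of the API:

```python
def extract_plate_texts(response: dict) -> list[tuple[str, float]]:
    """Walk results.detections[].ocr[] and collect (text, confidence) pairs."""
    pairs = []
    for detection in response.get("results", {}).get("detections", []):
        for ocr in detection.get("ocr", []):
            pairs.append((ocr["text"], ocr["confidence"]))
    return pairs
```

Using `.get()` with defaults keeps the helper safe on error responses, which carry no `results` key.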

Error Response Format

{
    "success": false,
    "error": "No image file provided",
    "error_code": "MISSING_IMAGE"
}

Usage Examples

Python Example

import requests

# API endpoint
url = "http://localhost:8000/api/v1/ocr/"

# Image file to upload
image_path = "license_plate.jpg"

# Read and upload the image
with open(image_path, 'rb') as f:
    files = {'image': f}
    response = requests.post(url, files=files)

# Check response
if response.status_code == 200:
    result = response.json()
    if result['success']:
        print(f"Found {result['summary']['total_plates']} license plates")
        for detection in result['results']['detections']:
            for ocr in detection['ocr']:
                print(f"License plate text: {ocr['text']} (confidence: {ocr['confidence']:.2f})")
    else:
        print(f"Processing failed: {result['error']}")
else:
    print(f"HTTP Error: {response.status_code}")
    print(response.text)

cURL Example

# Upload image and get OCR results
curl -X POST \
  -F "image=@license_plate.jpg" \
  http://localhost:8000/api/v1/ocr/

🐳 Docker Deployment

The project includes automated Docker image building and publishing to GitHub Container Registry (ghcr.io).

Using the Pre-built Docker Image

The Docker image is automatically built and published to GitHub Container Registry when code is pushed to the main branch or when tags are created.

# Pull the latest image
docker pull ghcr.io/faisalthaheem/open-lpr:latest

# Pull a specific version
docker pull ghcr.io/faisalthaheem/open-lpr:v1.0.0

Docker Compose Deployment

This project provides multiple Docker Compose files for different deployment scenarios. For detailed deployment instructions, see the Quick Start section and Docker Deployment Guide.

Quick Reference

# For AMD GPU with Vulkan support
docker-compose -f docker-compose-llamacpp-amd-vulcan.yml up -d

# For CPU-only deployment
docker-compose -f docker-compose-llamacpp-cpu.yml up -d

# For external API endpoint
docker-compose up -d

For comprehensive deployment instructions, including production configurations, see DOCKER_DEPLOYMENT.md.

CI/CD Workflow

The project includes a GitHub Actions workflow (.github/workflows/docker-publish.yml) that:

  1. Triggers on:

    • Push to main/master branch
    • Creation of version tags (v*)
    • Pull requests to main/master
  2. Builds the Docker image for multiple architectures:

    • linux/amd64
    • linux/arm64
  3. Publishes to GitHub Container Registry with tags:

    • Branch name (e.g., main)
    • Semantic version tags (e.g., v1.0.0, v1.0, v1)
    • latest tag for the main branch
  4. Generates SBOM (Software Bill of Materials) for security scanning

πŸ“ File Structure

open-lpr/
β”œβ”€β”€ manage.py                    # Django management script
β”œβ”€β”€ requirements.txt              # Python dependencies
β”œβ”€β”€ .env.example                # Environment variables template
β”œβ”€β”€ .env                         # Environment variables (create from .env.example)
β”œβ”€β”€ .env.llamacpp.example       # LlamaCpp environment variables template
β”œβ”€β”€ .env.llamacpp               # LlamaCpp environment variables (create from .env.llamacpp.example)
β”œβ”€β”€ .gitignore                   # Git ignore file
β”œβ”€β”€ .dockerignore               # Docker ignore file
β”œβ”€β”€ API_DOCUMENTATION.md        # Detailed REST API documentation
β”œβ”€β”€ README_API.md               # REST API implementation summary
β”œβ”€β”€ README-llamacpp.md         # LlamaCpp deployment guide
β”œβ”€β”€ DOCKER_DEPLOYMENT.md        # Docker deployment guide
β”œβ”€β”€ test_api.py                 # API testing script
β”œβ”€β”€ test_setup.py               # Test setup utilities
β”œβ”€β”€ test-llamacpp-integration.py # LlamaCpp integration test script
β”œβ”€β”€ docker-compose.yml           # Standard Docker Compose configuration
β”œβ”€β”€ docker-compose-llamacpp-cpu.yml    # CPU-based LlamaCpp Docker Compose
β”œβ”€β”€ docker-compose-llamacpp-amd-vulcan.yml # AMD Vulkan GPU LlamaCpp Docker Compose
β”œβ”€β”€ docker-entrypoint.sh         # Docker entrypoint script
β”œβ”€β”€ Dockerfile                  # Docker image definition
β”œβ”€β”€ start-llamacpp-cpu.sh     # LlamaCpp CPU startup script
β”œβ”€β”€ lpr_project/               # Django project settings
β”‚   β”œβ”€β”€ __init__.py
β”‚   β”œβ”€β”€ settings.py             # Django configuration
β”‚   β”œβ”€β”€ urls.py                 # Project URL patterns
β”‚   └── wsgi.py                 # WSGI configuration
β”œβ”€β”€ lpr_app/                   # Main application
β”‚   β”œβ”€β”€ __init__.py
β”‚   β”œβ”€β”€ admin.py                # Django admin configuration
β”‚   β”œβ”€β”€ apps.py                 # Django app configuration
β”‚   β”œβ”€β”€ models.py               # Database models
β”‚   β”œβ”€β”€ views.py                # View functions and API endpoints
β”‚   β”œβ”€β”€ urls.py                 # App URL patterns
β”‚   β”œβ”€β”€ forms.py                # Django forms
β”‚   β”œβ”€β”€ services/               # Business logic
β”‚   β”‚   β”œβ”€β”€ __init__.py
β”‚   β”‚   β”œβ”€β”€ qwen_client.py      # Qwen3-VL API client
β”‚   β”‚   β”œβ”€β”€ image_processor.py  # Image processing utilities
β”‚   β”‚   └── bbox_visualizer.py  # Bounding box visualization
β”‚   β”œβ”€β”€ management/             # Django management commands
β”‚   β”‚   β”œβ”€β”€ __init__.py
β”‚   β”‚   └── commands/
β”‚   β”‚       β”œβ”€β”€ __init__.py
β”‚   β”‚       └── setup_project.py
β”‚   β”œβ”€β”€ static/                # Static files
β”‚   └── migrations/            # Database migrations
β”‚       β”œβ”€β”€ __init__.py
β”‚       └── 0001_initial.py
β”œβ”€β”€ media/                     # Uploaded images
β”‚   β”œβ”€β”€ uploads/               # Original images
β”‚   └── processed/             # Processed images
β”œβ”€β”€ container-data/             # Docker container data persistence
β”œβ”€β”€ container-media/            # Docker container media persistence
β”œβ”€β”€ staticfiles/               # Collected static files
β”œβ”€β”€ templates/                 # HTML templates
β”‚   β”œβ”€β”€ base.html              # Base template
β”‚   └── lpr_app/               # App-specific templates
β”‚       β”œβ”€β”€ base.html
β”‚       β”œβ”€β”€ image_detail.html
β”‚       β”œβ”€β”€ image_list.html
β”‚       β”œβ”€β”€ results.html
β”‚       └── upload.html
β”œβ”€β”€ docs/                      # Documentation
β”‚   β”œβ”€β”€ LLAMACPP_RESOURCES.md  # LlamaCpp and ROCm resources
β”‚   β”œβ”€β”€ open-lpr-index.png
β”‚   β”œβ”€β”€ open-lpr-detection-result.png
β”‚   β”œβ”€β”€ open-lpr-detection-details.png
β”‚   └── open-lpr-processed-image.png
β”œβ”€β”€ nginx/                     # Nginx configuration
β”‚   └── nginx.conf             # Nginx reverse proxy configuration
β”œβ”€β”€ logs/                      # Application logs
└── .github/                  # GitHub workflows
    └── workflows/             # CI/CD configurations

🧪 Testing

Use the provided test script to verify API functionality:

# Test with default image locations
python test_api.py

# Test with specific image
python test_api.py /path/to/your/image.jpg

🔧 Development

Running Tests

# Run Django tests
python manage.py test

# Run with coverage
pip install coverage
coverage run --source='.' manage.py test
coverage report

Database Migrations

# Create new migrations
python manage.py makemigrations lpr_app

# Apply migrations
python manage.py migrate

Static Files

# Collect static files for production
python manage.py collectstatic --noinput

🚀 Production Deployment

Production Settings

  1. Set DEBUG=False in .env
  2. Configure ALLOWED_HOSTS with your domain
  3. Set up production database (PostgreSQL recommended)
  4. Configure static file serving (nginx/AWS S3)
  5. Set up media file serving (nginx/AWS S3)
  6. Use HTTPS with SSL certificate
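Steps 1 and 2 are typically driven by the same environment variables shown in the Configuration section. A sketch of how such parsing might look; the project's actual logic lives in lpr_project/settings.py and may differ:

```python
def load_settings(env: dict) -> dict:
    """Parse production-relevant settings from an environment mapping
    (mirrors the DEBUG and ALLOWED_HOSTS keys used in the .env examples)."""
    debug = env.get("DEBUG", "False").lower() in ("1", "true", "yes")
    hosts = [h.strip() for h in env.get("ALLOWED_HOSTS", "").split(",") if h.strip()]
    return {"DEBUG": debug, "ALLOWED_HOSTS": hosts}
```

Parsing `ALLOWED_HOSTS` as a comma-separated list matches the `.env` format (`ALLOWED_HOSTS=localhost,127.0.0.1`), and defaulting `DEBUG` to off is the safe choice for production.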

Environment-Specific Settings

  • Development: SQLite database, DEBUG=True
  • Staging: PostgreSQL, DEBUG=False, limited hosts
  • Production: PostgreSQL, DEBUG=False, HTTPS required

πŸ› Troubleshooting

Common Issues

  1. API Connection Failed

    • Check QWEN_API_KEY in .env
    • Verify QWEN_BASE_URL is accessible
    • Check network connectivity
  2. Image Upload Failed

    • Verify file format (JPEG/PNG/BMP only)
    • Check file size (< 10MB)
    • Ensure media directory permissions
  3. Processing Errors

    • Check Django logs: tail -f django.log
    • Verify API response format
    • Check image processing dependencies
  4. Static Files Not Loading

    • Run python manage.py collectstatic
    • Check STATIC_URL in settings
    • Verify web server static file configuration

Logging

Application logs are written to:

  • Development: Console and django.log
  • Production: Configured logging destination

Log levels:

  • INFO: General application flow
  • ERROR: API failures and processing errors
  • DEBUG: Detailed debugging information

🤝 Contributing

We welcome contributions! Please follow these guidelines:

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Make your changes
  4. Add tests if applicable
  5. Ensure all tests pass (python manage.py test)
  6. Commit your changes (git commit -m 'Add some amazing feature')
  7. Push to the branch (git push origin feature/amazing-feature)
  8. Open a Pull Request

Code Style

  • Follow PEP 8 for Python code
  • Use meaningful variable and function names
  • Add docstrings to functions and classes
  • Keep commits small and focused

Issue Reporting

When reporting issues, please include:

  • Detailed description of the problem
  • Steps to reproduce
  • Expected vs. actual behavior
  • Environment details (OS, Python version, etc.)
  • Relevant logs or error messages

📄 License

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.

🆘 Support

For issues and questions:

  • Check the troubleshooting section
  • Review application logs
  • Create an issue with detailed information
  • Include error messages and steps to reproduce

πŸ™ Acknowledgments

  • Qwen3-VL for the powerful vision-language model
  • Django for the robust web framework
  • Bootstrap for the responsive UI components
  • All contributors who help improve this project

📚 Additional Documentation

For specialized deployment scenarios and additional resources:

  • API_DOCUMENTATION.md - Detailed REST API documentation
  • README-llamacpp.md - LlamaCpp deployment guide
  • DOCKER_DEPLOYMENT.md - Docker deployment guide
  • docs/LLAMACPP_RESOURCES.md - LlamaCpp and ROCm resources

⬆ Back to top

Made with ❤️ by Open LPR Team