An AI-powered online background removal tool built with FastAPI, PyTorch, and the U²-Net deep learning model. Features a modern UI with drag-and-drop upload and real-time preview.
- 🎯 High-Quality Background Removal using the U²-Net AI model
- 🖼️ Drag & Drop image upload
- 👁️ Live Preview with before/after comparison
- 🎨 Background Options (transparent, white, black, custom colors)
- 📥 One-Click Download with PNG transparency
- 📱 Responsive Design works on all devices
- ⚡ Fast Processing optimized for performance
The project uses a modern, modular architecture with clear separation of concerns:
```
removals-background/
├── app/                 # Main application package
│   ├── main.py          # FastAPI application
│   ├── config.py        # Configuration management
│   ├── api/             # API routes
│   ├── core/            # Core logic (model management)
│   ├── services/        # Business services
│   ├── models/          # AI model architectures
│   └── utils/           # Utilities
├── client/              # Frontend
│   ├── index.html
│   ├── styles.css
│   └── script.js
├── models/              # Model weights directory
├── requirements.txt     # Python dependencies
└── README.md            # This file
```
For detailed architecture documentation, see ARCHITECTURE.md.
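To make the layout concrete, here is a minimal sketch of how `app/main.py` might wire these pieces together. It is illustrative only: the router import path, the CORS policy, and the health-check route are assumptions, not the actual implementation.

```python
# Simplified sketch of app/main.py (illustrative; see the real file and ARCHITECTURE.md)
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

from app.api.routes import router  # assumption: the API routes live under app/api/

app = FastAPI(title="Background Removal API")

# CORS must be enabled so the browser frontend (e.g. http://localhost:5173) can call the API.
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    allow_methods=["*"],
    allow_headers=["*"],
)

# Mount the background-removal routes (e.g. POST /remove-background).
app.include_router(router)


@app.get("/")
def health_check() -> dict:
    # In the real app these values would come from the model manager in app/core/.
    return {"message": "Background Removal API is running", "model_loaded": True, "device": "cpu"}
```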
- Python 3.8 or higher
- pip (Python package manager)
- 4GB+ RAM recommended
- GPU (optional, but faster)
- Clone or navigate to the project directory:
```bash
cd remove-bg
```

- Create a virtual environment (recommended):

```bash
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
```

- Install dependencies:

```bash
pip install -r requirements.txt
```

- Download the U²-Net model weights:
The model will be automatically downloaded on first run, or you can download it manually:
```bash
# Automatic download
python app/utils/download_model.py

# Or manually create models directory and download
mkdir -p models
# Download from: https://drive.google.com/uc?id=1ao1ovG1Qtx4b7EoskHXmi2E9rp5CHLcZ
# Save as: models/u2net.pth
```
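For reference, the automatic download amounts to roughly the following. This is a minimal sketch assuming the `gdown` package for the Google Drive fetch; the bundled `app/utils/download_model.py` may work differently.

```python
# Illustrative download helper; the bundled app/utils/download_model.py may differ.
from pathlib import Path

import gdown  # assumption: gdown is used here only to handle the Google Drive download

WEIGHTS_URL = "https://drive.google.com/uc?id=1ao1ovG1Qtx4b7EoskHXmi2E9rp5CHLcZ"
WEIGHTS_PATH = Path("models/u2net.pth")


def ensure_weights() -> Path:
    """Download the U²-Net weights if they are not already present (~176 MB)."""
    if not WEIGHTS_PATH.exists():
        WEIGHTS_PATH.parent.mkdir(parents=True, exist_ok=True)
        gdown.download(WEIGHTS_URL, str(WEIGHTS_PATH), quiet=False)
    return WEIGHTS_PATH


if __name__ == "__main__":
    ensure_weights()
```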
- Start the backend server:

Option 1: Using the startup script (recommended)
```bash
# Windows
start.bat

# Linux/Mac
chmod +x start.sh
./start.sh
```

Option 2: Using main.py
```bash
python main.py
```

Option 3: Using uvicorn directly
```bash
python -m uvicorn app.main:app --reload --host 0.0.0.0 --port 8000
```

- Start the frontend (React + Vite):
```bash
cd client
npm install   # or bun install
npm run dev   # or bun run dev
```

The frontend will be available at http://localhost:5173 (or the port shown in the terminal).
Note: Make sure to set `VITE_API_URL` in `client/.env` if your backend is running on a different port.
- Try it out!
  - Drag and drop an image or click to upload
  - Wait for processing (usually 3-10 seconds)
  - View the result and try different backgrounds
  - Download your image with a transparent background
The frontend is configured to connect to http://localhost:8000. If you change the backend port, update `VITE_API_URL` in `client/.env`:

```bash
VITE_API_URL=http://localhost:8000
```

You can configure the model through environment variables or a `.env` file:
```bash
# Use lighter U²-Net-P model for faster processing
MODEL_TYPE=u2netp

# Or use full U²-Net for better quality
MODEL_TYPE=u2net

# Force CPU usage
DEVICE=cpu

# Enable/disable advanced features
USE_MULTI_SCALE=true   # Better quality, slower
MASK_SMOOTHING=true    # Smoother edges
EDGE_REFINEMENT=true   # Better edge detection
```

See `.env.example` for all configuration options.
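For reference, a minimal sketch of how `app/config.py` could read these variables. It is an illustration using plain environment lookups; the real settings code may use a different mechanism (e.g. pydantic settings).

```python
# Illustrative settings loader; the real app/config.py may differ.
import os
from dataclasses import dataclass


def _env_bool(name: str, default: bool) -> bool:
    """Interpret '1'/'true'/'yes' (case-insensitive) as True."""
    return os.getenv(name, str(default)).strip().lower() in ("1", "true", "yes")


@dataclass(frozen=True)
class Settings:
    model_type: str = os.getenv("MODEL_TYPE", "u2net")   # "u2net" or "u2netp"
    device: str = os.getenv("DEVICE", "cpu")             # "cpu" or "cuda"
    use_multi_scale: bool = _env_bool("USE_MULTI_SCALE", True)
    mask_smoothing: bool = _env_bool("MASK_SMOOTHING", True)
    edge_refinement: bool = _env_bool("EDGE_REFINEMENT", True)


settings = Settings()
```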
Once the server is running, visit:
- Swagger UI: http://localhost:8000/docs
- ReDoc: http://localhost:8000/redoc
Health check endpoint response:

```json
{
  "message": "Background Removal API is running",
  "model_loaded": true,
  "device": "cpu"
}
```

`POST /remove-background` — remove the background from an image
- Input: multipart/form-data with image file
- Output: PNG image with transparent background
Example with curl:
```bash
curl -X POST "http://localhost:8000/remove-background" \
  -H "accept: image/png" \
  -F "[email protected]" \
  --output result.png
```
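The same request from Python, using the `requests` library (the form field name `file` matches the curl example above):

```python
import requests

# Upload an image and save the returned PNG with a transparent background.
with open("image.jpg", "rb") as f:
    response = requests.post(
        "http://localhost:8000/remove-background",
        files={"file": ("image.jpg", f, "image/jpeg")},
        timeout=60,
    )
response.raise_for_status()

with open("result.png", "wb") as out:
    out.write(response.content)
```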
The frontend uses React + Vite + Tailwind CSS. To customize:

- Edit `client/tailwind.config.js` for theme customization
- Edit `client/src/assets/style/index.css` for custom styles
- Components are in `client/src/App.tsx`
Edit `service/app.py` to:

- Adjust image preprocessing (see the sketch below)
- Change model parameters
- Add new endpoints
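As a starting point for adjusting preprocessing: U²-Net is commonly fed a 320×320, normalized RGB tensor. The sketch below illustrates that convention; the project's actual pipeline may differ.

```python
# Illustrative U²-Net-style preprocessing; the project's actual pipeline may differ.
import numpy as np
import torch
from PIL import Image


def preprocess(image: Image.Image, size: int = 320) -> torch.Tensor:
    """Resize to size x size and normalize with ImageNet statistics, returning a 1x3xHxW tensor."""
    img = image.convert("RGB").resize((size, size), Image.BILINEAR)
    arr = np.asarray(img, dtype=np.float32) / 255.0
    mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
    std = np.array([0.229, 0.224, 0.225], dtype=np.float32)
    arr = (arr - mean) / std
    # HWC -> CHW, then add a batch dimension.
    return torch.from_numpy(arr.transpose(2, 0, 1)).unsqueeze(0)
```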
If the server won't start:

- Make sure all dependencies are installed: `pip install -r requirements.txt`
- Check if port 8000 is already in use
- Verify the Python version: `python --version` (should be 3.8+)

If the model doesn't load:

- Ensure the model file exists at `service/models/u2net.pth`
- Check the file size (should be ~176 MB)
- Verify the download completed successfully

If the frontend can't reach the backend:

- Check that the backend is running on port 8000
- Verify CORS is enabled in `service/app.py`
- Check the browser console for errors
If processing is slow:

- Use a GPU if available (CUDA-enabled PyTorch)
- Switch to U²-Net-P (the lighter model)
- Reduce the input image size

If memory usage is high:

- Reduce the image size before processing (see the sketch below)
- Use U²-Net-P instead of U²-Net
- Close other applications
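For example, a quick way to cap the input resolution before uploading (illustrative; `Image.thumbnail` preserves the aspect ratio):

```python
# Illustrative helper to downscale large images before sending them to the API.
from pathlib import Path

from PIL import Image


def shrink(path: str, max_side: int = 1280) -> Path:
    """Save a copy of the image whose longest side is at most max_side pixels."""
    src = Path(path)
    img = Image.open(src)
    img.thumbnail((max_side, max_side), Image.LANCZOS)  # no-op if the image is already small
    dst = src.with_name(f"small_{src.name}")
    img.save(dst)
    return dst
```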
Backend deployment options:

- Docker (recommended)
- Heroku with buildpack
- AWS EC2 or Google Cloud
- DigitalOcean App Platform

Frontend deployment options:

- GitHub Pages (static hosting)
- Netlify or Vercel
- AWS S3 + CloudFront

Note: Point the frontend at your deployed backend URL (`VITE_API_URL` in `client/.env` for the Vite app, or `API_URL` in `script.js` for the static client).
U²-Net (U Square Net)
- Paper: U²-Net: Going Deeper with Nested U-Structure for Salient Object Detection
- Architecture: Nested U-structure for accurate segmentation
- Size: ~176MB (full model), ~4.7MB (portable)
- Performance: High-quality results on portraits and objects
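For reference, loading the full model in PyTorch looks roughly like the following. The import path is an assumption based on the project layout, and the constructor arguments follow the reference U²-Net implementation (3 input channels, 1 output channel).

```python
# Illustrative weight loading; the import path is assumed from the project layout.
import torch

from app.models.u2net import U2NET  # assumption: model architecture lives under app/models/

model = U2NET(3, 1)  # 3 input channels (RGB), 1 output channel (saliency mask)
state_dict = torch.load("models/u2net.pth", map_location="cpu")
model.load_state_dict(state_dict)
model.eval()  # inference mode
```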
Contributions are welcome! Feel free to:
- Report bugs
- Suggest features
- Submit pull requests
This project is open source and available under the MIT License.
- U²-Net model by Xuebin Qin et al.
- FastAPI framework
- PyTorch team
If you encounter issues:
- Check the troubleshooting section
- Review the API documentation
- Check browser console and server logs
- Ensure all dependencies are correctly installed
Built with ❤️ using FastAPI, PyTorch, and U²-Net