OpenAI-compatible API middleware for n8n workflows. Use your n8n agents and workflows as OpenAI models in any OpenAI-compatible client.
- Full OpenAI Chat Completion API compatibility
- Streaming and non-streaming responses
- Multi-model support via JSON configuration
- Supports both Chat Trigger and Webhook nodes in n8n
- Session tracking for conversation memory
- User context forwarding (ID, email, name, role)
- Rate limiting with configurable thresholds per endpoint
- Request ID tracking for distributed tracing
- Docker ready with health checks
- Hot-reload models without restart
- Webhook notifications on model changes
- Detection of automated tasks from Open WebUI and LibreChat
Works with any OpenAI-compatible client or middleware. Tested chat frontends include Open WebUI and LibreChat; the bridge is also compatible with OpenRouter and other OpenAI-compatible services. See the Integration Guide for setup.
```
┌─────────────────────────────────────────────┐
│  OpenAI Clients (Open WebUI, LibreChat...)  │
└────────────────────┬────────────────────────┘
                     │ OpenAI API Format
                     │ /v1/chat/completions
                     ▼
          ┌─────────────────────┐
          │  n8n OpenAI Bridge  │
          │ • Auth & Routing    │
          │ • Session Tracking  │
          │ • Format Translation│
          └──────────┬──────────┘
                     │ n8n Webhook
          ┌──────────┼──────────┐
          ▼          ▼          ▼
      ┌────────┐ ┌────────┐ ┌────────┐
      │  n8n   │ │  n8n   │ │  n8n   │
      │ Agent  │ │ Agent  │ │ Agent  │
      │(Claude)│ │ (GPT-4)│ │(Custom)│
      └────────┘ └────────┘ └────────┘
          │          │          │
          └──────────┴──────────┘
                     │
                AI Response
      (Streaming/Non-streaming)
```
```bash
# Create models configuration
cat > models.json << 'EOF'
{
  "chat-trigger-agent": "https://n8n.example.com/webhook/abc123/chat",
  "webhook-agent": "https://n8n.example.com/webhook/xyz789"
}
EOF

# Run container
docker run -d \
  --name n8n-openai-bridge \
  -p 3333:3333 \
  -e BEARER_TOKEN=your-secret-api-key-here \
  -v $(pwd)/models.json:/app/models.json:ro \
  ghcr.io/sveneisenschmidt/n8n-openai-bridge:latest
```
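If you prefer Docker Compose, the `docker run` command above translates to a small `docker-compose.yml`. This is a minimal sketch; the health check assumes the image ships `curl` (adjust the test command if it does not):

```bash
# Equivalent Compose setup (sketch)
cat > docker-compose.yml << 'EOF'
services:
  n8n-openai-bridge:
    image: ghcr.io/sveneisenschmidt/n8n-openai-bridge:latest
    ports:
      - "3333:3333"
    environment:
      - BEARER_TOKEN=your-secret-api-key-here
    volumes:
      - ./models.json:/app/models.json:ro
    healthcheck:
      # Assumes curl is available inside the image
      test: ["CMD", "curl", "-f", "http://localhost:3333/health"]
      interval: 30s
EOF

docker compose up -d
```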
```bash
# Test the API
curl http://localhost:3333/health

curl -H "Authorization: Bearer your-secret-api-key-here" \
  http://localhost:3333/v1/models
```
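With a model configured, any OpenAI-style chat completion call works against the bridge. A minimal example using the `chat-trigger-agent` model from the `models.json` above (set `"stream": true` for streaming responses):

```bash
# Send a chat completion request to an n8n-backed model
curl http://localhost:3333/v1/chat/completions \
  -H "Authorization: Bearer your-secret-api-key-here" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "chat-trigger-agent",
    "messages": [{"role": "user", "content": "Hello from the bridge!"}],
    "stream": false
  }'
```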
Documentation:
- Installation Guide - Docker and source installation
- Configuration Guide - Environment variables and models setup
- ModelLoader Documentation - Model loading system architecture and configuration
- n8n Workflow Setup - Configure n8n workflows
- API Usage Guide - API endpoints and code examples
- Integration Guide - Open WebUI and LibreChat integration
- API Documentation - Complete OpenAPI 3.1 specification
- Auto-Restart Containers on Model Changes - Automate service restarts using bridge webhook events and n8n Data Tables (universal pattern for any container)
- Development Guide - Project structure, make commands, and workflow
- Testing Guide - Unit tests and image tests
- Troubleshooting Guide - Common issues and solutions
- Release Documentation - Release process and versioning
- CI/CD Workflows - Automated testing and releases
- Logging Guide - Logging configuration and debugging
The bridge uses a flexible ModelLoader architecture to load models from different sources. Two approaches are available:
| Loader | Type | Use Case |
|---|---|---|
| JsonFileModelLoader | File-based | Manual configuration in models.json, hot-reload on changes |
| N8nApiModelLoader | Auto-discovery | Workflows tagged with a specific tag are automatically discovered as models |
```json
{
  "chat-trigger-agent": "https://n8n.example.com/webhook/abc123/chat",
  "webhook-agent": "https://n8n.example.com/webhook/xyz789"
}
```

Note:
- Chat Trigger nodes: URLs end with `/chat`
- Webhook nodes: URLs without the `/chat` suffix

Save the file as `models.json`. Changes are automatically detected and reloaded; no additional configuration is required.
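When adding a new model, it can help to verify the n8n webhook directly before pointing the bridge at it. A sketch for a Chat Trigger URL; the `sessionId`/`chatInput` fields follow n8n's Chat Trigger payload convention and are an assumption here, so adjust them to whatever your workflow expects:

```bash
# Direct test of an n8n Chat Trigger webhook (payload fields assumed)
curl https://n8n.example.com/webhook/abc123/chat \
  -H "Content-Type: application/json" \
  -d '{
    "sessionId": "test-session-1",
    "chatInput": "Hello, agent!"
  }'
```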
From simple single-agent workflows exposed as a model ...
... to complex agent teams that work together:
Complete workflow examples are included in the n8n-examples/ directory:
- `n8n_workflow_chat.json` - Using a Chat Trigger node (recommended)
- `n8n_workflow_webhook.json` - Using a Webhook node (advanced)
- `n8n_workflow_task_model.json` - Using task detection to route tasks to different models
- `n8n_workflow_http_loader.json` - HTTP endpoint returning model configurations for the JSON HTTP loader (see the sketch below)
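For the HTTP loader example, the endpoint presumably returns the same name-to-webhook mapping as `models.json`; the URL and exact response contract here are assumptions (see the ModelLoader Documentation for the authoritative format):

```bash
# Hypothetical model-configuration endpoint for the JSON HTTP loader
curl https://config.example.com/models
# Assumed response shape, mirroring models.json:
# {
#   "chat-trigger-agent": "https://n8n.example.com/webhook/abc123/chat",
#   "webhook-agent": "https://n8n.example.com/webhook/xyz789"
# }
```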
Example models configuration: models.json.example
See n8n Workflow Setup Guide for detailed setup instructions.
```bash
MODEL_LOADER_TYPE=n8n-api
N8N_BASE_URL=https://your-n8n-instance.com
N8N_API_BEARER_TOKEN=n8n_api_xxxxxxxxxxxxx
AUTO_DISCOVERY_TAG=n8n-openai-bridge
AUTO_DISCOVERY_POLL_INTERVAL=300
```

Tag your n8n workflows with `n8n-openai-bridge` (the tag is configurable) and they are automatically discovered and exposed as models. The polling interval is also configurable (default: 300 seconds).
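Combined with the quick-start command, a container using auto-discovery might be started like this (a sketch; it assumes no `models.json` volume is needed when the n8n API loader is active):

```bash
# Run the bridge with workflow auto-discovery instead of models.json
docker run -d \
  --name n8n-openai-bridge \
  -p 3333:3333 \
  -e BEARER_TOKEN=your-secret-api-key-here \
  -e MODEL_LOADER_TYPE=n8n-api \
  -e N8N_BASE_URL=https://your-n8n-instance.com \
  -e N8N_API_BEARER_TOKEN=n8n_api_xxxxxxxxxxxxx \
  -e AUTO_DISCOVERY_TAG=n8n-openai-bridge \
  ghcr.io/sveneisenschmidt/n8n-openai-bridge:latest
```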
For detailed setup, configuration, and troubleshooting, see:
- ModelLoader Documentation - Architecture, all loaders, setup guides
- Configuration Guide - Environment variables and options
```
n8n-openai-bridge/
├── src/
│   ├── server.js                     # Express server setup
│   ├── Bootstrap.js                  # Application lifecycle orchestration
│   ├── n8nClient.js                  # n8n webhook client
│   ├── config/                       # Configuration
│   │   └── Config.js                 # ENV parsing & server settings
│   ├── repositories/                 # Data repositories
│   │   └── ModelRepository.js        # Model state management
│   ├── factories/                    # Factory classes
│   │   ├── ModelLoaderFactory.js     # Create model loaders
│   │   └── WebhookNotifierFactory.js # Create webhook notifiers
│   ├── routes/                       # API endpoints
│   ├── handlers/                     # Request handlers
│   ├── middleware/                   # Express middleware
│   ├── services/                     # Business logic services
│   ├── loaders/                      # Model loader architecture
│   ├── notifiers/                    # Webhook notifiers
│   └── utils/                        # Utility functions
├── tests/                            # Unit tests (403+ tests)
├── docker/                           # Docker configuration
├── docs/                             # Documentation
├── models.json                       # Model configuration (git-ignored)
├── .env                              # Environment variables (git-ignored)
├── Makefile                          # Build automation
└── package.json                      # Node.js dependencies
```
Contributions are welcome! Please follow these steps:
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add: amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
Please ensure:
- All tests pass (`make test`)
- Code passes linting (`make lint`)
- Code is properly formatted (`make format`)
- The Docker build succeeds
- Documentation is updated as needed
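The listed checks can be run locally before opening a PR, using the Makefile targets named above:

```bash
# Pre-PR checks (Makefile targets from the list above)
make format
make lint
make test
```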
See Development Guide for details.
This project is licensed under the GNU Affero General Public License v3.0 (AGPL-3.0).
- You can use, modify, and distribute this software freely
- You must share your modifications under the same license
- If you run a modified version as a web service, you must make the source code available
- Original author attribution is required
See the LICENSE file for full details.
- v0.0.7+: AGPL-3.0 (current)
- v0.0.1 - v0.0.6: Apache 2.0 (previous versions remain under Apache 2.0)
See LICENSE-TRANSITION.md for migration details.
Issues: GitHub Issues | Releases: GitHub Releases


