n8n OpenAI Bridge

OpenAI-compatible API middleware for n8n workflows. Use your n8n agents and workflows as OpenAI models in any OpenAI-compatible client.

Demo

Features

  • Full OpenAI Chat Completion API compatibility
  • Streaming and non-streaming responses
  • Multi-model support via JSON configuration
  • Supports both Chat Trigger and Webhook nodes in n8n
  • Session tracking for conversation memory
  • User context forwarding (ID, email, name, role)
  • Rate limiting with configurable thresholds per endpoint
  • Request ID tracking for distributed tracing
  • Docker ready with health checks
  • Hot-reload models without restart
  • Webhook notifications on model changes
  • Detection of automated tasks from Open WebUI and LibreChat

Compatibility

Works with any OpenAI-compatible client or middleware.

Chat Frontends (tested): Open WebUI, LibreChat

Middleware (tested):

Compatible with OpenRouter and other OpenAI-compatible services.

See Integration Guide for setup.
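
Most clients only need their OpenAI base URL and API key pointed at the bridge. As a sketch, the official OpenAI SDKs read the standard environment variables below; other clients expose equivalent settings in their configuration UI. The token must match the BEARER_TOKEN the bridge is started with (see Quick Start).

```shell
# Point an OpenAI SDK-based client at the bridge instead of api.openai.com.
# OPENAI_BASE_URL and OPENAI_API_KEY are the variables the official
# OpenAI SDKs read by default.
export OPENAI_BASE_URL=http://localhost:3333/v1
export OPENAI_API_KEY=your-secret-api-key-here
```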

Architecture

   ┌─────────────────────────────────────────────┐
   │  OpenAI Clients (Open WebUI, LibreChat...)  │
   └────────────────────┬────────────────────────┘
                        │ OpenAI API Format
                        │ /v1/chat/completions
                        ▼
              ┌─────────────────────┐
              │ n8n OpenAI Bridge   │
              │ • Auth & Routing    │
              │ • Session Tracking  │
              │ • Format Translation│
              └──────────┬──────────┘
                         │ n8n Webhook
              ┌──────────┼──────────┐
              ▼          ▼          ▼
         ┌────────┐ ┌────────┐ ┌────────┐
         │  n8n   │ │  n8n   │ │  n8n   │
         │ Agent  │ │ Agent  │ │ Agent  │
         │(Claude)│ │ (GPT-4)│ │(Custom)│
         └────────┘ └────────┘ └────────┘
              │          │          │
              └──────────┴──────────┘
                         │
                    AI Response
                (Streaming/Non-streaming)

Quick Start

# Create models configuration
cat > models.json << 'EOF'
{
  "chat-trigger-agent": "https://n8n.example.com/webhook/abc123/chat",
  "webhook-agent": "https://n8n.example.com/webhook/xyz789"
}
EOF

# Run container
docker run -d \
  --name n8n-openai-bridge \
  -p 3333:3333 \
  -e BEARER_TOKEN=your-secret-api-key-here \
  -v $(pwd)/models.json:/app/models.json:ro \
  ghcr.io/sveneisenschmidt/n8n-openai-bridge:latest

# Test the API
curl http://localhost:3333/health
curl -H "Authorization: Bearer your-secret-api-key-here" \
  http://localhost:3333/v1/models
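
The commands above check health and model listing; a chat completion against one of the configured models (here chat-trigger-agent, from the sample models.json above) is a standard OpenAI-format request:

```shell
# Non-streaming chat completion through the bridge. The model name must
# match a key in models.json; set "stream": true for SSE streaming instead.
curl http://localhost:3333/v1/chat/completions \
  -H "Authorization: Bearer your-secret-api-key-here" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "chat-trigger-agent",
    "messages": [{"role": "user", "content": "Hello!"}],
    "stream": false
  }'
```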

Documentation

Getting Started

Usage

How-To Guides

Development

Additional Resources

Model Loading System

The bridge uses a flexible ModelLoader architecture to load models from different sources. Two approaches are available:

Loaders

Loader               Type            Use Case
JsonFileModelLoader  File-based      Manual configuration in models.json; hot-reload on changes
N8nApiModelLoader    Auto-discovery  Workflows tagged with a specific tag are discovered automatically
Quick Start: File-based (Default)

{
  "chat-trigger-agent": "https://n8n.example.com/webhook/abc123/chat",
  "webhook-agent": "https://n8n.example.com/webhook/xyz789"
}

Note:

  • Chat Trigger nodes: URLs end with /chat
  • Webhook nodes: URLs without /chat suffix

Save to models.json. Changes are automatically detected and reloaded. No additional configuration required.
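
Hot-reload means edits to models.json take effect live. A sketch of adding a model from the shell (the model name and webhook URL here are made up):

```shell
# Append a model entry to models.json; the bridge detects the file
# change and reloads its model list without a restart.
python3 - << 'EOF'
import json
import os

models = {}
if os.path.exists("models.json"):
    with open("models.json") as f:
        models = json.load(f)

# Hypothetical new agent; substitute a real n8n webhook URL.
models["new-agent"] = "https://n8n.example.com/webhook/def456"

with open("models.json", "w") as f:
    json.dump(models, f, indent=2)
EOF
```

The new model then appears in GET /v1/models without restarting the container.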

From simple single-agent workflows exposed as a model, to complex agent teams that work together:

Workflow Examples

Complete workflow examples are included in the n8n-examples/ directory:

Example models configuration: models.json.example

See n8n Workflow Setup Guide for detailed setup instructions.

Quick Start: Auto-Discovery (Recommended)

MODEL_LOADER_TYPE=n8n-api
N8N_BASE_URL=https://your-n8n-instance.com
N8N_API_BEARER_TOKEN=n8n_api_xxxxxxxxxxxxx
AUTO_DISCOVERY_TAG=n8n-openai-bridge
AUTO_DISCOVERY_POLL_INTERVAL=300

Tag your n8n workflows with n8n-openai-bridge (configurable) and they are automatically discovered and exposed as models. Polling interval can be configured (default: 300 seconds).
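
These variables can be passed to the same container from the Quick Start instead of mounting a models.json; a sketch (all values are placeholders, and the AUTO_DISCOVERY_* variables are optional, with the tag defaulting to n8n-openai-bridge and the poll interval to 300 seconds):

```shell
# Run the bridge in auto-discovery mode against an n8n instance.
# Token values and the n8n URL are placeholders.
docker run -d \
  --name n8n-openai-bridge \
  -p 3333:3333 \
  -e BEARER_TOKEN=your-secret-api-key-here \
  -e MODEL_LOADER_TYPE=n8n-api \
  -e N8N_BASE_URL=https://your-n8n-instance.com \
  -e N8N_API_BEARER_TOKEN=n8n_api_xxxxxxxxxxxxx \
  -e AUTO_DISCOVERY_TAG=n8n-openai-bridge \
  ghcr.io/sveneisenschmidt/n8n-openai-bridge:latest
```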

For detailed setup, configuration, and troubleshooting, see the guides linked under Documentation above.

Project Structure

n8n-openai-bridge/
├── src/
│   ├── server.js          # Express server setup
│   ├── Bootstrap.js       # Application lifecycle orchestration
│   ├── n8nClient.js       # n8n webhook client
│   ├── config/            # Configuration
│   │   └── Config.js      # ENV parsing & server settings
│   ├── repositories/      # Data repositories
│   │   └── ModelRepository.js  # Model state management
│   ├── factories/         # Factory classes
│   │   ├── ModelLoaderFactory.js     # Create model loaders
│   │   └── WebhookNotifierFactory.js # Create webhook notifiers
│   ├── routes/            # API endpoints
│   ├── handlers/          # Request handlers
│   ├── middleware/        # Express middleware
│   ├── services/          # Business logic services
│   ├── loaders/           # Model loader architecture
│   ├── notifiers/         # Webhook notifiers
│   └── utils/             # Utility functions
├── tests/                 # Unit tests (403+ tests)
├── docker/                # Docker configuration
├── docs/                  # Documentation
├── models.json            # Model configuration (git-ignored)
├── .env                   # Environment variables (git-ignored)
├── Makefile               # Build automation
└── package.json           # Node.js dependencies

Contributing

Contributions are welcome! Please follow these steps:

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add: amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

Please ensure:

  • All tests pass (make test)
  • Code passes linting (make lint)
  • Code is properly formatted (make format)
  • Docker build succeeds
  • Documentation is updated as needed

See Development Guide for details.

License

This project is licensed under the GNU Affero General Public License v3.0 (AGPL-3.0).

What this means

  • You can use, modify, and distribute this software freely
  • You must share your modifications under the same license
  • If you run a modified version as a web service, you must make the source code available
  • Original author attribution is required

See the LICENSE file for full details.

License History

  • v0.0.7+: AGPL-3.0 (current)
  • v0.0.1 - v0.0.6: Apache 2.0 (previous versions remain under Apache 2.0)

See LICENSE-TRANSITION.md for migration details.


Issues: GitHub Issues | Releases: GitHub Releases
