
Commit 54cf67e

dariuszkowalski-com and ForgeCode committed
feat: implement hybrid ProviderId system to support custom providers
- Replace static ProviderId enum with flexible hybrid system supporting both built-in and custom providers
- Add BuiltInProviderId enum for type-safe built-in provider identification
- Add Custom(String) variant for runtime-defined custom providers
- Implement comprehensive helper methods for provider identification and creation
- Add custom Serialize/Deserialize implementations for backward compatibility
- Update all ProviderId usage across codebase to use helper methods
- Fix move/borrow checker errors by adding proper .clone() calls
- Add comprehensive test coverage for new ProviderId functionality
- Resolve issue #1816: custom providers now visible and selectable in UI

This change enables users to define custom providers in provider.json configuration and have them appear in the provider selection menu as fully functional options.

Co-Authored-By: ForgeCode <[email protected]>
1 parent 0971125 commit 54cf67e
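The new `ProviderId` definition itself does not appear in this excerpt. Based on the commit message and the call sites in the diffs below, a minimal sketch of the hybrid shape might look like this (everything beyond the names `ProviderId`, `BuiltInProviderId`, `BuiltIn`, and `Custom` is an assumption, not the actual Forge code):

```rust
/// Type-safe identifiers for the providers Forge ships with.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum BuiltInProviderId {
    OpenAI,
    Anthropic,
    // ...remaining built-in providers
}

/// Hybrid identifier: a built-in provider, or a custom provider declared at
/// runtime in provider.json. Holding a `String` makes the type non-Copy,
/// which is why call sites in this commit switch from `*id` to `id.clone()`.
/// (Per the commit message, Serialize/Deserialize are implemented by hand
/// so existing configurations keep round-tripping.)
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum ProviderId {
    BuiltIn(BuiltInProviderId),
    Custom(String),
}

fn main() {
    let built_in = ProviderId::BuiltIn(BuiltInProviderId::OpenAI);
    let custom = ProviderId::Custom("ollama_local".to_string());
    assert_ne!(built_in, custom);
}
```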

File tree

25 files changed: +1151 −202 lines


.forge/README.md

Lines changed: 141 additions & 0 deletions
@@ -0,0 +1,141 @@
# Forge Custom Providers Configuration

This directory contains configuration files and examples for setting up custom AI providers in Forge.

## Quick Start

1. **Choose an example** that matches your setup:
   - `provider-example-vllm.json` - For VLLM local instances
   - `provider-example-ollama.json` - For Ollama local instances
   - `provider-template.json` - Comprehensive template with all options

2. **Copy and customize**:

   ```bash
   cp provider-example-vllm.json ~/.forge/provider.json
   # Edit ~/.forge/provider.json with your specific configuration
   ```

3. **Configure in Forge**:

   ```bash
   forge provider add
   # Select your custom provider and enter the required credentials
   ```
## Configuration Options

### Required Fields

- `id`: Unique provider identifier (appears in Forge's menu)
- `api_key_vars`: Environment variable name for API key storage
- `url_param_vars`: Array of environment variables used in URLs
- `response_type`: "OpenAI" or "Anthropic"
- `url`: Chat completions endpoint URL template
- `models`: Either a URL template or a hardcoded model array
### Model Definition Options

**Option 1: Dynamic Model Fetching**

```json
"models": "{{VLLM_LOCAL_URL}}/v1/models"
```

- Forge automatically fetches available models from the API
- Best for APIs with changing model lists

**Option 2: Hardcoded Models**

```json
"models": [
  {
    "id": "llama2:7b",
    "name": "Llama 2 7B (Local)",
    "description": "Local Llama 2 model",
    "context_length": 4096,
    "tools_supported": true,
    "supports_parallel_tool_calls": false,
    "supports_reasoning": false
  }
]
```

- Manually defined model list
- Best for stable environments with known models
### Model Fields

- `id`: API model identifier (required)
- `name`: Display name in Forge (required)
- `description`: Brief model description (optional)
- `context_length`: Maximum tokens (optional, default: 4096)
- `tools_supported`: Function calling support (optional, default: false)
- `supports_parallel_tool_calls`: Multiple tool calls (optional, default: false)
- `supports_reasoning`: Reasoning/chain-of-thought (optional, default: false)
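As a rough illustration of how these fields and defaults could map onto a typed model definition, here is a hypothetical serde sketch (the field names and defaults follow the list above; the struct itself is not Forge's actual code):

```rust
use serde::Deserialize;

/// Hypothetical deserialization target for one entry of a hardcoded
/// `models` array; the defaults mirror the ones documented above.
#[derive(Debug, Deserialize)]
pub struct ModelDef {
    pub id: String,                          // required: API model identifier
    pub name: String,                        // required: display name in Forge
    #[serde(default)]
    pub description: Option<String>,         // optional
    #[serde(default = "default_context_length")]
    pub context_length: u32,                 // optional, default 4096
    #[serde(default)]
    pub tools_supported: bool,               // optional, default false
    #[serde(default)]
    pub supports_parallel_tool_calls: bool,  // optional, default false
    #[serde(default)]
    pub supports_reasoning: bool,            // optional, default false
}

fn default_context_length() -> u32 {
    4096
}
```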
## URL Templates

Use `{{VARIABLE_NAME}}` syntax for environment variable substitution:

```json
"url": "{{OLLAMA_URL}}/v1/chat/completions"
```

When configured with `OLLAMA_URL=http://127.0.0.1:11434`, this becomes:
`http://127.0.0.1:11434/v1/chat/completions`
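Conceptually the substitution is plain string replacement over the declared variables. A minimal sketch (a hypothetical helper, not Forge's actual implementation) assuming values arrive in a map:

```rust
use std::collections::HashMap;

/// Expand `{{VAR_NAME}}` placeholders in a URL template. Hypothetical
/// helper for illustration; in Forge the values come from the environment
/// variables you enter during `forge provider add`.
fn expand_template(template: &str, vars: &HashMap<&str, &str>) -> Option<String> {
    let mut url = template.to_string();
    for (name, value) in vars {
        // Build the literal placeholder, e.g. "{{OLLAMA_URL}}".
        url = url.replace(&format!("{{{{{name}}}}}"), value);
    }
    // Any leftover placeholder means a variable was missing.
    if url.contains("{{") { None } else { Some(url) }
}

fn main() {
    let vars = HashMap::from([("OLLAMA_URL", "http://127.0.0.1:11434")]);
    let url = expand_template("{{OLLAMA_URL}}/v1/chat/completions", &vars);
    assert_eq!(
        url.as_deref(),
        Some("http://127.0.0.1:11434/v1/chat/completions")
    );
}
```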
## Common Examples

### VLLM Local Instance

```json
{
  "id": "vllm_local",
  "api_key_vars": "VLLM_LOCAL_API_KEY",
  "url_param_vars": ["VLLM_LOCAL_URL"],
  "response_type": "OpenAI",
  "url": "{{VLLM_LOCAL_URL}}/v1/chat/completions",
  "models": "{{VLLM_LOCAL_URL}}/v1/models"
}
```

### Ollama Local Instance

```json
{
  "id": "ollama_local",
  "api_key_vars": "OLLAMA_API_KEY",
  "url_param_vars": ["OLLAMA_URL"],
  "response_type": "OpenAI",
  "url": "{{OLLAMA_URL}}/v1/chat/completions",
  "models": [
    {
      "id": "llama2:7b",
      "name": "Llama 2 7B (Ollama)",
      "context_length": 4096,
      "tools_supported": true
    }
  ]
}
```

### Custom API Provider

```json
{
  "id": "my_api",
  "api_key_vars": "MY_API_KEY",
  "url_param_vars": ["MY_API_URL"],
  "response_type": "OpenAI",
  "url": "{{MY_API_URL}}/v1/chat/completions",
  "models": "{{MY_API_URL}}/v1/models"
}
```
## Tips

- Use descriptive provider names: `vllm_local`, `ollama_work`, `custom_openai`
- Include full paths in URLs: `/v1/chat/completions`
- Test endpoints with `curl` before adding to Forge
- Include non-standard ports: `http://127.0.0.1:8888`
- Use dynamic model fetching for cloud APIs, hardcoded models for local setups
## Troubleshooting

If your provider shows `[unavailable]`:

1. Check that the API endpoint is accessible
2. Verify that environment variables are set correctly
3. Ensure API keys are valid
4. Test the endpoint manually with `curl`, using the expanded URL (e.g. `curl http://127.0.0.1:11434/v1/models`)

For more detailed examples, see `provider-template.json`.
.forge/provider-example-ollama.json

Lines changed: 39 additions & 0 deletions
@@ -0,0 +1,39 @@
{
  "_comment": "Ollama Local Provider with hardcoded models - Ready to use configuration",
  "_description": "Copy this file to ~/.forge/provider.json and modify as needed for your Ollama setup",

  "id": "ollama_local",
  "api_key_vars": "OLLAMA_API_KEY",
  "url_param_vars": ["OLLAMA_URL"],
  "response_type": "OpenAI",
  "url": "{{OLLAMA_URL}}/v1/chat/completions",
  "models": [
    {
      "id": "llama2:7b",
      "name": "Llama 2 7B (Ollama Local)",
      "description": "Llama 2 7B parameter model running locally via Ollama",
      "context_length": 4096,
      "tools_supported": true,
      "supports_parallel_tool_calls": false,
      "supports_reasoning": false
    },
    {
      "id": "codellama:7b",
      "name": "CodeLlama 7B (Ollama Local)",
      "description": "CodeLlama 7B parameter model optimized for code generation",
      "context_length": 16384,
      "tools_supported": true,
      "supports_parallel_tool_calls": false,
      "supports_reasoning": false
    },
    {
      "id": "mistral:7b",
      "name": "Mistral 7B (Ollama Local)",
      "description": "Mistral 7B parameter model for general tasks",
      "context_length": 8192,
      "tools_supported": true,
      "supports_parallel_tool_calls": false,
      "supports_reasoning": false
    }
  ]
}

.forge/provider-example-vllm.json

Lines changed: 11 additions & 0 deletions
@@ -0,0 +1,11 @@
{
  "_comment": "VLLM Local Provider Example - Ready to use configuration",
  "_description": "Copy this file to ~/.forge/provider.json and modify as needed for your VLLM setup",

  "id": "vllm_local",
  "api_key_vars": "VLLM_LOCAL_API_KEY",
  "url_param_vars": ["VLLM_LOCAL_URL"],
  "response_type": "OpenAI",
  "url": "{{VLLM_LOCAL_URL}}/v1/chat/completions",
  "models": "{{VLLM_LOCAL_URL}}/v1/models"
}

.forge/provider-template.json

Lines changed: 128 additions & 0 deletions
@@ -0,0 +1,128 @@
{
  "_comment": "Forge Custom Provider Configuration Template",
  "_description": "This file allows you to define custom AI providers that will appear in Forge's provider selection menu. Copy this template and modify it for your specific provider setup.",
  "_note": "After adding a provider here, run 'forge provider add' to configure credentials and make it available in Forge.",

  "providers": [
    {
      "_example_comment": "EXAMPLE 1: Custom VLLM Local Provider with API-based models",
      "_example_description": "This provider connects to a local VLLM instance and automatically fetches available models from the API endpoint",

      "id": "vllm_local",
      "api_key_vars": "VLLM_LOCAL_API_KEY",
      "url_param_vars": ["VLLM_LOCAL_URL"],
      "response_type": "OpenAI",
      "url": "{{VLLM_LOCAL_URL}}/v1/chat/completions",
      "models": "{{VLLM_LOCAL_URL}}/v1/models"
    },

    {
      "_example_comment": "EXAMPLE 2: Ollama Local Provider with hardcoded models",
      "_example_description": "This provider connects to a local Ollama instance with manually defined models",

      "id": "ollama_local",
      "api_key_vars": "OLLAMA_API_KEY",
      "url_param_vars": ["OLLAMA_URL"],
      "response_type": "OpenAI",
      "url": "{{OLLAMA_URL}}/v1/chat/completions",
      "models": [
        {
          "id": "llama2:7b",
          "name": "Llama 2 7B (Ollama Local)",
          "description": "Llama 2 7B parameter model running locally via Ollama",
          "context_length": 4096,
          "tools_supported": true,
          "supports_parallel_tool_calls": false,
          "supports_reasoning": false
        },
        {
          "id": "codellama:7b",
          "name": "CodeLlama 7B (Ollama Local)",
          "description": "CodeLlama 7B parameter model optimized for code generation",
          "context_length": 16384,
          "tools_supported": true,
          "supports_parallel_tool_calls": false,
          "supports_reasoning": false
        },
        {
          "id": "mistral:7b",
          "name": "Mistral 7B (Ollama Local)",
          "description": "Mistral 7B parameter model for general tasks",
          "context_length": 8192,
          "tools_supported": true,
          "supports_parallel_tool_calls": false,
          "supports_reasoning": false
        }
      ]
    },

    {
      "_example_comment": "EXAMPLE 3: Custom API Provider with authentication",
      "_example_description": "This provider connects to a custom API endpoint with API key authentication",

      "id": "my_custom_api",
      "api_key_vars": "MY_CUSTOM_API_KEY",
      "url_param_vars": ["CUSTOM_API_URL", "CUSTOM_API_VERSION"],
      "response_type": "OpenAI",
      "url": "{{CUSTOM_API_URL}}/v{{CUSTOM_API_VERSION}}/chat/completions",
      "models": "{{CUSTOM_API_URL}}/v{{CUSTOM_API_VERSION}}/models"
    },

    {
      "_example_comment": "EXAMPLE 4: Anthropic-Compatible Provider",
      "_example_description": "This provider connects to an Anthropic-compatible API endpoint",

      "id": "anthropic_compatible_custom",
      "api_key_vars": "ANTHROPIC_COMPAT_KEY",
      "url_param_vars": ["ANTHROPIC_COMPAT_URL"],
      "response_type": "Anthropic",
      "url": "{{ANTHROPIC_COMPAT_URL}}/v1/messages",
      "models": [
        {
          "id": "claude-3-haiku-20240307",
          "name": "Claude 3 Haiku (Custom)",
          "description": "Fast and efficient Claude model for quick responses",
          "context_length": 200000,
          "tools_supported": true,
          "supports_parallel_tool_calls": true,
          "supports_reasoning": false
        }
      ]
    }
  ],

  "_field_explanations": {
    "id": "Unique identifier for this provider. Use lowercase letters, numbers, and underscores only. This will appear in Forge's provider selection menu.",
    "api_key_vars": "Environment variable name that will store your API key. Forge will prompt you to enter the API key value when configuring this provider.",
    "url_param_vars": "Array of environment variable names used in URL templates. These variables will be replaced with values you provide during provider configuration.",
    "response_type": "API format to use. Options: 'OpenAI' for OpenAI-compatible APIs, 'Anthropic' for Anthropic-compatible APIs.",
    "url": "Template for the chat completions endpoint URL. Use {{VARIABLE_NAME}} syntax to insert environment variables.",
    "models": "Either a URL template to fetch models dynamically (string) or an array of hardcoded model definitions (array).",

    "model_fields": {
      "id": "Unique model identifier used by the API (required)",
      "name": "Human-readable model name displayed in Forge (required)",
      "description": "Brief description of the model's capabilities (optional)",
      "context_length": "Maximum number of tokens the model can process (optional, defaults to 4096)",
      "tools_supported": "Whether the model supports function calling (optional, defaults to false)",
      "supports_parallel_tool_calls": "Whether the model supports multiple simultaneous tool calls (optional, defaults to false)",
      "supports_reasoning": "Whether the model supports reasoning/chain-of-thought (optional, defaults to false)"
    }
  },

  "_setup_instructions": {
    "step1": "Copy this template to a new file and modify the provider configuration for your needs",
    "step2": "Save the file as ~/.forge/provider.json",
    "step3": "Run 'forge provider add' and select your custom provider from the list",
    "step4": "Enter the required environment variable values when prompted (API keys, URLs, etc.)",
    "step5": "Your custom provider will appear in Forge's provider selection menu and be fully functional"
  },

  "_tips": {
    "naming": "Use descriptive provider names like 'vllm_local', 'ollama_work', 'custom_openai_compatible'",
    "urls": "Always include the full path including /v1/chat/completions or appropriate endpoint",
    "models": "Use dynamic model fetching (URL) for APIs with changing model lists, or hardcoded models for stable environments",
    "testing": "Test your API endpoints with curl or similar tools before adding them to Forge",
    "ports": "For local providers, remember to include non-standard ports (e.g., http://127.0.0.1:8888)"
  }
}
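Tying the template's fields together, a hypothetical typed view of one provider entry could look like the sketch below (names like `CustomProviderConfig` and `ModelsField` are illustrative, not Forge's actual types):

```rust
use serde::Deserialize;

/// Hypothetical deserialization target for one custom provider entry,
/// mirroring the fields documented in this template.
#[derive(Debug, Deserialize)]
pub struct CustomProviderConfig {
    pub id: String,                  // e.g. "ollama_local"
    pub api_key_vars: String,        // env var holding the API key
    pub url_param_vars: Vec<String>, // env vars used in URL templates
    pub response_type: ResponseType, // "OpenAI" or "Anthropic"
    pub url: String,                 // chat completions URL template
    pub models: ModelsField,         // URL template or inline list
}

#[derive(Debug, Deserialize)]
pub enum ResponseType {
    OpenAI,
    Anthropic,
}

/// `models` accepts either a string (URL template) or an array of model
/// definitions; serde's untagged representation matches that shape.
#[derive(Debug, Deserialize)]
#[serde(untagged)]
pub enum ModelsField {
    Url(String),
    List(Vec<serde_json::Value>), // see the model-field sketch earlier
}
```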

crates/forge_api/src/forge_api.rs

Lines changed: 1 addition & 1 deletion
@@ -82,7 +82,7 @@ impl<A: Services, F: CommandInfra + EnvironmentInfra> API for ForgeAPI<A, F> {
         Ok(providers
             .into_iter()
             .find(|p| p.id() == *id)
-            .ok_or_else(|| Error::provider_not_available(*id))?)
+            .ok_or_else(|| Error::provider_not_available(id.clone()))?)
     }

     async fn chat(
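(With the new `Custom(String)` variant, `ProviderId` no longer implements `Copy`, so the error path clones the id instead of dereferencing it; this is one of the `.clone()` fixes called out in the commit message.)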

crates/forge_app/src/command_generator.rs

Lines changed: 4 additions & 3 deletions
@@ -159,14 +159,14 @@ mod tests {

     async fn get_provider(&self, _id: ProviderId) -> Result<Provider<Url>> {
         Ok(Provider {
-            id: ProviderId::OpenAI,
+            id: ProviderId::BuiltIn(BuiltInProviderId::OpenAI),
             response: ProviderResponse::OpenAI,
             url: Url::parse("https://api.test.com").unwrap(),
             models: Models::Url(Url::parse("https://api.test.com/models").unwrap()),
             auth_methods: vec![AuthMethod::ApiKey],
             url_params: vec![],
             credential: Some(AuthCredential {
-                id: ProviderId::OpenAI,
+                id: ProviderId::BuiltIn(BuiltInProviderId::OpenAI),
                 auth_details: AuthDetails::ApiKey("test-key".to_string().into()),
                 url_params: Default::default(),
             }),
@@ -189,7 +189,8 @@ mod tests {
     #[async_trait::async_trait]
     impl AppConfigService for MockServices {
         async fn get_default_provider(&self) -> Result<Provider<Url>> {
-            self.get_provider(ProviderId::OpenAI).await
+            self.get_provider(ProviderId::BuiltIn(BuiltInProviderId::OpenAI))
+                .await
         }

         async fn set_default_provider(&self, _provider_id: ProviderId) -> Result<()> {
