# Feature Request: Agent-Specific Reasoning Level Configuration

## Summary
On modern OpenCode (v1.0.210+), there is no clear or working method to configure specific agents to use specific reasoning levels (e.g., `reasoningEffort: "high"`). Both the user and an AI assistant (Claude Code) reviewed the documentation and codebase, and neither could determine a working approach.
## Environment
- OpenCode version: 1.1.12 (modern, v1.0.210+)
- Plugin version: opencode-openai-codex-auth@4.4.0
- Platform: Windows
## Problem Description

### Goal
Configure agents in `opencode.json` to always use a specific reasoning level when invoked as subagents (where TUI variant selection is not available).
Example use case:

```jsonc
"principal-chatgpt": {
  "model": "openai/gpt-5.2",
  "prompt": "{file:./agent/principal-chatgpt.md}"
  // Need this agent to ALWAYS use reasoningEffort: "high"
}
```
### What We Tried
1. Created a separate model entry with an `options` block (per docs/configuration.md Pattern 2):

   ```json
   "gpt-5.2-high": {
     "name": "GPT 5.2 High (OAuth)",
     "limit": { "context": 272000, "output": 128000 },
     "options": {
       "reasoningEffort": "high",
       "reasoningSummary": "detailed"
     }
   }
   ```

   Then referenced it in the agent: `"model": "openai/gpt-5.2-high"`

   **Result:** OpenCode reports the model "is not valid".
2. Added an `id` field to map the custom model to a base model (mirroring the working Google provider pattern):

   ```json
   "gpt-5.2-high": {
     "id": "gpt-5.2",
     "name": "GPT 5.2 High (OAuth)",
     ...
   }
   ```

   **Result:** OpenCode still reports the model "is not valid".
3. Reviewed `config/opencode-legacy.json`, which defines separate model entries such as `gpt-5.2-high` with `options` blocks; however, the documentation states this config is for OpenCode v1.0.209 and below only.
## Documentation Findings
The documentation provides conflicting or unclear guidance:
1. `config/README.md` states:
   - Modern OpenCode (v1.0.210+) should use `opencode-modern.json` with variants
   - Legacy OpenCode (v1.0.209 and below) should use `opencode-legacy.json` with separate model entries
   - "Use the config file appropriate for your OpenCode version"
2. `docs/configuration.md` shows a "Per-Agent Models" example:

   ```json
   "agent": {
     "commit": { "model": "openai/gpt-5.1-codex-low" },
     "review": { "model": "openai/gpt-5.1-codex-high" }
   }
   ```

   But it does not specify which OpenCode versions this works on.
3. `docs/configuration.md` Pattern 2 shows per-model `options` overrides, but these do not work on modern OpenCode when referenced by agents.
4. The variants system works for interactive use (`--variant=high` or TUI selection), but agents/subagents cannot access the TUI, and there is no `variant` field in the agent configuration.
## Codebase Review
Reviewed the following files:

- `index.ts`: the plugin loader extracts `userConfig` from the provider config
- `lib/request/request-transformer.ts`: `getModelConfig()` merges global and model-specific options
- `lib/request/helpers/model-map.ts`: maps model names, including `gpt-5.2-high` → `gpt-5.2`
- `lib/types.ts`: the `UserConfig` structure supports per-model `options`

The plugin code appears to support per-model options, but OpenCode's validation rejects custom model names before the plugin can process them.
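To make the merge behavior concrete, here is a minimal sketch of the precedence that `getModelConfig()` is described as implementing (model-specific options overriding global defaults). The type and field names below are illustrative stand-ins, not the plugin's actual types:

```typescript
// Hypothetical sketch of global + per-model option merging, modeled on
// the getModelConfig() behavior described above. Names are illustrative.
interface ModelOptions {
  reasoningEffort?: "low" | "medium" | "high";
  reasoningSummary?: string;
}

interface UserConfig {
  global?: ModelOptions;
  models?: Record<string, { options?: ModelOptions }>;
}

function getModelConfig(config: UserConfig, modelId: string): ModelOptions {
  // Model-specific options take precedence over global defaults.
  return { ...config.global, ...config.models?.[modelId]?.options };
}

const cfg: UserConfig = {
  global: { reasoningEffort: "medium" },
  models: { "gpt-5.2-high": { options: { reasoningEffort: "high" } } },
};

console.log(getModelConfig(cfg, "gpt-5.2-high").reasoningEffort); // "high"
```

The sticking point is that this merge never runs for `gpt-5.2-high`, because OpenCode's model validation rejects the custom name before the request reaches the plugin.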
## Request

One of the following:
1. **Documentation clarification:** If there is a working method to configure agent-specific reasoning levels on modern OpenCode, please document it clearly with a complete example.
2. **Feature addition:** If no method currently exists, please add support for configuring reasoning levels per agent. The implementation approach is at the developer's discretion.
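One possible shape, purely illustrative and not an existing OpenCode feature, would be an `options` block on the agent entry itself that is forwarded to the provider plugin:

```jsonc
// Hypothetical syntax — the "options" field on agents does not exist today.
"agent": {
  "principal-chatgpt": {
    "model": "openai/gpt-5.2",
    "prompt": "{file:./agent/principal-chatgpt.md}",
    "options": {
      "reasoningEffort": "high",
      "reasoningSummary": "detailed"
    }
  }
}
```

Any equivalent mechanism (e.g., a `variant` field on agents) would solve the use case equally well.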
## Impact
This limitation affects users who:
- Use multi-agent configurations with different reasoning requirements
- Run agents as subagents (no TUI access for variant selection)
- Want consistent, predictable reasoning behavior for specific agents
- Are on modern OpenCode and cannot downgrade to legacy
## Additional Context
The Google provider's authentication plugin (`opencode-google-antigravity-auth`) appears to work with custom model entries that have an `id` field mapping to base models (e.g., `gemini-3-flash-high` with `id: "gemini-3-flash"`). The same pattern does not work for the OpenAI provider.
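For comparison, the working Google-provider pattern described above looks roughly like this (field names inferred from the description in this report, not verified against that plugin's documentation):

```json
"gemini-3-flash-high": {
  "id": "gemini-3-flash",
  "name": "Gemini 3 Flash High",
  "options": { "reasoningEffort": "high" }
}
```

If OpenCode accepted the same `id`-mapping for OpenAI models, attempt 2 above would presumably have worked.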