…ay fix

- Rewrite warmup handler: send minimal 1-token requests with full request/response logging
- Fix scheduler: add scheduled_warmup.enabled check
- Enhance quota.rs warmup logging with complete request/response bodies
- Merge upstream v4.1.22 model mappings with local GPT-OSS & Claude 4.6 models
- Fix PinnedQuotaModels: remove thinking filter that hid all Claude models
- Update modelConfig.ts with i18n fields, sortModels export, and IconComponent type fix
- Adopt dynamic MODEL_CONFIG-driven approach in useProxyModels

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- AccountCard: remove hardcoded thinking variant filter that hid all Claude models,
replace with shortLabel-based dedup
- AccountTable: remove `id.includes('thinking')` filter that blocked Claude display
- Add missing `proxy.model.gpt_oss` translation to zh.json and en.json
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
AccountTable now shows concise model keywords (e.g. "Opus 4.6 TK",
"G3 Flash", "OSS 120B") instead of verbose i18n descriptions
("最强思维" [strongest thinking], "极速预览" [blazing-fast preview], "开源大模型" [open-source large model]).
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Root causes and fixes:

- Claude models (400/404): added missing `anthropic-beta` header by switching from call_v1_internal to call_v1_internal_with_headers
- GPT-OSS models (500): skip warmup for non-Google models (gpt-oss, gpt-4, gpt-3) since they can't be sent to the v1internal API
- Gemini models (500): wrap_request() auto-injects thinkingConfig and overrides maxOutputTokens to 32768+; now force-reset to 1 and remove thinkingConfig after wrapping to keep warmup minimal

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
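The GPT-OSS part of the fix boils down to a routing guard. A minimal sketch of that check, assuming the commit's prefix list; the function name and list are illustrative, not the repository's actual code:

```rust
/// Warmup requests go to Google's v1internal API, which only accepts
/// Google-routable models, so skip any model that can't be sent there.
fn should_skip_warmup(model: &str) -> bool {
    // Hypothetical prefix list mirroring the commit message:
    // gpt-oss / gpt-4 / gpt-3 are not Google models.
    const NON_GOOGLE_PREFIXES: &[&str] = &["gpt-oss", "gpt-4", "gpt-3"];
    NON_GOOGLE_PREFIXES.iter().any(|p| model.starts_with(p))
}
```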
Root cause: transform_claude_request_in() auto-injects ThinkingConfig with budget=10000 for thinking models, but warmup's max_tokens=1 is less than that budget, so the Google v1internal API returns 400.

Fix:
- Explicitly set thinking.type="disabled" in the warmup ClaudeRequest to prevent auto-injection of ThinkingConfig
- After the transform, force-remove thinkingConfig and reset maxOutputTokens to 1 as a double safety measure

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
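The "double safety measure" can be sketched as a post-transform pass that forces the request back to a minimal warmup shape. The struct and field names here are stand-ins (the real code uses camelCase JSON keys like maxOutputTokens), not the repository's types:

```rust
/// Minimal stand-in for the generation-config fields the fix touches.
#[derive(Debug, PartialEq)]
struct GenerationConfig {
    max_output_tokens: u32,              // serialized as maxOutputTokens
    thinking_config: Option<ThinkingConfig>, // serialized as thinkingConfig
}

#[derive(Debug, PartialEq)]
struct ThinkingConfig {
    budget: u32,
}

/// Run after transform_claude_request_in(): a budget=10000 thinkingConfig
/// combined with max_tokens=1 makes the upstream API return 400, so strip
/// the thinking config and pin output to a single token.
fn sanitize_warmup(cfg: &mut GenerationConfig) {
    cfg.thinking_config = None;
    cfg.max_output_tokens = 1;
}
```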
Google v1internal API naming rules differ between Sonnet and Opus:

- Sonnet: `claude-sonnet-4-6` (NO -thinking suffix)
- Opus: `claude-opus-4-6-thinking` (WITH -thinking suffix)

Updated all references across:

- model_mapping.rs: core model + all alias mappings
- opencode_sync.rs: ModelDef + ANTIGRAVITY_MODEL_IDS
- config.rs: default_pinned_models
- modelConfig.ts: frontend MODEL_CONFIG key and labels

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
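The asymmetric suffix rule is easy to get wrong, so it helps to see it as one normalization function. This is a hypothetical helper illustrating the rule described above, not the actual model_mapping.rs code:

```rust
/// Normalize a Claude 4.6 alias to the v1internal naming rules:
/// Sonnet must NOT carry the -thinking suffix, Opus MUST carry it.
/// Returns None for anything that isn't a Claude 4.6 alias.
fn normalize_claude_4_6(alias: &str) -> Option<&'static str> {
    match alias {
        "claude-sonnet-4-6" | "claude-sonnet-4-6-thinking" => Some("claude-sonnet-4-6"),
        "claude-opus-4-6" | "claude-opus-4-6-thinking" => Some("claude-opus-4-6-thinking"),
        _ => None,
    }
}
```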
Google v1internal API checks x-client-version header and rejects
Gemini 3.1 Pro requests from clients reporting version < 1.18.x
("Gemini 3.1 Pro is not available on this version").
Updated KNOWN_STABLE constants to match Antigravity 1.18.4:
- Version: 1.16.5 -> 1.18.4
- Chrome: 132.0.6834.160 -> 142.0.7444.175
- Electron: 39.2.3 (unchanged)
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
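The upstream gate appears to be a numeric, component-wise comparison of the x-client-version header against a minimum version. A std-only sketch of such a check, under the assumption that the comparison is plain dotted-numeric (the upstream API's exact rule is not documented here):

```rust
/// Compare dotted version strings component by component, numerically,
/// so "1.18.4" passes a ">= 1.18.0" gate while "1.16.5" is rejected.
fn version_at_least(version: &str, min: &str) -> bool {
    let parse = |s: &str| -> Vec<u64> {
        s.split('.').map(|p| p.parse().unwrap_or(0)).collect()
    };
    let (v, m) = (parse(version), parse(min));
    for i in 0..v.len().max(m.len()) {
        let a = v.get(i).copied().unwrap_or(0);
        let b = m.get(i).copied().unwrap_or(0);
        if a != b {
            return a > b;
        }
    }
    true
}
```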
Add sonnet-4-6 variants to should_enable_thinking_by_default() so that ThinkingConfig is auto-injected for claude-sonnet-4-6. Without this, the Google v1internal API rejects requests because Claude models require thinking configuration. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
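A minimal sketch of what the updated predicate might look like; the substring checks are assumptions based on the commit message, not the function's real body:

```rust
/// Thinking config is injected by default for explicit -thinking variants
/// and, after this fix, for Claude Sonnet 4.6 variants as well, since the
/// upstream API rejects Claude requests without a thinking configuration.
fn should_enable_thinking_by_default(model: &str) -> bool {
    model.contains("sonnet-4-6") || model.ends_with("-thinking")
}
```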
…t-4-6-thinking

- Map gemini-3.1-flash -> gemini-3-flash (invalid model ID from clients)
- Map claude-sonnet-4-6-thinking -> claude-sonnet-4-6 (deprecated name)

Both were returning 429 due to missing mapping entries.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
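The two new entries amount to an alias-resolution step that passes unknown IDs through unchanged. A sketch with hypothetical naming, covering just the two mappings this commit adds:

```rust
/// Resolve legacy or invalid client-supplied model IDs to canonical ones;
/// anything without a mapping entry passes through untouched.
fn resolve_alias(model: &str) -> &str {
    match model {
        "gemini-3.1-flash" => "gemini-3-flash",              // invalid client ID
        "claude-sonnet-4-6-thinking" => "claude-sonnet-4-6", // deprecated name
        other => other,
    }
}
```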
- Change system instruction role from "Antigravity" to "Aether"
- Use concise unrestricted assistant prompt across all 3 mappers (Claude, OpenAI, Gemini)
- Update duplicate detection strings and tests accordingly
- Bump known stable version to 1.107.0 with version validation

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Prevents "thinking.signature: Field required" and similar upstream API rejections by adding three layers of defense:

1. A pre-deserialization JSON sanitizer that fills missing/null fields with defaults (signature, thinking, text, tool_use, tool_result)
2. #[serde(default)] on the thinking text field to prevent deserialization failures
3. Strip thinking blocks whose signature is None instead of passing them through (None was being omitted by skip_serializing_if, triggering the upstream error)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
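Layer 3 can be sketched as a filter over the message's content blocks. The enum below is a minimal stand-in for the real content-block type, not the repository's definition:

```rust
/// Hypothetical minimal shape of a content block.
#[derive(Debug, PartialEq)]
enum Block {
    Thinking { text: String, signature: Option<String> },
    Text(String),
}

/// Layer 3 of the defense: drop thinking blocks whose signature is None
/// rather than serializing them without the upstream-required field.
fn strip_unsigned_thinking(blocks: Vec<Block>) -> Vec<Block> {
    blocks
        .into_iter()
        .filter(|b| !matches!(b, Block::Thinking { signature: None, .. }))
        .collect()
}
```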
…i requests

- Add capabilities (vision, function_calling) to the /v1/models endpoint so clients like Cherry Studio correctly detect multimodal support
- Preserve images in tool_result for the current turn instead of stripping all images indiscriminately (Claude protocol path)
- Inject warning text instead of silently dropping unreadable file:// images (OpenAI protocol path)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
…sion Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
🤖 Generated with Claude Code