feat: migrate from Claude CLI subprocess to Vercel AI SDK#479
AnExiledDev wants to merge 14 commits into `main`.
Conversation
Replace the CLI subprocess model (`cli-process.js` + `anthropicClient.js`) with direct Vercel AI SDK calls via a new `aiClient.js` wrapper and `aiCost.js` for automatic token cost tracking.

Key changes:

- Add `generate()` and `stream()` wrappers with retry logic (3-attempt exponential backoff), timeout/abort handling, and provider caching
- Automatic cost calculation via the token-costs library with model ID normalization
- Split triage config: separate classifier (Haiku) and responder (Sonnet) models with per-role budgets, API keys, and base URLs
- Fix 19 issues found in migration review: CostClient race condition, cached token extraction, truncated classification confidence, `thinkingTokens` default, async `onChunk` safety, double-delete in automod, budget enforcement logging, timer cleanup, config validation gaps, and TypeScript type corrections
- Add 17 new tests covering retry logic, error classification, abort handling, race conditions, and all fixed edge cases
- Remove `cli-process.js` (22 KB subprocess state machine) and `anthropicClient.js`
Any provider:model string now resolves automatically using <PROVIDER>_API_KEY and <PROVIDER>_BASE_URL env vars, with ANTHROPIC_API_KEY as fallback. Minimax and Codex work out of the box — no provider-specific code needed.
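The resolution rule described above can be sketched as follows. This is a minimal illustration only: the helper name `resolveProviderEnv` is not in the PR, and the real `aiClient.js` additionally handles per-call overrides and a known-base-URL table.

```javascript
// Sketch of provider:model resolution via environment variables.
// resolveProviderEnv is a hypothetical name, not the PR's actual API.
function resolveProviderEnv(modelString, env) {
  const idx = modelString.indexOf(':');
  const provider = idx > 0 ? modelString.slice(0, idx) : 'anthropic';
  const modelId = idx > 0 ? modelString.slice(idx + 1) : modelString;
  const prefix = provider.toUpperCase().replace(/-/g, '_');
  return {
    provider,
    modelId,
    // <PROVIDER>_API_KEY wins; ANTHROPIC_API_KEY is the documented fallback.
    apiKey: env[`${prefix}_API_KEY`] || env.ANTHROPIC_API_KEY,
    baseUrl: env[`${prefix}_BASE_URL`],
  };
}
```

With this shape, `minimax:MiniMax-M2` picks up `MINIMAX_API_KEY`/`MINIMAX_BASE_URL`, while a bare `claude-sonnet-4-6` string falls back to the Anthropic provider and key.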
- Add MiniMax local pricing table (token-costs doesn't cover it yet)
- Track `cacheCreationInputTokens` alongside `cacheReadInputTokens`
- Use `authToken` instead of `apiKey` for non-Anthropic providers
- Fix provider metadata extraction to check the provider-specific key before falling back to anthropic
- Fix MiniMax base URL to include the `/v1` suffix
- Harden JSON fence stripping for providers that prepend whitespace
- Genericize error messages from "Anthropic" to "AI provider"
- Remove stale `DISABLE_PROMPT_CACHING` references
- Update tests for all of the above
- Convert `aiClient.js` to lazy-load the `@ai-sdk/anthropic` and `ai` modules
- Add `preloadSDK()` for background pre-warming at startup
- Add detailed timing breakdowns to `generate()` and `stream()`
- Add timing instrumentation to triage classification and response
- Simplify `aiCost.js` to use a local `model-pricing.json` lookup
- Update config to use MiniMax models and reduce `contextMessages` to 5
- Fix various test mocks for new SDK patterns

Performance improvements:

- Bot startup no longer blocked by ~6s SDK import
- SDK preloads in background while Discord connects
- Timing logs identify latency sources (API vs SDK vs Discord)

Known issue: response latency of 8-16s with the MiniMax provider needs further optimization (API-side, not SDK).
📝 Walkthrough

This pull request migrates the application from headless Claude CLI processes to the Vercel AI SDK with multi-provider support. It refactors the triage system, auto-moderation, and summarization features to use the new AI client, updates configuration schemas and environment setup, replaces error handling, and provides comprehensive test coverage for the new architecture.
🚥 Pre-merge checks: ✅ 3 passed
Pull request overview
Note
Copilot was unable to run its full agentic suite in this review.
Migrates bot AI inference from a Claude CLI subprocess to the Vercel AI SDK, adding multi-provider routing, in-process cost tracking, and improved instrumentation.
Changes:
- Replaced CLI subprocess inference with an in-process `aiClient` wrapper around the Vercel AI SDK (`ai`, `@ai-sdk/anthropic`), including provider/model routing.
- Added local token-based cost calculation via `model-pricing.json` and expanded timing instrumentation/logging.
- Updated config schema/UI and refactored tests to match the new SDK result shapes and error handling.
Reviewed changes
Copilot reviewed 42 out of 43 changed files in this pull request and generated 6 comments.
Show a summary per file
| File | Description |
|---|---|
| web/src/types/config.ts | Updates dashboard config types for new triage fields and optional prompts. |
| web/src/components/ui/discord-markdown-editor.tsx | Minor robustness/formatting improvements in editor cursor handling and preview rendering. |
| web/src/components/dashboard/config-workspace/config-categories.ts | Removes “streaming” from triage debug search metadata. |
| web/src/components/dashboard/config-categories/ai-automation.tsx | Removes triage “Streaming” toggle from UI. |
| tests/utils/errors.test.js | Extends error classification tests for SDK error shapes and AIClientError reasons. |
| tests/utils/debugFooter.test.js | Updates debug footer stats extraction tests to new SDK result structure. |
| tests/utils/aiCost.test.js | Adds tests for local pricing map + token cost calculation. |
| tests/utils/aiClient.test.js | Adds comprehensive unit tests for the new aiClient wrapper (providers, retry, streaming, cost). |
| tests/modules/triage.test.js | Refactors triage tests from CLI process mocks to aiClient generate/stream mocks. |
| tests/modules/triage.coverage.test.js | Updates triage coverage tests for aiClient and AIClientError behavior. |
| tests/modules/triage-respond.test.js | Adjusts triage-respond tests for updated extractStats signature and cost fields. |
| tests/modules/triage-parse.test.js | Updates parser tests for .text/finishReason shape and code-fence trimming behavior. |
| tests/modules/triage-config.test.js | Updates triage-config tests to reflect removal of tokenRecycleLimit. |
| tests/modules/triage-budget.test.js | Refactors budget-gate tests for aiClient usage/cost shapes. |
| tests/modules/cli-process.test.js | Removes CLI subprocess tests now that CLIProcess is removed. |
| tests/modules/aiAutoMod.test.js | Moves auto-mod AI calls to aiClient and updates mocks/expectations. |
| tests/config.test.js | Updates config.json structural expectations after triage config changes. |
| tests/commands/tldr.test.js | Refactors TLDR command tests to use aiClient generate() instead of CLIProcess. |
| src/utils/errors.js | Replaces CLIProcessError with AIClientError and updates classification/suggestions for provider-agnostic messaging. |
| src/utils/debugFooter.js | Updates usage/cost extraction to SDK result shape and providerMetadata cache stats. |
| src/utils/anthropicClient.js | Removes the shared Anthropic SDK singleton (no longer used). |
| src/utils/aiCost.js | Adds local pricing-based cost computation utility. |
| src/utils/aiClient.js | Adds Vercel AI SDK wrapper with lazy loading, provider routing, retries, streaming, and instrumentation. |
| src/prompts/triage-classify-system.md | Extends classifier schema with needsThinking/needsSearch hints. |
| src/modules/triage.js | Replaces CLIProcess orchestration with aiClient calls, adds warmConnection, budgets, and timing logs. |
| src/modules/triage-respond.js | Parallelizes context fetching and passes provider name into extractStats. |
| src/modules/triage-parse.js | Updates parsing to use SDK .text, improves code-fence stripping, and adds truncated-response recovery fields. |
| src/modules/triage-config.js | Removes streaming/tokenRecycleLimit and changes thinkingTokens default behavior. |
| src/modules/cli-process.js | Removes CLIProcess implementation now that inference is in-process via SDK. |
| src/modules/aiAutoMod.js | Migrates moderation inference to aiClient generate(); fixes double-delete behavior when autoDelete is enabled. |
| src/modules/ai.js | Updates module header comment to reflect Vercel AI SDK usage. |
| src/logger.js | Updates sensitive-field filtering list (removes CLAUDE_CODE_OAUTH_TOKEN). |
| src/index.js | Preloads Vercel AI SDK in background during startup. |
| src/data/model-pricing.json | Adds offline per-model pricing table for cost estimation. |
| src/commands/tldr.js | Migrates TLDR summarization from CLIProcess to aiClient generate() with error handling. |
| src/api/utils/configValidation.js | Updates schema validation for new/removed triage fields. |
| package.json | Replaces @anthropic-ai/sdk dependency usage with Vercel AI SDK packages. |
| docs/test-coverage-analysis.md | Updates documentation to reflect aiClient/AIClientError terminology. |
| docker-compose.yml | Removes DISABLE_PROMPT_CACHING env var from compose config. |
| config.json | Updates default triage models and context message count for the new provider setup. |
| .github/copilot-instructions.md | Updates contributor docs to reference Vercel AI SDK packages. |
| .env.example | Removes Claude CLI OAuth env docs; adds provider env var conventions for multi-provider routing. |
Files not reviewed (1)
- pnpm-lock.yaml: Language not supported
```js
try {
  const result = await withRetry(
    () =>
      streamText({
        model,
        system,
        prompt,
        ...(hasTools && { tools, stopWhen: stepCountIs(5) }),
        maxTokens,
        providerOptions,
        abortSignal: signal,
        onChunk: ({ chunk }) => {
          if (chunk.type === 'tool-call' && userOnChunk) {
            try {
              const cbResult = userOnChunk(chunk.toolName, chunk.args);
              if (cbResult?.catch)
                cbResult.catch((err) =>
                  logError('onChunk callback error', { error: err.message }),
                );
            } catch (err) {
              logError('onChunk callback error (sync)', { error: err.message });
            }
          }
        },
      }),
    { maxRetries: 3, signal },
  );
  timings.streamStarted = Date.now();

  // Await final results from the stream
  const text = await result.text;
  timings.streamComplete = Date.now();
  const usage = (await result.totalUsage) ?? (await result.usage) ?? {};
  const finishReason = await result.finishReason;
  const sources = (await result.sources) ?? [];
  const providerMetadata = (await result.providerMetadata) ?? {};
```
`withRetry()` only wraps the `streamText()` call, but most streaming failures (timeouts, network errors, API errors) surface when awaiting `result.text`, `result.usage`, etc. In that case, retries never trigger even if `isRetryable()` would allow them. Wrap the entire streaming consumption (creating the stream plus awaiting the final promises) inside the retry function, or restructure so the retry function awaits the final stream output before returning.
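The restructure suggested above can be demonstrated with a self-contained sketch. The `withRetry` implementation and fake stream below are illustrative stand-ins, not the PR's code; the point is that a rejection from the `.text` promise propagates out of the retry callback and triggers another attempt.

```javascript
// Minimal retry helper (illustrative): retries whatever the callback awaits.
async function withRetry(fn, { maxRetries = 3 } = {}) {
  let lastErr;
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
    }
  }
  throw lastErr;
}

// Fake streamText stand-in: creation always succeeds, but consumption
// (the `text` promise) fails on the first attempt, mimicking a
// mid-stream network error.
let calls = 0;
function fakeStreamText() {
  calls++;
  return {
    text:
      calls === 1
        ? Promise.reject(new Error('socket hang up'))
        : Promise.resolve('final answer'),
  };
}

async function main() {
  // The retry unit covers creation AND consumption, so the mid-stream
  // failure is retried instead of escaping past withRetry.
  return withRetry(async () => {
    const result = fakeStreamText();
    return await result.text;
  });
}
```

Had only `fakeStreamText()` been wrapped, the first `.text` rejection would have escaped the retry loop entirely.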
```js
const apiKey =
  overrides.apiKey ||
  (isAnthropic ? process.env.ANTHROPIC_API_KEY : undefined) ||
  process.env[`${envPrefix}_API_KEY`] ||
  process.env.ANTHROPIC_API_KEY;

const baseUrl =
  overrides.baseUrl ||
  (isAnthropic ? undefined : process.env[`${envPrefix}_BASE_URL`]) ||
  KNOWN_BASE_URLS[providerName];

providerCache.set(
  cacheKey,
  createAnthropic({
    ...(isAnthropic ? { apiKey } : { authToken: apiKey }),
    ...(baseUrl && { baseURL: baseUrl }),
  }),
);
```
This hard-codes Anthropic auth to `{ apiKey }` only. If any deployments previously used OAuth-style tokens (or a future auth mode requiring bearer auth), passing them via `apiKey` is likely to fail. A concrete compatibility improvement is to detect the token type (e.g., known OAuth prefixes) and pass `{ authToken }` for those cases, while keeping `{ apiKey }` for standard API keys.
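A sketch of that compatibility check follows. The `sk-ant-oat` prefix test and the `buildAuthOptions` name are assumptions for illustration, not a documented contract of the Anthropic API or this PR:

```javascript
// Hypothetical helper: choose { apiKey } vs { authToken } based on token shape.
// The 'sk-ant-oat' OAuth-prefix check is an assumption, not a guaranteed format.
function buildAuthOptions(token, { isAnthropic }) {
  const looksLikeOAuth =
    typeof token === 'string' && token.startsWith('sk-ant-oat');
  // Non-Anthropic providers and OAuth-style tokens use bearer auth.
  if (!isAnthropic || looksLikeOAuth) return { authToken: token };
  // Standard Anthropic API keys use the x-api-key header path.
  return { apiKey: token };
}
```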
```js
const SENSITIVE_FIELDS = [
  'DISCORD_TOKEN',
  'ANTHROPIC_API_KEY',
  'ANTHROPIC_AUTH_TOKEN',
```
Removing CLAUDE_CODE_OAUTH_TOKEN from the sensitive-field redaction list can leak it if it remains set in existing deployments or logs (even if no longer used by the app). Keeping it in SENSITIVE_FIELDS is a low-cost defense-in-depth change that prevents accidental secret exposure.
Suggested change:

```diff
   'ANTHROPIC_AUTH_TOKEN',
+  'CLAUDE_CODE_OAUTH_TOKEN',
```
```js
debug('runResponder timing', {
  channelId,
  promptBuildMs: timings.start, // already logged separately
  streamApiMs: timings.streamDone - timings.start,
});
```
`promptBuildMs` is logged as an absolute timestamp (`timings.start`) rather than a duration, so the emitted metric is incorrect. Either remove this field or log a real duration (e.g., time spent building the prompt, or time from start to stream init).
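The duration fix can be sketched as below. Field names `promptBuilt` and `streamDone` are assumptions about the timings object; the principle is simply that every `*Ms` field must be a difference of two timestamps, never a raw `Date.now()` value.

```javascript
// Sketch: emit only differences between recorded timestamps, so each
// *Ms field is a duration in milliseconds, not an epoch timestamp.
function computeDurations(timings) {
  return {
    promptBuildMs: timings.promptBuilt - timings.start,
    streamApiMs: timings.streamDone - timings.promptBuilt,
  };
}
```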
```js
/**
 * The Anthropic API uses hyphen-separated version numbers and optional date
 * suffixes (e.g. 'claude-haiku-4-5-20250514'), while the pricing JSON uses
 * short names (e.g. 'claude-haiku-4-5').
 *
 * @param {string} modelId - API model ID
 * @returns {string} Normalised model ID for pricing lookup
 */
function normaliseModelId(modelId) {
  // Strip date suffix (e.g. '-20250514', '-20250929')
  const stripped = modelId.replace(/-\d{8}$/, '');

  // Family-last: 'claude-sonnet-4-6' → 'claude-sonnet-4.6' (for dotted input)
  // Not needed for our JSON keys (which use hyphens), but keeps compat if
  // someone passes a dotted version like 'claude-sonnet-4.6'.
  return stripped;
```
The comment mentions converting hyphenated versions to dotted versions, but the function currently only strips a date suffix and returns stripped unchanged. Update the comment to reflect the actual behavior (date suffix stripping only), or implement the described dotted-version normalization so the comment stays accurate.
Suggested change:

```js
/**
 * The Anthropic API may append an optional date suffix to model IDs
 * (e.g. 'claude-haiku-4-5-20250514'), while the pricing JSON uses the base
 * model name without that suffix (e.g. 'claude-haiku-4-5').
 *
 * @param {string} modelId - API model ID
 * @returns {string} Model ID with any trailing date suffix removed
 */
function normaliseModelId(modelId) {
  // Strip trailing date suffix (e.g. '-20250514', '-20250929').
  return modelId.replace(/-\d{8}$/, '');
```
```js
const classifyProvider = resolved.classifyModel?.split(':')[1]
  ? resolved.classifyModel.split(':')[0]
  : 'anthropic';
const respondProvider = resolved.respondModel?.split(':')[1]
  ? resolved.respondModel.split(':')[0]
```
This parses providers by calling split(':') multiple times and relying on [1] truthiness. Consider parsing once (e.g., via indexOf(':') or a single split) to avoid duplicated work and make the intent clearer/less error-prone, especially if model IDs ever contain additional colons or if you extend the model-string grammar.
Suggested change:

```js
const classifyModel = resolved.classifyModel;
const classifySeparatorIndex = classifyModel?.indexOf(':') ?? -1;
const classifyProvider =
  classifySeparatorIndex >= 0 && classifySeparatorIndex < classifyModel.length - 1
    ? classifyModel.slice(0, classifySeparatorIndex)
    : 'anthropic';
const respondModel = resolved.respondModel;
const respondSeparatorIndex = respondModel?.indexOf(':') ?? -1;
const respondProvider =
  respondSeparatorIndex >= 0 && respondSeparatorIndex < respondModel.length - 1
    ? respondModel.slice(0, respondSeparatorIndex)
```
| Filename | Overview |
|---|---|
| src/utils/aiClient.js | New core AI client wrapper — well-structured with lazy loading, provider caching, retry logic, and abort support; stream() timing marks are both set after full stream consumption, making streamConsumeMs always ~0ms; rejected SDK load promise is never reset |
| src/utils/aiCost.js | New offline cost calculator using model-pricing.json; correctly handles cache-aware pricing and date-suffixed model ID normalization |
| src/modules/triage.js | Major refactor from CLIProcess to per-call generate()/stream() SDK calls; adds abortSignal propagation, timing instrumentation, and budget-overage warnings; logic is sound |
| src/modules/aiAutoMod.js | Migrated from CLIProcess to generate(); generate() call is correctly wrapped in the existing try/catch in checkAiAutoMod() |
| src/data/model-pricing.json | New static pricing data for Anthropic, OpenAI, and MiniMax models; well-commented with correct USD-per-1M-token format |
| src/utils/debugFooter.js | Updated extractStats() to read from AI SDK result shape (camelCase usage fields, providerMetadata for cache tokens); old multi-key fallback logic correctly removed |
| src/utils/errors.js | CLIProcessError renamed to AIClientError with updated reason enum; classifyError() updated to handle AIClientError.statusCode (not .status) |
| src/modules/triage-parse.js | Updated to consume sdkResult.text instead of sdkMessage.result; adds whitespace trimming before fence stripping and normalises needsThinking/needsSearch defaults |
| tests/utils/aiClient.test.js | New comprehensive test suite covering generate(), stream(), withRetry, provider routing, credential handling, and onChunk callbacks |
| tests/utils/aiCost.test.js | New tests covering cost calculation, cache-aware pricing, model normalization, and unknown-model fallback |
Flowchart

```mermaid
%%{init: {'theme': 'neutral'}}%%
flowchart TD
    A[Discord Message] --> B[triage.js: evaluateNow]
    B --> C[runClassification]
    C --> D[generate — aiClient.js]
    D --> E{resolveModel}
    E -->|provider:model string| F{Provider?}
    F -->|anthropic| G[createAnthropic\napiKey / authToken]
    F -->|minimax| H[createAnthropic\nauthToken + baseURL]
    F -->|other| I[createAnthropic\nENV: PROVIDER_API_KEY\n+ PROVIDER_BASE_URL]
    G & H & I --> J[generateText — Vercel AI SDK]
    J --> K[parseClassifyResult]
    K -->|ignore| L[Done — no response]
    K -->|respond| M[runResponder]
    M --> N[stream — aiClient.js]
    N --> O[streamText — Vercel AI SDK]
    O -->|onChunk: tool-call| P[onChunk callback\nsearchCount++\nDiscord reaction]
    O --> Q[Await .text / .usage / .providerMetadata]
    Q --> R[calculateCost — aiCost.js\nmodel-pricing.json lookup]
    R --> S[parseRespondResult]
    S --> T[Send to Discord]
```
Prompt To Fix All With AI
This is a comment left during a code review.
Path: src/utils/aiClient.js
Line: 499-500
Comment:
**Stream timing marks set after stream is fully consumed**
Both `timings.streamStarted` and `timings.streamComplete` are assigned `Date.now()` back-to-back _after_ `withRetry` resolves — but the full stream (all awaited promises: `.text`, `.usage`, etc.) finishes _inside_ `withRetry`. As a result `streamConsumeMs` is always ~0ms and `streamInitMs` absorbs the entire API + stream time, defeating the purpose of the split.
To get meaningful numbers, capture the timestamps inside the `withRetry` callback, wrapping the `await streamResult.text` call.
How can I resolve this? If you propose a fix, please make it concise.
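The fix described in this comment can be shown with a small self-contained sketch. `consumeStream` and `startStream` are illustrative names; the idea is that both timing marks surround the actual stream consumption, inside the unit of work the retry helper executes, so `streamConsumeMs` measures real elapsed time.

```javascript
// Sketch: capture timing marks around stream consumption itself, not
// after the whole retry has already resolved.
async function consumeStream(startStream, timings) {
  const result = await startStream();  // request sent, stream handle obtained
  timings.streamStarted = Date.now();
  const text = await result.text;      // full stream consumed here
  timings.streamComplete = Date.now();
  return text;
}
```

With this structure, `streamComplete - streamStarted` reflects actual consumption time instead of always being ~0 ms.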
---
This is a comment left during a code review.
Path: src/utils/aiClient.js
Line: 39-52
Comment:
**Rejected SDK load promise is never reset**
If either `import('@ai-sdk/anthropic')` or `import('ai')` rejects (e.g. missing package in a misconfigured environment), `_sdkLoadPromise` is set to a permanently rejected promise. Because the guard `if (!_sdkLoadPromise)` is never re-entered after a failure, every subsequent call to `getSDK()` returns the same rejected promise with no retry. The bot can only recover with a process restart.
Adding `.catch((err) => { _sdkLoadPromise = null; throw err; })` to the promise chain would allow callers to retry after transient load errors.
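A self-contained sketch of that reset-on-failure pattern (the `loadSDK`/`importer` names are illustrative; the real code caches `import('@ai-sdk/anthropic')` and `import('ai')`):

```javascript
// Cache the load promise, but clear it on rejection so a later call
// can retry the import instead of reusing a permanently rejected promise.
let _sdkLoadPromise = null;
let attempts = 0; // instrumentation for the demo below

function loadSDK(importer) {
  if (!_sdkLoadPromise) {
    _sdkLoadPromise = importer().catch((err) => {
      _sdkLoadPromise = null; // allow a subsequent loadSDK() to retry
      throw err;
    });
  }
  return _sdkLoadPromise;
}
```

A first failed load no longer poisons the cache: the next `loadSDK()` call re-runs the importer.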
How can I resolve this? If you propose a fix, please make it concise.

Reviews (5). Last reviewed commit: "fix(ci): unblock CI after merge — schema..."
Actionable comments posted: 13
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
src/modules/aiAutoMod.js (1)
Lines 212-214: ⚠️ Potential issue | 🔴 Critical

`autoDelete` currently flips the delete semantics. The unconditional delete on lines 212-214 runs before the action switch, so `warn`, `timeout`, `kick`, and `ban` all remove the source message whenever `autoDelete` is `true`. Then the `delete` branch on lines 285-288 is inverted and still deletes when `autoDelete` is `false`. That effectively breaks the flag and also removes the evidence before `sendFlagEmbed()` runs.

🛠️ Minimal fix

```diff
-  if (autoModConfig.autoDelete) {
-    await message.delete().catch(() => {});
-  }
-
   switch (result.action) {
@@
     case 'delete':
-      if (!autoModConfig.autoDelete) {
+      if (autoModConfig.autoDelete) {
         await message.delete().catch(() => {});
       }
       break;
```

Also applies to: 285-288
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `src/modules/aiAutoMod.js` around lines 212-214: remove the unconditional `message.delete()` call and instead perform deletions inside the action switch. Delete the message unconditionally in the `'delete'` action handler (use `message.delete().catch(() => {})`), and for other actions (warn, timeout, kick, ban) only call `message.delete()` if `autoModConfig.autoDelete` is true; also fix the inverted logic in the existing `'delete'` branch so it does not delete when `autoModConfig.autoDelete` is false. Update references in the function handling the mod action (look for `autoModConfig.autoDelete`, `message.delete()`, and `sendFlagEmbed()`) so deletion happens in the correct branches.

src/modules/triage-respond.js (1)
Lines 79-103: ⚠️ Potential issue | 🟠 Major

Keep `fetchChannelContext()` within the caller's `limit`. This path can now return up to `limit + RECENT_LIMIT` unique messages, so a configured cap like `15` can silently become `20`. It also does a duplicate latest-message fetch when `bufferSnapshot` is empty. That inflates prompt size/cost and adds avoidable Discord API traffic.

♻️ Proposed fix

```diff
-  const oldest = bufferSnapshot[0];
-  const historyOptions = { limit };
-  if (oldest) historyOptions.before = oldest.messageId;
-  const RECENT_LIMIT = 5;
+  const oldest = bufferSnapshot[0];
+  const RECENT_LIMIT = 5;
+  const recentLimit = oldest ? Math.min(RECENT_LIMIT, limit) : 0;
+  const historyOptions = { limit: Math.max(limit - recentLimit, 0) };
+  if (oldest) historyOptions.before = oldest.messageId;
   const [historyFetched, recentFetched] = await Promise.all([
-    channel.messages.fetch(historyOptions),
-    channel.messages.fetch({ limit: RECENT_LIMIT }),
+    historyOptions.limit > 0 ? channel.messages.fetch(historyOptions) : Promise.resolve(new Map()),
+    recentLimit > 0 ? channel.messages.fetch({ limit: recentLimit }) : Promise.resolve(new Map()),
   ]);
@@
   return merged
     .sort((a, b) => a.createdTimestamp - b.createdTimestamp) // chronological
+    .slice(-limit)
     .map(toContextEntry);
```
Verify each finding against the current code and only fix it if needed. In `src/modules/triage-respond.js` around lines 79-103: `fetchChannelContext` currently can return up to `limit + RECENT_LIMIT` messages and duplicates the latest fetch when `bufferSnapshot` is empty. Fix it by (1) computing `recentFetchLimit = Math.min(RECENT_LIMIT, Math.max(0, limit))` and, if there is no oldest message (`bufferSnapshot` empty), setting `recentFetchLimit = 0` so you don't re-fetch the same messages; (2) passing `recentFetchLimit` to `channel.messages.fetch` for the `recentFetched` call instead of a fixed `RECENT_LIMIT`; and (3) after building `merged` (using `historyFetched`, `recentFetched`, `seen`, `bufferIds`, `toContextEntry`), enforcing the caller limit by returning `merged.slice(0, limit)`. Ensure you reference `historyOptions`, `RECENT_LIMIT`, `bufferSnapshot`, `historyFetched`, `recentFetched`, `merged`, and `toContextEntry` when making these changes.

src/modules/triage.js (1)
Lines 307-315: ⚠️ Potential issue | 🟡 Minor

`promptBuildMs` is logging an epoch timestamp, not a duration. `timings.start` will produce a huge absolute value here, so the new latency breakdown is misleading. Capture a prompt-built timestamp or drop the field.

Also applies to: 376-380
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `src/modules/triage.js` around lines 307-315: `timings.start` currently stores `Date.now()` and is being logged as `promptBuildMs` (an epoch) instead of a duration. Change the timing measurement around `buildRespondPrompt` (and the similar block at the later lines) to record a start timestamp before calling `buildRespondPrompt` and then compute `promptBuildMs` as `Date.now() - start` (i.e., a duration in milliseconds) before adding it to the timings object, or alternatively remove the field. Update references to `timings.start`/`promptBuildMs` so `promptBuildMs` is a computed duration for `buildRespondPrompt` (refer to `timings.start`, `buildRespondPrompt`, `respondPrompt`), and do the same for the other occurrence around the later block.

tests/modules/triage.test.js (1)
Lines 1178-1203: 🧹 Nitpick | 🔵 Trivial

Assert the responder call options, not just the reaction side effects. These tests can still pass if `runResponder()` stops forwarding `needsSearch`/`needsThinking` into `stream()`, because the mock manually triggers `onChunk()` and the emoji state is derived from the same classifier flag. Add direct assertions on `mockStream.mock.calls[0][0].tools` and `.thinking`.

Also applies to: 1256-1276
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/modules/triage.test.js` around lines 1178 - 1203, The test currently only asserts mockReact side-effects but must also assert the responder was invoked with correct stream options; update the test case(s) that simulate streaming (the one using mockStream and mockGenerate around the 'web_search' onChunk and the similar case at lines 1256-1276) to assert mockStream.mock.calls[0][0].tools includes the expected 'web_search' tool and that mockStream.mock.calls[0][0].thinking matches the classifier intent (e.g., true for needsThinking or includes the proper flag for needsSearch); reference the runResponder/stream invocation by checking mockStream's first call options rather than relying solely on mockReact.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.env.example:
- Around line 27-42: Update .env.example to match the new default triage model
in config.json by making MINIMAX_API_KEY (and optionally MINIMAX_BASE_URL)
documented as required for out-of-box triage, or revert config.json defaults to
use ANTHROPIC_API_KEY; specifically align the .env example with the config.json
keys triage.classifyModel and triage.respondModel by either adding
MINIMAX_API_KEY as a required credential alongside ANTHROPIC_API_KEY or changing
the defaults in config.json away from "minimax:MiniMax-M2.7" so the existing
ANTHROPIC_API_KEY remains the required out-of-box provider.
In @.github/copilot-instructions.md:
- Line 31: Update the terminology in the test-mocking guidance by replacing the
phrase "Claude SDK" with "Vercel AI SDK" (or "Vercel AI SDK (`ai`,
`@ai-sdk/anthropic`)") so it matches the header "AI features | Vercel AI SDK
(`ai`, `@ai-sdk/anthropic`)" and any examples or code snippets that reference
Claude; ensure mock setup names, import examples, and README/test comments
consistently use "Vercel AI SDK" and the correct package identifiers instead of
"Claude SDK".
In `@docs/test-coverage-analysis.md`:
- Around line 96-100: Update the outdated phrase "CLI timeout/parse failure
tests" to use AI client terminology (e.g., "AI client timeout/parse failure
tests" or "AI Client timeout/parse failure tests") so it matches the renamed
section "AI Client Timeout and Parse Failures — MEDIUM PRIORITY"; search for the
exact string "CLI timeout/parse failure tests" in the document and replace it
with the consistent AI client wording to keep section names and recommendations
aligned.
In `@package.json`:
- Around line 36-38: Update the dependency versions for "@ai-sdk/anthropic" and
"ai" in package.json to the latest stable releases by changing
"@ai-sdk/anthropic" from ^3.0.66 to ^3.0.68 and "ai" from ^6.0.146 to ^6.0.158;
verify any engine/CI Node.js requirement is >=18 (project targets Node 22+), run
npm install (or yarn) and the test suite to confirm no regressions after
updating these packages.
In `@src/modules/triage-config.js`:
- Around line 19-23: The current selection logic for respondModel uses || which
treats an explicit empty string as missing and silently falls back to the
legacy/default; change the operator back to nullish coalescing so an explicit
respondModel: "" remains surfaced (i.e., use respondModel ?? fallback) and add
or ensure a config validation step rejects empty-string values for
triageConfig.respondModel/triageConfig.model so bad configs are surfaced rather
than routing traffic to triageConfig.models?.default ('claude-sonnet-4-6').
In `@src/modules/triage.js`:
- Around line 348-351: The WebSearch notification currently calls
evalClient.channels.fetch(channelId) directly, bypassing the shared cache;
replace that fetch with the cache helper fetchChannelCached(channelId,
evalClient) (or the existing fetchChannelCached usage pattern in this file) so
the channel lookup uses src/utils/discordCache.js, then call safeSend on the
returned channel as before; update the surrounding try/catch to handle null from
fetchChannelCached the same way you handled the previous fetch.
In `@src/utils/aiClient.js`:
- Around line 366-371: The catch blocks in generate() and stream() currently map
any aborted signal to a 'timeout' AIClientError, hiding caller-initiated
cancellations; update both catch handlers to check externalSignal?.aborted and,
if true, throw new AIClientError('Request was cancelled', 'aborted') (or
similar) while keeping the existing 'timeout' error when the internal signal
fired (i.e., signal.aborted && !externalSignal?.aborted); preserve existing
behavior for other errors (rethrow AIClientError and wrap others with 'api'
including statusCode).
- Around line 249-267: createAbortController currently ignores a pre-aborted
externalSignal and later code (generate and stream) cannot tell whether an abort
came from the caller or the timeout; fix createAbortController to immediately
abort the local controller if externalSignal && externalSignal.aborted (and
clear the timer), only attach externalSignal.addEventListener if it wasn't
already aborted, and keep the existing cleanup to remove the listener and
clearTimeout (refer to createAbortController, timer, onExternalAbort). Then
update generate() and stream() to distinguish abort reasons: detect if abort was
caused by the externalSignal (externalSignal?.aborted or
controller.signal.reason tied to externalSignal) and throw AIClientError(...,
'aborted') for caller cancellations, but throw AIClientError(..., 'timeout')
when the timer fired; use the same controller/signal returned by
createAbortController to determine which condition occurred.
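The controller behavior both comments describe can be sketched as follows — a minimal illustration, assuming the `createAbortController`/`timer`/`onExternalAbort` names from the review; the real aiClient.js implementation may differ:

```javascript
// Local controller that aborts on either a timeout or a caller-supplied signal
// (pre-aborted signals included) and records which condition fired, so callers
// can throw 'timeout' vs 'aborted' correctly.
function createAbortController(timeoutMs, externalSignal) {
  const controller = new AbortController();
  let timedOut = false;

  // A pre-aborted external signal must propagate immediately — no timer needed.
  if (externalSignal?.aborted) {
    controller.abort();
    return { controller, cleanup: () => {}, wasTimeout: () => false };
  }

  const timer = setTimeout(() => {
    timedOut = true;
    controller.abort();
  }, timeoutMs);

  const onExternalAbort = () => controller.abort();
  externalSignal?.addEventListener('abort', onExternalAbort, { once: true });

  const cleanup = () => {
    clearTimeout(timer);
    externalSignal?.removeEventListener('abort', onExternalAbort);
  };
  // wasTimeout() lets generate()/stream() distinguish the two abort reasons.
  return { controller, cleanup, wasTimeout: () => timedOut };
}
```

In the catch handlers, `generate()` and `stream()` would then check `wasTimeout()` (or `externalSignal?.aborted`) to decide between the `'timeout'` and `'aborted'` AIClientError reasons.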
In `@src/utils/errors.js`:
- Around line 223-224: Update the message for ErrorType.API_SERVER_ERROR in
src/utils/errors.js to remove the stale "restart the AI service" guidance and
instead instruct operators to check provider status and outbound connectivity;
locate the ErrorType.API_SERVER_ERROR entry and replace its remediation string
with a concise note suggesting provider status pages, upstream provider errors,
and network/outbound connectivity checks.
In `@tests/modules/triage-parse.test.js`:
- Around line 135-149: The test's fixture for parseClassifyResult omits the new
boolean flags so it won't catch regressions; update the sdkMessage payload in
the 'should return parsed classification on success' test to include
needsThinking: true and needsSearch: true, then add assertions that
result.needsThinking === true and result.needsSearch === true so
parseClassifyResult (and downstream triage.js usage) preserves those fields;
locate the test exercising parseClassifyResult and modify the sdkMessage and
expectations accordingly.
In `@tests/utils/aiClient.test.js`:
- Around line 541-562: Rename the test case title to accurately describe its
behavior: change the it(...) description for the test that calls generate({
model: 'claude-haiku-4-5', prompt: 'cancelled', abortSignal: controller.signal
}) and asserts the caught error is an AIClientError with reason 'timeout' so it
reads something like "should treat external abort as timeout" (reference
symbols: generate, AIClientError, mockGenerateText).
In `@tests/utils/errors.test.js`:
- Around line 345-363: The tests use CommonJS require() to load AIClientError
which breaks in an ESM test file; add a single ESM import for AIClientError at
the top of the test file (alongside existing imports) and remove the three
require() calls inside the tests so the specs use the imported AIClientError
when constructing error instances for classifyError and ErrorType assertions.
In `@web/src/types/config.ts`:
- Around line 239-242: The front-end type for CONFIG_SCHEMA.triage must accept
nulls: update the properties statusReactions, dailyBudgetUsd,
confidenceThreshold, and responseCooldownMs in web/src/types/config.ts so they
allow null values (e.g., boolean|null and number|null) in addition to being
optional, ensuring API payloads that send null are correctly typed; keep the
existing optionality and reference CONFIG_SCHEMA.triage and those exact property
names when making the change.
---
Outside diff comments:
In `@src/modules/aiAutoMod.js`:
- Around line 212-214: Remove the unconditional message.delete() call and
instead perform deletions inside the action switch: delete the message
unconditionally in the 'delete' action handler (use message.delete().catch(() =>
{})), and for other actions (warn, timeout, kick, ban) only call
message.delete() if autoModConfig.autoDelete is true; also fix the inverted
logic in the existing 'delete' branch so it does not delete when
autoModConfig.autoDelete is false. Update references in the function handling
the mod action (look for autoModConfig.autoDelete, message.delete(), and
sendFlagEmbed()) so deletion happens in the correct branches.
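The deletion placement this comment asks for can be sketched as a small switch. `applyAction` and the `message`/`autoModConfig` shapes are stand-ins for illustration, not the project's actual aiAutoMod.js symbols:

```javascript
// Deletion lives inside the action switch: unconditional for 'delete',
// gated on autoDelete for every other action — never both.
async function applyAction(action, message, autoModConfig) {
  const deletions = [];
  const remove = () => {
    deletions.push(message.id);
    return message.delete().catch(() => {});
  };

  switch (action) {
    case 'delete':
      await remove(); // 'delete' always deletes, exactly once
      break;
    case 'warn':
    case 'timeout':
    case 'kick':
    case 'ban':
      if (autoModConfig.autoDelete) await remove(); // others respect autoDelete
      break;
  }
  return deletions;
}
```

Because the only `message.delete()` calls sit inside the switch, a spam hit can no longer take the duplicate delete-then-delete API path.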
In `@src/modules/triage-respond.js`:
- Around line 79-103: fetchChannelContext currently can return up to limit +
RECENT_LIMIT and duplicates the latest fetch when bufferSnapshot is empty; fix
it by (1) computing recentFetchLimit = Math.min(RECENT_LIMIT, Math.max(0,
limit)) and if there is no oldest (bufferSnapshot empty) set recentFetchLimit =
0 so you don't re-fetch the same messages, (2) pass recentFetchLimit to
channel.messages.fetch for the recentFetched call instead of a fixed
RECENT_LIMIT, and (3) after building merged (using historyFetched,
recentFetched, seen, bufferIds, toContextEntry) enforce the caller limit by
returning merged.slice(0, limit). Ensure you reference historyOptions,
RECENT_LIMIT, bufferSnapshot, historyFetched, recentFetched, merged, and
toContextEntry when making these changes.
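The clamping and merging steps above, sketched with plain arrays in place of Discord fetches — `RECENT_LIMIT`, `bufferSnapshot`, and `merged` follow the comment's names, but the helpers are illustrative rather than the project's triage-respond.js code:

```javascript
const RECENT_LIMIT = 10;

// Clamp the recent fetch to the caller's limit; skip it entirely when the
// buffer is empty, since there is no "oldest" anchor and a recent fetch
// would just duplicate the history fetch.
function recentFetchLimit(bufferSnapshot, limit) {
  if (bufferSnapshot.length === 0) return 0;
  return Math.min(RECENT_LIMIT, Math.max(0, limit));
}

// Merge buffered and fetched entries, dedupe by id, then enforce the
// caller's limit on the combined result.
function mergeContext({ bufferSnapshot, historyFetched, recentFetched, limit }) {
  const seen = new Set();
  const merged = [];
  for (const entry of [...bufferSnapshot, ...recentFetched, ...historyFetched]) {
    if (seen.has(entry.id)) continue; // dedupe across buffer + fetches
    seen.add(entry.id);
    merged.push(entry);
  }
  return merged.slice(0, limit); // never return more than limit entries
}
```

The final `slice(0, limit)` is what closes the original bug: without it the function could hand back up to `limit + RECENT_LIMIT` messages.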
In `@src/modules/triage.js`:
- Around line 307-315: The timings.start currently stores Date.now() and is
being logged as promptBuildMs (an epoch) instead of a duration; change the
timing measurement around buildRespondPrompt (and the similar block at the later
lines) to record a high-level start timestamp before calling buildRespondPrompt
and then compute promptBuildMs as Date.now() - start (i.e., a duration in
milliseconds) before adding it to the timings object, or alternatively remove
the field; update references to timings.start/promptBuildMs so promptBuildMs is
a computed duration for buildRespondPrompt (refer to timings.start,
buildRespondPrompt, respondPrompt) and do the same for the other occurrence
around the later block.
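The duration fix above in miniature — `timedBuild` is a hypothetical wrapper shown for illustration, not the project's triage.js code:

```javascript
// Record a start timestamp, then store the elapsed delta: promptBuildMs is
// a duration in milliseconds, not the raw Date.now() epoch.
async function timedBuild(buildFn) {
  const timings = {};
  const start = Date.now();
  const prompt = await buildFn();
  timings.promptBuildMs = Date.now() - start; // duration, not an epoch
  return { prompt, timings };
}
```

Logging `timings.start` directly would emit a 13-digit epoch; subtracting it from a second `Date.now()` yields the small millisecond figure the log line intends.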
In `@tests/modules/triage.test.js`:
- Around line 1178-1203: The test currently only asserts mockReact side-effects
but must also assert the responder was invoked with correct stream options;
update the test case(s) that simulate streaming (the one using mockStream and
mockGenerate around the 'web_search' onChunk and the similar case at lines
1256-1276) to assert mockStream.mock.calls[0][0].tools includes the expected
'web_search' tool and that mockStream.mock.calls[0][0].thinking matches the
classifier intent (e.g., true for needsThinking or includes the proper flag for
needsSearch); reference the runResponder/stream invocation by checking
mockStream's first call options rather than relying solely on mockReact.
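The assertion style recommended above can be sketched without Vitest — `makeMock` is a tiny hand-rolled stand-in for `vi.fn()`, shown only to illustrate reading `mock.calls[0][0]` instead of relying on reaction side-effects:

```javascript
// Minimal mock that records its call arguments, mimicking vi.fn().mock.calls.
function makeMock() {
  const fn = (...args) => { fn.mock.calls.push(args); };
  fn.mock = { calls: [] };
  return fn;
}

const mockStream = makeMock();
// Simulated responder invocation, mirroring the options shape a test would assert on.
mockStream({ model: 'responder', tools: ['web_search'], thinking: true });

// Assert on the options the responder actually received.
const opts = mockStream.mock.calls[0][0];
const usedWebSearch = opts.tools.includes('web_search');
const thinkingMatchesClassifier = opts.thinking === true;
```

Checking the first call's options object verifies the classifier intent (tools, thinking) was forwarded, which a reaction-only assertion cannot prove.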
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 467d03fe-1f3f-4910-a4b7-80df94c78445
⛔ Files ignored due to path filters (1)
pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (42)
.env.example, .github/copilot-instructions.md, config.json, docker-compose.yml, docs/test-coverage-analysis.md, package.json, src/api/utils/configValidation.js, src/commands/tldr.js, src/data/model-pricing.json, src/index.js, src/logger.js, src/modules/ai.js, src/modules/aiAutoMod.js, src/modules/cli-process.js, src/modules/triage-config.js, src/modules/triage-parse.js, src/modules/triage-respond.js, src/modules/triage.js, src/prompts/triage-classify-system.md, src/utils/aiClient.js, src/utils/aiCost.js, src/utils/anthropicClient.js, src/utils/debugFooter.js, src/utils/errors.js, tests/commands/tldr.test.js, tests/config.test.js, tests/modules/aiAutoMod.test.js, tests/modules/cli-process.test.js, tests/modules/triage-budget.test.js, tests/modules/triage-config.test.js, tests/modules/triage-parse.test.js, tests/modules/triage-respond.test.js, tests/modules/triage.coverage.test.js, tests/modules/triage.test.js, tests/utils/aiClient.test.js, tests/utils/aiCost.test.js, tests/utils/debugFooter.test.js, tests/utils/errors.test.js, web/src/components/dashboard/config-categories/ai-automation.tsx, web/src/components/dashboard/config-workspace/config-categories.ts, web/src/components/ui/discord-markdown-editor.tsx, web/src/types/config.ts
💤 Files with no reviewable changes (8)
- tests/modules/triage-config.test.js
- src/logger.js
- docker-compose.yml
- web/src/components/dashboard/config-categories/ai-automation.tsx
- tests/config.test.js
- src/utils/anthropicClient.js
- src/modules/cli-process.js
- tests/modules/cli-process.test.js
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Greptile Review
🧰 Additional context used
📓 Path-based instructions (17)
**/*.{js,ts,jsx,tsx,mjs,mts}
📄 CodeRabbit inference engine (AGENTS.md)
Use ESM only; do not use CommonJS
Files:
src/index.js, src/modules/ai.js, web/src/components/ui/discord-markdown-editor.tsx, web/src/components/dashboard/config-workspace/config-categories.ts, tests/modules/triage-budget.test.js, tests/modules/triage-respond.test.js, src/modules/aiAutoMod.js, src/commands/tldr.js, src/modules/triage-respond.js, tests/commands/tldr.test.js, src/modules/triage-config.js, tests/utils/errors.test.js, tests/utils/debugFooter.test.js, web/src/types/config.ts, tests/modules/triage.coverage.test.js, src/utils/debugFooter.js, tests/modules/aiAutoMod.test.js, src/api/utils/configValidation.js, src/utils/errors.js, tests/utils/aiCost.test.js, tests/modules/triage-parse.test.js, src/modules/triage-parse.js, src/modules/triage.js, src/utils/aiCost.js, src/utils/aiClient.js, tests/modules/triage.test.js, tests/utils/aiClient.test.js
**/*.{js,ts,jsx,tsx}
📄 CodeRabbit inference engine (AGENTS.md)
**/*.{js,ts,jsx,tsx}: Use src/logger.js; do not use console.*
Use the safe Discord messaging helpers in src/utils/safeSend.js instead of raw reply/send/edit calls
Use parameterized SQL only; do not use string concatenation for SQL queries
Community features should be gated behind config.&lt;feature&gt;.enabled; moderation commands are the exception
Files:
src/index.js, src/modules/ai.js, web/src/components/ui/discord-markdown-editor.tsx, web/src/components/dashboard/config-workspace/config-categories.ts, tests/modules/triage-budget.test.js, tests/modules/triage-respond.test.js, src/modules/aiAutoMod.js, src/commands/tldr.js, src/modules/triage-respond.js, tests/commands/tldr.test.js, src/modules/triage-config.js, tests/utils/errors.test.js, tests/utils/debugFooter.test.js, web/src/types/config.ts, tests/modules/triage.coverage.test.js, src/utils/debugFooter.js, tests/modules/aiAutoMod.test.js, src/api/utils/configValidation.js, src/utils/errors.js, tests/utils/aiCost.test.js, tests/modules/triage-parse.test.js, src/modules/triage-parse.js, src/modules/triage.js, src/utils/aiCost.js, src/utils/aiClient.js, tests/modules/triage.test.js, tests/utils/aiClient.test.js
**/*.{js,ts,jsx,tsx,json}
📄 CodeRabbit inference engine (AGENTS.md)
Welcome-message variables use double braces only, like {{user}}; single braces are plain text and should not be documented, inserted, or parsed as variables
Files:
src/index.js, package.json, src/modules/ai.js, web/src/components/ui/discord-markdown-editor.tsx, web/src/components/dashboard/config-workspace/config-categories.ts, src/data/model-pricing.json, config.json, tests/modules/triage-budget.test.js, tests/modules/triage-respond.test.js, src/modules/aiAutoMod.js, src/commands/tldr.js, src/modules/triage-respond.js, tests/commands/tldr.test.js, src/modules/triage-config.js, tests/utils/errors.test.js, tests/utils/debugFooter.test.js, web/src/types/config.ts, tests/modules/triage.coverage.test.js, src/utils/debugFooter.js, tests/modules/aiAutoMod.test.js, src/api/utils/configValidation.js, src/utils/errors.js, tests/utils/aiCost.test.js, tests/modules/triage-parse.test.js, src/modules/triage-parse.js, src/modules/triage.js, src/utils/aiCost.js, src/utils/aiClient.js, tests/modules/triage.test.js, tests/utils/aiClient.test.js
**/*.js
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Use ESM only — import/export, never require() or module.exports
Files:
src/index.js, src/modules/ai.js, tests/modules/triage-budget.test.js, tests/modules/triage-respond.test.js, src/modules/aiAutoMod.js, src/commands/tldr.js, src/modules/triage-respond.js, tests/commands/tldr.test.js, src/modules/triage-config.js, tests/utils/errors.test.js, tests/utils/debugFooter.test.js, tests/modules/triage.coverage.test.js, src/utils/debugFooter.js, tests/modules/aiAutoMod.test.js, src/api/utils/configValidation.js, src/utils/errors.js, tests/utils/aiCost.test.js, tests/modules/triage-parse.test.js, src/modules/triage-parse.js, src/modules/triage.js, src/utils/aiCost.js, src/utils/aiClient.js, tests/modules/triage.test.js, tests/utils/aiClient.test.js
**/*.{js,ts,tsx}
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Use single quotes for strings, never double quotes (except in JSON files)
Always include semicolons at the end of statements
Always use trailing commas in multi-line constructs
Keep line width to 100 characters maximum
Never use console.* (console.log, console.error, etc.) — use Winston logger instead
Files:
src/index.js, src/modules/ai.js, web/src/components/ui/discord-markdown-editor.tsx, web/src/components/dashboard/config-workspace/config-categories.ts, tests/modules/triage-budget.test.js, tests/modules/triage-respond.test.js, src/modules/aiAutoMod.js, src/commands/tldr.js, src/modules/triage-respond.js, tests/commands/tldr.test.js, src/modules/triage-config.js, tests/utils/errors.test.js, tests/utils/debugFooter.test.js, web/src/types/config.ts, tests/modules/triage.coverage.test.js, src/utils/debugFooter.js, tests/modules/aiAutoMod.test.js, src/api/utils/configValidation.js, src/utils/errors.js, tests/utils/aiCost.test.js, tests/modules/triage-parse.test.js, src/modules/triage-parse.js, src/modules/triage.js, src/utils/aiCost.js, src/utils/aiClient.js, tests/modules/triage.test.js, tests/utils/aiClient.test.js
**/*.{js,ts,tsx,json}
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Use 2-space indentation with spaces, never tabs
Files:
src/index.js, package.json, src/modules/ai.js, web/src/components/ui/discord-markdown-editor.tsx, web/src/components/dashboard/config-workspace/config-categories.ts, src/data/model-pricing.json, config.json, tests/modules/triage-budget.test.js, tests/modules/triage-respond.test.js, src/modules/aiAutoMod.js, src/commands/tldr.js, src/modules/triage-respond.js, tests/commands/tldr.test.js, src/modules/triage-config.js, tests/utils/errors.test.js, tests/utils/debugFooter.test.js, web/src/types/config.ts, tests/modules/triage.coverage.test.js, src/utils/debugFooter.js, tests/modules/aiAutoMod.test.js, src/api/utils/configValidation.js, src/utils/errors.js, tests/utils/aiCost.test.js, tests/modules/triage-parse.test.js, src/modules/triage-parse.js, src/modules/triage.js, src/utils/aiCost.js, src/utils/aiClient.js, tests/modules/triage.test.js, tests/utils/aiClient.test.js
src/**/*.js
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Import and use Winston logger from '../logger.js' instead of console methods
Use safeReply(), safeSend(), or safeEditReply() for Discord message operations instead of raw Discord.js methods
Use parameterized queries in database operations — never string interpolation
Use getConfig(guildId) from src/modules/config.js to retrieve merged global + guild-level configuration
Gate community features behind config.&lt;feature&gt;.enabled; moderation commands are always available
Use src/utils/cache.js for generic caching with Redis primary + in-memory fallback
Use src/utils/discordCache.js for caching Discord API responses (channels, roles, members)
Use src/utils/reputationCache.js for caching leaderboard and user reputation data
Use discord.js v14 for Discord bot interactions
Files:
src/index.js, src/modules/ai.js, src/modules/aiAutoMod.js, src/commands/tldr.js, src/modules/triage-respond.js, src/modules/triage-config.js, src/utils/debugFooter.js, src/api/utils/configValidation.js, src/utils/errors.js, src/modules/triage-parse.js, src/modules/triage.js, src/utils/aiCost.js, src/utils/aiClient.js
**/package.json
📄 CodeRabbit inference engine (AGENTS.md)
Use the latest version of ALL package.json dependencies; dependencies should always be upgraded to the latest version before committing
Files:
package.json
src/modules/**/*.js
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Use Vercel AI SDK (ai, @ai-sdk/anthropic) for Claude AI features
Files:
src/modules/ai.js, src/modules/aiAutoMod.js, src/modules/triage-respond.js, src/modules/triage-config.js, src/modules/triage-parse.js, src/modules/triage.js
web/src/**/*.{ts,tsx}
📄 CodeRabbit inference engine (AGENTS.md)
web/src/**/*.{ts,tsx}: New dashboard routes need title wiring in web/src/lib/page-titles.ts; use createPageMetadata() for SSR and keep DashboardTitleSync aligned for client navigation
Dashboard clients that need the guild list should consume GuildDirectoryProvider; do not stack extra /api/guilds fetch loops in leaf components
Recharts dashboard views should use web/src/components/ui/stable-responsive-container.tsx; raw ResponsiveContainer mounts can spam width(-1)/height(-1) warnings when panels render before layout settles
Any visual dashboard or landing page change must be verified with Chrome DevTools MCP before calling it done; take a screenshot after the change
Check both light and dark themes if colors or theming changed
Check responsive behavior on mobile, tablet, and desktop if layout changed
Prefer TypeScript interface over type for defining object shapes in Next.js web dashboard
Files:
web/src/components/ui/discord-markdown-editor.tsx, web/src/components/dashboard/config-workspace/config-categories.ts, web/src/types/config.ts
web/src/**/*.{ts,tsx,css,scss}
📄 CodeRabbit inference engine (AGENTS.md)
Follow the design system when making changes to the UI/UX; see DESIGN.md for the design system and color palette
Files:
web/src/components/ui/discord-markdown-editor.tsx, web/src/components/dashboard/config-workspace/config-categories.ts, web/src/types/config.ts
web/src/**/*.tsx
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Use Tailwind CSS 4 for styling in web dashboard components
Files:
web/src/components/ui/discord-markdown-editor.tsx
web/src/components/**/*.tsx
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Use Radix UI for accessible component primitives in web dashboard
Files:
web/src/components/ui/discord-markdown-editor.tsx
{config.json,src/api/utils/configAllowlist.js}
📄 CodeRabbit inference engine (AGENTS.md)
Config-backed features must be added to config.json and src/api/utils/configAllowlist.js; if a key is missing from SAFE_CONFIG_KEYS, the dashboard cannot save it
Files:
config.json
tests/**/*.test.js
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Place bot test files in tests/ directory mirroring the src/ structure with .test.js extension
Maintain bot test coverage thresholds: statements 85%, branches 82%, functions 85%, lines 85%
Mock external dependencies (Discord.js, database, Redis, Claude SDK) in tests using vi.mock() and vi.fn()
Use vi.mock() for module mocking and vi.fn() for function stubs in tests
Use Vitest 4 for bot tests with node environment
Test timeout is 10 seconds for Vitest
Files:
tests/modules/triage-budget.test.js, tests/modules/triage-respond.test.js, tests/commands/tldr.test.js, tests/utils/errors.test.js, tests/utils/debugFooter.test.js, tests/modules/triage.coverage.test.js, tests/modules/aiAutoMod.test.js, tests/utils/aiCost.test.js, tests/modules/triage-parse.test.js, tests/modules/triage.test.js, tests/utils/aiClient.test.js
src/commands/**/*.js
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Create slash commands in src/commands/ exporting command builder + execute function with corresponding tests in tests/commands/
Files:
src/commands/tldr.js
src/api/**/*.js
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Use Express 5 for REST API server
Files:
src/api/utils/configValidation.js
🧠 Learnings (12)
📓 Common learnings
Learnt from: CR
File: .github/copilot-instructions.md
Timestamp: 2026-04-12T23:41:31.315Z
Learning: Create new bot features as modules in src/modules/ with corresponding config section, slash command, and tests
Learning: Use pnpm as the package manager; do not use npm or yarn (engine-strict=true enforced)
Learning: Require Node.js 22+ as enforced by engines field in package.json
Learning: Run pnpm lint and pnpm test before making changes and pnpm lint and pnpm test:coverage after making changes
Learning: Use Conventional Commits format for commit messages: feat:, fix:, chore:, docs:, refactor:, style:, test:, ci:
Learning: Use pnpm changeset to create release notes for user-facing changes
Learning: Use branch naming convention: feat/, fix/, chore/, docs/, refactor/, style/, test/, ci/
Learning: Always target PRs to the main branch
Learning: Use Biome 2 for linting and formatting the entire project
Learning: Use pnpm workspaces + Turborepo for monorepo management
Learning: Use Changesets for versioning across the monorepo
Learning: Run pnpm migrate to apply database migrations, pnpm migrate:down to rollback, pnpm migrate:create to create new migrations
Learning: Auto-fix formatting and lint issues by running pnpm lint:fix
Learning: Run pnpm deploy to register slash commands with Discord
📚 Learning: 2026-04-09T20:31:03.263Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-04-09T20:31:03.263Z
Learning: Applies to **/package.json : Use the latest version of ALL package.json dependencies; dependencies should always be upgraded to the latest version before committing
Applied to files:
package.json
📚 Learning: 2026-03-10T23:21:49.730Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-03-10T23:21:49.730Z
Learning: Applies to web/src/components/dashboard/config-workspace/**/*.{ts,tsx} : Web dashboard config editor should use category workspace navigation with reusable SettingsFeatureCard pattern (header + master toggle + Basic/Advanced blocks)
Applied to files:
web/src/components/dashboard/config-workspace/config-categories.ts
📚 Learning: 2026-04-09T20:31:03.263Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-04-09T20:31:03.263Z
Learning: Applies to **/*.{js,ts,jsx,tsx} : Use the safe Discord messaging helpers in src/utils/safeSend.js instead of raw reply/send/edit calls
Applied to files:
src/modules/aiAutoMod.js, src/commands/tldr.js, tests/commands/tldr.test.js
📚 Learning: 2026-03-12T02:03:36.493Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-03-12T02:03:36.493Z
Learning: Applies to src/commands/**/*.js : Create slash command definitions in `src/commands/`, exporting a slash command builder and an `execute` function
Applied to files:
src/commands/tldr.js
📚 Learning: 2026-03-12T02:03:36.493Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-03-12T02:03:36.493Z
Learning: Applies to src/**/*.js : Always use `safeReply()`, `safeSend()`, or `safeEditReply()` instead of raw Discord.js methods for safe Discord messaging that handles errors gracefully
Applied to files:
src/commands/tldr.js, tests/commands/tldr.test.js, tests/modules/triage.coverage.test.js
📚 Learning: 2026-03-12T02:03:36.493Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-03-12T02:03:36.493Z
Learning: Applies to tests/**/*.test.js : Maintain test coverage thresholds: statements 85%, branches 82%, functions 85%, lines 85%; never lower thresholds—add tests to cover new code instead
Applied to files:
docs/test-coverage-analysis.md, tests/modules/triage.coverage.test.js
📚 Learning: 2026-03-12T02:03:36.493Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-03-12T02:03:36.493Z
Learning: Applies to config.json : All community features in `config.json` should be gated behind a `config.<feature>.enabled` flag; moderation commands are always available regardless of config settings
Applied to files:
web/src/types/config.ts
📚 Learning: 2026-04-09T20:31:03.263Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-04-09T20:31:03.263Z
Learning: Applies to **/*.{js,ts,jsx,tsx} : Community features should be gated behind config.<feature>.enabled; moderation commands are the exception
Applied to files:
web/src/types/config.ts
📚 Learning: 2026-03-10T23:29:51.063Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-03-10T23:29:51.063Z
Learning: Applies to src/api/routes/*.js : Apply requireGuildModerator authorization check to DELETE endpoints that modify guild-scoped resources
Applied to files:
tests/modules/aiAutoMod.test.js
📚 Learning: 2026-03-12T02:03:36.493Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-03-12T02:03:36.493Z
Learning: Applies to tests/**/*.test.js : Write bot tests using Vitest 4 with the `node` environment, matching the `src/` structure in the `tests/` directory
Applied to files:
tests/utils/aiCost.test.js, tests/modules/triage.test.js, tests/utils/aiClient.test.js
📚 Learning: 2026-03-12T02:03:36.493Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-03-12T02:03:36.493Z
Learning: Applies to web/tests/**/*.test.{ts,tsx} : Write web dashboard tests using Vitest 4 with the `jsdom` environment and React Testing Library, matching the `web/src/` structure
Applied to files:
tests/utils/aiCost.test.js
🪛 GitHub Check: CodeQL
src/utils/aiClient.js
[failure] 158-158: Use of password hash with insufficient computational effort
Password from an access to apiKey is hashed insecurely.
🪛 GitHub Check: SonarCloud Code Analysis
tests/utils/aiCost.test.js
[warning] 24-24: Remove this useless assignment to variable "key".
[warning] 24-24: Remove the declaration of the unused 'key' variable.
src/modules/triage.js
[failure] 435-435: Refactor this function to reduce its Cognitive Complexity from 48 to the 15 allowed.
[failure] 520-526: Unexpected await of a non-Promise (non-"Thenable") value.
src/utils/aiClient.js
[failure] 457-462: Unexpected await of a non-Promise (non-"Thenable") value.
[failure] 340-345: Unexpected await of a non-Promise (non-"Thenable") value.
[warning] 230-230: knownTools should be a Set, and use knownTools.has() to check existence or non-existence.
[warning] 520-520: Remove the declaration of the unused 'providerName' variable.
[warning] 94-94: Prefer Number.parseFloat over parseFloat.
[warning] 520-520: Remove this useless assignment to variable "providerName".
🔇 Additional comments (14)
web/src/components/ui/discord-markdown-editor.tsx (3)
276-277: Cursor fallback handling is solid. Nice fix: this prevents passing undefined to setSelectionRange and keeps caret restoration stable after insertion.
309-314: Variable sample substitution remains safe and clear. Escaping is preserved and the early return for missing samples keeps the replacement logic robust.
411-412: Preview section compaction looks good. This is a clean formatting-only simplification with no functional change.
src/index.js (1)
63-63: Good startup optimization placement. Preloading the SDK early and non-blocking in startup() is a clean way to remove first-request cold-start latency without delaying bot boot. Also applies to: 277-278
src/modules/ai.js (1)
3-3: Header update is consistent with the migration. The module doc now correctly reflects Vercel AI SDK usage.
tests/modules/triage-respond.test.js (1)
44-49: Test fixtures are aligned with the new AI SDK result shape. The updated extractStats mock signature and costUsd fixture fields match the migrated responder/classifier flow. Also applies to: 590-595
web/src/components/dashboard/config-workspace/config-categories.ts (1)
142-143: Search metadata update is coherent with streaming removal. Removing “streaming” from this triage debug search item avoids stale matches in config search.
src/data/model-pricing.json (1)
1-145: Pricing dataset structure matches the runtime cost lookup contract. The models map and per-model token rate fields are well-formed for aiCost consumption.
tests/utils/aiClient.test.js (6)
1-45: LGTM! Excellent mock setup. The mock structure follows Vitest best practices:
- Uses vi.hoisted() correctly for factory-referenced mocks
- Proper import ordering (mocks before imports under test)
- Comprehensive coverage of external dependencies (logger, AI SDK, Anthropic SDK, aiCost)
49-71: LGTM! Well-designed test helpers. The helper functions effectively reduce duplication and provide flexible mock result builders with sensible defaults.
356-462: LGTM! Comprehensive streaming tests. The stream() test suite thoroughly covers:
- Async result resolution (all Promise fields)
- Tool-call chunk forwarding with callback filtering
- Timeout and API error handling
- Synchronous error catching in onChunk callbacks with proper logging
466-523: LGTM! Solid retry behavior coverage. The retry tests validate:
- Successful retry after 429 with retry-after header parsing
- Retry exhaustion after max attempts
- Immediate failure for non-retryable errors (401)
577-631: LGTM! Thorough cache token extraction tests. These tests comprehensively validate the cache-aware cost calculation:
- Direct provider metadata extraction (anthropic.cacheReadInputTokens)
- Dynamic provider name extraction (minimax.cacheReadInputTokens)
- Fallback to anthropic metadata for compatible providers
- All four token types passed to calculateCost(input, output, cachedInput, cacheCreation)
The assertions align with the implementation's getProviderMetadata helper and calculateCost parameter expectations.
331-333: 🧹 Nitpick | 🔵 Trivial

Optional: Simplify error assertion.

The expect.fail('should have thrown') call is unnecessary here: if no error is thrown, the test will fail anyway because the try block completes without entering the catch block.

♻️ Proposed simplification:

```diff
  try {
    await generate({ model: 'claude-haiku-4-5', prompt: 'bad' });
-   expect.fail('should have thrown');
  } catch (err) {
    expect(err).toBeInstanceOf(AIClientError);
```

Apply the same pattern to lines 427 and 557.
> Likely an incorrect or invalid review comment.
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/modules/aiAutoMod.js (1)
212-287: ⚠️ Potential issue | 🟠 Major
Avoid issuing two delete requests for the same message.
When `autoDelete` is enabled and the selected action is `'delete'`, this path deletes once before the `switch` and once again in `case 'delete'`. With the current defaults, every spam hit takes that duplicate API path.

🔧 Suggested fix

```diff
- if (autoModConfig.autoDelete) {
+ if (autoModConfig.autoDelete && result.action !== 'delete') {
    await message.delete().catch(() => {});
  }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/modules/aiAutoMod.js` around lines 212 - 287, The code issues an extra delete because message.delete() is called once when autoModConfig.autoDelete is true and again inside the switch for case 'delete'; update the case 'delete' branch to skip deleting if autoModConfig.autoDelete is true (i.e., check autoModConfig.autoDelete at the top of the 'delete' case and return/break when true), so only one delete API call happens; reference the existing symbols: autoModConfig.autoDelete, result.action switch, case 'delete', and message.delete().
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tests/utils/aiClient.test.js`:
- Around line 130-140: The test currently uses a provider that falls back to
KNOWN_BASE_URLS; change it to use a provider not listed in KNOWN_BASE_URLS
(e.g., "customprov:ModelX") and set the corresponding env vars
(CUSTOMPROV_API_KEY and CUSTOMPROV_BASE_URL) before calling generate so the
codepath that resolves <PROVIDER>_API_KEY / <PROVIDER>_BASE_URL is exercised;
then assert mockCreateAnthropic was called with baseURL equal to
process.env.CUSTOMPROV_BASE_URL (and clean up/reset those env vars after the
test). Ensure you still mockGenerateText and use the same expectations but
replace the provider string and expected baseURL check to reference the env var.
---
Outside diff comments:
In `@src/modules/aiAutoMod.js`:
- Around line 212-287: The code issues an extra delete because message.delete()
is called once when autoModConfig.autoDelete is true and again inside the switch
for case 'delete'; update the case 'delete' branch to skip deleting if
autoModConfig.autoDelete is true (i.e., check autoModConfig.autoDelete at the
top of the 'delete' case and return/break when true), so only one delete API
call happens; reference the existing symbols: autoModConfig.autoDelete,
result.action switch, case 'delete', and message.delete().
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 9a5e8ef1-d35d-4b9c-beea-e003df24655f
📒 Files selected for processing (3)
src/modules/aiAutoMod.js
tests/modules/aiAutoMod.test.js
tests/utils/aiClient.test.js
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: Greptile Review
- GitHub Check: Analyze (javascript-typescript)
🧰 Additional context used
📓 Path-based instructions (8)
**/*.{js,ts,jsx,tsx,mjs,mts}
📄 CodeRabbit inference engine (AGENTS.md)
Use ESM only; do not use CommonJS
Files:
src/modules/aiAutoMod.js
tests/utils/aiClient.test.js
tests/modules/aiAutoMod.test.js
**/*.{js,ts,jsx,tsx}
📄 CodeRabbit inference engine (AGENTS.md)
**/*.{js,ts,jsx,tsx}: Use src/logger.js; do not use console.*
Use the safe Discord messaging helpers in src/utils/safeSend.js instead of raw reply/send/edit calls
Use parameterized SQL only; do not use string concatenation for SQL queries
Community features should be gated behind config.&lt;feature&gt;.enabled; moderation commands are the exception
Files:
src/modules/aiAutoMod.js
tests/utils/aiClient.test.js
tests/modules/aiAutoMod.test.js
**/*.{js,ts,jsx,tsx,json}
📄 CodeRabbit inference engine (AGENTS.md)
Welcome-message variables use double braces only, like {{user}}; single braces are plain text and should not be documented, inserted, or parsed as variables
Files:
src/modules/aiAutoMod.js
tests/utils/aiClient.test.js
tests/modules/aiAutoMod.test.js
{src,web/src,tests}/**/*.{js,ts,tsx}
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Use ESM syntax only — `import`/`export`, never `require()`/`module.exports`
Use single quotes for strings — no double quotes except in JSON files
Always include semicolons at the end of statements
Use 2-space indentation — spaces, not tabs
Always use trailing commas in multi-line structures
Limit line width to 100 characters
Never use `console.*` methods — use the Winston logger instead with `import logger from '../logger.js'` and call `logger.info()`, `logger.warn()`, `logger.error()`, or `logger.debug()`
Files:
src/modules/aiAutoMod.js
tests/utils/aiClient.test.js
tests/modules/aiAutoMod.test.js
src/**/*.js
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Use `safeReply()`, `safeSend()`, or `safeEditReply()` instead of raw Discord.js methods for messaging to handle errors gracefully
Use the config system via `getConfig(guildId)` from `src/modules/config.js` to retrieve merged global and guild-level configuration
Use `src/utils/cache.js` for generic caching with Redis primary and in-memory fallback
Use `src/utils/discordCache.js` for caching Discord API responses (channels, roles, members)
Use `src/utils/reputationCache.js` for caching leaderboard and user reputation data
Files:
src/modules/aiAutoMod.js
src/modules/**/*.js
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
All community features must be gated behind `config.<feature>.enabled` settings, while moderation commands are always available regardless of configuration
Files:
src/modules/aiAutoMod.js
tests/**/*.test.js
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Bot test files must follow the naming pattern `*.test.js` and be placed in `tests/` mirroring the `src/` directory structure
Maintain bot test coverage above 85% for statements, lines, and functions; 82% for branches as enforced by `vitest.config.js`
Files:
tests/utils/aiClient.test.js
tests/modules/aiAutoMod.test.js
{tests,web/tests}/**/*.test.{js,ts,tsx}
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Use `vi.mock()` for module mocking and `vi.fn()` for function stubs in Vitest tests
Files:
tests/utils/aiClient.test.js
tests/modules/aiAutoMod.test.js
🧠 Learnings (5)
📓 Common learnings
Learnt from: CR
URL:
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-13T08:27:31.922Z
Learning: Always use pnpm as the package manager — the project enforces `engine-strict=true` and requires pnpm 10.30.3+
Learnt from: CR
URL:
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-13T08:27:31.922Z
Learning: Node.js 22 or higher is required as enforced by the `engines` field in `package.json`
Learnt from: CR
URL:
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-13T08:27:31.922Z
Learning: Follow Git branch naming conventions: `feat/`, `fix/`, `chore/`, `docs/`, `refactor/`, `style/`, `test/`, `ci/`
Learnt from: CR
URL:
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-13T08:27:31.922Z
Learning: Follow Conventional Commits message format: `feat:`, `fix:`, `chore:`, `docs:`, `refactor:`, `style:`, `test:`, `ci:`
Learnt from: CR
URL:
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-13T08:27:31.922Z
Learning: Use `pnpm changeset` to create release notes for user-facing changes
Learnt from: CR
URL:
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-13T08:27:31.922Z
Learning: Always target the `main` branch for pull requests
Learnt from: CR
URL:
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-13T08:27:31.922Z
Learning: Run `pnpm lint` and `pnpm test` before making changes to see existing state, and run `pnpm lint` and `pnpm test:coverage` after making changes to verify nothing breaks
Learnt from: CR
URL:
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-13T08:27:31.922Z
Learning: All CI pipeline checks must pass: Lint (Biome), Test (Vitest with coverage), Web typecheck/lint/build, and Docker build validation
Learnt from: CR
URL:
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-13T08:27:31.922Z
Learning: The bot is a pnpm monorepo using ESM JavaScript and discord.js v14 with Node.js 22+ as the runtime
Learnt from: CR
URL:
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-13T08:27:31.922Z
Learning: The web dashboard is built with Next.js 16, TypeScript, React 19, Tailwind CSS 4, and Radix UI
Learnt from: CR
URL:
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-13T08:27:31.922Z
Learning: Use Vercel AI SDK (`ai`, `ai-sdk/anthropic`) for AI chat features with Claude
Learnt from: CR
URL:
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-13T08:27:31.922Z
Learning: Use PostgreSQL 17+ with node-pg-migrate for database operations with parameterized queries
Learnt from: CR
URL:
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-13T08:27:31.922Z
Learning: Use Redis 7+ with ioredis as the primary cache layer, with in-memory fallback when Redis is unavailable
📚 Learning: 2026-04-09T20:31:03.263Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-04-09T20:31:03.263Z
Learning: Applies to **/*.{js,ts,jsx,tsx} : Use the safe Discord messaging helpers in src/utils/safeSend.js instead of raw reply/send/edit calls
Applied to files:
src/modules/aiAutoMod.js
📚 Learning: 2026-03-12T02:03:36.493Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-03-12T02:03:36.493Z
Learning: Applies to tests/**/*.test.js : Write bot tests using Vitest 4 with the `node` environment, matching the `src/` structure in the `tests/` directory
Applied to files:
tests/utils/aiClient.test.js
📚 Learning: 2026-03-12T02:03:36.493Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-03-12T02:03:36.493Z
Learning: Applies to web/tests/**/*.test.{ts,tsx} : Write web dashboard tests using Vitest 4 with the `jsdom` environment and React Testing Library, matching the `web/src/` structure
Applied to files:
tests/utils/aiClient.test.js
📚 Learning: 2026-03-10T23:29:51.063Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-03-10T23:29:51.063Z
Learning: Applies to src/api/routes/*.js : Apply requireGuildModerator authorization check to DELETE endpoints that modify guild-scoped resources
Applied to files:
tests/modules/aiAutoMod.test.js
```js
it('should resolve unknown providers via env var convention', async () => {
  mockGenerateText.mockResolvedValue(makeGenerateResult());

  await generate({ model: 'minimax:MiniMax-M2.7', prompt: 'test' });

  expect(mockCreateAnthropic).toHaveBeenCalledWith(
    expect.objectContaining({
      baseURL: 'https://api.minimax.io/anthropic/v1',
    }),
  );
});
```
This test does not exercise the env-var convention it claims to cover.
Using minimax:MiniMax-M2.7 here still passes through the KNOWN_BASE_URLS fallback in src/utils/aiClient.js, and Lines 190-200 already cover that path. If the generic <PROVIDER>_API_KEY / <PROVIDER>_BASE_URL resolution regresses, this test will still stay green.
🔧 Suggested test fix
```diff
- it('should resolve unknown providers via env var convention', async () => {
-   mockGenerateText.mockResolvedValue(makeGenerateResult());
-
-   await generate({ model: 'minimax:MiniMax-M2.7', prompt: 'test' });
-
-   expect(mockCreateAnthropic).toHaveBeenCalledWith(
-     expect.objectContaining({
-       baseURL: 'https://api.minimax.io/anthropic/v1',
-     }),
-   );
- });
+ it('should resolve unknown providers via env var convention', async () => {
+   mockGenerateText.mockResolvedValue(makeGenerateResult());
+   process.env.FOO_API_KEY = 'foo-key';
+   process.env.FOO_BASE_URL = 'https://foo.example.com/v1';
+
+   try {
+     await generate({ model: 'foo:bar-model', prompt: 'test' });
+
+     expect(mockCreateAnthropic).toHaveBeenCalledWith(
+       expect.objectContaining({
+         authToken: 'foo-key',
+         baseURL: 'https://foo.example.com/v1',
+       }),
+     );
+   } finally {
+     delete process.env.FOO_API_KEY;
+     delete process.env.FOO_BASE_URL;
+   }
+ });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/utils/aiClient.test.js` around lines 130 - 140, The test currently uses
a provider that falls back to KNOWN_BASE_URLS; change it to use a provider not
listed in KNOWN_BASE_URLS (e.g., "customprov:ModelX") and set the corresponding
env vars (CUSTOMPROV_API_KEY and CUSTOMPROV_BASE_URL) before calling generate so
the codepath that resolves <PROVIDER>_API_KEY / <PROVIDER>_BASE_URL is
exercised; then assert mockCreateAnthropic was called with baseURL equal to
process.env.CUSTOMPROV_BASE_URL (and clean up/reset those env vars after the
test). Ensure you still mockGenerateText and use the same expectations but
replace the provider string and expected baseURL check to reference the env var.
```js
  inputTokens: usage.inputTokens,
  outputTokens: usage.outputTokens,
  cachedInputTokens,
  cacheCreationInputTokens,
});

const durationMs = Date.now() - timings.start;
```
When wrapping a provider error into AIClientError, only statusCode is preserved. If the underlying SDK error uses status (without statusCode), downstream classifyError()/isRetryable() will lose the HTTP status and may misclassify the failure (e.g., 401/404/429). Consider copying both status and statusCode, or normalizing statusCode from err.status ?? err.statusCode, and optionally preserving headers used for rate-limit backoff.
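One way to implement the normalization this comment suggests is sketched below. This is illustrative, not the real `AIClientError` from `src/utils/errors.js`; mirroring the status onto both fields and the header preservation are assumptions.

```javascript
// Sketch: normalize the HTTP status from either `status` or `statusCode`
// when wrapping a provider error, so downstream classifyError()/
// isRetryable() logic sees it regardless of which field the SDK used.
class AIClientErrorSketch extends Error {
  constructor(message, cause) {
    super(message);
    this.name = 'AIClientError';
    this.cause = cause;
    // Some SDK errors expose `status`, others `statusCode`; keep both.
    this.statusCode = cause?.status ?? cause?.statusCode;
    this.status = this.statusCode;
    // Preserve headers so retry-after based backoff still works.
    this.headers = cause?.headers;
  }
}
```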
```js
  },
  { maxRetries: 3, signal },
);
timings.streamStarted = Date.now();
timings.streamComplete = Date.now();

const providerMeta = getProviderMetadata(result.providerMetadata, providerName);
```
Same as generate(): this AIClientError wrapper only includes statusCode. If the SDK throws errors that only expose status, retry/classification logic won’t see the HTTP status. Include status too (or normalize statusCode from err.status ?? err.statusCode).
```js
  };
} catch (err) {
  if (err instanceof AIClientError) throw err;
  if (signal.aborted) {
```
providerName is destructured from resolveModel() but never used. This is dead code and may trigger unused-variable linting; consider removing it from the destructuring assignment.
```diff
  * @param {Object} evalConfig - Effective triage configuration to use for this evaluation.
  * @param {import('discord.js').Client} evalClient - Discord client used to fetch channels and send messages.
- * @throws {CLIProcessError} When a classifier/responder CLI process times out; the error is rethrown.
+ * @throws {AIClientError} When a classifier/responder CLI process times out; the error is rethrown.
```
The @throws description still refers to a "CLI process" timeout, but triage now uses the in-process AI client/SDK. Updating this wording will avoid confusion for future maintainers and keep the docs accurate.
```diff
- * @throws {AIClientError} When a classifier/responder CLI process times out; the error is rethrown.
+ * @throws {AIClientError} When a classifier or responder AI client request times out; the error is rethrown.
```
```diff
  // ── Classifier result parser ─────────────────────────────────────────────────

  /**
   * Parse the classifier's JSON text output.
   * @param {Object} sdkMessage - Raw CLI result message
   * @param {string} channelId - For logging
-  * @returns {Object|null} Parsed { classification, reasoning, targetMessageIds } or null
+  * @returns {Object|null} Parsed { classification, reasoning, targetMessageIds, needsThinking, needsSearch } or null
   */
- export function parseClassifyResult(sdkMessage, channelId) {
-   const parsed = parseSDKResult(sdkMessage.result, channelId, 'Classifier');
+ export function parseClassifyResult(sdkResult, channelId) {
+   const parsed = parseSDKResult(sdkResult.text, channelId, 'Classifier');
```
The JSDoc for parseClassifyResult still says the input is a "Raw CLI result message", and the file header also references CLI processes. This parser now operates on aiClient SDK results ({ text, finishReason, ... }), so the docs should be updated to match the new shape/source.
```diff
  /**
   * Parse the responder's JSON text output.
   *
   * Note: Response length enforcement (Discord's 2000-char limit) is not applied
   * here. Individual response text is bounded by MAX_MESSAGE_CHARS=1000 at
   * accumulation time, and splitMessage() in triage-respond.js handles chunking
   * for any edge cases before sending to Discord.
   *
   * @param {Object} sdkMessage - Raw CLI result message
   * @param {string} channelId - For logging
   * @returns {Object|null} Parsed { responses: [...] } or null
   */
- export function parseRespondResult(sdkMessage, channelId) {
-   const parsed = parseSDKResult(sdkMessage.result, channelId, 'Responder');
+ export function parseRespondResult(sdkResult, channelId) {
+   const parsed = parseSDKResult(sdkResult.text, channelId, 'Responder');
```
parseRespondResult JSDoc still describes the parameter as a "Raw CLI result message", but the function now expects an aiClient SDK result object with a .text field. Please update the param description to avoid misleading API consumers/maintainers.
Actionable comments posted: 8
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
tests/modules/triage-parse.test.js (1)
187-200: ⚠️ Potential issue | 🟡 Minor
Strengthen responder success assertion for field-shape regressions.
Line 189 uses `target_message_id`, but this test only checks array length. That can miss schema drift in responder payload fields. Assert the expected key explicitly (and preferably use `targetMessageId` consistently with other updated tests).

Suggested diff

```diff
  it('should return parsed result on success', () => {
    const sdkMessage = {
-     text: '{"responses":[{"target_message_id":"m1","response":"Help text"}]}',
+     text: '{"responses":[{"targetMessageId":"m1","response":"Help text"}]}',
      finishReason: 'stop',
      costUsd: 0.005,
      durationMs: 200,
      usage: { inputTokens: 500, outputTokens: 100 },
      sources: [],
      providerMetadata: { anthropic: {} },
    };
    const result = parseRespondResult(sdkMessage, 'ch1');
    expect(result).not.toBeNull();
    expect(result.responses).toHaveLength(1);
+   expect(result.responses[0].targetMessageId).toBe('m1');
  });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/modules/triage-parse.test.js` around lines 187 - 200, The test for parseRespondResult should assert the expected response object shape, not just array length: update the spec in the 'should return parsed result on success' case to check that parseRespondResult(sdkMessage, 'ch1') returns a responses array of length 1 and that responses[0] contains the camelCase field targetMessageId with value 'm1' (or that the key exists), and that the response text is present (e.g., 'response' or expected property); locate the test block using the spec name and the parseRespondResult reference and replace the loose length-only assertion with these explicit shape checks to catch schema regressions.

src/modules/triage-respond.js (1)
79-103: ⚠️ Potential issue | 🟠 Major
Reapply the caller's `limit` after merging history and recent context.
When `bufferSnapshot[0]` exists, this can return up to `limit + RECENT_LIMIT` messages: `limit` older messages plus 5 recent ones, minus duplicates. That breaks the function contract and quietly increases prompt size/cost on the hot path. Either trim the merged list back to `limit`, or reduce the historical fetch budget when you add the recent window.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/modules/triage-respond.js` around lines 79 - 103, The merged context can exceed the requested limit because you fetch history (historyOptions.limit = limit) plus RECENT_LIMIT and then merge; update the logic in the block around historyOptions, RECENT_LIMIT, merged and toContextEntry to ensure the final returned array respects the caller's limit: either reduce historyOptions.limit by RECENT_LIMIT when bufferSnapshot[0] exists or, after building merged, trim the sorted array to the first limit entries before mapping with toContextEntry; make sure you still dedupe using bufferSnapshot and seen sets.
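The trim-after-merge option can be sketched as follows. This is a minimal illustration of the invariant, not the real `triage-respond.js` logic; the message shape and the "keep newest" choice are assumptions.

```javascript
// Merge older history with the recent buffer window, dedupe by id,
// sort chronologically, then re-apply the caller's limit by keeping
// only the newest `limit` entries.
function mergeWithLimit(history, recent, limit) {
  const seen = new Set();
  const merged = [];
  for (const msg of [...history, ...recent]) {
    if (seen.has(msg.id)) continue;
    seen.add(msg.id);
    merged.push(msg);
  }
  merged.sort((a, b) => a.createdTimestamp - b.createdTimestamp);
  return merged.slice(-limit);
}
```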
♻️ Duplicate comments (1)
tests/utils/aiClient.test.js (1)
147-157: ⚠️ Potential issue | 🟡 Minor
This still doesn't exercise the env-var convention it names.
Using `minimax` here only covers the `KNOWN_BASE_URLS` fallback in `src/utils/aiClient.js`, so a regression in generic `<PROVIDER>_API_KEY`/`<PROVIDER>_BASE_URL` resolution would still pass. Switch this case to a provider that is not in `KNOWN_BASE_URLS`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/utils/aiClient.test.js` around lines 147 - 157, The test uses 'minimax' which hits KNOWN_BASE_URLS instead of exercising the env-var convention; update the test in aiClient.test.js to use a provider name not listed in KNOWN_BASE_URLS (e.g., 'acme' or 'someother'), change the model string passed to generate (e.g., 'acme:SomeModel'), and ensure the test sets the corresponding environment variables for that provider following the convention (e.g., ACME_BASE_URL or ACME_API_KEY) so that generate (and the code path in src/utils/aiClient.js that resolves provider via env vars) triggers and mockCreateAnthropic is asserted against the expected baseURL.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.env.example:
- Around line 33-45: The .env.example section implying arbitrary providers is
misleading because createAnthropic (in src/utils/aiClient.js) routes all
provider keys through Anthropic-compatible clients; update the documentation to
narrow this block to only Anthropic-compatible providers (e.g., minimax) or
explicitly state that any custom <PROVIDER>_API_KEY/<PROVIDER>_BASE_URL must be
Anthropic-compatible; locate the env example block and replace the generic
wording and the codex example with either: 1) a concise note that only
Anthropic-compatible backends are supported (mention minimax as an example), or
2) instructions for adding non-Anthropic providers only after adding a
corresponding implementation in src/utils/aiClient.js (reference
createAnthropic) so the docs and code remain consistent.
In `@src/modules/triage-config.js`:
- Line 38: The current defaulting line sets thinkingTokens to 0 which silences
any previously-omitted configs and prevents needsThinking from working; change
the assignment in src/modules/triage-config.js to preserve "missing" rather than
coerce to zero (e.g., let thinkingTokens = triageConfig.thinkingTokens; or use
triageConfig.hasOwnProperty('thinkingTokens') ? triageConfig.thinkingTokens :
undefined) so that resolvedConfig can distinguish omitted values from an
explicit zero; if you must support explicit zero for some providers, gate that
behavior by provider capability or add an explicit migration step instead of
using a global zero fallback.
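The "preserve missing rather than coerce to zero" behavior this prompt describes can be sketched briefly. This is a hypothetical helper, not the actual `triage-config.js` code; the `defaultTokens` parameter is an assumption.

```javascript
// Distinguish an omitted thinkingTokens from an explicit 0: only fall
// back to the default when the key is genuinely absent, so an explicit
// zero still disables thinking while omission keeps needsThinking working.
function resolveThinkingTokens(triageConfig, defaultTokens) {
  return Object.hasOwn(triageConfig, 'thinkingTokens')
    ? triageConfig.thinkingTokens
    : defaultTokens;
}
```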
In `@src/modules/triage.js`:
- Around line 642-650: Summary: cancellations (reason === 'aborted') are being
treated as generic failures and trigger user-visible error paths. Fix: In the
error handling block around evaluateNow(), add a branch that detects
AIClientError with reason === 'aborted' (similar to the existing timeout case)
and short-circuits further generic logging/user-facing messaging—e.g., emit a
warn/debug about the abort (using channelId) and rethrow or return so the
subsequent logError and user-facing message block (which checks for parse) is
not executed. Update the logic that currently checks for timeout to also handle
aborted cancellations before calling logError; reference evaluateNow(),
buf.abortController.abort(), and AIClientError (reason === 'aborted') to find
and change the code.
- Around line 716-740: classifierConfig and responderConfig are being populated
once in startTriage() with per-guild fields (resolved.classifyModel,
resolved.respondModel, resolved.timeout, resolved.classifyBaseUrl,
resolved.respondBaseUrl, resolved.classifyApiKey, resolved.respondApiKey and the
responder system prompt), which freezes per-guild settings; instead, keep these
objects free of per-request/guild settings and assemble the final config at
evaluation time using evalConfig.triage. Concretely: in startTriage() only store
static pieces (e.g., loaded prompts like loadPrompt('triage-classify-system')
and jsonSchemaAppend from loadPrompt('triage-respond-schema')) and any immutable
defaults, and remove resolved.* fields from classifierConfig/responderConfig;
then, where you construct the request for classification/response (the code that
uses evalConfig.triage), merge the dynamic values
(evalConfig.triage.classifyModel/respondModel, timeout, baseUrl, apiKey, and
botConfig.ai?.systemPrompt or resolved.system) into a fresh config object per
call so each guild/request gets correct runtime settings.
In `@src/utils/aiClient.js`:
- Around line 365-370: The call to calculateCost is synchronous, so remove the
unnecessary await when assigning costUsd in the code that sets costUsd (replace
"const costUsd = await calculateCost(...)" with a direct call) and make the same
change in the stream() call site (the other calculateCost invocation around the
stream handling); reference the calculateCost function and the costUsd variable
to locate both places and ensure you simply call calculateCost(...)
synchronously and assign its return value without awaiting it.
- Around line 445-488: The timing code is placed after withRetry completes so
streamStarted/streamComplete are both set too late; move the assignments into
the async closure around the stream consumption: set timings.streamStarted
immediately after obtaining streamResult from streamText (before awaiting
streamResult.text/usage/finishReason/sources/providerMetadata) and set
timings.streamComplete after all awaits that consume the stream, ensuring
streamConsumeMs measures the await time and streamInitMs measures initiation
latency; adjust the positions inside the withRetry async function (referencing
withRetry, streamText, streamResult, timings.streamStarted,
timings.streamComplete).
- Around line 184-188: The current apiKey assignment leaks ANTHROPIC_API_KEY to
non-Anthropic providers; update the logic in the apiKey computation so
ANTHROPIC_API_KEY is only considered when isAnthropic is true. Concretely, use
overrides.apiKey first, then if isAnthropic use process.env.ANTHROPIC_API_KEY,
otherwise use process.env[`${envPrefix}_API_KEY`] (do not have a fallback to
process.env.ANTHROPIC_API_KEY for non-Anthropic cases); adjust the expression
around the apiKey variable and reference isAnthropic, overrides.apiKey, and
envPrefix to implement this conditional-only behavior.
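The conditional-only resolution this prompt asks for can be sketched as below. This is illustrative, not the real `aiClient.js` code; the function shape and parameter names are assumptions, though the env-var convention follows the PR description.

```javascript
// Resolve the API key without leaking ANTHROPIC_API_KEY to other
// providers: overrides win, then ANTHROPIC_API_KEY only for Anthropic
// itself, then the generic <PROVIDER>_API_KEY convention with no fallback.
function resolveApiKey({ overrides = {}, isAnthropic, envPrefix, env = process.env }) {
  if (overrides.apiKey) return overrides.apiKey;
  if (isAnthropic) return env.ANTHROPIC_API_KEY;
  return env[`${envPrefix}_API_KEY`];
}
```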
In `@src/utils/aiCost.js`:
- Around line 80-82: The subtraction of cacheTotal from inputTokens (computing
regularInputTokens) is ambiguous; change the function API to accept an explicit
boolean (e.g., inputTokensAreUncached) or normalize input shape so callers must
supply a canonical value; then compute regularInputTokens as either
(inputTokens) when inputTokensAreUncached is true, or Math.max(0, inputTokens -
cacheTotal) when false. Update the logic around regularInputTokens, inputTokens,
and cacheTotal accordingly and adjust all call sites to pass the new flag (or to
supply canonical inputTokens) so cost/budget math no longer depends on caller
convention.
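The disambiguated computation this prompt proposes can be sketched as follows. This is a hypothetical signature, not the real `aiCost.js` API; the flag name `inputTokensAreUncached` comes from the prompt itself.

```javascript
// Compute the "regular" (non-cached) input token count. The explicit
// flag removes the ambiguity about whether inputTokens already excludes
// cached tokens; the Math.max guard prevents negative counts.
function regularInput({ inputTokens, cachedInput = 0, cacheCreation = 0, inputTokensAreUncached = false }) {
  const cacheTotal = cachedInput + cacheCreation;
  return inputTokensAreUncached ? inputTokens : Math.max(0, inputTokens - cacheTotal);
}
```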
---
Outside diff comments:
In `@src/modules/triage-respond.js`:
- Around line 79-103: The merged context can exceed the requested limit because
you fetch history (historyOptions.limit = limit) plus RECENT_LIMIT and then
merge; update the logic in the block around historyOptions, RECENT_LIMIT, merged
and toContextEntry to ensure the final returned array respects the caller's
limit: either reduce historyOptions.limit by RECENT_LIMIT when bufferSnapshot[0]
exists or, after building merged, trim the sorted array to the first limit
entries before mapping with toContextEntry; make sure you still dedupe using
bufferSnapshot and seen sets.
In `@tests/modules/triage-parse.test.js`:
- Around line 187-200: The test for parseRespondResult should assert the
expected response object shape, not just array length: update the spec in the
'should return parsed result on success' case to check that
parseRespondResult(sdkMessage, 'ch1') returns a responses array of length 1 and
that responses[0] contains the camelCase field targetMessageId with value 'm1'
(or that the key exists), and that the response text is present (e.g.,
'response' or expected property); locate the test block using the spec name and
the parseRespondResult reference and replace the loose length-only assertion
with these explicit shape checks to catch schema regressions.
---
Duplicate comments:
In `@tests/utils/aiClient.test.js`:
- Around line 147-157: The test uses 'minimax' which hits KNOWN_BASE_URLS
instead of exercising the env-var convention; update the test in
aiClient.test.js to use a provider name not listed in KNOWN_BASE_URLS (e.g.,
'acme' or 'someother'), change the model string passed to generate (e.g.,
'acme:SomeModel'), and ensure the test sets the corresponding environment
variables for that provider following the convention (e.g., ACME_BASE_URL or
ACME_API_KEY) so that generate (and the code path in src/utils/aiClient.js that
resolves provider via env vars) triggers and mockCreateAnthropic is asserted
against the expected baseURL.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 62fda2d6-677c-4666-9e7f-e384ec11c726
⛔ Files ignored due to path filters (1)
`pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (14)
.env.example
.github/copilot-instructions.md
docs/test-coverage-analysis.md
package.json
src/modules/triage-config.js
src/modules/triage-respond.js
src/modules/triage.js
src/utils/aiClient.js
src/utils/aiCost.js
src/utils/errors.js
tests/modules/triage-parse.test.js
tests/utils/aiClient.test.js
tests/utils/errors.test.js
web/src/types/config.ts
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Greptile Review
🧰 Additional context used
📓 Path-based instructions (13)
**/package.json
📄 CodeRabbit inference engine (AGENTS.md)
Use the latest version of ALL package.json dependencies; dependencies should always be upgraded to the latest version before committing
Files:
package.json
**/*.{js,ts,jsx,tsx,json}
📄 CodeRabbit inference engine (AGENTS.md)
Welcome-message variables use double braces only, like {{user}}; single braces are plain text and should not be documented, inserted, or parsed as variables
Files:
package.json, src/modules/triage-respond.js, src/modules/triage-config.js, web/src/types/config.ts, tests/modules/triage-parse.test.js, tests/utils/errors.test.js, src/utils/aiCost.js, src/modules/triage.js, src/utils/errors.js, src/utils/aiClient.js, tests/utils/aiClient.test.js
**/*.{js,ts,jsx,tsx,mjs,mts}
📄 CodeRabbit inference engine (AGENTS.md)
Use ESM only; do not use CommonJS
Files:
src/modules/triage-respond.js, src/modules/triage-config.js, web/src/types/config.ts, tests/modules/triage-parse.test.js, tests/utils/errors.test.js, src/utils/aiCost.js, src/modules/triage.js, src/utils/errors.js, src/utils/aiClient.js, tests/utils/aiClient.test.js
**/*.{js,ts,jsx,tsx}
📄 CodeRabbit inference engine (AGENTS.md)
**/*.{js,ts,jsx,tsx}: Use src/logger.js; do not use console.*
Use the safe Discord messaging helpers in src/utils/safeSend.js instead of raw reply/send/edit calls
Use parameterized SQL only; do not use string concatenation for SQL queries
Community features should be gated behind config.&lt;feature&gt;.enabled; moderation commands are the exception
Files:
src/modules/triage-respond.js, src/modules/triage-config.js, web/src/types/config.ts, tests/modules/triage-parse.test.js, tests/utils/errors.test.js, src/utils/aiCost.js, src/modules/triage.js, src/utils/errors.js, src/utils/aiClient.js, tests/utils/aiClient.test.js
src/**/*.js
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
- Use ESM syntax (import/export) in JavaScript files; never use require() or module.exports
- Never use console.log(), console.error(), or other console.* methods; use the Winston logger instead via import logger from '../logger.js'
- Use generic cache utilities from src/utils/cache.js for Redis primary + in-memory fallback caching
- Use src/utils/discordCache.js for caching Discord API responses (channels, roles, members)
- Use src/utils/reputationCache.js for caching leaderboard and user reputation data
- Auto-invalidate all caches when config changes occur
Files:
src/modules/triage-respond.js, src/modules/triage-config.js, src/utils/aiCost.js, src/modules/triage.js, src/utils/errors.js, src/utils/aiClient.js
{src,web/src}/**/*.{js,ts,tsx}
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Use single quotes for strings; never use double quotes (except in JSON files)
Always include semicolons at the end of statements
Use 2-space indentation with spaces, never tabs
Always use trailing commas in multi-line arrays and objects
Limit lines to 100 characters or less
Files:
src/modules/triage-respond.js, src/modules/triage-config.js, web/src/types/config.ts, src/utils/aiCost.js, src/modules/triage.js, src/utils/errors.js, src/utils/aiClient.js
{src/**/*.js,migrations/**/*.cjs}
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Always use parameterized queries for database operations; never use string interpolation in SQL
Files:
src/modules/triage-respond.js, src/modules/triage-config.js, src/utils/aiCost.js, src/modules/triage.js, src/utils/errors.js, src/utils/aiClient.js
{src/commands/**/*.js,src/modules/**/*.js}
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Always use safeReply(), safeSend(), or safeEditReply() instead of raw Discord.js methods when sending messages
Files:
src/modules/triage-respond.js, src/modules/triage-config.js, src/modules/triage.js
{src/modules/**/*.js,src/commands/**/*.js}
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Gate all community features behind config.&lt;feature&gt;.enabled checks; moderation commands are always available
Files:
src/modules/triage-respond.js, src/modules/triage-config.js, src/modules/triage.js
src/modules/**/*.js
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Create new bot feature modules in src/modules/ following the module pattern
Files:
src/modules/triage-respond.js, src/modules/triage-config.js, src/modules/triage.js
web/src/**/*.{ts,tsx}
📄 CodeRabbit inference engine (AGENTS.md)
web/src/**/*.{ts,tsx}: New dashboard routes need title wiring in web/src/lib/page-titles.ts; use createPageMetadata() for SSR and keep DashboardTitleSync aligned for client navigation
Dashboard clients that need the guild list should consume GuildDirectoryProvider; do not stack extra /api/guilds fetch loops in leaf components
Recharts dashboard views should use web/src/components/ui/stable-responsive-container.tsx; raw ResponsiveContainer mounts can spam width(-1)/height(-1) warnings when panels render before layout settles
Any visual dashboard or landing page change must be verified with Chrome DevTools MCP before calling it done; take a screenshot after the change
Check both light and dark themes if colors or theming changed
Check responsive behavior on mobile, tablet, and desktop if layout changed
Files:
web/src/types/config.ts
web/src/**/*.{ts,tsx,css,scss}
📄 CodeRabbit inference engine (AGENTS.md)
Follow the design system when making changes to the UI/UX; see DESIGN.md for the design system and color palette
Files:
web/src/types/config.ts
tests/**/*.test.js
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
- Place bot test files in tests/ mirroring the src/ directory structure
- Maintain minimum code coverage thresholds: statements 85%, branches 82%, functions 85%, lines 85% for bot tests; 85% across all metrics for web tests
- Use Vitest 4 with vi.mock() for module mocking and vi.fn() for function stubs; do not use other testing frameworks
Files:
tests/modules/triage-parse.test.js, tests/utils/errors.test.js, tests/utils/aiClient.test.js
🧠 Learnings (11)
📓 Common learnings
Learnt from: CR
File: .github/copilot-instructions.md
Timestamp: 2026-04-13T08:32:41.892Z
Learnings:
- Use the pnpm package manager exclusively; never use npm or yarn
- Run `pnpm lint` before and after making changes to verify code style and formatting; use `pnpm lint:fix` to auto-fix issues
- Run `pnpm test:coverage` before pushing to main to verify test coverage thresholds are met
- Use Node.js 22 or higher; the project enforces this via the `engines` field in `package.json`
- Use Conventional Commits for commit messages with prefixes: `feat:`, `fix:`, `chore:`, `docs:`, `refactor:`, `style:`, `test:`, `ci:`
- Use branch naming conventions: `feat/`, `fix/`, `chore/`, `docs/`, `refactor/`, `style/`, `test/`, `ci/` prefixes
- Use `pnpm changeset` to create release notes for user-facing changes before merging to main
- Use PostgreSQL 17+ with node-pg-migrate for database migrations; always create migrations with `pnpm migrate:create`
- Use Redis 7+ (ioredis) for caching with automatic fallback to in-memory caching if Redis is unavailable
- Use the Vercel AI SDK (`ai`, `ai-sdk/anthropic`) for Claude AI features
- Use discord.js v14 for bot interactions and Discord API calls
- Use Next.js 16, React 19, and Tailwind CSS 4 for the web dashboard
- Use Biome 2 as the linter and formatter; configuration is in `biome.json`
- CI pipeline must pass all checks: Biome lint, Vitest coverage, web typecheck, lint, and build, plus Docker build validation
- Biome linting applies to `src/**/*.js` (bot) and `web/src/**/*.{ts,tsx}` (web); run `pnpm lint` from root to check both
- Read this coding instructions file at the start of every session to understand project conventions
- Run `pnpm lint` and `pnpm test` before making changes to see the existing project state
- Run `pnpm lint` and `pnpm test:coverage` after making changes to verify nothing breaks
📚 Learning: 2026-04-09T20:31:03.263Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-04-09T20:31:03.263Z
Learning: Applies to **/package.json : Use the latest version of ALL package.json dependencies; dependencies should always be upgraded to the latest version before committing
Applied to files:
package.json
📚 Learning: 2026-04-09T20:31:03.264Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-04-09T20:31:03.264Z
Learning: Use Node 22+ and LATEST version of pnpm
Applied to files:
package.json
📚 Learning: 2026-03-12T02:03:36.493Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-03-12T02:03:36.493Z
Learning: Applies to tests/**/*.test.js : Write bot tests using Vitest 4 with the `node` environment, matching the `src/` structure in the `tests/` directory
Applied to files:
.github/copilot-instructions.md, tests/utils/errors.test.js, tests/utils/aiClient.test.js
📚 Learning: 2026-03-12T02:03:36.493Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-03-12T02:03:36.493Z
Learning: Applies to web/tests/**/*.test.{ts,tsx} : Write web dashboard tests using Vitest 4 with the `jsdom` environment and React Testing Library, matching the `web/src/` structure
Applied to files:
.github/copilot-instructions.md, tests/utils/errors.test.js, tests/utils/aiClient.test.js
📚 Learning: 2026-03-12T02:03:36.493Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-03-12T02:03:36.493Z
Learning: Applies to tests/**/*.test.js : Maintain test coverage thresholds: statements 85%, branches 82%, functions 85%, lines 85%; never lower thresholds—add tests to cover new code instead
Applied to files:
.github/copilot-instructions.md, docs/test-coverage-analysis.md
📚 Learning: 2026-03-12T02:03:36.493Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-03-12T02:03:36.493Z
Learning: Applies to web/tests/**/*.test.{ts,tsx} : Maintain test coverage thresholds of 85% across all metrics (statements, branches, functions, lines) for web dashboard tests
Applied to files:
docs/test-coverage-analysis.md
📚 Learning: 2026-03-12T02:03:36.493Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-03-12T02:03:36.493Z
Learning: Applies to config.json : All community features in `config.json` should be gated behind a `config.<feature>.enabled` flag; moderation commands are always available regardless of config settings
Applied to files:
web/src/types/config.ts
📚 Learning: 2026-04-09T20:31:03.263Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-04-09T20:31:03.263Z
Learning: Applies to **/*.{js,ts,jsx,tsx} : Community features should be gated behind config.<feature>.enabled; moderation commands are the exception
Applied to files:
web/src/types/config.ts
📚 Learning: 2026-03-12T02:03:36.493Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-03-12T02:03:36.493Z
Learning: Applies to **/*.js : Use ESM-only syntax: `import`/`export`, never `require()`/`module.exports`
Applied to files:
tests/utils/errors.test.js
📚 Learning: 2026-04-09T20:31:03.263Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-04-09T20:31:03.263Z
Learning: Applies to **/*.{js,ts,jsx,tsx,mjs,mts} : Use ESM only; do not use CommonJS
Applied to files:
tests/utils/errors.test.js
🪛 GitHub Check: CodeQL
src/utils/aiClient.js
[failure] 173-173: Use of password hash with insufficient computational effort
Password from an access to apiKey is hashed insecurely.
🪛 GitHub Check: SonarCloud Code Analysis
src/modules/triage.js
[failure] 436-436: Refactor this function to reduce its Cognitive Complexity from 48 to the 15 allowed.
[failure] 521-527: Unexpected await of a non-Promise (non-"Thenable") value.
src/utils/aiClient.js
[warning] 199-201: Extract this nested ternary operation into an independent statement.
[warning] 559-559: Remove the declaration of the unused 'providerName' variable.
[warning] 249-249: knownTools should be a Set, and use knownTools.has() to check existence or non-existence.
[warning] 559-559: Remove this useless assignment to variable "providerName".
[warning] 94-94: Prefer Number.parseFloat over parseFloat.
[failure] 493-498: Unexpected await of a non-Promise (non-"Thenable") value.
[failure] 365-370: Unexpected await of a non-Promise (non-"Thenable") value.
🔇 Additional comments (3)
tests/modules/triage-parse.test.js (3)
61-68: Nice coverage improvement for fenced JSON parsing. This case is strong: it validates leading-whitespace fence stripping and asserts a nested payload field, which protects the parser behavior well.
107-165: Classifier-path test updates look solid. Good migration to sdkMessage.text fixtures, plus explicit assertions for needsThinking and needsSearch in the success path.
173-185: Good null/empty guards for responder parsing. These tests cover both null and '' input paths and lock in expected null behavior cleanly. Also applies to: 217-229
  const tokenRecycleLimit = triageConfig.tokenRecycleLimit ?? 20000;
- const thinkingTokens = triageConfig.thinkingTokens ?? 4096;
  const streaming = triageConfig.streaming ?? false;
+ const thinkingTokens = triageConfig.thinkingTokens ?? 0;
Don't make needsThinking a silent no-op for configs that omit thinkingTokens.
src/modules/triage.js only enables thinking when resolvedConfig.thinkingTokens > 0. With a zero fallback here, any existing guild config that never set thinkingTokens will stop honoring the classifier's needsThinking signal after upgrade. If zero-thinking is required for some providers, gate it by capability or migrate the setting explicitly instead of changing the shared default.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/modules/triage-config.js` at line 38, The current defaulting line sets
thinkingTokens to 0 which silences any previously-omitted configs and prevents
needsThinking from working; change the assignment in
src/modules/triage-config.js to preserve "missing" rather than coerce to zero
(e.g., let thinkingTokens = triageConfig.thinkingTokens; or use
triageConfig.hasOwnProperty('thinkingTokens') ? triageConfig.thinkingTokens :
undefined) so that resolvedConfig can distinguish omitted values from an
explicit zero; if you must support explicit zero for some providers, gate that
behavior by provider capability or add an explicit migration step instead of
using a global zero fallback.
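The omitted-vs-explicit-zero distinction the prompt asks for can be sketched as a small helper; the name `resolveThinkingTokens` is illustrative only — the real triage-config.js assigns inline:

```javascript
// Sketch: preserve "omitted" instead of coercing to zero, so downstream
// `thinkingTokens > 0` gating can still honor needsThinking for configs
// that never set the field. Helper name is a hypothetical stand-in.
function resolveThinkingTokens(triageConfig) {
  return Object.hasOwn(triageConfig, 'thinkingTokens')
    ? triageConfig.thinkingTokens // explicit value, including an explicit 0
    : undefined; // omitted: let the caller apply a capability-aware default
}
```

With this shape, `resolvedConfig` can tell "never configured" apart from "deliberately disabled" and only the latter suppresses thinking.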
try {
  const result = await withRetry(
    async () => {
      const streamResult = streamText({
        model,
        system,
        prompt,
        ...(hasTools && { tools, stopWhen: stepCountIs(5) }),
        maxTokens,
        providerOptions,
        abortSignal: signal,
        onChunk: ({ chunk }) => {
          if (chunk.type === 'tool-call' && userOnChunk) {
            try {
              const cbResult = userOnChunk(chunk.toolName, chunk.args);
              if (cbResult?.catch)
                cbResult.catch((err) =>
                  logError('onChunk callback error', { error: err.message }),
                );
            } catch (err) {
              logError('onChunk callback error (sync)', { error: err.message });
            }
          }
        },
      });

      const text = await streamResult.text;
      const usage = (await streamResult.totalUsage) ?? (await streamResult.usage) ?? {};
      const finishReason = await streamResult.finishReason;
      const sources = (await streamResult.sources) ?? [];
      const providerMetadata = (await streamResult.providerMetadata) ?? {};

      return {
        text,
        usage,
        finishReason,
        sources,
        providerMetadata,
      };
    },
    { maxRetries: 3, signal },
  );
  timings.streamStarted = Date.now();
  timings.streamComplete = Date.now();
The stream latency breakdown is currently wrong.
timings.streamStarted and timings.streamComplete are both assigned after withRetry()
returns, i.e. after the whole stream has already been consumed. That makes streamConsumeMs
effectively 0 and rolls the full provider latency into streamInitMs, so the new timing
logs are misleading.
📈 Minimal fix
try {
const result = await withRetry(
async () => {
+ timings.streamStarted = Date.now();
const streamResult = streamText({
model,
system,
prompt,
...(hasTools && { tools, stopWhen: stepCountIs(5) }),
@@
const usage = (await streamResult.totalUsage) ?? (await streamResult.usage) ?? {};
const finishReason = await streamResult.finishReason;
const sources = (await streamResult.sources) ?? [];
const providerMetadata = (await streamResult.providerMetadata) ?? {};
+ timings.streamComplete = Date.now();
return {
text,
usage,
finishReason,
@@
},
{ maxRetries: 3, signal },
);
- timings.streamStarted = Date.now();
- timings.streamComplete = Date.now();
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
try {
  const result = await withRetry(
    async () => {
      timings.streamStarted = Date.now();
      const streamResult = streamText({
        model,
        system,
        prompt,
        ...(hasTools && { tools, stopWhen: stepCountIs(5) }),
        maxTokens,
        providerOptions,
        abortSignal: signal,
        onChunk: ({ chunk }) => {
          if (chunk.type === 'tool-call' && userOnChunk) {
            try {
              const cbResult = userOnChunk(chunk.toolName, chunk.args);
              if (cbResult?.catch)
                cbResult.catch((err) =>
                  logError('onChunk callback error', { error: err.message }),
                );
            } catch (err) {
              logError('onChunk callback error (sync)', { error: err.message });
            }
          }
        },
      });
      const text = await streamResult.text;
      const usage = (await streamResult.totalUsage) ?? (await streamResult.usage) ?? {};
      const finishReason = await streamResult.finishReason;
      const sources = (await streamResult.sources) ?? [];
      const providerMetadata = (await streamResult.providerMetadata) ?? {};
      timings.streamComplete = Date.now();
      return {
        text,
        usage,
        finishReason,
        sources,
        providerMetadata,
      };
    },
    { maxRetries: 3, signal },
  );
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/utils/aiClient.js` around lines 445 - 488, The timing code is placed
after withRetry completes so streamStarted/streamComplete are both set too late;
move the assignments into the async closure around the stream consumption: set
timings.streamStarted immediately after obtaining streamResult from streamText
(before awaiting streamResult.text/usage/finishReason/sources/providerMetadata)
and set timings.streamComplete after all awaits that consume the stream,
ensuring streamConsumeMs measures the await time and streamInitMs measures
initiation latency; adjust the positions inside the withRetry async function
(referencing withRetry, streamText, streamResult, timings.streamStarted,
timings.streamComplete).
- P0 #2: remove ANTHROPIC_API_KEY fallback for non-Anthropic providers. Previously a MiniMax (or any unknown) provider without MINIMAX_API_KEY would silently reuse the Anthropic key against a third-party endpoint. Now: loud-fail at resolveModel with AIClientError('api') naming the missing <PROVIDER>_API_KEY env var.
- Decision A: replace SHA-256 provider cache key (flagged by CodeQL as "insecure password hash") with a plain \x00-joined string. The cache lives in-process and is never serialized — hashing was never needed.
- Decision F: lock aiCost contract to "inputTokens is ALWAYS the total input count reported by the SDK, including cache-read and cache-write tokens." Clamp at 0 to defend against provider metadata mismatches.
- Safe #1: drop stale `await` on synchronous `calculateCost()` in both generate() and stream() (SonarCloud flag).
- Safe #2: normalize `err.status ?? err.statusCode` everywhere so errors thrown with either field expose the correct retry-after status.
- Safe #3: drop unused `providerName` from the `warmConnection` destructure.
- Tests: add beforeEach env-var setup for ANTHROPIC_API_KEY across all describe blocks; wrap minimax tests in try/finally with MINIMAX_API_KEY; flip sync calculateCost mocks to `mockReturnValue`; replace the obsolete "fall back to ANTHROPIC_API_KEY" test with a loud-fail assertion.
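The loud-fail key resolution described above can be sketched roughly as follows; the helper name `resolveApiKey` and this `AIClientError` constructor shape are illustrative assumptions, not the PR's exact code:

```javascript
// Minimal sketch of loud-fail API-key resolution with no cross-provider
// fallback; names and error shape are assumptions for illustration.
class AIClientError extends Error {
  constructor(reason, message) {
    super(message);
    this.reason = reason;
  }
}

function resolveApiKey(provider, env) {
  const envVar = `${provider.toUpperCase()}_API_KEY`;
  const key = env[envVar];
  if (!key) {
    // No silent fallback to ANTHROPIC_API_KEY: reusing that credential
    // against a third-party endpoint would leak it.
    throw new AIClientError('api', `Missing ${envVar} for provider '${provider}'`);
  }
  return key;
}
```

The point of the loud failure is that a misconfigured MiniMax deployment now errors at model resolution time instead of silently sending the Anthropic key to a third-party host.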
- P0 #5: classifier/responder config was frozen at startTriage() time, so per-guild overrides for model/timeout/apiKey/baseUrl and the responder's system prompt never took effect until a full restart. Replace the frozen module objects with prompt-only bindings and two `buildClassifierConfig(evalConfig)` / `buildResponderConfig(evalConfig)` helpers that pick up per-guild overrides on every evaluation.
- P0 #3: treat `AIClientError.reason === 'aborted'` the same as `'timeout'` — both are routine (superseded eval or provider cut off), must NOT log as errors or surface "Sorry, I'm having trouble thinking" to the channel. evaluateAndRespond now warns once with `{ channelId, reason }` before rethrowing; evaluateNow silently returns for both reasons.
- Safe #6: update triage-parse.js JSDoc from "Raw CLI result message" to "Vercel AI SDK result object".
- Tests: two new parseClassifyResult cases covering non-boolean `needsThinking`/`needsSearch` coercion and absent-field defaults (Safe #11); new coverage test for the `aborted` reason path; existing timeout test updated for the new structured log shape.
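The per-evaluation builder pattern described above might look roughly like this; the triage field names (classifierModel, classifierTimeoutMs, ...) are assumptions about the config shape, not the PR's actual keys:

```javascript
// Sketch of a per-evaluation config builder: reading evalConfig on every
// call means per-guild overrides apply without a restart. Field names and
// defaults here are illustrative assumptions.
function buildClassifierConfig(evalConfig) {
  const triage = evalConfig?.triage ?? {};
  return {
    model: triage.classifierModel ?? 'anthropic:claude-haiku',
    timeoutMs: triage.classifierTimeoutMs ?? 30_000,
    apiKey: triage.classifierApiKey, // undefined falls through to env vars
    baseUrl: triage.classifierBaseUrl,
  };
}
```

Because nothing is captured at startTriage() time, updating a guild's triage config takes effect on the very next evaluation.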
Safe #15: the AI client routes ALL providers through `createAnthropic`, so non-Anthropic-compatible APIs (OpenAI-native, Mistral-native, etc.) cannot be wired in through the <PROVIDER>_API_KEY convention. Document this explicitly. Also call out that there is NO fallback to ANTHROPIC_API_KEY for non-Anthropic providers — missing a provider-specific key is a hard failure. This mirrors the aiClient loud-fail behavior added in the credential-leak fix. Drop the fake CODEX example (would not actually work) and replace it with a generic Anthropic-compatible third-party placeholder.
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/modules/triage.js (1)
699-712: 🧹 Nitpick | 🔵 Trivial
Minor: reuse fetchChannelCached in the error-message fallback path.
Every other channel lookup in this module goes through fetchChannelCached(evalClient, channelId); the raw evalClient.channels.fetch(channelId) on L702 bypasses the cache on what is already a slow/unhappy path. Switching it keeps the caching boundary consistent and honors the shared-helper convention used elsewhere in triage.
♻️ Suggested change
- const channel = await evalClient.channels.fetch(channelId).catch(() => null);
+ const channel = await fetchChannelCached(evalClient, channelId);
  if (channel) {
    await safeSend(
      channel,
      "Sorry, I'm having trouble thinking right now. Try again in a moment!",
    );
  }
As per coding guidelines src/**/*.js: Use safeReply()/safeSend()/safeEditReply() instead of raw Discord.js methods for message operations to handle errors gracefully — the safeSend call is already compliant; this is just about the channel-fetch half of the pair.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/modules/triage.js` around lines 699 - 712, The fallback error path is bypassing the module's caching helper by calling evalClient.channels.fetch(channelId); replace that call with the shared helper fetchChannelCached(evalClient, channelId) so the cache is honored and the same null-on-failure semantics are preserved, then pass the returned channel into the existing safeSend call (the surrounding code references fetchChannelCached, evalClient.channels.fetch, and safeSend).
♻️ Duplicate comments (1)
src/utils/aiClient.js (1)
457-521: ⚠️ Potential issue | 🟠 Major
Stream timing breakdown is still wrong — streamStarted/streamComplete are set after withRetry resolves.
timings.streamStarted and timings.streamComplete are both assigned on lines 499–500, i.e. after await withRetry(...) has already consumed the entire stream. That means streamConsumeMs ≈ 0 and streamInitMs silently absorbs the full provider latency — the debug log on lines 513–521 is misleading in exactly the way flagged previously. The fix is to move these assignments inside the withRetry closure (before the first await streamResult.text and after the last await).
📈 Proposed fix
  const result = await withRetry(
    async () => {
      const streamResult = streamText({
        model,
        system,
        prompt,
        ...(hasTools && { tools, stopWhen: stepCountIs(5) }),
        maxTokens,
        providerOptions,
        abortSignal: signal,
        onChunk: ({ chunk }) => { /* ... */ },
      });
+     timings.streamStarted = Date.now();
      const text = await streamResult.text;
      const usage = (await streamResult.totalUsage) ?? (await streamResult.usage) ?? {};
      const finishReason = await streamResult.finishReason;
      const sources = (await streamResult.sources) ?? [];
      const providerMetadata = (await streamResult.providerMetadata) ?? {};
+     timings.streamComplete = Date.now();
      return { text, usage, finishReason, sources, providerMetadata };
    },
    { maxRetries: 3, signal },
  );
- timings.streamStarted = Date.now();
- timings.streamComplete = Date.now();
Note: on retry, the closure runs more than once so streamStarted will reflect the last attempt — that is already the behavior you want for "time spent on the successful attempt."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/utils/aiClient.js` around lines 457 - 521, The stream timing markers are set after withRetry resolves, so streamConsumeMs is effectively zero; move the assignments for timings.streamStarted and timings.streamComplete into the withRetry closure around the actual streaming awaits: set timings.streamStarted = Date.now() immediately before the first await (await streamResult.text) and set timings.streamComplete = Date.now() after the final awaited properties (after await streamResult.finishReason / sources / providerMetadata) inside the async closure passed to withRetry so they reflect the duration of the successful attempt (keep timings declared as they are so retries update the markers for the last attempt).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/utils/aiClient.js`:
- Around line 93-98: Replace the use of the global parseFloat when computing
retryAfter with Number.parseFloat to satisfy SonarCloud's preference;
specifically update the expression that computes retryAfter from
(parseFloat(err.headers?.['retry-after']) || 0) * 1000 to use Number.parseFloat
so the retryAfter variable calculation (near the exponential/jitter/delay logic)
uses Number.parseFloat(err.headers?.['retry-after']) || 0.
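The computation the comment refers to reduces to a one-liner; the `err.headers` shape is assumed from the prompt above, and this sketch only handles the numeric-seconds form of Retry-After, not the HTTP-date form:

```javascript
// Parse a Retry-After header (seconds, possibly fractional) into ms.
// Number.parseFloat(undefined) is NaN, so `|| 0` covers both a missing
// header and a non-numeric value.
function retryAfterMs(err) {
  return (Number.parseFloat(err.headers?.['retry-after']) || 0) * 1000;
}
```

`Number.parseFloat` is functionally identical to the global `parseFloat`; SonarCloud prefers it purely to avoid reliance on the global namespace.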
---
Outside diff comments:
In `@src/modules/triage.js`:
- Around line 699-712: The fallback error path is bypassing the module's caching
helper by calling evalClient.channels.fetch(channelId); replace that call with
the shared helper fetchChannelCached(evalClient, channelId) so the cache is
honored and the same null-on-failure semantics are preserved, then pass the
returned channel into the existing safeSend call (the surrounding code
references fetchChannelCached, evalClient.channels.fetch, and safeSend).
---
Duplicate comments:
In `@src/utils/aiClient.js`:
- Around line 457-521: The stream timing markers are set after withRetry
resolves, so streamConsumeMs is effectively zero; move the assignments for
timings.streamStarted and timings.streamComplete into the withRetry closure
around the actual streaming awaits: set timings.streamStarted = Date.now()
immediately before the first await (await streamResult.text) and set
timings.streamComplete = Date.now() after the final awaited properties (after
await streamResult.finishReason / sources / providerMetadata) inside the async
closure passed to withRetry so they reflect the duration of the successful
attempt (keep timings declared as they are so retries update the markers for the
last attempt).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: b36346fb-87ba-4e4f-8b8f-421bdc1ff75a
📒 Files selected for processing (8)
.env.example, src/modules/triage-parse.js, src/modules/triage.js, src/utils/aiClient.js, src/utils/aiCost.js, tests/modules/triage-parse.test.js, tests/modules/triage.coverage.test.js, tests/utils/aiClient.test.js
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: Greptile Review
- GitHub Check: Analyze (javascript-typescript)
🧰 Additional context used
📓 Path-based instructions (9)
**/*.{js,ts,jsx,tsx,mjs,mts}
📄 CodeRabbit inference engine (AGENTS.md)
Use ESM only; do not use CommonJS
Files:
tests/modules/triage.coverage.test.js, tests/modules/triage-parse.test.js, src/modules/triage-parse.js, src/modules/triage.js, src/utils/aiCost.js, tests/utils/aiClient.test.js, src/utils/aiClient.js
**/*.{js,ts,jsx,tsx}
📄 CodeRabbit inference engine (AGENTS.md)
**/*.{js,ts,jsx,tsx}: Use src/logger.js; do not use console.*
Use the safe Discord messaging helpers in src/utils/safeSend.js instead of raw reply/send/edit calls
Use parameterized SQL only; do not use string concatenation for SQL queries
Community features should be gated behind config.&lt;feature&gt;.enabled; moderation commands are the exception
Files:
tests/modules/triage.coverage.test.js, tests/modules/triage-parse.test.js, src/modules/triage-parse.js, src/modules/triage.js, src/utils/aiCost.js, tests/utils/aiClient.test.js, src/utils/aiClient.js
**/*.{js,ts,jsx,tsx,json}
📄 CodeRabbit inference engine (AGENTS.md)
Welcome-message variables use double braces only, like {{user}}; single braces are plain text and should not be documented, inserted, or parsed as variables
Files:
tests/modules/triage.coverage.test.js, tests/modules/triage-parse.test.js, src/modules/triage-parse.js, src/modules/triage.js, src/utils/aiCost.js, tests/utils/aiClient.test.js, src/utils/aiClient.js
**/*.{js,cjs}
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Use ESM syntax (import/export) in all JavaScript files. Database migrations use the .cjs extension due to node-pg-migrate ESM incompatibility.
Files:
tests/modules/triage.coverage.test.js, tests/modules/triage-parse.test.js, src/modules/triage-parse.js, src/modules/triage.js, src/utils/aiCost.js, tests/utils/aiClient.test.js, src/utils/aiClient.js
**/*.{js,ts,tsx}
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Use single quotes instead of double quotes (except in JSON files)
Always include semicolons at the end of statements
Use 2-space indentation with spaces, not tabs
Always use trailing commas in multi-line constructs
Keep lines to a maximum of 100 characters
Never use console.* methods (console.log, console.error, etc.). Use the Winston logger instead: import logger from '../logger.js'; and call logger.info(), logger.warn(), logger.error(), logger.debug()
Files:
tests/modules/triage.coverage.test.js, tests/modules/triage-parse.test.js, src/modules/triage-parse.js, src/modules/triage.js, src/utils/aiCost.js, tests/utils/aiClient.test.js, src/utils/aiClient.js
tests/**/*.test.js
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Bot test files must be JavaScript (ESM) and match the src/ structure. Use Vitest with vi.mock() for module mocking and vi.fn() for function stubs. Test timeout is 10 seconds.
Files:
tests/modules/triage.coverage.test.js, tests/modules/triage-parse.test.js, tests/utils/aiClient.test.js
**/*.test.{js,ts,tsx}
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Maintain test coverage thresholds: Bot tests require 85% statements, 82% branches, 85% functions, 85% lines. Web tests require 85% across all metrics. Do not lower thresholds; add tests instead.
Files:
tests/modules/triage.coverage.test.js, tests/modules/triage-parse.test.js, tests/utils/aiClient.test.js
src/**/*.js
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
All non-migration JavaScript files must use ESM with import/export syntax, never require() or module.exports. Import the Winston logger with import logger from '../logger.js'; (adjust relative path as needed) and use methods: logger.info(), logger.warn(), logger.error(), logger.debug(). Use safeReply(), safeSend(), or safeEditReply() instead of raw Discord.js methods for message operations to handle errors gracefully (deleted channels, missing permissions). Use parameterized queries for all database operations. Never use string interpolation for SQL.
Files:
src/modules/triage-parse.js, src/modules/triage.js, src/utils/aiCost.js, src/utils/aiClient.js
src/modules/*.js
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Feature modules should manage bot functionality and be gated behind config.&lt;feature&gt;.enabled. Moderation commands are always available regardless of config.
Files:
src/modules/triage-parse.js, src/modules/triage.js
🧠 Learnings (5)
📓 Common learnings
Learnt from: CR
URL:
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-17T03:02:07.957Z
Learning: Use pnpm workspaces for the monorepo. Run `pnpm mono:dev`, `pnpm mono:build`, `pnpm mono:lint`, `pnpm mono:test` for workspace-wide operations.
Learnt from: CR
URL:
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-17T03:02:07.957Z
Learning: Follow Conventional Commits for commit messages: `feat:`, `fix:`, `chore:`, `docs:`, `refactor:`, `style:`, `test:`, `ci:`. Use branch naming conventions: `feat/`, `fix/`, `chore/`, `docs/`, `refactor/`, `style/`, `test/`, `ci/`. Always target `main` for PRs.
Learnt from: CR
URL:
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-17T03:02:07.957Z
Learning: Use `pnpm changeset` to document user-facing changes for release notes
Learnt from: CR
URL:
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-17T03:02:07.957Z
Learning: Before making changes, run `pnpm lint` and `pnpm test` to see existing state. After changes, run `pnpm lint` and `pnpm test:coverage` to verify nothing breaks. Use `pnpm lint:fix` to auto-fix formatting and lint issues.
Learnt from: CR
URL:
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-17T03:02:07.957Z
Learning: The project uses Biome 2 as the linter and formatter. Run `pnpm lint` to check both bot (`src/**/*.js`) and web (`web/src/**/*.{ts,tsx}`) code. CI enforces all Biome checks.
Learnt from: CR
URL:
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-17T03:02:07.957Z
Learning: The tech stack includes: Node.js 22+ ESM JavaScript for the bot, discord.js v14, Vercel AI SDK (`ai`, `ai-sdk/anthropic`), PostgreSQL 17+ with node-pg-migrate, Redis 7+ with ioredis, Express 5 REST API, Winston logging, Next.js 16 TypeScript web dashboard with React 19 and Tailwind CSS 4, Vitest 4 testing, and Turborepo for build orchestration.
Learnt from: CR
URL:
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-17T03:02:07.957Z
Learning: Database migrations must be created with `pnpm migrate:create -- <name>`, use sequential numbering (`001_`, `002_`, etc.), and use parameterized queries. Use `pnpm migrate` to apply and `pnpm migrate:down` to rollback.
Learnt from: CR
URL:
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-17T03:02:07.957Z
Learning: For local development, use `docker compose up` to start PostgreSQL, Redis, bot, and web dashboard. Docker Compose sets `DATABASE_URL`, `REDIS_URL`, and `NEXTAUTH_URL` environment variables that override `.env` values.
Learnt from: CR
URL:
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-17T03:02:07.957Z
Learning: Environment variables are defined in `.env`. Key variables include: `DISCORD_TOKEN`, `ANTHROPIC_API_KEY`, `DATABASE_URL`, `DISCORD_CLIENT_ID` (required); `REDIS_URL`, `BOT_API_SECRET`, `SENTRY_DSN`, `LOG_LEVEL` (optional). Copy from `.env.example`.
📚 Learning: 2026-03-12T02:03:36.493Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-03-12T02:03:36.493Z
Learning: Applies to tests/**/*.test.js : Maintain test coverage thresholds: statements 85%, branches 82%, functions 85%, lines 85%; never lower thresholds—add tests to cover new code instead
Applied to files:
tests/modules/triage.coverage.test.js
📚 Learning: 2026-03-12T02:03:36.493Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-03-12T02:03:36.493Z
Learning: Applies to src/**/*.js : Always use `safeReply()`, `safeSend()`, or `safeEditReply()` instead of raw Discord.js methods for safe Discord messaging that handles errors gracefully
Applied to files:
tests/modules/triage.coverage.test.js
📚 Learning: 2026-03-12T02:03:36.493Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-03-12T02:03:36.493Z
Learning: Applies to tests/**/*.test.js : Write bot tests using Vitest 4 with the `node` environment, matching the `src/` structure in the `tests/` directory
Applied to files:
tests/utils/aiClient.test.js
📚 Learning: 2026-03-12T02:03:36.493Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-03-12T02:03:36.493Z
Learning: Applies to web/tests/**/*.test.{ts,tsx} : Write web dashboard tests using Vitest 4 with the `jsdom` environment and React Testing Library, matching the `web/src/` structure
Applied to files:
tests/utils/aiClient.test.js
🪛 GitHub Check: SonarCloud Code Analysis
src/modules/triage.js
[failure] 483-483: Refactor this function to reduce its Cognitive Complexity from 48 to the 15 allowed.
[failure] 568-574: Unexpected await of a non-Promise (non-"Thenable") value.
src/utils/aiClient.js
[warning] 95-95: Prefer Number.parseFloat over parseFloat.
[failure] 166-166: Refactor this function to reduce its Cognitive Complexity from 18 to the 15 allowed.
[warning] 209-211: Extract this nested ternary operation into an independent statement.
[warning] 259-259: knownTools should be a Set, and use knownTools.has() to check existence or non-existence.
🔇 Additional comments (10)
src/utils/aiCost.js (1)
67-96: LGTM — contract and clamp address prior ambiguity.
The explicit "inputTokens is total reported by SDK" contract plus the Math.max(0, …) clamp makes the cache-token accounting deterministic, and unknown models fall back to 0 instead of throwing. The prior ambiguity around regularInputTokens is resolved by the documented caller contract.
src/utils/aiClient.js (1)
278-303: LGTM — pre-aborted external signal now honored.
Synchronously aborting the local controller (and clearing the timer) when externalSignal.aborted is already true fixes the prior bug where a pre-cancelled caller signal would still let the SDK call proceed to timeout. Combined with the externalSignal?.aborted check in the catch blocks (lines 404, 535), caller cancellations are correctly surfaced as reason: 'aborted' rather than 'timeout'.
.env.example (1)
25-52: LGTM — matches new provider-routing contract.
Making MINIMAX_API_KEY required for the default triage models, narrowing the generic &lt;PROVIDER&gt;_* section to Anthropic-compatible endpoints, and explicitly documenting the no-fallback-to-ANTHROPIC_API_KEY rule all line up with the behavior in src/utils/aiClient.js (resolveModel loud-fails on missing provider keys and only sends Anthropic credentials to the Anthropic path).
tests/utils/aiClient.test.js (2)
159-206: LGTM — env-var resolution path now actually exercised.
The codex:some-model tests (lines 193–224) hit the generic &lt;PROVIDER&gt;_API_KEY/&lt;PROVIDER&gt;_BASE_URL path rather than the KNOWN_BASE_URLS shortcut, so a regression in the env-var convention will now fail tests. The loud-fail test on lines 176–191 also verifies that the Anthropic fallback can't leak into foreign providers.
384-417: LGTM — timeout vs external abort are now distinguished in tests.
Assertions on reason: 'aborted' / message: 'Request was cancelled' (lines 398, 416) and reason: 'timeout' / message: 'Request timed out' (line 380) pin down the distinction added in aiClient.js, including the pre-aborted signal case. This directly guards the caller-cancellation path used by triage.js.
tests/modules/triage.coverage.test.js (1)
270-302: LGTM — locks in the silent-on-aborted contract.
The new aborted test (lines 288–301) asserts that evaluateNow resolves cleanly and only emits a structured 'Triage evaluation aborted' warn — guarding the exact regression where a superseded evaluation would otherwise send "trouble thinking" to the channel. Paired with the existing timeout case, this pins down both silent branches in evaluateAndRespond (triage.js L692) and evaluateNow (L995).
tests/modules/triage-parse.test.js (1)
135-200: LGTM — needsThinking/needsSearch normalization is now pinned by tests.
The success case (lines 135–151), the non-boolean coercion case (lines 167–184), and the absent-fields default case (lines 186–200) collectively enforce the strict-boolean normalization in parseClassifyResult (triage-parse.js L98–99). That's the correct regression guard given downstream triage.js uses these flags to decide thinking tokens and WebSearch tool enablement.
src/modules/triage-parse.js (1)
83-102: LGTM — strict-boolean normalization prevents leaking non-boolean values downstream.
Using === true defends against the classifier returning "true" / 1 / null for needsThinking/needsSearch, which is important because triage.js:368-369 uses these in strict-boolean conditionals (classification.needsThinking ? resolvedConfig.thinkingTokens : 0). Defaulting to false on the truncated-recovery path (L61–62) keeps the contract consistent there too.
src/modules/triage.js (2)
85-116: LGTM — per-guild settings now resolved per evaluation.
Moving classifier/responder config construction into buildClassifierConfig/buildResponderConfig and calling them from runClassification (L169) and runResponder (L367) resolves the prior "frozen snapshot" bug: classifyModel, respondModel, timeout, classifyBaseUrl/respondBaseUrl, classifyApiKey/respondApiKey, and evalConfig.ai?.systemPrompt are all re-read from the live guild config each evaluation, so admin changes take effect without a restart. Holding only classifySystemPrompt/respondSystemDefault/respondJsonSchemaAppend at module scope (static prompt strings) is the right boundary.
688-712: LGTM — aborted and timeout are now both silent no-ops.
Treating AIClientError with reason === 'aborted' the same as 'timeout' at L692 (and mirrored at L995 in evaluateNow) prevents the superseded-evaluation abort from logging as a failure and posting "trouble thinking" to the channel. The structured 'Triage evaluation aborted' warn with { channelId, reason } also gives clean observability for both branches.
```js
// Calculate delay: min(1000 * 2^attempt, 15_000) + jitter
const retryAfter =
  status === 429 ? (parseFloat(err.headers?.['retry-after']) || 0) * 1000 : 0;
const exponential = Math.min(1000 * 2 ** attempt, 15_000);
const jitter = Math.random() * 1000;
const delay = Math.max(exponential + jitter, retryAfter);
```
🧹 Nitpick | 🔵 Trivial
Nit: prefer Number.parseFloat over the global parseFloat.
SonarCloud flags this; the stricter Number.parseFloat is functionally identical here but avoids the coercion-prone global. Low priority.
♻️ Suggested change
```diff
-      const retryAfter =
-        status === 429 ? (parseFloat(err.headers?.['retry-after']) || 0) * 1000 : 0;
+      const retryAfter =
+        status === 429 ? (Number.parseFloat(err.headers?.['retry-after']) || 0) * 1000 : 0;
```
🧰 Tools
🪛 GitHub Check: SonarCloud Code Analysis
[warning] 95-95: Prefer Number.parseFloat over parseFloat.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/utils/aiClient.js` around lines 93 - 98, Replace the use of the global
parseFloat when computing retryAfter with Number.parseFloat to satisfy
SonarCloud's preference; specifically update the expression that computes
retryAfter from (parseFloat(err.headers?.['retry-after']) || 0) * 1000 to use
Number.parseFloat so the retryAfter variable calculation (near the
exponential/jitter/delay logic) uses
Number.parseFloat(err.headers?.['retry-after']) || 0.
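The backoff rule under review can be exercised on its own. The sketch below is illustrative: `computeRetryDelay` and its injectable `random` parameter are invented names for testing, but the arithmetic mirrors the snippet and suggested fix above (exponential backoff capped at 15 s, up to 1 s of jitter, and the server's Retry-After honored on 429s).

```javascript
// Standalone sketch of the retry-delay rule; computeRetryDelay is an
// illustrative name, not an export of src/utils/aiClient.js.
function computeRetryDelay(attempt, status, headers, random = Math.random) {
  // Retry-After is in seconds and only applies to 429 responses.
  const retryAfter =
    status === 429 ? (Number.parseFloat(headers?.['retry-after']) || 0) * 1000 : 0;
  const exponential = Math.min(1000 * 2 ** attempt, 15_000);
  const jitter = random() * 1000;
  // The server's Retry-After wins when it exceeds the computed backoff.
  return Math.max(exponential + jitter, retryAfter);
}
```

Note the `|| 0` also neutralizes a malformed Retry-After header (`Number.parseFloat` returns `NaN`, which is falsy), so the exponential schedule takes over rather than producing a `NaN` delay.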
Resolved SDK vs CLI migration conflicts by superseding main's CLI path: dropped @anthropic-ai/claude-code, kept @ai-sdk/anthropic + ai.
- package.json: took main's dependency bumps (@sentry/node, discord.js, dotenv) and pnpm override restructure; regenerated pnpm-lock.yaml.
- docker-compose.yml: preserved bot/web/docs services required for local testing. DISABLE_PROMPT_CACHING env var dropped (no-op on SDK path; caching is disabled by default since we don't send cacheControl).
- src/modules/aiAutoMod.js: kept Vercel AI SDK generate() call; adopted main's expanded JSDoc and guildId/channelId logging fields.
- src/modules/triage.js: adopted main's improved runClassification JSDoc; kept our AbortSignal parameter (documented).
- tests/modules/cli-process.test.js: removed (module no longer exists).
- tests/modules/aiAutoMod.test.js: kept aiClient mock (SDK path).
- web/src/components/ui/discord-markdown-editor.tsx and web/src/components/dashboard/config-categories/ai-automation.tsx: taken from main wholesale (frontend refactors out of scope for this PR).
```js
const baseUrl =
  overrides.baseUrl ||
  (isAnthropic ? undefined : process.env[`${envPrefix}_BASE_URL`]) ||
  KNOWN_BASE_URLS[providerName];

providerCache.set(
  cacheKey,
  createAnthropic({
    ...(isAnthropic
      ? isAnthropicAuthToken(apiKey)
        ? { authToken: apiKey }
        : { apiKey }
      : { authToken: apiKey }),
    ...(baseUrl && { baseURL: baseUrl }),
  }),
```
For non-Anthropic providers not present in KNOWN_BASE_URLS, baseUrl can be undefined (no _BASE_URL and no override). In that case we still create an Anthropic client and will send the provider's authToken to the default Anthropic endpoint, which is both confusing and an unintended credential destination. Consider failing fast for non-anthropic providers when no base URL is resolved, with an error that instructs setting <PROVIDER>_BASE_URL or passing baseUrl override.
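The fail-fast the reviewer proposes could look roughly like this. It is a sketch under assumptions: `resolveBaseUrl` is a hypothetical extraction of the resolution chain shown above, the `env` parameter is added only so the behavior is testable, and the MiniMax URL is a placeholder, not the real endpoint.

```javascript
// Sketch of the proposed fail-fast: never fall through to the default
// Anthropic endpoint with a foreign provider's credential.
const KNOWN_BASE_URLS = {
  minimax: 'https://minimax.example.test/v1', // placeholder, not the real URL
};

function resolveBaseUrl(providerName, envPrefix, overrides = {}, env = process.env) {
  const isAnthropic = providerName === 'anthropic';
  const baseUrl =
    overrides.baseUrl ||
    (isAnthropic ? undefined : env[`${envPrefix}_BASE_URL`]) ||
    KNOWN_BASE_URLS[providerName];
  if (!isAnthropic && !baseUrl) {
    // Unknown provider with no configured endpoint: fail loudly instead of
    // silently sending its authToken to api.anthropic.com.
    throw new Error(
      `No base URL for provider '${providerName}': set ${envPrefix}_BASE_URL or pass a baseUrl override`,
    );
  }
  return baseUrl;
}
```

Anthropic itself may legitimately resolve to `undefined` (the SDK default endpoint); the guard only fires for non-Anthropic providers.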
```js
await new Promise((resolve, reject) => {
  const timer = setTimeout(resolve, delay);
  if (signal) {
    const onAbort = () => {
      clearTimeout(timer);
      reject(err);
    };
    signal.addEventListener('abort', onAbort, { once: true });
  }
});
```
withRetry adds an abort event listener on the provided signal for each retry delay, but if the timer completes normally the listener is never removed (the { once: true } only removes it if abort fires). Over time, repeated retries can accumulate listeners and create a slow memory leak. Consider removing the listener after the delay resolves/rejects, or refactoring the delay to race setTimeout with an abort promise that cleans itself up.
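One way to apply the suggested cleanup is to remove the listener on the timer path as well as the abort path. This is an illustrative rewrite, not the repo's `withRetry` body; it also guards the already-aborted case, where the `abort` event has fired before the listener is attached and a listener-only approach would wait out the full delay.

```javascript
// Self-cleaning retry delay: the abort listener is removed whether the timer
// fires or the signal aborts, so repeated retries don't accumulate listeners.
function abortableDelay(delay, signal, err) {
  return new Promise((resolve, reject) => {
    if (signal?.aborted) {
      reject(err); // 'abort' already fired; attaching a listener now would hang
      return;
    }
    const onAbort = () => {
      clearTimeout(timer);
      reject(err);
    };
    const timer = setTimeout(() => {
      signal?.removeEventListener('abort', onAbort);
      resolve();
    }, delay);
    signal?.addEventListener('abort', onAbort, { once: true });
  });
}
```

`{ once: true }` still covers the abort path, so the explicit `removeEventListener` is only needed when the delay completes normally.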
```js
timings.streamStarted = Date.now();
timings.streamComplete = Date.now();

const providerMeta = getProviderMetadata(result.providerMetadata, providerName);
const cachedInputTokens = providerMeta.cacheReadInputTokens ?? 0;
const cacheCreationInputTokens = providerMeta.cacheCreationInputTokens ?? 0;
const costUsd = calculateCost(providerName, modelId, {
  inputTokens: result.usage.inputTokens,
  outputTokens: result.usage.outputTokens,
  cachedInputTokens,
  cacheCreationInputTokens,
});

const durationMs = Date.now() - timings.start;
debug('aiClient.stream timing breakdown', {
  model: modelString,
  sdkResolveMs: timings.sdkResolved - timings.start,
  preApiSetupMs: timings.preApiCall - timings.sdkResolved,
  streamInitMs: timings.streamStarted - timings.preApiCall,
  streamConsumeMs: timings.streamComplete - timings.streamStarted,
  postProcessMs: Date.now() - timings.streamComplete,
  totalMs: durationMs,
```
The stream timing instrumentation sets timings.streamStarted and timings.streamComplete back-to-back after withRetry returns, so streamConsumeMs will always be ~0 and streamInitMs will include the entire stream consumption time. If these numbers are used to diagnose latency, they’ll be misleading. Consider capturing streamStarted immediately before/after the streamText(...) call, and streamComplete after awaiting streamResult.text/usage/etc.
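The shape of the fix can be mocked end to end. In this sketch, `withRetry` and `fakeStreamText` are stand-ins for the real aiClient/SDK functions; the point is only where the markers go: `streamStarted` immediately before the first awaited stream property, `streamComplete` after the last one, both inside the retried closure so they reflect the successful attempt.

```javascript
// Mocked illustration of the marker placement the comment asks for.
const withRetry = (fn) => fn(); // no-op retry wrapper, for illustration only
const fakeStreamText = () => ({
  text: new Promise((resolve) => setTimeout(() => resolve('hello'), 25)),
  finishReason: Promise.resolve('stop'),
});

async function streamWithTimings() {
  const timings = { start: Date.now() };
  const result = await withRetry(async () => {
    const streamResult = fakeStreamText();
    timings.streamStarted = Date.now(); // before the first await on the stream
    const text = await streamResult.text;
    const finishReason = await streamResult.finishReason;
    timings.streamComplete = Date.now(); // after the last awaited property
    return { text, finishReason };
  });
  return { result, streamConsumeMs: timings.streamComplete - timings.streamStarted };
}
```

With the markers set back-to-back after `withRetry` resolves (as in the snippet above), `streamConsumeMs` would always be ~0; inside the closure it measures the real consumption time.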
```bash
# providers. Missing a provider-specific key is a hard failure at startup —
# this is deliberate so we never ship the Anthropic credential to a
# third-party endpoint.
```
This comment says missing a provider-specific key is a “hard failure at startup”, but the current startup path pre-warms providers via warmConnection(...).catch(() => {}) and the actual key check happens when generate/stream are invoked. Consider rewording to reflect that missing <PROVIDER>_API_KEY will fail the first request for that provider (and may be logged), rather than necessarily failing startup.
```diff
-# providers. Missing a provider-specific key is a hard failure at startup —
-# this is deliberate so we never ship the Anthropic credential to a
-# third-party endpoint.
+# providers. Missing a provider-specific key will cause the first request for
+# that provider to fail (and warm-up may log the issue) — this is deliberate
+# so we never ship the Anthropic credential to a third-party endpoint.
```
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
src/modules/triage.js (2)
759-765: ⚠️ Potential issue | 🟡 Minor
Stale JSDoc: triage no longer spawns CLI processes.
After the SDK migration, startTriage loads prompts and pre-warms provider connections; it doesn't boot CLI processes.
✏️ Proposed fix
```diff
-/**
- * Start the triage module: create and boot classifier + responder CLI processes.
+/**
+ * Start the triage module: load static prompts and best-effort pre-warm the
+ * classifier and responder provider connections via warmConnection().
  *
  * @param {import('discord.js').Client} discordClient - Discord client
  * @param {Object} botConfig - Bot configuration
  * @param {Object} [monitor] - Health monitor instance
  */
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/modules/triage.js` around lines 759 - 765, The JSDoc for startTriage is stale—remove the line saying it "create and boot classifier + responder CLI processes" and update the description to state that startTriage now loads prompts and pre-warms provider connections (and any other current responsibilities); update parameter docs if needed to match current signature (keep import('discord.js').Client and botConfig descriptions, and mark monitor as optional if it still is). Ensure the function name startTriage is referenced in the JSDoc and that no CLI-related wording remains.
1-8: ⚠️ Potential issue | 🟡 Minor
Header comment still references "Haiku classifier + Sonnet responder" and "Two AI SDK calls handle classification".
With the provider-agnostic switch (now defaulting to MiniMax in config.json), the Haiku/Sonnet specifics and the "~10x lower cost than Sonnet" claim are misleading. Re-word generically.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/modules/triage.js` around lines 1 - 8, Update the header comment at the top of the Triage Module (src/modules/triage.js) to remove the specific "Haiku classifier + Sonnet responder" wording and the "~10x lower cost than Sonnet" claim; instead describe it generically as a provider-agnostic, two-stage triage that uses a lightweight/cheap classifier call followed by a more expensive responder call only when needed, and mention that provider selection is configurable (defaults to MiniMax in config.json). Ensure the updated comment still conveys the two-stage flow and cost rationale without referencing Haiku/Sonnet or specific cost multiples.
src/modules/aiAutoMod.js (1)
51-58: ⚠️ Potential issue | 🟡 Minor
Stale "Claude"-specific wording in docstrings and log messages.
After the Vercel AI SDK migration, the analyzer is provider-agnostic. Update the JSDoc on analyzeMessage ("Analyze a message using Claude AI") and the log label at line 105 ("failed to parse Claude response") so they don't lie about the runtime when a non-Anthropic provider is configured.
✏️ Proposed fix
```diff
-/**
- * Analyze a message using Claude AI.
+/**
+ * Analyze a message using the configured AI model via the Vercel AI SDK.
 ...
-    logError('AI auto-mod: failed to parse Claude response', { text });
+    logError('AI auto-mod: failed to parse model response', { text });
```
Also applies to: 105-105
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/modules/aiAutoMod.js` around lines 51 - 58, Update the stale Claude-specific wording: change the JSDoc summary for the analyzeMessage function ("Analyze a message using Claude AI") to a provider-agnostic description (e.g., "Analyze a message using configured AI provider") and replace the log label string "failed to parse Claude response" used around the response parsing code with a neutral message like "failed to parse AI provider response" so it no longer names a specific provider; ensure the function name analyzeMessage and the existing log call location (the statement that currently uses "failed to parse Claude response") are updated to the new wording.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/modules/triage-respond.js`:
- Around line 79-88: When bufferSnapshot is empty the code does two redundant
Discord API calls because historyOptions (used in
channel.messages.fetch(historyOptions)) without a before returns the most recent
messages already covering the RECENT_LIMIT fetch; update the logic in
triage-respond.js to skip the second fetch when oldest is falsy: check
bufferSnapshot[0] (oldest) and if undefined only call
channel.messages.fetch(historyOptions) otherwise perform the Promise.all of both
channel.messages.fetch(historyOptions) and channel.messages.fetch({ limit:
RECENT_LIMIT }); this removes the unnecessary channel.messages.fetch({ limit:
RECENT_LIMIT }) API call on cold-start while keeping the parallel fetch when
oldest exists.
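The cold-start branch described above can be sketched as follows. Names (`fetchHistory`, the `limit: 50` history size, the shape of buffer entries) are illustrative stand-ins, not the repo's triage-respond.js; only the fetch-skipping logic comes from the prompt.

```javascript
// Sketch: skip the second RECENT_LIMIT fetch on cold start, since a fetch
// without `before` already returns the most recent messages.
const RECENT_LIMIT = 10;

async function fetchHistory(channel, bufferSnapshot) {
  const oldest = bufferSnapshot[0];
  const historyOptions = { limit: 50, ...(oldest && { before: oldest.id }) };
  if (!oldest) {
    // Cold start: one Discord API call covers both "history" and "recent".
    return [await channel.messages.fetch(historyOptions)];
  }
  // Warm path: fetch older history and the most recent messages in parallel.
  return Promise.all([
    channel.messages.fetch(historyOptions),
    channel.messages.fetch({ limit: RECENT_LIMIT }),
  ]);
}
```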
- Around line 357-365: The provider-extraction logic in triage-respond.js
duplicates aiClient.resolveModel's parsing (using
classifySeparator/respondSeparator on resolved.classifyModel/respondModel);
replace this duplication by importing and using a shared helper from
src/utils/aiClient.js (e.g., export parseProviderModel(modelString) that returns
{ providerName, modelId }) or by having resolveTriageConfig/aiClient return
classifyProvider/respondProvider on the resolved object; update
triage-respond.js to call parseProviderModel(resolved.classifyModel) and
parseProviderModel(resolved.respondModel) (or read
resolved.classifyProvider/resolved.respondProvider) and remove
classifySeparator/respondSeparator logic to avoid drift.
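The shared helper this prompt suggests could be as small as the sketch below. The `provider:model` string format comes from the PR description; the function itself is hypothetical, and the bare-string default to Anthropic is an assumption based on the documented ANTHROPIC_API_KEY fallback.

```javascript
// Hypothetical parseProviderModel helper to be exported from aiClient.js so
// triage-respond.js stops re-implementing the separator logic.
function parseProviderModel(modelString) {
  const sep = modelString.indexOf(':');
  if (sep === -1) {
    // Assumed: bare model IDs fall back to the Anthropic provider.
    return { providerName: 'anthropic', modelId: modelString };
  }
  return {
    providerName: modelString.slice(0, sep).toLowerCase(),
    modelId: modelString.slice(sep + 1),
  };
}
```

Centralizing the parse keeps `resolveModel` and any consumer of `resolved.classifyModel`/`resolved.respondModel` from drifting apart.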
---
Outside diff comments:
In `@src/modules/aiAutoMod.js`:
- Around line 51-58: Update the stale Claude-specific wording: change the JSDoc
summary for the analyzeMessage function ("Analyze a message using Claude AI") to
a provider-agnostic description (e.g., "Analyze a message using configured AI
provider") and replace the log label string "failed to parse Claude response"
used around the response parsing code with a neutral message like "failed to
parse AI provider response" so it no longer names a specific provider; ensure
the function name analyzeMessage and the existing log call location (the
statement that currently uses "failed to parse Claude response") are updated to
the new wording.
In `@src/modules/triage.js`:
- Around line 759-765: The JSDoc for startTriage is stale—remove the line saying
it "create and boot classifier + responder CLI processes" and update the
description to state that startTriage now loads prompts and pre-warms provider
connections (and any other current responsibilities); update parameter docs if
needed to match current signature (keep import('discord.js').Client and
botConfig descriptions, and mark monitor as optional if it still is). Ensure the
function name startTriage is referenced in the JSDoc and that no CLI-related
wording remains.
- Around line 1-8: Update the header comment at the top of the Triage Module
(src/modules/triage.js) to remove the specific "Haiku classifier + Sonnet
responder" wording and the "~10x lower cost than Sonnet" claim; instead describe
it generically as a provider-agnostic, two-stage triage that uses a
lightweight/cheap classifier call followed by a more expensive responder call
only when needed, and mention that provider selection is configurable (defaults
to MiniMax in config.json). Ensure the updated comment still conveys the
two-stage flow and cost rationale without referencing Haiku/Sonnet or specific
cost multiples.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: a92e121a-11ba-405c-af86-4a2b0f781b00
⛔ Files ignored due to path filters (1)
pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (8)
config.json, package.json, src/api/utils/configValidation.js, src/modules/aiAutoMod.js, src/modules/triage-respond.js, src/modules/triage.js, web/src/components/dashboard/config-workspace/config-categories.ts, web/src/types/config.ts
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (7)
- GitHub Check: Agent
- GitHub Check: Greptile Review
- GitHub Check: E2E Tests (1/2)
- GitHub Check: E2E Tests (2/2)
- GitHub Check: Test
- GitHub Check: Docker Build Validation
- GitHub Check: Analyze (javascript-typescript)
🧰 Additional context used
📓 Path-based instructions (14)
**/package.json
📄 CodeRabbit inference engine (AGENTS.md)
Use the latest version of ALL package.json dependencies; dependencies should always be upgraded to the latest version before committing
Files:
package.json
**/*.{js,ts,jsx,tsx,json}
📄 CodeRabbit inference engine (AGENTS.md)
Welcome-message variables use double braces only, like {{user}}; single braces are plain text and should not be documented, inserted, or parsed as variables
Files:
package.json, web/src/components/dashboard/config-workspace/config-categories.ts, config.json, src/modules/aiAutoMod.js, src/modules/triage-respond.js, src/api/utils/configValidation.js, web/src/types/config.ts, src/modules/triage.js
**/*.{js,ts,jsx,tsx,mjs,mts}
📄 CodeRabbit inference engine (AGENTS.md)
Use ESM only; do not use CommonJS
Files:
web/src/components/dashboard/config-workspace/config-categories.ts, src/modules/aiAutoMod.js, src/modules/triage-respond.js, src/api/utils/configValidation.js, web/src/types/config.ts, src/modules/triage.js
**/*.{js,ts,jsx,tsx}
📄 CodeRabbit inference engine (AGENTS.md)
**/*.{js,ts,jsx,tsx}: Use src/logger.js; do not use console.*
Use the safe Discord messaging helpers in src/utils/safeSend.js instead of raw reply/send/edit calls
Use parameterized SQL only; do not use string concatenation for SQL queries
Community features should be gated behind config.&lt;feature&gt;.enabled; moderation commands are the exception
Files:
web/src/components/dashboard/config-workspace/config-categories.ts, src/modules/aiAutoMod.js, src/modules/triage-respond.js, src/api/utils/configValidation.js, web/src/types/config.ts, src/modules/triage.js
web/src/**/*.{ts,tsx}
📄 CodeRabbit inference engine (AGENTS.md)
web/src/**/*.{ts,tsx}: New dashboard routes need title wiring in web/src/lib/page-titles.ts; use createPageMetadata() for SSR and keep DashboardTitleSync aligned for client navigation
Dashboard clients that need the guild list should consume GuildDirectoryProvider; do not stack extra /api/guilds fetch loops in leaf components
Recharts dashboard views should use web/src/components/ui/stable-responsive-container.tsx; raw ResponsiveContainer mounts can spam width(-1)/height(-1) warnings when panels render before layout settles
Any visual dashboard or landing page change must be verified with Chrome DevTools MCP before calling it done; take a screenshot after the change
Check both light and dark themes if colors or theming changed
Check responsive behavior on mobile, tablet, and desktop if layout changed
Files:
web/src/components/dashboard/config-workspace/config-categories.ts
web/src/types/config.ts
web/src/**/*.{ts,tsx,css,scss}
📄 CodeRabbit inference engine (AGENTS.md)
Follow the design system when making changes to the UI/UX; see DESIGN.md for the design system and color palette
Files:
web/src/components/dashboard/config-workspace/config-categories.ts
web/src/types/config.ts
**/*.{js,ts,tsx}
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Use single quotes instead of double quotes for strings
Always include semicolons at the end of statements
Use 2-space indentation with spaces, never tabs
Always use trailing commas in multi-line objects, arrays, and function parameters
Enforce 100-character maximum line width
Never use console.* methods; use Winston logger instead (import logger from '../logger.js' and call logger.info(), logger.warn(), logger.error(), logger.debug())
Use Biome linter and formatter (run pnpm lint to check, pnpm lint:fix to auto-fix)
Files:
web/src/components/dashboard/config-workspace/config-categories.ts
src/modules/aiAutoMod.js
src/modules/triage-respond.js
src/api/utils/configValidation.js
web/src/types/config.ts
src/modules/triage.js
{config.json,src/api/utils/configAllowlist.js}
📄 CodeRabbit inference engine (AGENTS.md)
Config-backed features must be added to config.json and src/api/utils/configAllowlist.js; if a key is missing from SAFE_CONFIG_KEYS, the dashboard cannot save it
Files:
config.json
**/*.js
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Use ESM only (import/export), never use require() or module.exports
Files:
src/modules/aiAutoMod.js
src/modules/triage-respond.js
src/api/utils/configValidation.js
src/modules/triage.js
**/{migrations,src/{api,modules}}/**/*.{js,cjs}
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Always use parameterized queries for database operations; never use string interpolation
Files:
src/modules/aiAutoMod.js
src/modules/triage-respond.js
src/api/utils/configValidation.js
src/modules/triage.js
src/**/*.js
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Always use safe Discord messaging methods: safeReply(), safeSend(), or safeEditReply() instead of raw Discord.js methods
Use config system via src/modules/config.js; call getConfig(guildId) to retrieve merged global + guild-level config
Use Redis caching with in-memory fallback via src/utils/cache.js for generic caching
Files:
src/modules/aiAutoMod.js
src/modules/triage-respond.js
src/api/utils/configValidation.js
src/modules/triage.js
src/{commands,modules}/**/*.js
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Gate all community features behind a config.&lt;feature&gt;.enabled check; moderation commands are always available regardless of config
Files:
src/modules/aiAutoMod.js
src/modules/triage-respond.js
src/modules/triage.js
src/{modules,commands,api}/**/*.js
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Use src/utils/discordCache.js for Discord API response caching (channels, roles, members)
Files:
src/modules/aiAutoMod.js
src/modules/triage-respond.js
src/api/utils/configValidation.js
src/modules/triage.js
src/{modules,api}/**/*.js
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Use src/utils/reputationCache.js for leaderboard and user reputation data caching
All caches auto-invalidate when config changes; do not manually invalidate them when config is updated
Files:
src/modules/aiAutoMod.js
src/modules/triage-respond.js
src/api/utils/configValidation.js
src/modules/triage.js
🧠 Learnings (5)
📓 Common learnings
Learnt from: CR
URL:
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-17T04:47:09.721Z
Learning: Always use pnpm as the package manager, never npm or yarn
Learning: Create bot feature modules in src/modules/ and add config section to config.json
Learning: Run pnpm lint and pnpm test before making changes and after making changes to verify nothing breaks
Learning: Use Conventional Commits for commit messages (feat:, fix:, chore:, docs:, refactor:, style:, test:, ci:)
Learning: Use branch naming conventions: feat/, fix/, chore/, docs/, refactor/, style/, test/, ci/
Learning: Always target PR merges to the main branch
Learning: Use pnpm changeset to create release notes for user-facing changes
Learning: Require Node.js 22+ (enforced by engines field in package.json)
Learning: Require pnpm 10.30.3+ (enforced by engine-strict=true in .npmrc)
Learning: Run pnpm migrate for database migrations, pnpm migrate:down for rollbacks, pnpm migrate:create for new migrations
Learning: Never lower code coverage thresholds; add tests to cover new code instead
Learning: Use pnpm install --frozen-lockfile to install dependencies with exact versions from lock files
Learning: Bot runtime uses Node.js 22+ with ESM JavaScript and discord.js v14
Learning: Bot AI features use Vercel AI SDK (ai, ai-sdk/anthropic)
Learning: Database is PostgreSQL 17+ with node-pg-migrate
Learning: Caching uses Redis 7+ (ioredis) with in-memory fallback when Redis is unavailable
Learning: Web dashboard uses Next.js 16, TypeScript, React 19, Tailwind CSS 4, and Radix UI
Learning: Use the web workspace for web dashboard development: pnpm --prefix web &lt;command&gt;
Learning: The web/ directory has its own package.json, pnpm-lock.yaml, and node_modules; install with pnpm --prefix web install
Learning: Biome linter includes scopes for src/**/*.js and web/src/**/*.{ts,tsx}
Learning: Read the project guidelines file (this file) in every session before making changes
📚 Learning: 2026-04-09T20:31:03.276Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-04-09T20:31:03.276Z
Learning: Applies to **/package.json : Use the latest version of ALL package.json dependencies; dependencies should always be upgraded to the latest version before committing
Applied to files:
package.json
📚 Learning: 2026-03-10T23:21:49.730Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-03-10T23:21:49.730Z
Learning: Applies to web/src/components/dashboard/config-workspace/**/*.{ts,tsx} : Web dashboard config editor should use category workspace navigation with reusable SettingsFeatureCard pattern (header + master toggle + Basic/Advanced blocks)
Applied to files:
web/src/components/dashboard/config-workspace/config-categories.ts
📚 Learning: 2026-03-12T02:03:36.493Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-03-12T02:03:36.493Z
Learning: Applies to config.json : All community features in `config.json` should be gated behind a `config.<feature>.enabled` flag; moderation commands are always available regardless of config settings
Applied to files:
web/src/types/config.ts
📚 Learning: 2026-04-09T20:31:03.276Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-04-09T20:31:03.276Z
Learning: Applies to **/*.{js,ts,jsx,tsx} : Community features should be gated behind config.<feature>.enabled; moderation commands are the exception
Applied to files:
web/src/types/config.ts
🪛 GitHub Check: SonarCloud Code Analysis
src/modules/triage.js
[failure] 491-491: Refactor this function to reduce its Cognitive Complexity from 48 to the 15 allowed.
[failure] 576-582: Unexpected await of a non-Promise (non-"Thenable") value.
🔇 Additional comments (7)
web/src/components/dashboard/config-workspace/config-categories.ts (1)
142-143: Good cleanup of obsolete triage search metadata. Line 142 and Line 143 correctly remove stale “streaming” references from debug search content, keeping search indexing aligned with current triage settings.
src/modules/triage.js (1)
576-582: SonarCloud "unexpected await of non-Promise" on `runClassification` is a false positive — safe to ignore.
`runClassification` is declared `async` at line 146 and returns a `Promise<{…}|null>`, so `await` is correct. If the SonarCloud noise keeps failing the check, consider annotating/configuring a suppression rather than changing the call site.

package.json (1)
36-38: LGTM — dependency versions match the current latest stable releases. Prior review feedback on bumping to `@ai-sdk/anthropic ^3.0.68` and `ai ^6.0.158` has been addressed.

web/src/types/config.ts (1)
237-242: LGTM — TriageConfig now mirrors the validation schema's nullable semantics. Prior feedback on making `statusReactions`, `dailyBudgetUsd`, `confidenceThreshold`, and `responseCooldownMs` `T | null` rather than just optional has been applied, and `debugFooterLevel` is likewise nullable.

config.json (1)
26-28: No changes needed. `minimax:MiniMax-M2.7` is a valid, current MiniMax model identifier, listed as their top text-generation model with a 204,800-token context window in their official API documentation.
17-34: No changes required — the default model string is correct. The `resolveModel()` function in `src/utils/aiClient.js` explicitly supports bare model names (without a `provider:` prefix) by defaulting them to the `'anthropic'` provider. The logic on lines 194–195 shows:

```js
const colonIdx = modelString.indexOf(':');
const providerName = colonIdx > 0 ? modelString.slice(0, colonIdx) : 'anthropic';
```

This design is documented in the header comments: "Bare model names default to Anthropic." The DEFAULTS in `src/modules/aiAutoMod.js` and the widespread usage of unprefixed model names throughout the test suite confirm this is intentional and working as designed. No routing failure or provider resolution error will occur.

> Likely an incorrect or invalid review comment.

src/api/utils/configValidation.js (1)
197-200: No action required. The `triage` section is already in `SAFE_CONFIG_KEYS`, which permits saves to all fields within that section, including the four new keys. The allowlist operates at the section level, not the individual field level; the four new triage fields do not need separate entries.

> Likely an incorrect or invalid review comment.
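The section-level allowlist behavior described in that comment can be illustrated with a small sketch. The helper name `isSavableKey` and the set contents here are hypothetical, not the actual `configAllowlist.js` code:

```javascript
// Hypothetical illustration of a section-level allowlist check.
// The real SAFE_CONFIG_KEYS in src/api/utils/configAllowlist.js has
// different entries, and 'isSavableKey' is not an existing export.
const SAFE_CONFIG_KEYS = new Set(['triage', 'welcome']);

// Only the top-level section name is checked, so every nested field
// under an allowed section (e.g. 'triage.dailyBudgetUsd') is savable
// without needing its own allowlist entry.
export function isSavableKey(path) {
  return SAFE_CONFIG_KEYS.has(path.split('.')[0]);
}
```

This is why only new top-level sections, not individual fields, need allowlist additions.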
```diff
  // Fetch historical context + recent messages in parallel
  const oldest = bufferSnapshot[0];
  const historyOptions = { limit };
  if (oldest) historyOptions.before = oldest.messageId;
- const historyFetched = await channel.messages.fetch(historyOptions);

  // Also fetch the most recent messages (catches replies that arrived after
  // the buffer started accumulating — fixes the "already being helped" blind spot)
  const RECENT_LIMIT = 5;
- const recentFetched = await channel.messages.fetch({ limit: RECENT_LIMIT });

+ const [historyFetched, recentFetched] = await Promise.all([
+   channel.messages.fetch(historyOptions),
+   channel.messages.fetch({ limit: RECENT_LIMIT }),
+ ]);
```
🧹 Nitpick | 🔵 Trivial
Redundant fetch when `bufferSnapshot` is empty.
When `oldest` is undefined, `historyOptions` has no `before`, so `channel.messages.fetch({ limit })` returns the most recent `limit` messages — which fully contains the second `fetch({ limit: RECENT_LIMIT })`. The parallel call still works (dedup handles it) but wastes a Discord API call on every cold-start/empty-buffer evaluation.
Consider skipping the recent fetch when `!oldest`:
♻️ Proposed change
- const [historyFetched, recentFetched] = await Promise.all([
- channel.messages.fetch(historyOptions),
- channel.messages.fetch({ limit: RECENT_LIMIT }),
- ]);
+ const fetches = [channel.messages.fetch(historyOptions)];
+ if (oldest) fetches.push(channel.messages.fetch({ limit: RECENT_LIMIT }));
+ const [historyFetched, recentFetched] = await Promise.all(fetches);

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/modules/triage-respond.js` around lines 79 - 88, When bufferSnapshot is
empty the code does two redundant Discord API calls because historyOptions (used
in channel.messages.fetch(historyOptions)) without a before returns the most
recent messages already covering the RECENT_LIMIT fetch; update the logic in
triage-respond.js to skip the second fetch when oldest is falsy: check
bufferSnapshot[0] (oldest) and if undefined only call
channel.messages.fetch(historyOptions) otherwise perform the Promise.all of both
channel.messages.fetch(historyOptions) and channel.messages.fetch({ limit:
RECENT_LIMIT }); this removes the unnecessary channel.messages.fetch({ limit:
RECENT_LIMIT }) API call on cold-start while keeping the parallel fetch when
oldest exists.
```js
const classifySeparator = resolved.classifyModel?.indexOf(':') ?? -1;
const respondSeparator = resolved.respondModel?.indexOf(':') ?? -1;
const classifyProvider = classifySeparator > 0
  ? resolved.classifyModel.slice(0, classifySeparator)
  : 'anthropic';
const respondProvider = respondSeparator > 0
  ? resolved.respondModel.slice(0, respondSeparator)
  : 'anthropic';
```
🧹 Nitpick | 🔵 Trivial
Duplicates provider-extraction logic from aiClient.resolveModel.
This same provider:model parsing (colon index > 0, default 'anthropic') is implemented in src/utils/aiClient.js::resolveModel (lines 161-171 per the provided snippet). Duplicating it here risks drift if the canonical rules ever change (e.g., allowing schemes, case handling, or additional defaults).
Prefer exporting a small helper from aiClient.js (e.g., parseProviderModel(modelString) returning { providerName, modelId }) and reusing it here, or — better — having resolveTriageConfig/the AI client surface the already-resolved classifyProvider/respondProvider on resolved so callers don't re-parse strings.
♻️ Proposed refactor
- const classifySeparator = resolved.classifyModel?.indexOf(':') ?? -1;
- const respondSeparator = resolved.respondModel?.indexOf(':') ?? -1;
- const classifyProvider = classifySeparator > 0
- ? resolved.classifyModel.slice(0, classifySeparator)
- : 'anthropic';
- const respondProvider = respondSeparator > 0
- ? resolved.respondModel.slice(0, respondSeparator)
- : 'anthropic';
+ const { parseProviderModel } = await import('../utils/aiClient.js');
+ const classifyProvider = parseProviderModel(resolved.classifyModel).providerName;
+ const respondProvider = parseProviderModel(resolved.respondModel).providerName;

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/modules/triage-respond.js` around lines 357 - 365, The
provider-extraction logic in triage-respond.js duplicates
aiClient.resolveModel's parsing (using classifySeparator/respondSeparator on
resolved.classifyModel/respondModel); replace this duplication by importing and
using a shared helper from src/utils/aiClient.js (e.g., export
parseProviderModel(modelString) that returns { providerName, modelId }) or by
having resolveTriageConfig/aiClient return classifyProvider/respondProvider on
the resolved object; update triage-respond.js to call
parseProviderModel(resolved.classifyModel) and
parseProviderModel(resolved.respondModel) (or read
resolved.classifyProvider/resolved.respondProvider) and remove
classifySeparator/respondSeparator logic to avoid drift.
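A minimal sketch of the suggested helper (it is not an existing export of `aiClient.js`; the name and return shape come from this review's proposal) that mirrors the parsing rules both call sites currently duplicate:

```javascript
// Sketch of the proposed shared helper: a colon at index > 0 splits
// provider from model, and a bare name (or a leading colon) falls
// back to the 'anthropic' provider, matching resolveModel's rules.
export function parseProviderModel(modelString) {
  const colonIdx = modelString.indexOf(':');
  if (colonIdx > 0) {
    return {
      providerName: modelString.slice(0, colonIdx),
      modelId: modelString.slice(colonIdx + 1),
    };
  }
  return { providerName: 'anthropic', modelId: modelString };
}
```

Centralizing this keeps the colon-index and default-provider rules in one place, so any future change (case handling, additional defaults) propagates to every call site.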
…on override

- biome.json: bump $schema to 2.4.12 to match `@biomejs/biome ^2.4.12` devDep
- package.json: drop the brace-expansion override; it forced ^2.0.2, which lacks the named `expand` export that minimatch@10.2.5 imports, breaking 32+ test files via node-pg-migrate -> glob -> minimatch. Natural pnpm resolution already picks post-CVE-2025-5889 versions across the dep tree (1.1.12, 2.1.0, 5.0.5).
- pnpm-lock.yaml: regenerated
- src/modules/triage-respond.js: biome auto-format (single-line ternary)
- tests/utils/aiCost.test.js: drop unused `key` destructuring in for-of
Summary
Migrates the bot's AI inference from shelling out to the Claude CLI subprocess to using the Vercel AI SDK in-process. This provides better control over API calls, proper cost tracking, and multi-provider support.
Key Changes
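One of the key changes listed below, the local pricing table with cache-aware cost tracking, might look roughly like this sketch. All pricing figures, field names, and the `estimateCostUsd` helper are illustrative assumptions, not the actual `model-pricing.json` or `aiCost.js` contents:

```javascript
// Illustrative cache-aware cost lookup; rates here are invented for
// the sketch, not the real model-pricing.json values.
const PRICING = {
  'minimax:MiniMax-M2.7': {
    inputPerMTok: 0.3, // USD per million uncached input tokens
    outputPerMTok: 1.2,
    cacheReadPerMTok: 0.03, // cached reads billed well below input
    cacheWritePerMTok: 0.375, // cache creation carries a surcharge
  },
};

// Returns null for unknown models so the caller can fall back to the
// token-costs library instead of silently reporting $0.
export function estimateCostUsd(model, usage) {
  const p = PRICING[model];
  if (!p) return null;
  return (
    (usage.inputTokens * p.inputPerMTok +
      usage.outputTokens * p.outputPerMTok +
      (usage.cacheReadInputTokens ?? 0) * p.cacheReadPerMTok +
      (usage.cacheCreationInputTokens ?? 0) * p.cacheWritePerMTok) /
    1_000_000
  );
}
```

Tracking `cacheCreationInputTokens` separately from `cacheReadInputTokens` matters because the two are billed at very different rates.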
- `@ai-sdk/anthropic` and `ai` packages
- `provider:model` strings (e.g., `minimax:MiniMax-M2.7`)
- `model-pricing.json` for offline cost lookups with cache-aware pricing

Performance
Known Issues
Test plan