
feat: migrate from Claude CLI subprocess to Vercel AI SDK#479

Open
AnExiledDev wants to merge 14 commits into main from feat/vercel-ai-sdk-migration

Conversation

@AnExiledDev
Contributor

Summary

Migrates the bot's AI inference from shelling out to the Claude CLI subprocess to using the Vercel AI SDK in-process. This provides better control over API calls, proper cost tracking, and multi-provider support.

Key Changes

  • Vercel AI SDK integration: Replace Claude CLI subprocess with @ai-sdk/anthropic and ai packages
  • Multi-provider support: Generic provider routing via provider:model strings (e.g., minimax:MiniMax-M2.7)
  • SDK lazy loading: Bot startup no longer blocked by ~6s SDK import overhead
  • Local cost calculation: model-pricing.json for offline cost lookups with cache-aware pricing
  • Timing instrumentation: Detailed timing breakdowns to identify latency sources
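
The multi-provider routing above keys off a provider:model string. As a minimal, hypothetical sketch of that split (the PR's actual parser lives in aiClient.js and may differ), the routing can look like:

```javascript
// Illustrative sketch of provider:model routing, not the PR's exact code.
// Strings without a provider prefix fall back to 'anthropic'.
function parseModelString(modelString, defaultProvider = 'anthropic') {
  const idx = modelString.indexOf(':');
  if (idx <= 0 || idx === modelString.length - 1) {
    return { provider: defaultProvider, model: modelString };
  }
  return {
    provider: modelString.slice(0, idx).toLowerCase(),
    model: modelString.slice(idx + 1),
  };
}
```

With this shape, `parseModelString('minimax:MiniMax-M2.7')` routes to the minimax provider, while a bare model ID keeps the Anthropic default.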

Performance

| Metric | Before | After |
| --- | --- | --- |
| Startup blocking | ~6s SDK import | 0ms (lazy loaded) |
| SDK preload | N/A | Background during Discord connect |
| Total response time | 28s | ~10s |

Known Issues

  • Response latency: 8-16s with MiniMax provider (API-side latency, not SDK)
    • Classifier: ~6-7s (complex decision logic in prompt)
    • Responder: ~2s (acceptable)
  • Further optimization needed on classifier prompt or model selection

Test plan

  • All existing tests pass
  • Manual testing with MiniMax provider
  • Timing logs confirm SDK lazy loading works
  • Cost calculation matches expected values

AnExiledDev and others added 4 commits April 10, 2026 03:24
Replace the CLI subprocess model (cli-process.js + anthropicClient.js) with
direct Vercel AI SDK calls via a new aiClient.js wrapper and aiCost.js for
automatic token cost tracking.

Key changes:
- Add generate() and stream() wrappers with retry logic (3-attempt exponential
  backoff), timeout/abort handling, and provider caching
- Automatic cost calculation via token-costs library with model ID normalization
- Split triage config: separate classifier (Haiku) and responder (Sonnet) models
  with per-role budgets, API keys, and base URLs
- Fix 19 issues found in migration review: CostClient race condition, cached
  token extraction, truncated classification confidence, thinkingTokens default,
  async onChunk safety, double-delete in automod, budget enforcement logging,
  timer cleanup, config validation gaps, and TypeScript type corrections
- Add 17 new tests covering retry logic, error classification, abort handling,
  race conditions, and all fixed edge cases
- Remove cli-process.js (22KB subprocess state machine) and anthropicClient.js
Any provider:model string now resolves automatically using
<PROVIDER>_API_KEY and <PROVIDER>_BASE_URL env vars, with
ANTHROPIC_API_KEY as fallback. Minimax and Codex work out of
the box — no provider-specific code needed.
- Add MiniMax local pricing table (token-costs doesn't cover it yet)
- Track cacheCreationInputTokens alongside cacheReadInputTokens
- Use authToken instead of apiKey for non-Anthropic providers
- Fix provider metadata extraction to check provider-specific key
  before falling back to anthropic
- Fix MiniMax base URL to include /v1 suffix
- Harden JSON fence stripping for providers that prepend whitespace
- Genericize error messages from "Anthropic" to "AI provider"
- Remove stale DISABLE_PROMPT_CACHING references
- Update tests for all of the above
- Convert aiClient.js to lazy-load @ai-sdk/anthropic and ai modules
- Add preloadSDK() for background pre-warming at startup
- Add detailed timing breakdowns to generate() and stream()
- Add timing instrumentation to triage classification and response
- Simplify aiCost.js to use local model-pricing.json lookup
- Update config to use MiniMax models and reduce contextMessages to 5
- Fix various test mocks for new SDK patterns

Performance improvements:
- Bot startup no longer blocked by ~6s SDK import
- SDK preloads in background while Discord connects
- Timing logs identify latency sources (API vs SDK vs Discord)

Known issue: Response latency of 8-16s with MiniMax provider
needs further optimization (API-side, not SDK).
Copilot AI review requested due to automatic review settings April 12, 2026 23:40
@github-project-automation github-project-automation bot moved this to Backlog in Volvox.Bot Apr 12, 2026
@railway-app

railway-app bot commented Apr 12, 2026

This PR was not deployed automatically as @AnExiledDev does not have access to the Railway project.

In order to get automatic PR deploys, please add @AnExiledDev to your workspace on Railway.

@mintlify

mintlify bot commented Apr 12, 2026

Preview deployment for your docs. Learn more about Mintlify Previews.

| Project | Status | Preview | Updated (UTC) |
| --- | --- | --- | --- |
| volvox | 🟢 Ready | View Preview | Apr 12, 2026, 11:41 PM |


@changeset-bot

changeset-bot bot commented Apr 12, 2026

⚠️ No Changeset found

Latest commit: f81c793

Merging this PR will not cause a version bump for any packages. If these changes should not result in a new version, you're good to go. If these changes should result in a version bump, you need to add a changeset.

This PR includes no changesets

When changesets are added to this PR, you'll see the packages that this PR includes changesets for and the associated semver types

Click here to learn what changesets are, and how to add one.

Click here if you're a maintainer who wants to add a changeset to this PR

@coderabbitai
Contributor

coderabbitai bot commented Apr 12, 2026

Warning

Rate limit exceeded

@AnExiledDev has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 5 minutes and 35 seconds before requesting another review.

Your organization is not enrolled in usage-based pricing. Contact your admin to enable usage-based pricing to continue reviews beyond the rate limit, or try again in 5 minutes and 35 seconds.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: 12bc1385-bc64-4b73-87f4-6c7568d540d2

📥 Commits

Reviewing files that changed from the base of the PR and between 838e8cb and f81c793.

⛔ Files ignored due to path filters (1)
  • pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (4)
  • biome.json
  • package.json
  • src/modules/triage-respond.js
  • tests/utils/aiCost.test.js
📝 Walkthrough

Walkthrough

This pull request migrates the application from using headless Claude CLI processes to the Vercel AI SDK with multi-provider support. It refactors the triage system, auto-moderation, and summarization features to use the new AI client, updates configuration schemas and environment setup, replaces error handling, and provides comprehensive test coverage for the new architecture.

Changes

Cohort / File(s) Summary
AI SDK Integration
src/utils/aiClient.js, src/utils/aiCost.js, src/data/model-pricing.json
New modules implementing Vercel AI SDK wrapper with multi-provider support, model resolution, caching, retry logic, timeout/abort handling, and cost calculation using local pricing data.
Configuration & Validation
src/api/utils/configValidation.js, src/modules/triage-config.js, .env.example, config.json, web/src/types/config.ts
Updated schemas and environment templates to support multiple AI providers via <PROVIDER>_API_KEY/<PROVIDER>_BASE_URL pattern, switched triage models to MiniMax, added budget/confidence/cooldown configuration fields, removed streaming/tokenRecycleLimit settings, and updated TypeScript interfaces.
Triage System Refactoring
src/modules/triage.js, src/modules/triage-parse.js, src/modules/triage-respond.js, src/prompts/triage-classify-system.md
Replaced long-lived CLI processes with per-evaluation AI SDK calls, added needsThinking/needsSearch signals to classification output, improved error handling with abort signals, added cost tracking and budget warnings, and enhanced response recovery with confidence scoring.
Error Handling Redesign
src/utils/errors.js, src/utils/debugFooter.js
Replaced CLIProcessError with AIClientError, updated error classification to recognize new AIClientError reasons (timeout, aborted, api, parse), refactored cost/duration extraction to use AI SDK response shape, and updated user-facing error messages to reference generic AI providers.
Feature Module Updates
src/modules/aiAutoMod.js, src/commands/tldr.js, src/modules/ai.js
Migrated auto-moderation and TLDR summarization from CLI subprocess spawning to direct AI SDK calls via generate(), updated response shape handling, removed subprocess lifecycle management, and updated module documentation.
Initialization & Startup
src/index.js, docker-compose.yml
Added SDK preload step during startup for connection warming, removed DISABLE_PROMPT_CACHING environment variable from Docker configuration.
Documentation & Frontend
.github/copilot-instructions.md, docs/test-coverage-analysis.md, web/src/components/dashboard/config-workspace/config-categories.ts
Updated technology stack references from Anthropic Claude SDK to Vercel AI SDK, updated test coverage documentation to reflect new error types, and removed streaming-related search keywords from frontend configuration UI.
Legacy Module Removal
src/modules/cli-process.js
Deleted entire CLI subprocess management module (AsyncQueue, CLIProcess, CLIProcessError classes) totaling 692 lines, as functionality is now handled by Vercel AI SDK wrapper.
Dependency Updates
package.json
Replaced @anthropic-ai/claude-code with @ai-sdk/anthropic, added ai package for Vercel AI SDK.
Test Suite Migration
tests/utils/aiClient.test.js, tests/utils/aiCost.test.js, tests/commands/tldr.test.js, tests/modules/aiAutoMod.test.js, tests/modules/triage-budget.test.js, tests/modules/triage-config.test.js, tests/modules/triage-parse.test.js, tests/modules/triage-respond.test.js, tests/modules/triage.coverage.test.js, tests/modules/triage.test.js, tests/utils/debugFooter.test.js, tests/utils/errors.test.js, tests/config.test.js
Replaced mocks of CLIProcess/CLIProcessError with mocks of aiClient.generate()/stream(), updated response shape expectations to match AI SDK format, added comprehensive coverage for new cost calculation and error handling paths, removed tests for deleted cli-process module, and updated provider/model/thinking signal assertions.
🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
| Check name | Status | Explanation |
| --- | --- | --- |
| Title check | ✅ Passed | The title accurately summarizes the main change: migrating from Claude CLI subprocess to Vercel AI SDK, which is the primary objective of the entire changeset. |
| Description check | ✅ Passed | The description is well-structured and directly related to the changeset, providing summary, key changes, performance metrics, and test plan—all aligned with the actual modifications across the codebase. |
| Docstring Coverage | ✅ Passed | Docstring coverage is 87.76%, which is sufficient. The required threshold is 80.00%. |



Comment thread src/utils/aiClient.js Fixed
Contributor

Copilot AI left a comment


Pull request overview

Note

Copilot was unable to run its full agentic suite in this review.

Migrates bot AI inference from a Claude CLI subprocess to the Vercel AI SDK, adding multi-provider routing, in-process cost tracking, and improved instrumentation.

Changes:

  • Replaced CLI subprocess inference with an in-process aiClient wrapper around Vercel AI SDK (ai, @ai-sdk/anthropic) including provider/model routing.
  • Added local token-based cost calculation via model-pricing.json and expanded timing instrumentation/logging.
  • Updated config schema/UI and refactored tests to match the new SDK result shapes and error handling.

Reviewed changes

Copilot reviewed 42 out of 43 changed files in this pull request and generated 6 comments.

Show a summary per file
File Description
web/src/types/config.ts Updates dashboard config types for new triage fields and optional prompts.
web/src/components/ui/discord-markdown-editor.tsx Minor robustness/formatting improvements in editor cursor handling and preview rendering.
web/src/components/dashboard/config-workspace/config-categories.ts Removes “streaming” from triage debug search metadata.
web/src/components/dashboard/config-categories/ai-automation.tsx Removes triage “Streaming” toggle from UI.
tests/utils/errors.test.js Extends error classification tests for SDK error shapes and AIClientError reasons.
tests/utils/debugFooter.test.js Updates debug footer stats extraction tests to new SDK result structure.
tests/utils/aiCost.test.js Adds tests for local pricing map + token cost calculation.
tests/utils/aiClient.test.js Adds comprehensive unit tests for the new aiClient wrapper (providers, retry, streaming, cost).
tests/modules/triage.test.js Refactors triage tests from CLI process mocks to aiClient generate/stream mocks.
tests/modules/triage.coverage.test.js Updates triage coverage tests for aiClient and AIClientError behavior.
tests/modules/triage-respond.test.js Adjusts triage-respond tests for updated extractStats signature and cost fields.
tests/modules/triage-parse.test.js Updates parser tests for .text/finishReason shape and code-fence trimming behavior.
tests/modules/triage-config.test.js Updates triage-config tests to reflect removal of tokenRecycleLimit.
tests/modules/triage-budget.test.js Refactors budget-gate tests for aiClient usage/cost shapes.
tests/modules/cli-process.test.js Removes CLI subprocess tests now that CLIProcess is removed.
tests/modules/aiAutoMod.test.js Moves auto-mod AI calls to aiClient and updates mocks/expectations.
tests/config.test.js Updates config.json structural expectations after triage config changes.
tests/commands/tldr.test.js Refactors TLDR command tests to use aiClient generate() instead of CLIProcess.
src/utils/errors.js Replaces CLIProcessError with AIClientError and updates classification/suggestions for provider-agnostic messaging.
src/utils/debugFooter.js Updates usage/cost extraction to SDK result shape and providerMetadata cache stats.
src/utils/anthropicClient.js Removes the shared Anthropic SDK singleton (no longer used).
src/utils/aiCost.js Adds local pricing-based cost computation utility.
src/utils/aiClient.js Adds Vercel AI SDK wrapper with lazy loading, provider routing, retries, streaming, and instrumentation.
src/prompts/triage-classify-system.md Extends classifier schema with needsThinking/needsSearch hints.
src/modules/triage.js Replaces CLIProcess orchestration with aiClient calls, adds warmConnection, budgets, and timing logs.
src/modules/triage-respond.js Parallelizes context fetching and passes provider name into extractStats.
src/modules/triage-parse.js Updates parsing to use SDK .text, improves code-fence stripping, and adds truncated-response recovery fields.
src/modules/triage-config.js Removes streaming/tokenRecycleLimit and changes thinkingTokens default behavior.
src/modules/cli-process.js Removes CLIProcess implementation now that inference is in-process via SDK.
src/modules/aiAutoMod.js Migrates moderation inference to aiClient generate(); fixes double-delete behavior when autoDelete is enabled.
src/modules/ai.js Updates module header comment to reflect Vercel AI SDK usage.
src/logger.js Updates sensitive-field filtering list (removes CLAUDE_CODE_OAUTH_TOKEN).
src/index.js Preloads Vercel AI SDK in background during startup.
src/data/model-pricing.json Adds offline per-model pricing table for cost estimation.
src/commands/tldr.js Migrates TLDR summarization from CLIProcess to aiClient generate() with error handling.
src/api/utils/configValidation.js Updates schema validation for new/removed triage fields.
package.json Replaces @anthropic-ai/sdk dependency usage with Vercel AI SDK packages.
docs/test-coverage-analysis.md Updates documentation to reflect aiClient/AIClientError terminology.
docker-compose.yml Removes DISABLE_PROMPT_CACHING env var from compose config.
config.json Updates default triage models and context message count for the new provider setup.
.github/copilot-instructions.md Updates contributor docs to reference Vercel AI SDK packages.
.env.example Removes Claude CLI OAuth env docs; adds provider env var conventions for multi-provider routing.
Files not reviewed (1)
  • pnpm-lock.yaml: Language not supported

Comment thread src/utils/aiClient.js Outdated
Comment on lines +417 to +452
try {
  const result = await withRetry(
    () =>
      streamText({
        model,
        system,
        prompt,
        ...(hasTools && { tools, stopWhen: stepCountIs(5) }),
        maxTokens,
        providerOptions,
        abortSignal: signal,
        onChunk: ({ chunk }) => {
          if (chunk.type === 'tool-call' && userOnChunk) {
            try {
              const cbResult = userOnChunk(chunk.toolName, chunk.args);
              if (cbResult?.catch)
                cbResult.catch((err) =>
                  logError('onChunk callback error', { error: err.message }),
                );
            } catch (err) {
              logError('onChunk callback error (sync)', { error: err.message });
            }
          }
        },
      }),
    { maxRetries: 3, signal },
  );
  timings.streamStarted = Date.now();

  // Await final results from the stream
  const text = await result.text;
  timings.streamComplete = Date.now();
  const usage = (await result.totalUsage) ?? (await result.usage) ?? {};
  const finishReason = await result.finishReason;
  const sources = (await result.sources) ?? [];
  const providerMetadata = (await result.providerMetadata) ?? {};

Copilot AI Apr 12, 2026


`withRetry()` only wraps the `streamText()` call, but most streaming failures (timeouts, network errors, API errors) surface when awaiting `result.text`, `result.usage`, and the other final promises. In that case retries will never trigger, even if `isRetryable()` would allow them. Wrap the entire streaming consumption (creating the stream plus awaiting the final promises) inside the retry function, or restructure so the retry function awaits the final stream output before returning.
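
One way to restructure, sketched generically (the real `withRetry`/`streamText` signatures are defined in the PR, so treat these shapes as assumptions):

```javascript
// Generic sketch: the retry loop wraps BOTH stream creation and consumption,
// so rejections surfaced while awaiting the stream body are retried too.
async function withRetry(fn, { maxRetries = 3 } = {}) {
  let lastErr;
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
    }
  }
  throw lastErr;
}

async function streamWithRetry(createStream) {
  return withRetry(async () => {
    const stream = createStream();
    const text = await stream.text; // a rejection here now triggers a retry
    return { text };
  });
}
```

The key design point is that `createStream()` is called inside the retried function, so each attempt gets a fresh stream rather than re-awaiting an already-failed one.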

Copilot uses AI. Check for mistakes.
Comment thread src/utils/aiClient.js
Comment on lines +169 to +186
const apiKey =
  overrides.apiKey ||
  (isAnthropic ? process.env.ANTHROPIC_API_KEY : undefined) ||
  process.env[`${envPrefix}_API_KEY`] ||
  process.env.ANTHROPIC_API_KEY;

const baseUrl =
  overrides.baseUrl ||
  (isAnthropic ? undefined : process.env[`${envPrefix}_BASE_URL`]) ||
  KNOWN_BASE_URLS[providerName];

providerCache.set(
  cacheKey,
  createAnthropic({
    ...(isAnthropic ? { apiKey } : { authToken: apiKey }),
    ...(baseUrl && { baseURL: baseUrl }),
  }),
);

Copilot AI Apr 12, 2026


This hard-codes Anthropic auth to { apiKey } only. If any deployments previously used OAuth-style tokens (or a future auth mode requiring bearer auth), passing them via apiKey is likely to fail. A concrete compatibility improvement is to detect token type (e.g., known OAuth prefixes) and pass { authToken } for those cases, while keeping { apiKey } for standard API keys.
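
A sketch of that suggestion; the 'sk-ant-oat' prefix check below is an assumption about OAuth token shape, not a documented contract, so verify it against your actual token formats:

```javascript
// Hypothetical token-type detection for choosing between apiKey and
// authToken auth modes. The prefix is an assumption, not a guaranteed API.
function buildAuthOptions(token) {
  const looksLikeOAuth = typeof token === 'string' && token.startsWith('sk-ant-oat');
  return looksLikeOAuth ? { authToken: token } : { apiKey: token };
}
```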

Comment thread src/logger.js
const SENSITIVE_FIELDS = [
  'DISCORD_TOKEN',
  'ANTHROPIC_API_KEY',
  'ANTHROPIC_AUTH_TOKEN',

Copilot AI Apr 12, 2026


Removing CLAUDE_CODE_OAUTH_TOKEN from the sensitive-field redaction list can leak it if it remains set in existing deployments or logs (even if no longer used by the app). Keeping it in SENSITIVE_FIELDS is a low-cost defense-in-depth change that prevents accidental secret exposure.

Suggested change
-  'ANTHROPIC_AUTH_TOKEN',
+  'ANTHROPIC_AUTH_TOKEN',
+  'CLAUDE_CODE_OAUTH_TOKEN',

Comment thread src/modules/triage.js
Comment on lines +376 to +380
debug('runResponder timing', {
  channelId,
  promptBuildMs: timings.start, // already logged separately
  streamApiMs: timings.streamDone - timings.start,
});

Copilot AI Apr 12, 2026


promptBuildMs is logged as an absolute timestamp (timings.start) rather than a duration, so the emitted metric is incorrect. Either remove this field or log a real duration (e.g., time spent building prompt or time from start to stream init).
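
A sketch of the duration-based version (the `queued` mark is an assumed timestamp taken when the responder begins; field names otherwise follow the snippet above):

```javascript
// Compute real durations from recorded timestamps instead of logging an
// absolute Date.now() value as promptBuildMs. 'queued' is an assumed mark
// captured at responder entry, before prompt building starts.
function computeResponderTimings(timings) {
  return {
    promptBuildMs: timings.start - timings.queued, // duration, not a timestamp
    streamApiMs: timings.streamDone - timings.start,
  };
}
```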

Comment thread src/utils/aiCost.js
Comment on lines +22 to +36
* The Anthropic API uses hyphen-separated version numbers and optional date
* suffixes (e.g. 'claude-haiku-4-5-20250514'), while the pricing JSON uses
* short names (e.g. 'claude-haiku-4-5').
*
* @param {string} modelId - API model ID
* @returns {string} Normalised model ID for pricing lookup
*/
function normaliseModelId(modelId) {
// Strip date suffix (e.g. '-20250514', '-20250929')
const stripped = modelId.replace(/-\d{8}$/, '');

// Family-last: 'claude-sonnet-4-6' → 'claude-sonnet-4.6' (for dotted input)
// Not needed for our JSON keys (which use hyphens), but keeps compat if
// someone passes a dotted version like 'claude-sonnet-4.6'.
return stripped;

Copilot AI Apr 12, 2026


The comment mentions converting hyphenated versions to dotted versions, but the function currently only strips a date suffix and returns stripped unchanged. Update the comment to reflect the actual behavior (date suffix stripping only), or implement the described dotted-version normalization so the comment stays accurate.

Suggested change
- * The Anthropic API uses hyphen-separated version numbers and optional date
- * suffixes (e.g. 'claude-haiku-4-5-20250514'), while the pricing JSON uses
- * short names (e.g. 'claude-haiku-4-5').
- *
- * @param {string} modelId - API model ID
- * @returns {string} Normalised model ID for pricing lookup
- */
-function normaliseModelId(modelId) {
-  // Strip date suffix (e.g. '-20250514', '-20250929')
-  const stripped = modelId.replace(/-\d{8}$/, '');
-  // Family-last: 'claude-sonnet-4-6' → 'claude-sonnet-4.6' (for dotted input)
-  // Not needed for our JSON keys (which use hyphens), but keeps compat if
-  // someone passes a dotted version like 'claude-sonnet-4.6'.
-  return stripped;
+ * The Anthropic API may append an optional date suffix to model IDs
+ * (e.g. 'claude-haiku-4-5-20250514'), while the pricing JSON uses the base
+ * model name without that suffix (e.g. 'claude-haiku-4-5').
+ *
+ * @param {string} modelId - API model ID
+ * @returns {string} Model ID with any trailing date suffix removed
+ */
+function normaliseModelId(modelId) {
+  // Strip trailing date suffix (e.g. '-20250514', '-20250929').
+  return modelId.replace(/-\d{8}$/, '');

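To complement the normalisation discussion, a hedged sketch of the cache-aware cost lookup described in the PR summary. The pricing values, table shape, and usage field names below are assumptions for illustration, not the repository's actual model-pricing.json:

```javascript
// Illustrative cache-aware cost calculation against a local pricing table.
// Prices are USD per 1M tokens; the haiku figures are placeholders.
const PRICING = {
  'claude-haiku-4-5': { input: 1.0, output: 5.0, cacheRead: 0.1, cacheWrite: 1.25 },
};

function calculateCost(modelId, usage = {}) {
  const pricing = PRICING[modelId.replace(/-\d{8}$/, '')]; // strip date suffix
  if (!pricing) return 0; // unknown model: report zero rather than throw
  const cost = (count, perMillion) => ((count ?? 0) / 1e6) * perMillion;
  return (
    cost(usage.inputTokens, pricing.input) +
    cost(usage.outputTokens, pricing.output) +
    cost(usage.cacheReadInputTokens, pricing.cacheRead) +
    cost(usage.cacheCreationInputTokens, pricing.cacheWrite)
  );
}
```

Returning zero for unknown models is one possible fallback choice; logging a warning at that point would make missing pricing entries visible.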
Comment thread src/modules/triage-respond.js Outdated
Comment on lines +353 to +357
const classifyProvider = resolved.classifyModel?.split(':')[1]
  ? resolved.classifyModel.split(':')[0]
  : 'anthropic';
const respondProvider = resolved.respondModel?.split(':')[1]
  ? resolved.respondModel.split(':')[0]

Copilot AI Apr 12, 2026


This parses providers by calling split(':') multiple times and relying on [1] truthiness. Consider parsing once (e.g., via indexOf(':') or a single split) to avoid duplicated work and make the intent clearer/less error-prone, especially if model IDs ever contain additional colons or if you extend the model-string grammar.

Suggested change
-const classifyProvider = resolved.classifyModel?.split(':')[1]
-  ? resolved.classifyModel.split(':')[0]
-  : 'anthropic';
-const respondProvider = resolved.respondModel?.split(':')[1]
-  ? resolved.respondModel.split(':')[0]
+const classifyModel = resolved.classifyModel;
+const classifySeparatorIndex = classifyModel?.indexOf(':') ?? -1;
+const classifyProvider =
+  classifySeparatorIndex >= 0 && classifySeparatorIndex < classifyModel.length - 1
+    ? classifyModel.slice(0, classifySeparatorIndex)
+    : 'anthropic';
+const respondModel = resolved.respondModel;
+const respondSeparatorIndex = respondModel?.indexOf(':') ?? -1;
+const respondProvider =
+  respondSeparatorIndex >= 0 && respondSeparatorIndex < respondModel.length - 1
+    ? respondModel.slice(0, respondSeparatorIndex)

@greptile-apps

greptile-apps bot commented Apr 12, 2026

Greptile Summary

Replaces the Claude CLI subprocess (CLIProcess) with an in-process Vercel AI SDK integration, enabling multi-provider routing via provider:model strings, lazy SDK loading (eliminating the ~6s startup block), and local cost calculation from model-pricing.json. The migration is thorough — all call sites updated, tests rewritten, and CLIProcessError renamed to AIClientError — with two minor instrumentation issues in aiClient.js worth cleaning up before the timing logs are relied on in production.

Confidence Score: 5/5

Safe to merge — no P0/P1 issues; both findings are observability-only P2s in the timing instrumentation

All functional paths are correct: provider routing, retry logic, abort propagation, cost calculation, and error classification all work as intended. The two flagged issues are purely about debug log accuracy (streamConsumeMs always 0) and error recovery in the unlikely event of a module import failure. Neither affects production correctness.

src/utils/aiClient.js — stream() timing marks and SDK load promise reset

Important Files Changed

Filename Overview
src/utils/aiClient.js New core AI client wrapper — well-structured with lazy loading, provider caching, retry logic, and abort support; stream() timing marks are both set after full stream consumption, making streamConsumeMs always ~0ms; rejected SDK load promise is never reset
src/utils/aiCost.js New offline cost calculator using model-pricing.json; correctly handles cache-aware pricing and date-suffixed model ID normalization
src/modules/triage.js Major refactor from CLIProcess to per-call generate()/stream() SDK calls; adds abortSignal propagation, timing instrumentation, and budget-overage warnings; logic is sound
src/modules/aiAutoMod.js Migrated from CLIProcess to generate(); generate() call is correctly wrapped in the existing try/catch in checkAiAutoMod()
src/data/model-pricing.json New static pricing data for Anthropic, OpenAI, and MiniMax models; well-commented with correct USD-per-1M-token format
src/utils/debugFooter.js Updated extractStats() to read from AI SDK result shape (camelCase usage fields, providerMetadata for cache tokens); old multi-key fallback logic correctly removed
src/utils/errors.js CLIProcessError renamed to AIClientError with updated reason enum; classifyError() updated to handle AIClientError.statusCode (not .status)
src/modules/triage-parse.js Updated to consume sdkResult.text instead of sdkMessage.result; adds whitespace trimming before fence stripping and normalises needsThinking/needsSearch defaults
tests/utils/aiClient.test.js New comprehensive test suite covering generate(), stream(), withRetry, provider routing, credential handling, and onChunk callbacks
tests/utils/aiCost.test.js New tests covering cost calculation, cache-aware pricing, model normalization, and unknown-model fallback

Flowchart

%%{init: {'theme': 'neutral'}}%%
flowchart TD
    A[Discord Message] --> B[triage.js: evaluateNow]
    B --> C[runClassification]
    C --> D[generate — aiClient.js]
    D --> E{resolveModel}
    E -->|provider:model string| F{Provider?}
    F -->|anthropic| G[createAnthropic\napiKey / authToken]
    F -->|minimax| H[createAnthropic\nauthToken + baseURL]
    F -->|other| I[createAnthropic\nENV: PROVIDER_API_KEY\n+ PROVIDER_BASE_URL]
    G & H & I --> J[generateText — Vercel AI SDK]
    J --> K[parseClassifyResult]
    K -->|ignore| L[Done — no response]
    K -->|respond| M[runResponder]
    M --> N[stream — aiClient.js]
    N --> O[streamText — Vercel AI SDK]
    O -->|onChunk: tool-call| P[onChunk callback\nsearchCount++\nDiscord reaction]
    O --> Q[Await .text / .usage / .providerMetadata]
    Q --> R[calculateCost — aiCost.js\nmodel-pricing.json lookup]
    R --> S[parseRespondResult]
    S --> T[Send to Discord]
Prompt To Fix All With AI
This is a comment left during a code review.
Path: src/utils/aiClient.js
Line: 499-500

Comment:
**Stream timing marks set after stream is fully consumed**

Both `timings.streamStarted` and `timings.streamComplete` are assigned `Date.now()` back-to-back _after_ `withRetry` resolves — but the full stream (all awaited promises: `.text`, `.usage`, etc.) finishes _inside_ `withRetry`. As a result `streamConsumeMs` is always ~0ms and `streamInitMs` absorbs the entire API + stream time, defeating the purpose of the split.

To get meaningful numbers, capture the timestamps inside the `withRetry` callback, wrapping the `await streamResult.text` call.

How can I resolve this? If you propose a fix, please make it concise.
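
One concise resolution, sketched with assumed names: take both marks inside the retried function, bracketing the actual consumption of the stream, so streamConsumeMs measures real work.

```javascript
// Sketch: the timing marks bracket actual stream consumption instead of
// being set back-to-back after everything has already finished.
async function timedStream(createStream, timings) {
  const stream = createStream();
  timings.streamStarted = Date.now();  // consumption begins here
  const text = await stream.text;      // awaiting drains the stream
  timings.streamComplete = Date.now(); // streamConsumeMs is now meaningful
  return text;
}
```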

---

This is a comment left during a code review.
Path: src/utils/aiClient.js
Line: 39-52

Comment:
**Rejected SDK load promise is never reset**

If either `import('@ai-sdk/anthropic')` or `import('ai')` rejects (e.g. missing package in a misconfigured environment), `_sdkLoadPromise` is set to a permanently rejected promise. Because the guard `if (!_sdkLoadPromise)` is never re-entered after a failure, every subsequent call to `getSDK()` returns the same rejected promise with no retry. The bot can only recover with a process restart.

Adding `.catch((err) => { _sdkLoadPromise = null; throw err; })` to the promise chain would allow callers to retry after transient load errors.

How can I resolve this? If you propose a fix, please make it concise.
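
A concise sketch of that fix, with a generic loader injected so the reset pattern is visible (the real code imports @ai-sdk/anthropic and ai directly):

```javascript
// Cache the in-flight load, but clear the cache on rejection so a later
// call can retry instead of receiving the same rejected promise forever.
let _sdkLoadPromise = null;
function getSDK(loader) {
  if (!_sdkLoadPromise) {
    _sdkLoadPromise = loader().catch((err) => {
      _sdkLoadPromise = null; // allow retry after a transient failure
      throw err;
    });
  }
  return _sdkLoadPromise;
}
```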

Reviews (5): Last reviewed commit: "fix(ci): unblock CI after merge — schema..."

Comment thread src/modules/aiAutoMod.js
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 13

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (4)
src/modules/aiAutoMod.js (1)

212-214: ⚠️ Potential issue | 🔴 Critical

autoDelete currently flips the delete semantics.

The unconditional delete on Lines 212-214 runs before the action switch, so warn, timeout, kick, and ban all remove the source message whenever autoDelete is true. Then the delete branch on Lines 285-288 is inverted and still deletes when autoDelete is false. That effectively breaks the flag and also removes the evidence before sendFlagEmbed() runs.

🛠️ Minimal fix
-  if (autoModConfig.autoDelete) {
-    await message.delete().catch(() => {});
-  }
-
   switch (result.action) {
@@
     case 'delete':
-      if (!autoModConfig.autoDelete) {
+      if (autoModConfig.autoDelete) {
         await message.delete().catch(() => {});
       }
       break;

Also applies to: 285-288

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/modules/aiAutoMod.js` around lines 212 - 214, Remove the unconditional
message.delete() call and instead perform deletions inside the action switch:
delete the message unconditionally in the 'delete' action handler (use
message.delete().catch(() => {})), and for other actions (warn, timeout, kick,
ban) only call message.delete() if autoModConfig.autoDelete is true; also fix
the inverted logic in the existing 'delete' branch so it does not delete when
autoModConfig.autoDelete is false. Update references in the function handling
the mod action (look for autoModConfig.autoDelete, message.delete(), and
sendFlagEmbed()) so deletion happens in the correct branches.
src/modules/triage-respond.js (1)

79-103: ⚠️ Potential issue | 🟠 Major

Keep fetchChannelContext() within the caller's limit.

This path can now return up to limit + RECENT_LIMIT unique messages, so a configured cap like 15 can silently become 20. It also does a duplicate latest-message fetch when bufferSnapshot is empty. That inflates prompt size/cost and adds avoidable Discord API traffic.

♻️ Proposed fix
-    const oldest = bufferSnapshot[0];
-    const historyOptions = { limit };
-    if (oldest) historyOptions.before = oldest.messageId;
-    const RECENT_LIMIT = 5;
+    const oldest = bufferSnapshot[0];
+    const RECENT_LIMIT = 5;
+    const recentLimit = oldest ? Math.min(RECENT_LIMIT, limit) : 0;
+    const historyOptions = { limit: Math.max(limit - recentLimit, 0) };
+    if (oldest) historyOptions.before = oldest.messageId;
 
     const [historyFetched, recentFetched] = await Promise.all([
-      channel.messages.fetch(historyOptions),
-      channel.messages.fetch({ limit: RECENT_LIMIT }),
+      historyOptions.limit > 0 ? channel.messages.fetch(historyOptions) : Promise.resolve(new Map()),
+      recentLimit > 0 ? channel.messages.fetch({ limit: recentLimit }) : Promise.resolve(new Map()),
     ]);
@@
     return merged
       .sort((a, b) => a.createdTimestamp - b.createdTimestamp) // chronological
+      .slice(-limit)
       .map(toContextEntry);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/modules/triage-respond.js` around lines 79 - 103, fetchChannelContext
currently can return up to limit + RECENT_LIMIT and duplicates the latest fetch
when bufferSnapshot is empty; fix it by (1) computing recentFetchLimit =
Math.min(RECENT_LIMIT, Math.max(0, limit)) and if there is no oldest
(bufferSnapshot empty) set recentFetchLimit = 0 so you don't re-fetch the same
messages, (2) pass recentFetchLimit to channel.messages.fetch for the
recentFetched call instead of a fixed RECENT_LIMIT, and (3) after building
merged (using historyFetched, recentFetched, seen, bufferIds, toContextEntry)
enforce the caller limit by returning merged.slice(-limit) so the newest messages are kept. Ensure you
reference historyOptions, RECENT_LIMIT, bufferSnapshot, historyFetched,
recentFetched, merged, and toContextEntry when making these changes.
src/modules/triage.js (1)

307-315: ⚠️ Potential issue | 🟡 Minor

promptBuildMs is logging an epoch timestamp, not a duration.

timings.start will produce a huge absolute value here, so the new latency breakdown is misleading. Capture a prompt-built timestamp or drop the field.

Also applies to: 376-380
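The duration fix described above, as a minimal sketch — `buildRespondPrompt` and the `timings` shape are placeholders, not the real `triage.js` symbols:

```javascript
function buildPromptWithTiming(buildRespondPrompt, timings) {
  const start = Date.now();
  const prompt = buildRespondPrompt();
  // Log an elapsed duration in ms, not the raw epoch timestamp.
  timings.promptBuildMs = Date.now() - start;
  return prompt;
}
```

Logging `Date.now() - start` yields a small millisecond duration; logging `timings.start` itself yields a 13-digit epoch value that swamps the latency breakdown.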

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/modules/triage.js` around lines 307 - 315, The timings.start currently
stores Date.now() and is being logged as promptBuildMs (an epoch) instead of a
duration; change the timing measurement around buildRespondPrompt (and the
similar block at the later lines) to record a high-level start timestamp before
calling buildRespondPrompt and then compute promptBuildMs as Date.now() - start
(i.e., a duration in milliseconds) before adding it to the timings object, or
alternatively remove the field; update references to timings.start/promptBuildMs
so promptBuildMs is a computed duration for buildRespondPrompt (refer to
timings.start, buildRespondPrompt, respondPrompt) and do the same for the other
occurrence around the later block.
tests/modules/triage.test.js (1)

1178-1203: 🧹 Nitpick | 🔵 Trivial

Assert the responder call options, not just the reaction side effects.

These tests can still pass if runResponder() stops forwarding needsSearch / needsThinking into stream(), because the mock manually triggers onChunk() and the emoji state is derived from the same classifier flag. Add direct assertions on mockStream.mock.calls[0][0].tools and .thinking.

Also applies to: 1256-1276

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/modules/triage.test.js` around lines 1178 - 1203, The test currently
only asserts mockReact side-effects but must also assert the responder was
invoked with correct stream options; update the test case(s) that simulate
streaming (the one using mockStream and mockGenerate around the 'web_search'
onChunk and the similar case at lines 1256-1276) to assert
mockStream.mock.calls[0][0].tools includes the expected 'web_search' tool and
that mockStream.mock.calls[0][0].thinking matches the classifier intent (e.g.,
true for needsThinking or includes the proper flag for needsSearch); reference
the runResponder/stream invocation by checking mockStream's first call options
rather than relying solely on mockReact.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.env.example:
- Around line 27-42: Update .env.example to match the new default triage model
in config.json by making MINIMAX_API_KEY (and optionally MINIMAX_BASE_URL)
documented as required for out-of-box triage, or revert config.json defaults to
use ANTHROPIC_API_KEY; specifically align the .env example with the config.json
keys triage.classifyModel and triage.respondModel by either adding
MINIMAX_API_KEY as a required credential alongside ANTHROPIC_API_KEY or changing
the defaults in config.json away from "minimax:MiniMax-M2.7" so the existing
ANTHROPIC_API_KEY remains the required out-of-box provider.

In @.github/copilot-instructions.md:
- Line 31: Update the terminology in the test-mocking guidance by replacing the
phrase "Claude SDK" with "Vercel AI SDK" (or "Vercel AI SDK (`ai`,
`@ai-sdk/anthropic`)") so it matches the header "AI features | Vercel AI SDK
(`ai`, `@ai-sdk/anthropic`)" and any examples or code snippets that reference
Claude; ensure mock setup names, import examples, and README/test comments
consistently use "Vercel AI SDK" and the correct package identifiers instead of
"Claude SDK".

In `@docs/test-coverage-analysis.md`:
- Around line 96-100: Update the outdated phrase "CLI timeout/parse failure
tests" to use AI client terminology (e.g., "AI client timeout/parse failure
tests" or "AI Client timeout/parse failure tests") so it matches the renamed
section "AI Client Timeout and Parse Failures — MEDIUM PRIORITY"; search for the
exact string "CLI timeout/parse failure tests" in the document and replace it
with the consistent AI client wording to keep section names and recommendations
aligned.

In `@package.json`:
- Around line 36-38: Update the dependency versions for "@ai-sdk/anthropic" and
"ai" in package.json to the latest stable releases by changing
"@ai-sdk/anthropic" from ^3.0.66 to ^3.0.68 and "ai" from ^6.0.146 to ^6.0.158;
verify any engine/CI Node.js requirement is >=18 (project targets Node 22+), run
npm install (or yarn) and the test suite to confirm no regressions after
updating these packages.

In `@src/modules/triage-config.js`:
- Around line 19-23: The current selection logic for respondModel uses || which
treats an explicit empty string as missing and silently falls back to the
legacy/default; change the operator back to nullish coalescing so an explicit
respondModel: "" remains surfaced (i.e., use respondModel ?? fallback) and add
or ensure a config validation step rejects empty-string values for
triageConfig.respondModel/triageConfig.model so bad configs are surfaced rather
than routing traffic to triageConfig.models?.default ('claude-sonnet-4-6').

In `@src/modules/triage.js`:
- Around line 348-351: The WebSearch notification currently calls
evalClient.channels.fetch(channelId) directly, bypassing the shared cache;
replace that fetch with the cache helper fetchChannelCached(channelId,
evalClient) (or the existing fetchChannelCached usage pattern in this file) so
the channel lookup uses src/utils/discordCache.js, then call safeSend on the
returned channel as before; update the surrounding try/catch to handle null from
fetchChannelCached the same way you handled the previous fetch.

In `@src/utils/aiClient.js`:
- Around line 366-371: The catch blocks in generate() and stream() currently map
any aborted signal to a 'timeout' AIClientError, hiding caller-initiated
cancellations; update both catch handlers to check externalSignal?.aborted and,
if true, throw new AIClientError('Request was cancelled', 'aborted') (or
similar) while keeping the existing 'timeout' error when the internal signal
fired (i.e., signal.aborted && !externalSignal?.aborted); preserve existing
behavior for other errors (rethrow AIClientError and wrap others with 'api'
including statusCode).
- Around line 249-267: createAbortController currently ignores a pre-aborted
externalSignal and later code (generate and stream) cannot tell whether an abort
came from the caller or the timeout; fix createAbortController to immediately
abort the local controller if externalSignal && externalSignal.aborted (and
clear the timer), only attach externalSignal.addEventListener if it wasn't
already aborted, and keep the existing cleanup to remove the listener and
clearTimeout (refer to createAbortController, timer, onExternalAbort). Then
update generate() and stream() to distinguish abort reasons: detect if abort was
caused by the externalSignal (externalSignal?.aborted or
controller.signal.reason tied to externalSignal) and throw AIClientError(...,
'aborted') for caller cancellations, but throw AIClientError(..., 'timeout')
when the timer fired; use the same controller/signal returned by
createAbortController to determine which condition occurred.

In `@src/utils/errors.js`:
- Around line 223-224: Update the message for ErrorType.API_SERVER_ERROR in
src/utils/errors.js to remove the stale "restart the AI service" guidance and
instead instruct operators to check provider status and outbound connectivity;
locate the ErrorType.API_SERVER_ERROR entry and replace its remediation string
with a concise note suggesting provider status pages, upstream provider errors,
and network/outbound connectivity checks.

In `@tests/modules/triage-parse.test.js`:
- Around line 135-149: The test's fixture for parseClassifyResult omits the new
boolean flags so it won't catch regressions; update the sdkMessage payload in
the 'should return parsed classification on success' test to include
needsThinking: true and needsSearch: true, then add assertions that
result.needsThinking === true and result.needsSearch === true so
parseClassifyResult (and downstream triage.js usage) preserves those fields;
locate the test exercising parseClassifyResult and modify the sdkMessage and
expectations accordingly.

In `@tests/utils/aiClient.test.js`:
- Around line 541-562: Rename the test case title to accurately describe its
behavior: change the it(...) description for the test that calls generate({
model: 'claude-haiku-4-5', prompt: 'cancelled', abortSignal: controller.signal
}) and asserts the caught error is an AIClientError with reason 'timeout' so it
reads something like "should treat external abort as timeout" (reference
symbols: generate, AIClientError, mockGenerateText).

In `@tests/utils/errors.test.js`:
- Around line 345-363: The tests use CommonJS require() to load AIClientError
which breaks in an ESM test file; add a single ESM import for AIClientError at
the top of the test file (alongside existing imports) and remove the three
require() calls inside the tests so the specs use the imported AIClientError
when constructing error instances for classifyError and ErrorType assertions.

In `@web/src/types/config.ts`:
- Around line 239-242: The front-end type for CONFIG_SCHEMA.triage must accept
nulls: update the properties statusReactions, dailyBudgetUsd,
confidenceThreshold, and responseCooldownMs in web/src/types/config.ts so they
allow null values (e.g., boolean|null and number|null) in addition to being
optional, ensuring API payloads that send null are correctly typed; keep the
existing optionality and reference CONFIG_SCHEMA.triage and those exact property
names when making the change.

---

Outside diff comments:
In `@src/modules/aiAutoMod.js`:
- Around line 212-214: Remove the unconditional message.delete() call and
instead perform deletions inside the action switch: delete the message
unconditionally in the 'delete' action handler (use message.delete().catch(() =>
{})), and for other actions (warn, timeout, kick, ban) only call
message.delete() if autoModConfig.autoDelete is true; also fix the inverted
logic in the existing 'delete' branch so it does not delete when
autoModConfig.autoDelete is false. Update references in the function handling
the mod action (look for autoModConfig.autoDelete, message.delete(), and
sendFlagEmbed()) so deletion happens in the correct branches.

In `@src/modules/triage-respond.js`:
- Around line 79-103: fetchChannelContext currently can return up to limit +
RECENT_LIMIT and duplicates the latest fetch when bufferSnapshot is empty; fix
it by (1) computing recentFetchLimit = Math.min(RECENT_LIMIT, Math.max(0,
limit)) and if there is no oldest (bufferSnapshot empty) set recentFetchLimit =
0 so you don't re-fetch the same messages, (2) pass recentFetchLimit to
channel.messages.fetch for the recentFetched call instead of a fixed
RECENT_LIMIT, and (3) after building merged (using historyFetched,
recentFetched, seen, bufferIds, toContextEntry) enforce the caller limit by
returning merged.slice(-limit) so the newest messages are kept. Ensure you reference historyOptions,
RECENT_LIMIT, bufferSnapshot, historyFetched, recentFetched, merged, and
toContextEntry when making these changes.

In `@src/modules/triage.js`:
- Around line 307-315: The timings.start currently stores Date.now() and is
being logged as promptBuildMs (an epoch) instead of a duration; change the
timing measurement around buildRespondPrompt (and the similar block at the later
lines) to record a high-level start timestamp before calling buildRespondPrompt
and then compute promptBuildMs as Date.now() - start (i.e., a duration in
milliseconds) before adding it to the timings object, or alternatively remove
the field; update references to timings.start/promptBuildMs so promptBuildMs is
a computed duration for buildRespondPrompt (refer to timings.start,
buildRespondPrompt, respondPrompt) and do the same for the other occurrence
around the later block.

In `@tests/modules/triage.test.js`:
- Around line 1178-1203: The test currently only asserts mockReact side-effects
but must also assert the responder was invoked with correct stream options;
update the test case(s) that simulate streaming (the one using mockStream and
mockGenerate around the 'web_search' onChunk and the similar case at lines
1256-1276) to assert mockStream.mock.calls[0][0].tools includes the expected
'web_search' tool and that mockStream.mock.calls[0][0].thinking matches the
classifier intent (e.g., true for needsThinking or includes the proper flag for
needsSearch); reference the runResponder/stream invocation by checking
mockStream's first call options rather than relying solely on mockReact.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: 467d03fe-1f3f-4910-a4b7-80df94c78445

📥 Commits

Reviewing files that changed from the base of the PR and between 9a7cb1a and b5d96d5.

⛔ Files ignored due to path filters (1)
  • pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (42)
  • .env.example
  • .github/copilot-instructions.md
  • config.json
  • docker-compose.yml
  • docs/test-coverage-analysis.md
  • package.json
  • src/api/utils/configValidation.js
  • src/commands/tldr.js
  • src/data/model-pricing.json
  • src/index.js
  • src/logger.js
  • src/modules/ai.js
  • src/modules/aiAutoMod.js
  • src/modules/cli-process.js
  • src/modules/triage-config.js
  • src/modules/triage-parse.js
  • src/modules/triage-respond.js
  • src/modules/triage.js
  • src/prompts/triage-classify-system.md
  • src/utils/aiClient.js
  • src/utils/aiCost.js
  • src/utils/anthropicClient.js
  • src/utils/debugFooter.js
  • src/utils/errors.js
  • tests/commands/tldr.test.js
  • tests/config.test.js
  • tests/modules/aiAutoMod.test.js
  • tests/modules/cli-process.test.js
  • tests/modules/triage-budget.test.js
  • tests/modules/triage-config.test.js
  • tests/modules/triage-parse.test.js
  • tests/modules/triage-respond.test.js
  • tests/modules/triage.coverage.test.js
  • tests/modules/triage.test.js
  • tests/utils/aiClient.test.js
  • tests/utils/aiCost.test.js
  • tests/utils/debugFooter.test.js
  • tests/utils/errors.test.js
  • web/src/components/dashboard/config-categories/ai-automation.tsx
  • web/src/components/dashboard/config-workspace/config-categories.ts
  • web/src/components/ui/discord-markdown-editor.tsx
  • web/src/types/config.ts
💤 Files with no reviewable changes (8)
  • tests/modules/triage-config.test.js
  • src/logger.js
  • docker-compose.yml
  • web/src/components/dashboard/config-categories/ai-automation.tsx
  • tests/config.test.js
  • src/utils/anthropicClient.js
  • src/modules/cli-process.js
  • tests/modules/cli-process.test.js
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Greptile Review
🧰 Additional context used
📓 Path-based instructions (17)
**/*.{js,ts,jsx,tsx,mjs,mts}

📄 CodeRabbit inference engine (AGENTS.md)

Use ESM only; do not use CommonJS

Files:

  • src/index.js
  • src/modules/ai.js
  • web/src/components/ui/discord-markdown-editor.tsx
  • web/src/components/dashboard/config-workspace/config-categories.ts
  • tests/modules/triage-budget.test.js
  • tests/modules/triage-respond.test.js
  • src/modules/aiAutoMod.js
  • src/commands/tldr.js
  • src/modules/triage-respond.js
  • tests/commands/tldr.test.js
  • src/modules/triage-config.js
  • tests/utils/errors.test.js
  • tests/utils/debugFooter.test.js
  • web/src/types/config.ts
  • tests/modules/triage.coverage.test.js
  • src/utils/debugFooter.js
  • tests/modules/aiAutoMod.test.js
  • src/api/utils/configValidation.js
  • src/utils/errors.js
  • tests/utils/aiCost.test.js
  • tests/modules/triage-parse.test.js
  • src/modules/triage-parse.js
  • src/modules/triage.js
  • src/utils/aiCost.js
  • src/utils/aiClient.js
  • tests/modules/triage.test.js
  • tests/utils/aiClient.test.js
**/*.{js,ts,jsx,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

**/*.{js,ts,jsx,tsx}: Use src/logger.js; do not use console.*
Use the safe Discord messaging helpers in src/utils/safeSend.js instead of raw reply/send/edit calls
Use parameterized SQL only; do not use string concatenation for SQL queries
Community features should be gated behind config.<feature>.enabled; moderation commands are the exception

Files:

  • src/index.js
  • src/modules/ai.js
  • web/src/components/ui/discord-markdown-editor.tsx
  • web/src/components/dashboard/config-workspace/config-categories.ts
  • tests/modules/triage-budget.test.js
  • tests/modules/triage-respond.test.js
  • src/modules/aiAutoMod.js
  • src/commands/tldr.js
  • src/modules/triage-respond.js
  • tests/commands/tldr.test.js
  • src/modules/triage-config.js
  • tests/utils/errors.test.js
  • tests/utils/debugFooter.test.js
  • web/src/types/config.ts
  • tests/modules/triage.coverage.test.js
  • src/utils/debugFooter.js
  • tests/modules/aiAutoMod.test.js
  • src/api/utils/configValidation.js
  • src/utils/errors.js
  • tests/utils/aiCost.test.js
  • tests/modules/triage-parse.test.js
  • src/modules/triage-parse.js
  • src/modules/triage.js
  • src/utils/aiCost.js
  • src/utils/aiClient.js
  • tests/modules/triage.test.js
  • tests/utils/aiClient.test.js
**/*.{js,ts,jsx,tsx,json}

📄 CodeRabbit inference engine (AGENTS.md)

Welcome-message variables use double braces only, like {{user}}; single braces are plain text and should not be documented, inserted, or parsed as variables

Files:

  • src/index.js
  • package.json
  • src/modules/ai.js
  • web/src/components/ui/discord-markdown-editor.tsx
  • web/src/components/dashboard/config-workspace/config-categories.ts
  • src/data/model-pricing.json
  • config.json
  • tests/modules/triage-budget.test.js
  • tests/modules/triage-respond.test.js
  • src/modules/aiAutoMod.js
  • src/commands/tldr.js
  • src/modules/triage-respond.js
  • tests/commands/tldr.test.js
  • src/modules/triage-config.js
  • tests/utils/errors.test.js
  • tests/utils/debugFooter.test.js
  • web/src/types/config.ts
  • tests/modules/triage.coverage.test.js
  • src/utils/debugFooter.js
  • tests/modules/aiAutoMod.test.js
  • src/api/utils/configValidation.js
  • src/utils/errors.js
  • tests/utils/aiCost.test.js
  • tests/modules/triage-parse.test.js
  • src/modules/triage-parse.js
  • src/modules/triage.js
  • src/utils/aiCost.js
  • src/utils/aiClient.js
  • tests/modules/triage.test.js
  • tests/utils/aiClient.test.js
**/*.js

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

Use ESM only — import/export, never require() or module.exports

Files:

  • src/index.js
  • src/modules/ai.js
  • tests/modules/triage-budget.test.js
  • tests/modules/triage-respond.test.js
  • src/modules/aiAutoMod.js
  • src/commands/tldr.js
  • src/modules/triage-respond.js
  • tests/commands/tldr.test.js
  • src/modules/triage-config.js
  • tests/utils/errors.test.js
  • tests/utils/debugFooter.test.js
  • tests/modules/triage.coverage.test.js
  • src/utils/debugFooter.js
  • tests/modules/aiAutoMod.test.js
  • src/api/utils/configValidation.js
  • src/utils/errors.js
  • tests/utils/aiCost.test.js
  • tests/modules/triage-parse.test.js
  • src/modules/triage-parse.js
  • src/modules/triage.js
  • src/utils/aiCost.js
  • src/utils/aiClient.js
  • tests/modules/triage.test.js
  • tests/utils/aiClient.test.js
**/*.{js,ts,tsx}

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

Use single quotes for strings, never double quotes (except in JSON files)

Always include semicolons at the end of statements

Always use trailing commas in multi-line constructs

Keep line width to 100 characters maximum

Never use console.* (console.log, console.error, etc.) — use Winston logger instead

Files:

  • src/index.js
  • src/modules/ai.js
  • web/src/components/ui/discord-markdown-editor.tsx
  • web/src/components/dashboard/config-workspace/config-categories.ts
  • tests/modules/triage-budget.test.js
  • tests/modules/triage-respond.test.js
  • src/modules/aiAutoMod.js
  • src/commands/tldr.js
  • src/modules/triage-respond.js
  • tests/commands/tldr.test.js
  • src/modules/triage-config.js
  • tests/utils/errors.test.js
  • tests/utils/debugFooter.test.js
  • web/src/types/config.ts
  • tests/modules/triage.coverage.test.js
  • src/utils/debugFooter.js
  • tests/modules/aiAutoMod.test.js
  • src/api/utils/configValidation.js
  • src/utils/errors.js
  • tests/utils/aiCost.test.js
  • tests/modules/triage-parse.test.js
  • src/modules/triage-parse.js
  • src/modules/triage.js
  • src/utils/aiCost.js
  • src/utils/aiClient.js
  • tests/modules/triage.test.js
  • tests/utils/aiClient.test.js
**/*.{js,ts,tsx,json}

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

Use 2-space indentation with spaces, never tabs

Files:

  • src/index.js
  • package.json
  • src/modules/ai.js
  • web/src/components/ui/discord-markdown-editor.tsx
  • web/src/components/dashboard/config-workspace/config-categories.ts
  • src/data/model-pricing.json
  • config.json
  • tests/modules/triage-budget.test.js
  • tests/modules/triage-respond.test.js
  • src/modules/aiAutoMod.js
  • src/commands/tldr.js
  • src/modules/triage-respond.js
  • tests/commands/tldr.test.js
  • src/modules/triage-config.js
  • tests/utils/errors.test.js
  • tests/utils/debugFooter.test.js
  • web/src/types/config.ts
  • tests/modules/triage.coverage.test.js
  • src/utils/debugFooter.js
  • tests/modules/aiAutoMod.test.js
  • src/api/utils/configValidation.js
  • src/utils/errors.js
  • tests/utils/aiCost.test.js
  • tests/modules/triage-parse.test.js
  • src/modules/triage-parse.js
  • src/modules/triage.js
  • src/utils/aiCost.js
  • src/utils/aiClient.js
  • tests/modules/triage.test.js
  • tests/utils/aiClient.test.js
src/**/*.js

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

Import and use Winston logger from '../logger.js' instead of console methods

Use safeReply(), safeSend(), or safeEditReply() for Discord message operations instead of raw Discord.js methods

Use parameterized queries in database operations — never string interpolation

Use getConfig(guildId) from src/modules/config.js to retrieve merged global + guild-level configuration

Gate community features behind config.<feature>.enabled; moderation commands are always available

Use src/utils/cache.js for generic caching with Redis primary + in-memory fallback

Use src/utils/discordCache.js for caching Discord API responses (channels, roles, members)

Use src/utils/reputationCache.js for caching leaderboard and user reputation data

Use discord.js v14 for Discord bot interactions

Files:

  • src/index.js
  • src/modules/ai.js
  • src/modules/aiAutoMod.js
  • src/commands/tldr.js
  • src/modules/triage-respond.js
  • src/modules/triage-config.js
  • src/utils/debugFooter.js
  • src/api/utils/configValidation.js
  • src/utils/errors.js
  • src/modules/triage-parse.js
  • src/modules/triage.js
  • src/utils/aiCost.js
  • src/utils/aiClient.js
**/package.json

📄 CodeRabbit inference engine (AGENTS.md)

Use the latest version of ALL package.json dependencies; dependencies should always be upgraded to the latest version before committing

Files:

  • package.json
src/modules/**/*.js

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

Use Vercel AI SDK (ai, @ai-sdk/anthropic) for Claude AI features

Files:

  • src/modules/ai.js
  • src/modules/aiAutoMod.js
  • src/modules/triage-respond.js
  • src/modules/triage-config.js
  • src/modules/triage-parse.js
  • src/modules/triage.js
web/src/**/*.{ts,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

web/src/**/*.{ts,tsx}: New dashboard routes need title wiring in web/src/lib/page-titles.ts; use createPageMetadata() for SSR and keep DashboardTitleSync aligned for client navigation
Dashboard clients that need the guild list should consume GuildDirectoryProvider; do not stack extra /api/guilds fetch loops in leaf components
Recharts dashboard views should use web/src/components/ui/stable-responsive-container.tsx; raw ResponsiveContainer mounts can spam width(-1)/height(-1) warnings when panels render before layout settles
Any visual dashboard or landing page change must be verified with Chrome DevTools MCP before calling it done; take a screenshot after the change
Check both light and dark themes if colors or theming changed
Check responsive behavior on mobile, tablet, and desktop if layout changed

Prefer TypeScript interface over type for defining object shapes in Next.js web dashboard

Files:

  • web/src/components/ui/discord-markdown-editor.tsx
  • web/src/components/dashboard/config-workspace/config-categories.ts
  • web/src/types/config.ts
web/src/**/*.{ts,tsx,css,scss}

📄 CodeRabbit inference engine (AGENTS.md)

Follow the design system when making changes to the UI/UX; see DESIGN.md for the design system and color palette

Files:

  • web/src/components/ui/discord-markdown-editor.tsx
  • web/src/components/dashboard/config-workspace/config-categories.ts
  • web/src/types/config.ts
web/src/**/*.tsx

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

Use Tailwind CSS 4 for styling in web dashboard components

Files:

  • web/src/components/ui/discord-markdown-editor.tsx
web/src/components/**/*.tsx

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

Use Radix UI for accessible component primitives in web dashboard

Files:

  • web/src/components/ui/discord-markdown-editor.tsx
{config.json,src/api/utils/configAllowlist.js}

📄 CodeRabbit inference engine (AGENTS.md)

Config-backed features must be added to config.json and src/api/utils/configAllowlist.js; if a key is missing from SAFE_CONFIG_KEYS, the dashboard cannot save it

Files:

  • config.json
tests/**/*.test.js

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

Place bot test files in tests/ directory mirroring the src/ structure with .test.js extension

Maintain bot test coverage thresholds: statements 85%, branches 82%, functions 85%, lines 85%

Mock external dependencies (Discord.js, database, Redis, Claude SDK) in tests using vi.mock() and vi.fn()

Use vi.mock() for module mocking and vi.fn() for function stubs in tests

Use Vitest 4 for bot tests with node environment

Test timeout is 10 seconds for Vitest

Files:

  • tests/modules/triage-budget.test.js
  • tests/modules/triage-respond.test.js
  • tests/commands/tldr.test.js
  • tests/utils/errors.test.js
  • tests/utils/debugFooter.test.js
  • tests/modules/triage.coverage.test.js
  • tests/modules/aiAutoMod.test.js
  • tests/utils/aiCost.test.js
  • tests/modules/triage-parse.test.js
  • tests/modules/triage.test.js
  • tests/utils/aiClient.test.js
src/commands/**/*.js

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

Create slash commands in src/commands/ exporting command builder + execute function with corresponding tests in tests/commands/

Files:

  • src/commands/tldr.js
src/api/**/*.js

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

Use Express 5 for REST API server

Files:

  • src/api/utils/configValidation.js
🧠 Learnings (12)
📓 Common learnings
Learnt from: CR
URL: 
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-12T23:41:31.315Z
Learning: Create new bot features as modules in src/modules/ with corresponding config section, slash command, and tests
Learnt from: CR
URL: 
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-12T23:41:31.315Z
Learning: Use pnpm as the package manager; do not use npm or yarn (engine-strict=true enforced)
Learnt from: CR
URL: 
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-12T23:41:31.315Z
Learning: Require Node.js 22+ as enforced by engines field in package.json
Learnt from: CR
URL: 
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-12T23:41:31.315Z
Learning: Run pnpm lint and pnpm test before making changes and pnpm lint and pnpm test:coverage after making changes
Learnt from: CR
URL: 
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-12T23:41:31.315Z
Learning: Use Conventional Commits format for commit messages: feat:, fix:, chore:, docs:, refactor:, style:, test:, ci:
Learnt from: CR
URL: 
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-12T23:41:31.315Z
Learning: Use pnpm changeset to create release notes for user-facing changes
Learnt from: CR
URL: 
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-12T23:41:31.315Z
Learning: Use branch naming convention: feat/, fix/, chore/, docs/, refactor/, style/, test/, ci/
Learnt from: CR
URL: 
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-12T23:41:31.315Z
Learning: Always target PRs to the main branch
Learnt from: CR
URL: 
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-12T23:41:31.315Z
Learning: Use Biome 2 for linting and formatting the entire project
Learnt from: CR
URL: 
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-12T23:41:31.315Z
Learning: Use pnpm workspaces + Turborepo for monorepo management
Learnt from: CR
URL: 
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-12T23:41:31.315Z
Learning: Use Changesets for versioning across the monorepo
Learnt from: CR
URL: 
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-12T23:41:31.315Z
Learning: Run pnpm migrate to apply database migrations, pnpm migrate:down to rollback, pnpm migrate:create to create new migrations
Learnt from: CR
URL: 
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-12T23:41:31.315Z
Learning: Auto-fix formatting and lint issues by running pnpm lint:fix
Learnt from: CR
URL: 
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-12T23:41:31.315Z
Learning: Run pnpm deploy to register slash commands with Discord
📚 Learning: 2026-04-09T20:31:03.263Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-04-09T20:31:03.263Z
Learning: Applies to **/package.json : Use the latest version of ALL package.json dependencies; dependencies should always be upgraded to the latest version before committing

Applied to files:

  • package.json
📚 Learning: 2026-03-10T23:21:49.730Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-03-10T23:21:49.730Z
Learning: Applies to web/src/components/dashboard/config-workspace/**/*.{ts,tsx} : Web dashboard config editor should use category workspace navigation with reusable SettingsFeatureCard pattern (header + master toggle + Basic/Advanced blocks)

Applied to files:

  • web/src/components/dashboard/config-workspace/config-categories.ts
📚 Learning: 2026-04-09T20:31:03.263Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-04-09T20:31:03.263Z
Learning: Applies to **/*.{js,ts,jsx,tsx} : Use the safe Discord messaging helpers in src/utils/safeSend.js instead of raw reply/send/edit calls

Applied to files:

  • src/modules/aiAutoMod.js
  • src/commands/tldr.js
  • tests/commands/tldr.test.js
📚 Learning: 2026-03-12T02:03:36.493Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-03-12T02:03:36.493Z
Learning: Applies to src/commands/**/*.js : Create slash command definitions in `src/commands/`, exporting a slash command builder and an `execute` function

Applied to files:

  • src/commands/tldr.js
📚 Learning: 2026-03-12T02:03:36.493Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-03-12T02:03:36.493Z
Learning: Applies to src/**/*.js : Always use `safeReply()`, `safeSend()`, or `safeEditReply()` instead of raw Discord.js methods for safe Discord messaging that handles errors gracefully

Applied to files:

  • src/commands/tldr.js
  • tests/commands/tldr.test.js
  • tests/modules/triage.coverage.test.js
📚 Learning: 2026-03-12T02:03:36.493Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-03-12T02:03:36.493Z
Learning: Applies to tests/**/*.test.js : Maintain test coverage thresholds: statements 85%, branches 82%, functions 85%, lines 85%; never lower thresholds—add tests to cover new code instead

Applied to files:

  • docs/test-coverage-analysis.md
  • tests/modules/triage.coverage.test.js
📚 Learning: 2026-03-12T02:03:36.493Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-03-12T02:03:36.493Z
Learning: Applies to config.json : All community features in `config.json` should be gated behind a `config.<feature>.enabled` flag; moderation commands are always available regardless of config settings

Applied to files:

  • web/src/types/config.ts
📚 Learning: 2026-04-09T20:31:03.263Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-04-09T20:31:03.263Z
Learning: Applies to **/*.{js,ts,jsx,tsx} : Community features should be gated behind config.<feature>.enabled; moderation commands are the exception

Applied to files:

  • web/src/types/config.ts
📚 Learning: 2026-03-10T23:29:51.063Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-03-10T23:29:51.063Z
Learning: Applies to src/api/routes/*.js : Apply requireGuildModerator authorization check to DELETE endpoints that modify guild-scoped resources

Applied to files:

  • tests/modules/aiAutoMod.test.js
📚 Learning: 2026-03-12T02:03:36.493Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-03-12T02:03:36.493Z
Learning: Applies to tests/**/*.test.js : Write bot tests using Vitest 4 with the `node` environment, matching the `src/` structure in the `tests/` directory

Applied to files:

  • tests/utils/aiCost.test.js
  • tests/modules/triage.test.js
  • tests/utils/aiClient.test.js
📚 Learning: 2026-03-12T02:03:36.493Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-03-12T02:03:36.493Z
Learning: Applies to web/tests/**/*.test.{ts,tsx} : Write web dashboard tests using Vitest 4 with the `jsdom` environment and React Testing Library, matching the `web/src/` structure

Applied to files:

  • tests/utils/aiCost.test.js
🪛 GitHub Check: CodeQL
src/utils/aiClient.js

[failure] 158-158: Use of password hash with insufficient computational effort
Password from an access to apiKey is hashed insecurely.

🪛 GitHub Check: SonarCloud Code Analysis
tests/utils/aiCost.test.js

[warning] 24-24: Remove this useless assignment to variable "key".

See more on https://sonarcloud.io/project/issues?id=VolvoxLLC_volvox-bot&issues=AZ2EEuQhkud7vHWqFg8o&open=AZ2EEuQhkud7vHWqFg8o&pullRequest=479


[warning] 24-24: Remove the declaration of the unused 'key' variable.

See more on https://sonarcloud.io/project/issues?id=VolvoxLLC_volvox-bot&issues=AZ2EEuQhkud7vHWqFg8n&open=AZ2EEuQhkud7vHWqFg8n&pullRequest=479

src/modules/triage.js

[failure] 435-435: Refactor this function to reduce its Cognitive Complexity from 48 to the 15 allowed.

See more on https://sonarcloud.io/project/issues?id=VolvoxLLC_volvox-bot&issues=AZ2EEuVRkud7vHWqFg8x&open=AZ2EEuVRkud7vHWqFg8x&pullRequest=479


[failure] 520-526: Unexpected await of a non-Promise (non-"Thenable") value.

See more on https://sonarcloud.io/project/issues?id=VolvoxLLC_volvox-bot&issues=AZ2EEuVRkud7vHWqFg8y&open=AZ2EEuVRkud7vHWqFg8y&pullRequest=479

src/utils/aiClient.js

[failure] 457-462: Unexpected await of a non-Promise (non-"Thenable") value.

See more on https://sonarcloud.io/project/issues?id=VolvoxLLC_volvox-bot&issues=AZ2EEuS3kud7vHWqFg8t&open=AZ2EEuS3kud7vHWqFg8t&pullRequest=479


[failure] 340-345: Unexpected await of a non-Promise (non-"Thenable") value.

See more on https://sonarcloud.io/project/issues?id=VolvoxLLC_volvox-bot&issues=AZ2EEuS3kud7vHWqFg8s&open=AZ2EEuS3kud7vHWqFg8s&pullRequest=479


[warning] 230-230: knownTools should be a Set, and use knownTools.has() to check existence or non-existence.

See more on https://sonarcloud.io/project/issues?id=VolvoxLLC_volvox-bot&issues=AZ2EEuS3kud7vHWqFg8r&open=AZ2EEuS3kud7vHWqFg8r&pullRequest=479
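The Set-based lookup SonarCloud recommends here can be sketched as follows; the tool names and the `isKnownTool` helper are illustrative, not the actual `aiClient.js` code:

```javascript
// Illustrative sketch of the Set-based membership check; tool names are
// placeholders, not the real knownTools contents in src/utils/aiClient.js.
const knownTools = new Set(['web_search', 'code_execution']);

function isKnownTool(name) {
  // Set.has() is an O(1) lookup, versus O(n) for Array.prototype.includes().
  return knownTools.has(name);
}
```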


[warning] 520-520: Remove the declaration of the unused 'providerName' variable.

See more on https://sonarcloud.io/project/issues?id=VolvoxLLC_volvox-bot&issues=AZ2EEuS3kud7vHWqFg8u&open=AZ2EEuS3kud7vHWqFg8u&pullRequest=479


[warning] 94-94: Prefer Number.parseFloat over parseFloat.

See more on https://sonarcloud.io/project/issues?id=VolvoxLLC_volvox-bot&issues=AZ2EEuS3kud7vHWqFg8p&open=AZ2EEuS3kud7vHWqFg8p&pullRequest=479


[warning] 520-520: Remove this useless assignment to variable "providerName".

See more on https://sonarcloud.io/project/issues?id=VolvoxLLC_volvox-bot&issues=AZ2EEuS3kud7vHWqFg8v&open=AZ2EEuS3kud7vHWqFg8v&pullRequest=479

🔇 Additional comments (14)
web/src/components/ui/discord-markdown-editor.tsx (3)

276-277: Cursor fallback handling is solid.

Nice fix: this prevents passing undefined to setSelectionRange and keeps caret restoration stable after insertion.


309-314: Variable sample substitution remains safe and clear.

Escaping is preserved and the early return for missing samples keeps the replacement logic robust.


411-412: Preview section compaction looks good.

This is a clean formatting-only simplification with no functional change.

src/index.js (1)

63-63: Good startup optimization placement.

Preloading the SDK early and non-blocking in startup() is a clean way to remove first-request cold-start latency without delaying bot boot.

Also applies to: 277-278

src/modules/ai.js (1)

3-3: Header update is consistent with the migration.

The module doc now correctly reflects Vercel AI SDK usage.

tests/modules/triage-respond.test.js (1)

44-49: Test fixtures are aligned with the new AI SDK result shape.

The updated extractStats mock signature and costUsd fixture fields match the migrated responder/classifier flow.

Also applies to: 590-595

web/src/components/dashboard/config-workspace/config-categories.ts (1)

142-143: Search metadata update is coherent with streaming removal.

Removing “streaming” from this triage debug search item avoids stale matches in config search.

src/data/model-pricing.json (1)

1-145: Pricing dataset structure matches the runtime cost lookup contract.

The models map and per-model token rate fields are well-formed for aiCost consumption.

tests/utils/aiClient.test.js (6)

1-45: LGTM! Excellent mock setup.

The mock structure follows Vitest best practices:

  • Uses vi.hoisted() correctly for factory-referenced mocks
  • Proper import ordering (mocks before imports under test)
  • Comprehensive coverage of external dependencies (logger, AI SDK, Anthropic SDK, aiCost)

49-71: LGTM! Well-designed test helpers.

The helper functions effectively reduce duplication and provide flexible mock result builders with sensible defaults.


356-462: LGTM! Comprehensive streaming tests.

The stream() test suite thoroughly covers:

  • Async result resolution (all Promise fields)
  • Tool-call chunk forwarding with callback filtering
  • Timeout and API error handling
  • Synchronous error catching in onChunk callbacks with proper logging

466-523: LGTM! Solid retry behavior coverage.

The retry tests validate:

  • Successful retry after 429 with retry-after header parsing
  • Retry exhaustion after max attempts
  • Immediate failure for non-retryable errors (401)
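The retry behavior these tests exercise can be sketched roughly as below; the function name, status list, and delay constants are assumptions for illustration, not the actual `aiClient.js` implementation:

```javascript
// Illustrative 429-aware retry with exponential backoff; names and limits here
// are assumptions, not the real aiClient.js retry logic.
const RETRYABLE_STATUSES = new Set([429, 500, 502, 503]);

async function withRetry(fn, { maxAttempts = 3, baseDelayMs = 500 } = {}) {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      const status = err.status ?? err.statusCode;
      if (attempt >= maxAttempts || !RETRYABLE_STATUSES.has(status)) throw err;
      // Honor a retry-after header (seconds) when present, else back off exponentially.
      const retryAfter = Number(err.headers?.['retry-after']);
      const delayMs = Number.isFinite(retryAfter)
        ? retryAfter * 1000
        : baseDelayMs * 2 ** (attempt - 1);
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}
```

A non-retryable error (e.g. 401) falls through the `RETRYABLE_STATUSES` check and is rethrown immediately, which is the third behavior the tests cover.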

577-631: LGTM! Thorough cache token extraction tests.

These tests comprehensively validate the cache-aware cost calculation:

  • Direct provider metadata extraction (anthropic.cacheReadInputTokens)
  • Dynamic provider name extraction (minimax.cacheReadInputTokens)
  • Fallback to anthropic metadata for compatible providers
  • All four token types passed to calculateCost (input, output, cachedInput, cacheCreation)

The assertions align with the implementation's getProviderMetadata helper and calculateCost parameter expectations.
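The cache-aware cost math these tests validate can be sketched as follows; the rate field names and the exact `calculateCost` signature are assumptions, not the real `aiCost.js` API:

```javascript
// Hypothetical sketch of cache-aware cost calculation; rate field names and
// this signature are illustrative, not the actual aiCost.js contract.
function calculateCost(rates, { input = 0, output = 0, cachedInput = 0, cacheCreation = 0 }) {
  // Rates are assumed to be USD per million tokens.
  const perM = (tokens, rate) => (tokens / 1_000_000) * rate;
  return (
    perM(input, rates.inputPerMTok) +
    perM(output, rates.outputPerMTok) +
    perM(cachedInput, rates.cachedInputPerMTok) +
    perM(cacheCreation, rates.cacheCreationPerMTok)
  );
}
```

Pricing cached-read and cache-creation tokens separately is the point of "cache-aware" pricing: cached input is typically billed at a steep discount, while cache creation carries a surcharge over plain input.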


331-333: 🧹 Nitpick | 🔵 Trivial

Optional: Simplify error assertion.

The expect.fail('should have thrown') call is unnecessary here—if no error is thrown, the test will fail anyway because the try block completes without entering the catch block.

♻️ Proposed simplification
   try {
     await generate({ model: 'claude-haiku-4-5', prompt: 'bad' });
-    expect.fail('should have thrown');
   } catch (err) {
     expect(err).toBeInstanceOf(AIClientError);

Apply the same pattern to lines 427 and 557.

> Likely an incorrect or invalid review comment.

Comment thread .env.example Outdated
Comment thread .github/copilot-instructions.md
Comment thread docs/test-coverage-analysis.md
Comment thread package.json Outdated
Comment thread src/modules/triage-config.js
Comment thread src/utils/errors.js Outdated
Comment thread tests/modules/triage-parse.test.js
Comment thread tests/utils/aiClient.test.js Outdated
Comment thread tests/utils/errors.test.js
Comment thread web/src/types/config.ts Outdated
@github-project-automation github-project-automation bot moved this from Backlog to In Review in Volvox.Bot Apr 12, 2026

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/modules/aiAutoMod.js (1)

212-287: ⚠️ Potential issue | 🟠 Major

Avoid issuing two delete requests for the same message.

When autoDelete is enabled and the selected action is 'delete', this path deletes once before the switch and once again in case 'delete'. With the current defaults, every spam hit takes that duplicate API path.

🔧 Suggested fix
-  if (autoModConfig.autoDelete) {
+  if (autoModConfig.autoDelete && result.action !== 'delete') {
     await message.delete().catch(() => {});
   }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/modules/aiAutoMod.js` around lines 212 - 287, The code issues an extra
delete because message.delete() is called once when autoModConfig.autoDelete is
true and again inside the switch for case 'delete'; update the case 'delete'
branch to skip deleting if autoModConfig.autoDelete is true (i.e., check
autoModConfig.autoDelete at the top of the 'delete' case and return/break when
true), so only one delete API call happens; reference the existing symbols:
autoModConfig.autoDelete, result.action switch, case 'delete', and
message.delete().
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@tests/utils/aiClient.test.js`:
- Around line 130-140: The test currently uses a provider that falls back to
KNOWN_BASE_URLS; change it to use a provider not listed in KNOWN_BASE_URLS
(e.g., "customprov:ModelX") and set the corresponding env vars
(CUSTOMPROV_API_KEY and CUSTOMPROV_BASE_URL) before calling generate so the
codepath that resolves <PROVIDER>_API_KEY / <PROVIDER>_BASE_URL is exercised;
then assert mockCreateAnthropic was called with baseURL equal to
process.env.CUSTOMPROV_BASE_URL (and clean up/reset those env vars after the
test). Ensure you still mockGenerateText and use the same expectations but
replace the provider string and expected baseURL check to reference the env var.

---

Outside diff comments:
In `@src/modules/aiAutoMod.js`:
- Around line 212-287: The code issues an extra delete because message.delete()
is called once when autoModConfig.autoDelete is true and again inside the switch
for case 'delete'; update the case 'delete' branch to skip deleting if
autoModConfig.autoDelete is true (i.e., check autoModConfig.autoDelete at the
top of the 'delete' case and return/break when true), so only one delete API
call happens; reference the existing symbols: autoModConfig.autoDelete,
result.action switch, case 'delete', and message.delete().

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: 9a5e8ef1-d35d-4b9c-beea-e003df24655f

📥 Commits

Reviewing files that changed from the base of the PR and between b5d96d5 and cf1e1d3.

📒 Files selected for processing (3)
  • src/modules/aiAutoMod.js
  • tests/modules/aiAutoMod.test.js
  • tests/utils/aiClient.test.js
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: Greptile Review
  • GitHub Check: Analyze (javascript-typescript)
🧰 Additional context used
📓 Path-based instructions (8)
**/*.{js,ts,jsx,tsx,mjs,mts}

📄 CodeRabbit inference engine (AGENTS.md)

Use ESM only; do not use CommonJS

Files:

  • src/modules/aiAutoMod.js
  • tests/utils/aiClient.test.js
  • tests/modules/aiAutoMod.test.js
**/*.{js,ts,jsx,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

**/*.{js,ts,jsx,tsx}: Use src/logger.js; do not use console.*
Use the safe Discord messaging helpers in src/utils/safeSend.js instead of raw reply/send/edit calls
Use parameterized SQL only; do not use string concatenation for SQL queries
Community features should be gated behind config.<feature>.enabled; moderation commands are the exception

Files:

  • src/modules/aiAutoMod.js
  • tests/utils/aiClient.test.js
  • tests/modules/aiAutoMod.test.js
**/*.{js,ts,jsx,tsx,json}

📄 CodeRabbit inference engine (AGENTS.md)

Welcome-message variables use double braces only, like {{user}}; single braces are plain text and should not be documented, inserted, or parsed as variables

Files:

  • src/modules/aiAutoMod.js
  • tests/utils/aiClient.test.js
  • tests/modules/aiAutoMod.test.js
{src,web/src,tests}/**/*.{js,ts,tsx}

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

Use ESM syntax only — import/export, never require()/module.exports

Use single quotes for strings — no double quotes except in JSON files

Always include semicolons at the end of statements

Use 2-space indentation — spaces, not tabs

Always use trailing commas in multi-line structures

Limit line width to 100 characters

Never use console.* methods — use Winston logger instead with import logger from '../logger.js' and call logger.info(), logger.warn(), logger.error(), or logger.debug()

Files:

  • src/modules/aiAutoMod.js
  • tests/utils/aiClient.test.js
  • tests/modules/aiAutoMod.test.js
src/**/*.js

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

Use safeReply(), safeSend(), or safeEditReply() instead of raw Discord.js methods for messaging to handle errors gracefully

Use the config system via getConfig(guildId) from src/modules/config.js to retrieve merged global and guild-level configuration

Use src/utils/cache.js for generic caching with Redis primary and in-memory fallback

Use src/utils/discordCache.js for caching Discord API responses (channels, roles, members)

Use src/utils/reputationCache.js for caching leaderboard and user reputation data

Files:

  • src/modules/aiAutoMod.js
src/modules/**/*.js

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

All community features must be gated behind config.<feature>.enabled settings, while moderation commands are always available regardless of configuration

Files:

  • src/modules/aiAutoMod.js
tests/**/*.test.js

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

Bot test files must follow the naming pattern *.test.js and be placed in tests/ mirroring the src/ directory structure

Maintain bot test coverage above 85% for statements, lines, and functions; 82% for branches as enforced by vitest.config.js

Files:

  • tests/utils/aiClient.test.js
  • tests/modules/aiAutoMod.test.js
{tests,web/tests}/**/*.test.{js,ts,tsx}

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

Use vi.mock() for module mocking and vi.fn() for function stubs in Vitest tests

Files:

  • tests/utils/aiClient.test.js
  • tests/modules/aiAutoMod.test.js
🧠 Learnings (5)
📓 Common learnings
Learnt from: CR
URL: 
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-13T08:27:31.922Z
Learning: Always use pnpm as the package manager — the project enforces `engine-strict=true` and requires pnpm 10.30.3+
Learnt from: CR
URL: 
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-13T08:27:31.922Z
Learning: Node.js 22 or higher is required as enforced by the `engines` field in `package.json`
Learnt from: CR
URL: 
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-13T08:27:31.922Z
Learning: Follow Git branch naming conventions: `feat/`, `fix/`, `chore/`, `docs/`, `refactor/`, `style/`, `test/`, `ci/`
Learnt from: CR
URL: 
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-13T08:27:31.922Z
Learning: Follow Conventional Commits message format: `feat:`, `fix:`, `chore:`, `docs:`, `refactor:`, `style:`, `test:`, `ci:`
Learnt from: CR
URL: 
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-13T08:27:31.922Z
Learning: Use `pnpm changeset` to create release notes for user-facing changes
Learnt from: CR
URL: 
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-13T08:27:31.922Z
Learning: Always target the `main` branch for pull requests
Learnt from: CR
URL: 
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-13T08:27:31.922Z
Learning: Run `pnpm lint` and `pnpm test` before making changes to see existing state, and run `pnpm lint` and `pnpm test:coverage` after making changes to verify nothing breaks
Learnt from: CR
URL: 
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-13T08:27:31.922Z
Learning: All CI pipeline checks must pass: Lint (Biome), Test (Vitest with coverage), Web typecheck/lint/build, and Docker build validation
Learnt from: CR
URL: 
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-13T08:27:31.922Z
Learning: The bot is a pnpm monorepo using ESM JavaScript and discord.js v14 with Node.js 22+ as the runtime
Learnt from: CR
URL: 
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-13T08:27:31.922Z
Learning: The web dashboard is built with Next.js 16, TypeScript, React 19, Tailwind CSS 4, and Radix UI
Learnt from: CR
URL: 
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-13T08:27:31.922Z
Learning: Use Vercel AI SDK (`ai`, `ai-sdk/anthropic`) for AI chat features with Claude
Learnt from: CR
URL: 
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-13T08:27:31.922Z
Learning: Use PostgreSQL 17+ with node-pg-migrate for database operations with parameterized queries
Learnt from: CR
URL: 
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-13T08:27:31.922Z
Learning: Use Redis 7+ with ioredis as the primary cache layer, with in-memory fallback when Redis is unavailable
📚 Learning: 2026-04-09T20:31:03.263Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-04-09T20:31:03.263Z
Learning: Applies to **/*.{js,ts,jsx,tsx} : Use the safe Discord messaging helpers in src/utils/safeSend.js instead of raw reply/send/edit calls

Applied to files:

  • src/modules/aiAutoMod.js
📚 Learning: 2026-03-12T02:03:36.493Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-03-12T02:03:36.493Z
Learning: Applies to tests/**/*.test.js : Write bot tests using Vitest 4 with the `node` environment, matching the `src/` structure in the `tests/` directory

Applied to files:

  • tests/utils/aiClient.test.js
📚 Learning: 2026-03-12T02:03:36.493Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-03-12T02:03:36.493Z
Learning: Applies to web/tests/**/*.test.{ts,tsx} : Write web dashboard tests using Vitest 4 with the `jsdom` environment and React Testing Library, matching the `web/src/` structure

Applied to files:

  • tests/utils/aiClient.test.js
📚 Learning: 2026-03-10T23:29:51.063Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-03-10T23:29:51.063Z
Learning: Applies to src/api/routes/*.js : Apply requireGuildModerator authorization check to DELETE endpoints that modify guild-scoped resources

Applied to files:

  • tests/modules/aiAutoMod.test.js

Comment on lines +130 to +140
it('should resolve unknown providers via env var convention', async () => {
mockGenerateText.mockResolvedValue(makeGenerateResult());

await generate({ model: 'minimax:MiniMax-M2.7', prompt: 'test' });

expect(mockCreateAnthropic).toHaveBeenCalledWith(
expect.objectContaining({
baseURL: 'https://api.minimax.io/anthropic/v1',
}),
);
});

⚠️ Potential issue | 🟡 Minor

This test does not exercise the env-var convention it claims to cover.

Using minimax:MiniMax-M2.7 here still passes through the KNOWN_BASE_URLS fallback in src/utils/aiClient.js, and Lines 190-200 already cover that path. If the generic <PROVIDER>_API_KEY / <PROVIDER>_BASE_URL resolution regresses, this test will still stay green.

🔧 Suggested test fix
-  it('should resolve unknown providers via env var convention', async () => {
-    mockGenerateText.mockResolvedValue(makeGenerateResult());
-
-    await generate({ model: 'minimax:MiniMax-M2.7', prompt: 'test' });
-
-    expect(mockCreateAnthropic).toHaveBeenCalledWith(
-      expect.objectContaining({
-        baseURL: 'https://api.minimax.io/anthropic/v1',
-      }),
-    );
-  });
+  it('should resolve unknown providers via env var convention', async () => {
+    mockGenerateText.mockResolvedValue(makeGenerateResult());
+    process.env.FOO_API_KEY = 'foo-key';
+    process.env.FOO_BASE_URL = 'https://foo.example.com/v1';
+
+    try {
+      await generate({ model: 'foo:bar-model', prompt: 'test' });
+
+      expect(mockCreateAnthropic).toHaveBeenCalledWith(
+        expect.objectContaining({
+          authToken: 'foo-key',
+          baseURL: 'https://foo.example.com/v1',
+        }),
+      );
+    } finally {
+      delete process.env.FOO_API_KEY;
+      delete process.env.FOO_BASE_URL;
+    }
+  });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/utils/aiClient.test.js` around lines 130 - 140, The test currently uses
a provider that falls back to KNOWN_BASE_URLS; change it to use a provider not
listed in KNOWN_BASE_URLS (e.g., "customprov:ModelX") and set the corresponding
env vars (CUSTOMPROV_API_KEY and CUSTOMPROV_BASE_URL) before calling generate so
the codepath that resolves <PROVIDER>_API_KEY / <PROVIDER>_BASE_URL is
exercised; then assert mockCreateAnthropic was called with baseURL equal to
process.env.CUSTOMPROV_BASE_URL (and clean up/reset those env vars after the
test). Ensure you still mockGenerateText and use the same expectations but
replace the provider string and expected baseURL check to reference the env var.


Copilot AI left a comment


Pull request overview

Copilot reviewed 41 out of 42 changed files in this pull request and generated 6 comments.

Files not reviewed (1)
  • pnpm-lock.yaml: Language not supported

Comment thread src/utils/aiClient.js
Comment on lines +366 to +372
inputTokens: usage.inputTokens,
outputTokens: usage.outputTokens,
cachedInputTokens,
cacheCreationInputTokens,
});

const durationMs = Date.now() - timings.start;

Copilot AI Apr 13, 2026


When wrapping a provider error into AIClientError, only statusCode is preserved. If the underlying SDK error uses status (without statusCode), downstream classifyError()/isRetryable() will lose the HTTP status and may misclassify the failure (e.g., 401/404/429). Consider copying both status and statusCode, or normalizing statusCode from err.status ?? err.statusCode, and optionally preserving headers used for rate-limit backoff.
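The normalization suggested here might look like the sketch below; the helper name and field set are illustrative, not the actual `AIClientError` wrapper in `src/utils/aiClient.js`:

```javascript
// Hypothetical sketch of normalizing SDK error fields before wrapping; the
// real AIClientError constructor in src/utils/aiClient.js may differ.
function normalizeProviderError(err) {
  return {
    message: err.message,
    // SDK errors may expose either `status` or `statusCode`; normalize to one.
    statusCode: err.status ?? err.statusCode,
    // Preserve response headers (e.g. retry-after) for rate-limit backoff.
    headers: err.responseHeaders ?? err.headers,
  };
}
```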

Copilot uses AI. Check for mistakes.
Comment thread src/utils/aiClient.js
Comment on lines +484 to +490
},
{ maxRetries: 3, signal },
);
timings.streamStarted = Date.now();
timings.streamComplete = Date.now();

const providerMeta = getProviderMetadata(result.providerMetadata, providerName);

Copilot AI Apr 13, 2026


Same as generate(): this AIClientError wrapper only includes statusCode. If the SDK throws errors that only expose status, retry/classification logic won’t see the HTTP status. Include status too (or normalize statusCode from err.status ?? err.statusCode).

Comment thread src/utils/aiClient.js
Comment on lines +519 to +522
};
} catch (err) {
if (err instanceof AIClientError) throw err;
if (signal.aborted) {

Copilot AI Apr 13, 2026


providerName is destructured from resolveModel() but never used. This is dead code and may trigger unused-variable linting; consider removing it from the destructuring assignment.

Comment thread src/modules/triage.js Outdated
* @param {Object} evalConfig - Effective triage configuration to use for this evaluation.
* @param {import('discord.js').Client} evalClient - Discord client used to fetch channels and send messages.
- * @throws {CLIProcessError} When a classifier/responder CLI process times out; the error is rethrown.
+ * @throws {AIClientError} When a classifier/responder CLI process times out; the error is rethrown.

Copilot AI Apr 13, 2026


The @throws description still refers to a "CLI process" timeout, but triage now uses the in-process AI client/SDK. Updating this wording will avoid confusion for future maintainers and keep the docs accurate.

Suggested change
* @throws {AIClientError} When a classifier/responder CLI process times out; the error is rethrown.
* @throws {AIClientError} When a classifier or responder AI client request times out; the error is rethrown.

Comment on lines 74 to +83
// ── Classifier result parser ─────────────────────────────────────────────────

/**
* Parse the classifier's JSON text output.
* @param {Object} sdkMessage - Raw CLI result message
* @param {string} channelId - For logging
- * @returns {Object|null} Parsed { classification, reasoning, targetMessageIds } or null
+ * @returns {Object|null} Parsed { classification, reasoning, targetMessageIds, needsThinking, needsSearch } or null
  */
-export function parseClassifyResult(sdkMessage, channelId) {
-  const parsed = parseSDKResult(sdkMessage.result, channelId, 'Classifier');
+export function parseClassifyResult(sdkResult, channelId) {
+  const parsed = parseSDKResult(sdkResult.text, channelId, 'Classifier');

Copilot AI Apr 13, 2026


The JSDoc for parseClassifyResult still says the input is a "Raw CLI result message", and the file header also references CLI processes. This parser now operates on aiClient SDK results ({ text, finishReason, ... }), so the docs should be updated to match the new shape/source.

Comment on lines 105 to +118
/**
* Parse the responder's JSON text output.
*
* Note: Response length enforcement (Discord's 2000-char limit) is not applied
* here. Individual response text is bounded by MAX_MESSAGE_CHARS=1000 at
* accumulation time, and splitMessage() in triage-respond.js handles chunking
* for any edge cases before sending to Discord.
*
* @param {Object} sdkMessage - Raw CLI result message
* @param {string} channelId - For logging
* @returns {Object|null} Parsed { responses: [...] } or null
*/
export function parseRespondResult(sdkMessage, channelId) {
const parsed = parseSDKResult(sdkMessage.result, channelId, 'Responder');
export function parseRespondResult(sdkResult, channelId) {
const parsed = parseSDKResult(sdkResult.text, channelId, 'Responder');
Copilot AI Apr 13, 2026

parseRespondResult JSDoc still describes the parameter as a "Raw CLI result message", but the function now expects an aiClient SDK result object with a .text field. Please update the param description to avoid misleading API consumers/maintainers.

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 8

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
tests/modules/triage-parse.test.js (1)

187-200: ⚠️ Potential issue | 🟡 Minor

Strengthen responder success assertion for field-shape regressions.

Line 189 uses target_message_id, but this test only checks array length. That can miss schema drift in responder payload fields. Assert the expected key explicitly (and preferably use targetMessageId consistently with other updated tests).

Suggested diff
   it('should return parsed result on success', () => {
     const sdkMessage = {
-      text: '{"responses":[{"target_message_id":"m1","response":"Help text"}]}',
+      text: '{"responses":[{"targetMessageId":"m1","response":"Help text"}]}',
       finishReason: 'stop',
       costUsd: 0.005,
       durationMs: 200,
       usage: { inputTokens: 500, outputTokens: 100 },
       sources: [],
       providerMetadata: { anthropic: {} },
     };
     const result = parseRespondResult(sdkMessage, 'ch1');
     expect(result).not.toBeNull();
     expect(result.responses).toHaveLength(1);
+    expect(result.responses[0].targetMessageId).toBe('m1');
   });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/modules/triage-parse.test.js` around lines 187 - 200, The test for
parseRespondResult should assert the expected response object shape, not just
array length: update the spec in the 'should return parsed result on success'
case to check that parseRespondResult(sdkMessage, 'ch1') returns a responses
array of length 1 and that responses[0] contains the camelCase field
targetMessageId with value 'm1' (or that the key exists), and that the response
text is present (e.g., 'response' or expected property); locate the test block
using the spec name and the parseRespondResult reference and replace the loose
length-only assertion with these explicit shape checks to catch schema
regressions.
src/modules/triage-respond.js (1)

79-103: ⚠️ Potential issue | 🟠 Major

Reapply the caller's limit after merging history and recent context.

When bufferSnapshot[0] exists, this can return up to limit + RECENT_LIMIT messages: limit older messages plus 5 recent ones, minus duplicates. That breaks the function contract and quietly increases prompt size/cost on the hot path. Either trim the merged list back to limit, or reduce the historical fetch budget when you add the recent window.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/modules/triage-respond.js` around lines 79 - 103, The merged context can
exceed the requested limit because you fetch history (historyOptions.limit =
limit) plus RECENT_LIMIT and then merge; update the logic in the block around
historyOptions, RECENT_LIMIT, merged and toContextEntry to ensure the final
returned array respects the caller's limit: either reduce historyOptions.limit
by RECENT_LIMIT when bufferSnapshot[0] exists or, after building merged, trim
the sorted array to the first limit entries before mapping with toContextEntry;
make sure you still dedupe using bufferSnapshot and seen sets.
♻️ Duplicate comments (1)
tests/utils/aiClient.test.js (1)

147-157: ⚠️ Potential issue | 🟡 Minor

This still doesn't exercise the env-var convention it names.

Using minimax here only covers the KNOWN_BASE_URLS fallback in src/utils/aiClient.js, so
a regression in generic <PROVIDER>_API_KEY / <PROVIDER>_BASE_URL resolution would still
pass. Switch this case to a provider that is not in KNOWN_BASE_URLS.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/utils/aiClient.test.js` around lines 147 - 157, The test uses 'minimax'
which hits KNOWN_BASE_URLS instead of exercising the env-var convention; update
the test in aiClient.test.js to use a provider name not listed in
KNOWN_BASE_URLS (e.g., 'acme' or 'someother'), change the model string passed to
generate (e.g., 'acme:SomeModel'), and ensure the test sets the corresponding
environment variables for that provider following the convention (e.g.,
ACME_BASE_URL or ACME_API_KEY) so that generate (and the code path in
src/utils/aiClient.js that resolves provider via env vars) triggers and
mockCreateAnthropic is asserted against the expected baseURL.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.env.example:
- Around line 33-45: The .env.example section implying arbitrary providers is
misleading because createAnthropic (in src/utils/aiClient.js) routes all
provider keys through Anthropic-compatible clients; update the documentation to
narrow this block to only Anthropic-compatible providers (e.g., minimax) or
explicitly state that any custom <PROVIDER>_API_KEY/<PROVIDER>_BASE_URL must be
Anthropic-compatible; locate the env example block and replace the generic
wording and the codex example with either: 1) a concise note that only
Anthropic-compatible backends are supported (mention minimax as an example), or
2) instructions for adding non-Anthropic providers only after adding a
corresponding implementation in src/utils/aiClient.js (reference
createAnthropic) so the docs and code remain consistent.

In `@src/modules/triage-config.js`:
- Line 38: The current defaulting line sets thinkingTokens to 0 which silences
any previously-omitted configs and prevents needsThinking from working; change
the assignment in src/modules/triage-config.js to preserve "missing" rather than
coerce to zero (e.g., let thinkingTokens = triageConfig.thinkingTokens; or use
triageConfig.hasOwnProperty('thinkingTokens') ? triageConfig.thinkingTokens :
undefined) so that resolvedConfig can distinguish omitted values from an
explicit zero; if you must support explicit zero for some providers, gate that
behavior by provider capability or add an explicit migration step instead of
using a global zero fallback.
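A minimal sketch of the "preserve missing" defaulting this prompt asks for (`resolveThinkingTokens` is a hypothetical helper name, not the module's real shape):

```javascript
// Keep `undefined` distinct from an explicit 0 so downstream logic can
// apply its own default when the guild never configured the field.
function resolveThinkingTokens(triageConfig) {
  return Object.hasOwn(triageConfig, 'thinkingTokens')
    ? triageConfig.thinkingTokens
    : undefined;
}
```

With this shape, a config that sets `thinkingTokens: 0` explicitly opts out of thinking, while an omitted field leaves the decision to the resolver's default rather than silently disabling `needsThinking`.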

In `@src/modules/triage.js`:
- Around line 642-650: Summary: cancellations (reason === 'aborted') are being
treated as generic failures and trigger user-visible error paths. Fix: In the
error handling block around evaluateNow(), add a branch that detects
AIClientError with reason === 'aborted' (similar to the existing timeout case)
and short-circuits further generic logging/user-facing messaging—e.g., emit a
warn/debug about the abort (using channelId) and rethrow or return so the
subsequent logError and user-facing message block (which checks for parse) is
not executed. Update the logic that currently checks for timeout to also handle
aborted cancellations before calling logError; reference evaluateNow(),
buf.abortController.abort(), and AIClientError (reason === 'aborted') to find
and change the code.
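The abort short-circuit could look roughly like this as a standalone sketch (`AIClientError`, `handleEvalError`, and the logger shape here are stand-ins for the project's own implementations):

```javascript
class AIClientError extends Error {
  constructor(message, reason) {
    super(message);
    this.name = 'AIClientError';
    this.reason = reason;
  }
}

// Treat deliberate cancellation as expected control flow, not a failure.
function handleEvalError(err, channelId, logger) {
  if (err instanceof AIClientError && err.reason === 'aborted') {
    // e.g. newer messages superseded this run via abortController.abort();
    // log quietly and skip the user-facing error path entirely.
    logger.warn(`Triage evaluation aborted for channel ${channelId}`);
    return 'aborted';
  }
  logger.error(`Triage evaluation failed: ${err.message}`);
  return 'failed';
}
```

The important property is that the `aborted` branch returns before any generic `logError` or user-facing messaging runs.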
- Around line 716-740: classifierConfig and responderConfig are being populated
once in startTriage() with per-guild fields (resolved.classifyModel,
resolved.respondModel, resolved.timeout, resolved.classifyBaseUrl,
resolved.respondBaseUrl, resolved.classifyApiKey, resolved.respondApiKey and the
responder system prompt), which freezes per-guild settings; instead, keep these
objects free of per-request/guild settings and assemble the final config at
evaluation time using evalConfig.triage. Concretely: in startTriage() only store
static pieces (e.g., loaded prompts like loadPrompt('triage-classify-system')
and jsonSchemaAppend from loadPrompt('triage-respond-schema')) and any immutable
defaults, and remove resolved.* fields from classifierConfig/responderConfig;
then, where you construct the request for classification/response (the code that
uses evalConfig.triage), merge the dynamic values
(evalConfig.triage.classifyModel/respondModel, timeout, baseUrl, apiKey, and
botConfig.ai?.systemPrompt or resolved.system) into a fresh config object per
call so each guild/request gets correct runtime settings.
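The per-call assembly the prompt describes can be sketched as follows (field names follow the review comment and are assumptions about the real config shape):

```javascript
// Static pieces (prompts, immutable defaults) are loaded once at startup;
// per-guild settings are merged in fresh on every evaluation.
function buildResponderConfig(staticConfig, evalConfig) {
  return {
    ...staticConfig,
    model: evalConfig.triage.respondModel,
    timeout: evalConfig.triage.timeout,
    baseUrl: evalConfig.triage.respondBaseUrl,
    apiKey: evalConfig.triage.respondApiKey,
  };
}
```

Because the merge happens at request time, a guild changing its model or base URL takes effect on the next evaluation instead of being frozen at `startTriage()`.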

In `@src/utils/aiClient.js`:
- Around line 365-370: The call to calculateCost is synchronous, so remove the
unnecessary await when assigning costUsd in the code that sets costUsd (replace
"const costUsd = await calculateCost(...)" with a direct call) and make the same
change in the stream() call site (the other calculateCost invocation around the
stream handling); reference the calculateCost function and the costUsd variable
to locate both places and ensure you simply call calculateCost(...)
synchronously and assign its return value without awaiting it.
- Around line 445-488: The timing code is placed after withRetry completes so
streamStarted/streamComplete are both set too late; move the assignments into
the async closure around the stream consumption: set timings.streamStarted
immediately after obtaining streamResult from streamText (before awaiting
streamResult.text/usage/finishReason/sources/providerMetadata) and set
timings.streamComplete after all awaits that consume the stream, ensuring
streamConsumeMs measures the await time and streamInitMs measures initiation
latency; adjust the positions inside the withRetry async function (referencing
withRetry, streamText, streamResult, timings.streamStarted,
timings.streamComplete).
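The timing placement the prompt calls for, as a self-contained sketch (the `streamText` stand-in here only mimics the AI SDK's shape of a stream result whose fields resolve asynchronously):

```javascript
// Measure stream initiation separately from stream consumption.
async function timedStream(streamText) {
  const timings = { requestStart: Date.now() };
  const streamResult = await streamText();
  // Mark initiation as soon as the stream object exists...
  timings.streamStarted = Date.now();
  const text = await streamResult.text;
  const usage = await streamResult.usage;
  // ...and completion only after every awaited field is consumed.
  timings.streamComplete = Date.now();
  return {
    text,
    usage,
    streamInitMs: timings.streamStarted - timings.requestStart,
    streamConsumeMs: timings.streamComplete - timings.streamStarted,
  };
}
```

Setting both timestamps after the retry wrapper returns, as the current code does, collapses `streamConsumeMs` to roughly zero and folds consumption time into initiation.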
- Around line 184-188: The current apiKey assignment leaks ANTHROPIC_API_KEY to
non-Anthropic providers; update the logic in the apiKey computation so
ANTHROPIC_API_KEY is only considered when isAnthropic is true. Concretely, use
overrides.apiKey first, then if isAnthropic use process.env.ANTHROPIC_API_KEY,
otherwise use process.env[`${envPrefix}_API_KEY`] (do not have a fallback to
process.env.ANTHROPIC_API_KEY for non-Anthropic cases); adjust the expression
around the apiKey variable and reference isAnthropic, overrides.apiKey, and
envPrefix to implement this conditional-only behavior.
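As a standalone sketch, the conditional-only fallback described above could read (`resolveApiKey` is a hypothetical helper; `overrides`, `isAnthropic`, and `envPrefix` mirror the names in the comment):

```javascript
// Explicit override wins; only Anthropic itself may fall back to
// ANTHROPIC_API_KEY. Other providers must supply <PREFIX>_API_KEY, so a
// configured Anthropic key never leaks to third-party endpoints.
function resolveApiKey(overrides, isAnthropic, envPrefix, env = process.env) {
  if (overrides.apiKey) return overrides.apiKey;
  return isAnthropic ? env.ANTHROPIC_API_KEY : env[`${envPrefix}_API_KEY`];
}
```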

In `@src/utils/aiCost.js`:
- Around line 80-82: The subtraction of cacheTotal from inputTokens (computing
regularInputTokens) is ambiguous; change the function API to accept an explicit
boolean (e.g., inputTokensAreUncached) or normalize input shape so callers must
supply a canonical value; then compute regularInputTokens as either
(inputTokens) when inputTokensAreUncached is true, or Math.max(0, inputTokens -
cacheTotal) when false. Update the logic around regularInputTokens, inputTokens,
and cacheTotal accordingly and adjust all call sites to pass the new flag (or to
supply canonical inputTokens) so cost/budget math no longer depends on caller
convention.
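The disambiguated token math proposed above could be sketched like this (`inputTokensAreUncached` is the hypothetical flag name from the review comment):

```javascript
// When the caller already reports uncached input tokens, use them directly;
// otherwise subtract the cached portion, clamped at zero so mismatched
// provider conventions can never produce a negative billable count.
function regularInputTokens(inputTokens, cacheTotal, inputTokensAreUncached) {
  return inputTokensAreUncached
    ? inputTokens
    : Math.max(0, inputTokens - cacheTotal);
}
```

Making the convention an explicit parameter forces each call site to state which shape its provider reports, instead of the cost math silently guessing.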

---

Outside diff comments:
In `@src/modules/triage-respond.js`:
- Around line 79-103: The merged context can exceed the requested limit because
you fetch history (historyOptions.limit = limit) plus RECENT_LIMIT and then
merge; update the logic in the block around historyOptions, RECENT_LIMIT, merged
and toContextEntry to ensure the final returned array respects the caller's
limit: either reduce historyOptions.limit by RECENT_LIMIT when bufferSnapshot[0]
exists or, after building merged, trim the sorted array to the first limit
entries before mapping with toContextEntry; make sure you still dedupe using
bufferSnapshot and seen sets.

In `@tests/modules/triage-parse.test.js`:
- Around line 187-200: The test for parseRespondResult should assert the
expected response object shape, not just array length: update the spec in the
'should return parsed result on success' case to check that
parseRespondResult(sdkMessage, 'ch1') returns a responses array of length 1 and
that responses[0] contains the camelCase field targetMessageId with value 'm1'
(or that the key exists), and that the response text is present (e.g.,
'response' or expected property); locate the test block using the spec name and
the parseRespondResult reference and replace the loose length-only assertion
with these explicit shape checks to catch schema regressions.

---

Duplicate comments:
In `@tests/utils/aiClient.test.js`:
- Around line 147-157: The test uses 'minimax' which hits KNOWN_BASE_URLS
instead of exercising the env-var convention; update the test in
aiClient.test.js to use a provider name not listed in KNOWN_BASE_URLS (e.g.,
'acme' or 'someother'), change the model string passed to generate (e.g.,
'acme:SomeModel'), and ensure the test sets the corresponding environment
variables for that provider following the convention (e.g., ACME_BASE_URL or
ACME_API_KEY) so that generate (and the code path in src/utils/aiClient.js that
resolves provider via env vars) triggers and mockCreateAnthropic is asserted
against the expected baseURL.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: 62fda2d6-677c-4666-9e7f-e384ec11c726

📥 Commits

Reviewing files that changed from the base of the PR and between cf1e1d3 and bfbd078.

⛔ Files ignored due to path filters (1)
  • pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (14)
  • .env.example
  • .github/copilot-instructions.md
  • docs/test-coverage-analysis.md
  • package.json
  • src/modules/triage-config.js
  • src/modules/triage-respond.js
  • src/modules/triage.js
  • src/utils/aiClient.js
  • src/utils/aiCost.js
  • src/utils/errors.js
  • tests/modules/triage-parse.test.js
  • tests/utils/aiClient.test.js
  • tests/utils/errors.test.js
  • web/src/types/config.ts
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Greptile Review
🧰 Additional context used
📓 Path-based instructions (13)
**/package.json

📄 CodeRabbit inference engine (AGENTS.md)

Use the latest version of ALL package.json dependencies; dependencies should always be upgraded to the latest version before committing

Files:

  • package.json
**/*.{js,ts,jsx,tsx,json}

📄 CodeRabbit inference engine (AGENTS.md)

Welcome-message variables use double braces only, like {{user}}; single braces are plain text and should not be documented, inserted, or parsed as variables

Files:

  • package.json
  • src/modules/triage-respond.js
  • src/modules/triage-config.js
  • web/src/types/config.ts
  • tests/modules/triage-parse.test.js
  • tests/utils/errors.test.js
  • src/utils/aiCost.js
  • src/modules/triage.js
  • src/utils/errors.js
  • src/utils/aiClient.js
  • tests/utils/aiClient.test.js
**/*.{js,ts,jsx,tsx,mjs,mts}

📄 CodeRabbit inference engine (AGENTS.md)

Use ESM only; do not use CommonJS

Files:

  • src/modules/triage-respond.js
  • src/modules/triage-config.js
  • web/src/types/config.ts
  • tests/modules/triage-parse.test.js
  • tests/utils/errors.test.js
  • src/utils/aiCost.js
  • src/modules/triage.js
  • src/utils/errors.js
  • src/utils/aiClient.js
  • tests/utils/aiClient.test.js
**/*.{js,ts,jsx,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

**/*.{js,ts,jsx,tsx}: Use src/logger.js; do not use console.*
Use the safe Discord messaging helpers in src/utils/safeSend.js instead of raw reply/send/edit calls
Use parameterized SQL only; do not use string concatenation for SQL queries
Community features should be gated behind config..enabled; moderation commands are the exception

Files:

  • src/modules/triage-respond.js
  • src/modules/triage-config.js
  • web/src/types/config.ts
  • tests/modules/triage-parse.test.js
  • tests/utils/errors.test.js
  • src/utils/aiCost.js
  • src/modules/triage.js
  • src/utils/errors.js
  • src/utils/aiClient.js
  • tests/utils/aiClient.test.js
src/**/*.js

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

Use ESM syntax (import/export) in JavaScript files; never use require() or module.exports

Never use console.log(), console.error(), or other console.* methods; use the Winston logger instead via import logger from '../logger.js'

Use generic cache utilities from src/utils/cache.js for Redis primary + in-memory fallback caching

Use src/utils/discordCache.js for caching Discord API responses (channels, roles, members)

Use src/utils/reputationCache.js for caching leaderboard and user reputation data

Auto-invalidate all caches when config changes occur

Files:

  • src/modules/triage-respond.js
  • src/modules/triage-config.js
  • src/utils/aiCost.js
  • src/modules/triage.js
  • src/utils/errors.js
  • src/utils/aiClient.js
{src,web/src}/**/*.{js,ts,tsx}

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

Use single quotes for strings; never use double quotes (except in JSON files)

Always include semicolons at the end of statements

Use 2-space indentation with spaces, never tabs

Always use trailing commas in multi-line arrays and objects

Limit lines to 100 characters or less

Files:

  • src/modules/triage-respond.js
  • src/modules/triage-config.js
  • web/src/types/config.ts
  • src/utils/aiCost.js
  • src/modules/triage.js
  • src/utils/errors.js
  • src/utils/aiClient.js
{src/**/*.js,migrations/**/*.cjs}

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

Always use parameterized queries for database operations; never use string interpolation in SQL

Files:

  • src/modules/triage-respond.js
  • src/modules/triage-config.js
  • src/utils/aiCost.js
  • src/modules/triage.js
  • src/utils/errors.js
  • src/utils/aiClient.js
{src/commands/**/*.js,src/modules/**/*.js}

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

Always use safeReply(), safeSend(), or safeEditReply() instead of raw Discord.js methods when sending messages

Files:

  • src/modules/triage-respond.js
  • src/modules/triage-config.js
  • src/modules/triage.js
{src/modules/**/*.js,src/commands/**/*.js}

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

Gate all community features behind config.<feature>.enabled checks; moderation commands are always available

Files:

  • src/modules/triage-respond.js
  • src/modules/triage-config.js
  • src/modules/triage.js
src/modules/**/*.js

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

Create new bot feature modules in src/modules/ following the module pattern

Files:

  • src/modules/triage-respond.js
  • src/modules/triage-config.js
  • src/modules/triage.js
web/src/**/*.{ts,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

web/src/**/*.{ts,tsx}: New dashboard routes need title wiring in web/src/lib/page-titles.ts; use createPageMetadata() for SSR and keep DashboardTitleSync aligned for client navigation
Dashboard clients that need the guild list should consume GuildDirectoryProvider; do not stack extra /api/guilds fetch loops in leaf components
Recharts dashboard views should use web/src/components/ui/stable-responsive-container.tsx; raw ResponsiveContainer mounts can spam width(-1)/height(-1) warnings when panels render before layout settles
Any visual dashboard or landing page change must be verified with Chrome DevTools MCP before calling it done; take a screenshot after the change
Check both light and dark themes if colors or theming changed
Check responsive behavior on mobile, tablet, and desktop if layout changed

Files:

  • web/src/types/config.ts
web/src/**/*.{ts,tsx,css,scss}

📄 CodeRabbit inference engine (AGENTS.md)

Follow the design system when making changes to the UI/UX; see DESIGN.md for the design system and color palette

Files:

  • web/src/types/config.ts
tests/**/*.test.js

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

Place bot test files in tests/ mirroring the src/ directory structure

Maintain minimum code coverage thresholds: statements 85%, branches 82%, functions 85%, lines 85% for bot tests; 85% across all metrics for web tests

Use Vitest 4 with vi.mock() for module mocking and vi.fn() for function stubs; do not use other testing frameworks

Files:

  • tests/modules/triage-parse.test.js
  • tests/utils/errors.test.js
  • tests/utils/aiClient.test.js
🧠 Learnings (11)
📓 Common learnings
Learnt from: CR
URL: 
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-13T08:32:41.892Z
Learning: Use the pnpm package manager exclusively; never use npm or yarn
Learnt from: CR
URL: 
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-13T08:32:41.892Z
Learning: Run `pnpm lint` before and after making changes to verify code style and formatting; use `pnpm lint:fix` to auto-fix issues
Learnt from: CR
URL: 
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-13T08:32:41.892Z
Learning: Run `pnpm test:coverage` before pushing to main to verify test coverage thresholds are met
Learnt from: CR
URL: 
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-13T08:32:41.892Z
Learning: Use Node.js 22 or higher; the project enforces this via the `engines` field in `package.json`
Learnt from: CR
URL: 
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-13T08:32:41.892Z
Learning: Use Conventional Commits for commit messages with prefixes: `feat:`, `fix:`, `chore:`, `docs:`, `refactor:`, `style:`, `test:`, `ci:`
Learnt from: CR
URL: 
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-13T08:32:41.892Z
Learning: Use branch naming conventions: `feat/`, `fix/`, `chore/`, `docs/`, `refactor/`, `style/`, `test/`, `ci/` prefixes
Learnt from: CR
URL: 
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-13T08:32:41.892Z
Learning: Use `pnpm changeset` to create release notes for user-facing changes before merging to main
Learnt from: CR
URL: 
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-13T08:32:41.892Z
Learning: Use PostgreSQL 17+ with node-pg-migrate for database migrations; always create migrations with `pnpm migrate:create`
Learnt from: CR
URL: 
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-13T08:32:41.892Z
Learning: Use Redis 7+ (ioredis) for caching with automatic fallback to in-memory caching if Redis is unavailable
Learnt from: CR
URL: 
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-13T08:32:41.892Z
Learning: Use the Vercel AI SDK (`ai`, `ai-sdk/anthropic`) for Claude AI features
Learnt from: CR
URL: 
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-13T08:32:41.892Z
Learning: Use discord.js v14 for bot interactions and Discord API calls
Learnt from: CR
URL: 
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-13T08:32:41.892Z
Learning: Use Next.js 16, React 19, and Tailwind CSS 4 for the web dashboard
Learnt from: CR
URL: 
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-13T08:32:41.892Z
Learning: Use Biome 2 as the linter and formatter; configuration is in `biome.json`
Learnt from: CR
URL: 
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-13T08:32:41.892Z
Learning: CI pipeline must pass all checks: Biome lint, Vitest coverage, web typecheck, lint, and build, plus Docker build validation
Learnt from: CR
URL: 
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-13T08:32:41.892Z
Learning: Biome linting applies to `src/**/*.js` (bot) and `web/src/**/*.{ts,tsx}` (web); run `pnpm lint` from root to check both
Learnt from: CR
URL: 
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-13T08:32:41.892Z
Learning: Read this coding instructions file at the start of every session to understand project conventions
Learnt from: CR
URL: 
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-13T08:32:41.892Z
Learning: Run `pnpm lint` and `pnpm test` before making changes to see the existing project state
Learnt from: CR
URL: 
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-13T08:32:41.892Z
Learning: Run `pnpm lint` and `pnpm test:coverage` after making changes to verify nothing breaks
📚 Learning: 2026-04-09T20:31:03.263Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-04-09T20:31:03.263Z
Learning: Applies to **/package.json : Use the latest version of ALL package.json dependencies; dependencies should always be upgraded to the latest version before committing

Applied to files:

  • package.json
📚 Learning: 2026-04-09T20:31:03.264Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-04-09T20:31:03.264Z
Learning: Use Node 22+ and LATEST version of pnpm

Applied to files:

  • package.json
📚 Learning: 2026-03-12T02:03:36.493Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-03-12T02:03:36.493Z
Learning: Applies to tests/**/*.test.js : Write bot tests using Vitest 4 with the `node` environment, matching the `src/` structure in the `tests/` directory

Applied to files:

  • .github/copilot-instructions.md
  • tests/utils/errors.test.js
  • tests/utils/aiClient.test.js
📚 Learning: 2026-03-12T02:03:36.493Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-03-12T02:03:36.493Z
Learning: Applies to web/tests/**/*.test.{ts,tsx} : Write web dashboard tests using Vitest 4 with the `jsdom` environment and React Testing Library, matching the `web/src/` structure

Applied to files:

  • .github/copilot-instructions.md
  • tests/utils/errors.test.js
  • tests/utils/aiClient.test.js
📚 Learning: 2026-03-12T02:03:36.493Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-03-12T02:03:36.493Z
Learning: Applies to tests/**/*.test.js : Maintain test coverage thresholds: statements 85%, branches 82%, functions 85%, lines 85%; never lower thresholds—add tests to cover new code instead

Applied to files:

  • .github/copilot-instructions.md
  • docs/test-coverage-analysis.md
📚 Learning: 2026-03-12T02:03:36.493Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-03-12T02:03:36.493Z
Learning: Applies to web/tests/**/*.test.{ts,tsx} : Maintain test coverage thresholds of 85% across all metrics (statements, branches, functions, lines) for web dashboard tests

Applied to files:

  • docs/test-coverage-analysis.md
📚 Learning: 2026-03-12T02:03:36.493Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-03-12T02:03:36.493Z
Learning: Applies to config.json : All community features in `config.json` should be gated behind a `config.<feature>.enabled` flag; moderation commands are always available regardless of config settings

Applied to files:

  • web/src/types/config.ts
📚 Learning: 2026-04-09T20:31:03.263Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-04-09T20:31:03.263Z
Learning: Applies to **/*.{js,ts,jsx,tsx} : Community features should be gated behind config.<feature>.enabled; moderation commands are the exception

Applied to files:

  • web/src/types/config.ts
📚 Learning: 2026-03-12T02:03:36.493Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-03-12T02:03:36.493Z
Learning: Applies to **/*.js : Use ESM-only syntax: `import`/`export`, never `require()`/`module.exports`

Applied to files:

  • tests/utils/errors.test.js
📚 Learning: 2026-04-09T20:31:03.263Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-04-09T20:31:03.263Z
Learning: Applies to **/*.{js,ts,jsx,tsx,mjs,mts} : Use ESM only; do not use CommonJS

Applied to files:

  • tests/utils/errors.test.js
🪛 GitHub Check: CodeQL
src/utils/aiClient.js

[failure] 173-173: Use of password hash with insufficient computational effort
Password from an access to apiKey is hashed insecurely.

🪛 GitHub Check: SonarCloud Code Analysis
src/modules/triage.js

[failure] 436-436: Refactor this function to reduce its Cognitive Complexity from 48 to the 15 allowed.

See more on https://sonarcloud.io/project/issues?id=VolvoxLLC_volvox-bot&issues=AZ2EEuVRkud7vHWqFg8x&open=AZ2EEuVRkud7vHWqFg8x&pullRequest=479


[failure] 521-527: Unexpected await of a non-Promise (non-"Thenable") value.

See more on https://sonarcloud.io/project/issues?id=VolvoxLLC_volvox-bot&issues=AZ2EEuVRkud7vHWqFg8y&open=AZ2EEuVRkud7vHWqFg8y&pullRequest=479

src/utils/aiClient.js

[warning] 199-201: Extract this nested ternary operation into an independent statement.

See more on https://sonarcloud.io/project/issues?id=VolvoxLLC_volvox-bot&issues=AZ2F9_R2sH2ptetjjmqF&open=AZ2F9_R2sH2ptetjjmqF&pullRequest=479


[warning] 559-559: Remove the declaration of the unused 'providerName' variable.

See more on https://sonarcloud.io/project/issues?id=VolvoxLLC_volvox-bot&issues=AZ2EEuS3kud7vHWqFg8u&open=AZ2EEuS3kud7vHWqFg8u&pullRequest=479


[warning] 249-249: knownTools should be a Set, and use knownTools.has() to check existence or non-existence.

See more on https://sonarcloud.io/project/issues?id=VolvoxLLC_volvox-bot&issues=AZ2EEuS3kud7vHWqFg8r&open=AZ2EEuS3kud7vHWqFg8r&pullRequest=479


[warning] 559-559: Remove this useless assignment to variable "providerName".

See more on https://sonarcloud.io/project/issues?id=VolvoxLLC_volvox-bot&issues=AZ2EEuS3kud7vHWqFg8v&open=AZ2EEuS3kud7vHWqFg8v&pullRequest=479


[warning] 94-94: Prefer Number.parseFloat over parseFloat.

See more on https://sonarcloud.io/project/issues?id=VolvoxLLC_volvox-bot&issues=AZ2EEuS3kud7vHWqFg8p&open=AZ2EEuS3kud7vHWqFg8p&pullRequest=479


[failure] 493-498: Unexpected await of a non-Promise (non-"Thenable") value.

See more on https://sonarcloud.io/project/issues?id=VolvoxLLC_volvox-bot&issues=AZ2EEuS3kud7vHWqFg8t&open=AZ2EEuS3kud7vHWqFg8t&pullRequest=479


[failure] 365-370: Unexpected await of a non-Promise (non-"Thenable") value.

See more on https://sonarcloud.io/project/issues?id=VolvoxLLC_volvox-bot&issues=AZ2EEuS3kud7vHWqFg8s&open=AZ2EEuS3kud7vHWqFg8s&pullRequest=479

🔇 Additional comments (3)
tests/modules/triage-parse.test.js (3)

61-68: Nice coverage improvement for fenced JSON parsing.

This case is strong: it validates leading-whitespace fence stripping and asserts a nested payload field, which protects the parser behavior well.


107-165: Classifier-path test updates look solid.

Good migration to sdkMessage.text fixtures, plus explicit assertions for needsThinking and needsSearch in the success path.


173-185: Good null/empty guards for responder parsing.

These tests cover both null and '' input paths and lock in expected null behavior cleanly.

Also applies to: 217-229

Comment thread .env.example
  const tokenRecycleLimit = triageConfig.tokenRecycleLimit ?? 20000;
- const thinkingTokens = triageConfig.thinkingTokens ?? 4096;
  const streaming = triageConfig.streaming ?? false;
+ const thinkingTokens = triageConfig.thinkingTokens ?? 0;

⚠️ Potential issue | 🟠 Major

Don't make needsThinking a silent no-op for configs that omit thinkingTokens.

src/modules/triage.js only enables thinking when resolvedConfig.thinkingTokens > 0. With a zero fallback here, any existing guild config that never set thinkingTokens will stop honoring the classifier's needsThinking signal after upgrade. If zero-thinking is required for some providers, gate it by capability or migrate the setting explicitly instead of changing the shared default.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/modules/triage-config.js` at line 38, The current defaulting line sets
thinkingTokens to 0 which silences any previously-omitted configs and prevents
needsThinking from working; change the assignment in
src/modules/triage-config.js to preserve "missing" rather than coerce to zero
(e.g., let thinkingTokens = triageConfig.thinkingTokens; or use
triageConfig.hasOwnProperty('thinkingTokens') ? triageConfig.thinkingTokens :
undefined) so that resolvedConfig can distinguish omitted values from an
explicit zero; if you must support explicit zero for some providers, gate that
behavior by provider capability or add an explicit migration step instead of
using a global zero fallback.
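The distinction the comment draws (omitted key vs. explicit zero) can be sketched as follows; DEFAULT_THINKING_TOKENS and resolveThinkingTokens are illustrative names, not the actual triage-config.js code:

```javascript
const DEFAULT_THINKING_TOKENS = 4096; // assumed prior shared default

function resolveThinkingTokens(triageConfig) {
  // Only a value the guild actually set (including an explicit 0) overrides
  // the default; an omitted key keeps needsThinking working after upgrade.
  return Object.hasOwn(triageConfig, 'thinkingTokens')
    ? triageConfig.thinkingTokens
    : DEFAULT_THINKING_TOKENS;
}

console.log(resolveThinkingTokens({}));                    // 4096 (key omitted)
console.log(resolveThinkingTokens({ thinkingTokens: 0 })); // 0 (explicit opt-out)
```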

Comment thread src/modules/triage.js Outdated
Comment thread src/modules/triage.js Outdated
Comment thread src/utils/aiClient.js Outdated
Comment thread src/utils/aiClient.js Outdated
Comment thread src/utils/aiClient.js
Comment on lines +445 to +488
try {
  const result = await withRetry(
    async () => {
      const streamResult = streamText({
        model,
        system,
        prompt,
        ...(hasTools && { tools, stopWhen: stepCountIs(5) }),
        maxTokens,
        providerOptions,
        abortSignal: signal,
        onChunk: ({ chunk }) => {
          if (chunk.type === 'tool-call' && userOnChunk) {
            try {
              const cbResult = userOnChunk(chunk.toolName, chunk.args);
              if (cbResult?.catch)
                cbResult.catch((err) =>
                  logError('onChunk callback error', { error: err.message }),
                );
            } catch (err) {
              logError('onChunk callback error (sync)', { error: err.message });
            }
          }
        },
      });

      const text = await streamResult.text;
      const usage = (await streamResult.totalUsage) ?? (await streamResult.usage) ?? {};
      const finishReason = await streamResult.finishReason;
      const sources = (await streamResult.sources) ?? [];
      const providerMetadata = (await streamResult.providerMetadata) ?? {};

      return {
        text,
        usage,
        finishReason,
        sources,
        providerMetadata,
      };
    },
    { maxRetries: 3, signal },
  );
  timings.streamStarted = Date.now();
  timings.streamComplete = Date.now();

⚠️ Potential issue | 🟠 Major

The stream latency breakdown is currently wrong.

timings.streamStarted and timings.streamComplete are both assigned after withRetry()
returns, i.e. after the whole stream has already been consumed. That makes streamConsumeMs
effectively 0 and rolls the full provider latency into streamInitMs, so the new timing
logs are misleading.

📈 Minimal fix
   try {
     const result = await withRetry(
       async () => {
+        timings.streamStarted = Date.now();
         const streamResult = streamText({
           model,
           system,
           prompt,
           ...(hasTools && { tools, stopWhen: stepCountIs(5) }),
@@
         const usage = (await streamResult.totalUsage) ?? (await streamResult.usage) ?? {};
         const finishReason = await streamResult.finishReason;
         const sources = (await streamResult.sources) ?? [];
         const providerMetadata = (await streamResult.providerMetadata) ?? {};
+        timings.streamComplete = Date.now();

         return {
           text,
           usage,
           finishReason,
@@
       },
       { maxRetries: 3, signal },
     );
-    timings.streamStarted = Date.now();
-    timings.streamComplete = Date.now();
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

Before:

try {
  const result = await withRetry(
    async () => {
      const streamResult = streamText({
        model,
        system,
        prompt,
        ...(hasTools && { tools, stopWhen: stepCountIs(5) }),
        maxTokens,
        providerOptions,
        abortSignal: signal,
        onChunk: ({ chunk }) => {
          if (chunk.type === 'tool-call' && userOnChunk) {
            try {
              const cbResult = userOnChunk(chunk.toolName, chunk.args);
              if (cbResult?.catch)
                cbResult.catch((err) =>
                  logError('onChunk callback error', { error: err.message }),
                );
            } catch (err) {
              logError('onChunk callback error (sync)', { error: err.message });
            }
          }
        },
      });
      const text = await streamResult.text;
      const usage = (await streamResult.totalUsage) ?? (await streamResult.usage) ?? {};
      const finishReason = await streamResult.finishReason;
      const sources = (await streamResult.sources) ?? [];
      const providerMetadata = (await streamResult.providerMetadata) ?? {};
      return {
        text,
        usage,
        finishReason,
        sources,
        providerMetadata,
      };
    },
    { maxRetries: 3, signal },
  );
  timings.streamStarted = Date.now();
  timings.streamComplete = Date.now();

After:

try {
  const result = await withRetry(
    async () => {
      timings.streamStarted = Date.now();
      const streamResult = streamText({
        model,
        system,
        prompt,
        ...(hasTools && { tools, stopWhen: stepCountIs(5) }),
        maxTokens,
        providerOptions,
        abortSignal: signal,
        onChunk: ({ chunk }) => {
          if (chunk.type === 'tool-call' && userOnChunk) {
            try {
              const cbResult = userOnChunk(chunk.toolName, chunk.args);
              if (cbResult?.catch)
                cbResult.catch((err) =>
                  logError('onChunk callback error', { error: err.message }),
                );
            } catch (err) {
              logError('onChunk callback error (sync)', { error: err.message });
            }
          }
        },
      });
      const text = await streamResult.text;
      const usage = (await streamResult.totalUsage) ?? (await streamResult.usage) ?? {};
      const finishReason = await streamResult.finishReason;
      const sources = (await streamResult.sources) ?? [];
      const providerMetadata = (await streamResult.providerMetadata) ?? {};
      timings.streamComplete = Date.now();
      return {
        text,
        usage,
        finishReason,
        sources,
        providerMetadata,
      };
    },
    { maxRetries: 3, signal },
  );
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/aiClient.js` around lines 445 - 488, The timing code is placed
after withRetry completes so streamStarted/streamComplete are both set too late;
move the assignments into the async closure around the stream consumption: set
timings.streamStarted immediately after obtaining streamResult from streamText
(before awaiting streamResult.text/usage/finishReason/sources/providerMetadata)
and set timings.streamComplete after all awaits that consume the stream,
ensuring streamConsumeMs measures the await time and streamInitMs measures
initiation latency; adjust the positions inside the withRetry async function
(referencing withRetry, streamText, streamResult, timings.streamStarted,
timings.streamComplete).

Comment thread src/utils/aiCost.js Outdated
Volvox Bot added 3 commits April 17, 2026 03:00
- P0 #2: remove ANTHROPIC_API_KEY fallback for non-Anthropic providers.
  Previously a MiniMax (or any unknown) provider without MINIMAX_API_KEY
  would silently reuse the Anthropic key against a third-party endpoint.
  Now: loud-fail at resolveModel with AIClientError('api') naming the
  missing <PROVIDER>_API_KEY env var.
- Decision A: replace SHA-256 provider cache key (flagged by CodeQL as
  "insecure password hash") with a plain \x00-joined string. The cache
  lives in-process and is never serialized — hashing was never needed.
- Decision F: lock aiCost contract to "inputTokens is ALWAYS the total
  input count reported by the SDK, including cache-read and cache-write
  tokens." Clamp at 0 to defend against provider metadata mismatches.
- Safe #1: drop stale `await` on synchronous `calculateCost()` in both
  generate() and stream() (SonarCloud flag).
- Safe #2: normalize `err.status ?? err.statusCode` everywhere so
  errors thrown with either field expose the correct retry-after status.
- Safe #3: drop unused `providerName` from the `warmConnection`
  destructure.
- Tests: add beforeEach env-var setup for ANTHROPIC_API_KEY across all
  describe blocks; wrap minimax tests in try/finally with
  MINIMAX_API_KEY; flip sync calculateCost mocks to `mockReturnValue`;
  replace the obsolete "fall back to ANTHROPIC_API_KEY" test with a
  loud-fail assertion.
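The "Safe #2" normalization above is small enough to sketch; the 429 check is illustrative, not the actual retry predicate:

```javascript
// Different HTTP layers surface the status under `status` or `statusCode`;
// read both so retry-after handling sees the real code either way.
function httpStatus(err) {
  return err.status ?? err.statusCode;
}

console.log(httpStatus({ statusCode: 429 })); // 429
console.log(httpStatus({ status: 500 }));     // 500
```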
- P0 #5: classifier/responder config was frozen at startTriage() time,
  so per-guild overrides for model/timeout/apiKey/baseUrl and the
  responder's system prompt never took effect until a full restart.
  Replace the frozen module objects with prompt-only bindings and two
  `buildClassifierConfig(evalConfig)` / `buildResponderConfig(evalConfig)`
  helpers that pick up per-guild overrides on every evaluation.
- P0 #3: treat `AIClientError.reason === 'aborted'` the same as
  `'timeout'` — both are routine (superseded eval or provider cut off),
  must NOT log as errors or surface "Sorry, I'm having trouble thinking"
  to the channel. evaluateAndRespond now warns once with
  `{ channelId, reason }` before rethrowing; evaluateNow silently
  returns for both reasons.
- Safe #6: update triage-parse.js JSDoc from "Raw CLI result message"
  to "Vercel AI SDK result object".
- Tests: two new parseClassifyResult cases covering non-boolean
  `needsThinking`/`needsSearch` coercion and absent-field defaults
  (Safe #11); new coverage test for the `aborted` reason path;
  existing timeout test updated for the new structured log shape.
Safe #15: the AI client routes ALL providers through `createAnthropic`,
so non-Anthropic-compatible APIs (OpenAI-native, Mistral-native, etc.)
cannot be wired in through the <PROVIDER>_API_KEY convention. Document
this explicitly.

Also call out that there is NO fallback to ANTHROPIC_API_KEY for
non-Anthropic providers — missing a provider-specific key is a hard
failure. This mirrors the aiClient loud-fail behavior added in the
credential-leak fix.

Drop the fake CODEX example (would not actually work) and replace it
with a generic Anthropic-compatible third-party placeholder.
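A hedged sketch of the <PROVIDER>_API_KEY contract described above; the real resolveModel in aiClient.js throws AIClientError('api'), and this stand-in only shows the loud-fail shape:

```javascript
// provider:model resolves to <PROVIDER>_API_KEY, with no ANTHROPIC_API_KEY
// fallback for foreign providers. Error type and message are illustrative.
function resolveProviderKey(modelString, env) {
  const [provider] = modelString.split(':');
  const envVar = `${provider.toUpperCase()}_API_KEY`;
  const key = env[envVar];
  if (!key) {
    throw new Error(`Missing ${envVar} for provider "${provider}"`);
  }
  return key;
}

console.log(resolveProviderKey('minimax:MiniMax-M2.7', { MINIMAX_API_KEY: 'sk-test' })); // 'sk-test'
```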

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/modules/triage.js (1)

699-712: 🧹 Nitpick | 🔵 Trivial

Minor: reuse fetchChannelCached in the error-message fallback path.

Every other channel lookup in this module goes through fetchChannelCached(evalClient, channelId); the raw evalClient.channels.fetch(channelId) on L702 bypasses the cache on what is already a slow/unhappy path. Switching it keeps the caching boundary consistent and honors the shared-helper convention used elsewhere in triage.

♻️ Suggested change
-        const channel = await evalClient.channels.fetch(channelId).catch(() => null);
+        const channel = await fetchChannelCached(evalClient, channelId);
         if (channel) {
           await safeSend(
             channel,
             "Sorry, I'm having trouble thinking right now. Try again in a moment!",
           );
         }

As per coding guidelines src/**/*.js: Use safeReply()/safeSend()/safeEditReply() instead of raw Discord.js methods for message operations to handle errors gracefully — the safeSend call is already compliant; this is just about the channel-fetch half of the pair.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/modules/triage.js` around lines 699 - 712, The fallback error path is
bypassing the module's caching helper by calling
evalClient.channels.fetch(channelId); replace that call with the shared helper
fetchChannelCached(evalClient, channelId) so the cache is honored and the same
null-on-failure semantics are preserved, then pass the returned channel into the
existing safeSend call (the surrounding code references fetchChannelCached,
evalClient.channels.fetch, and safeSend).
♻️ Duplicate comments (1)
src/utils/aiClient.js (1)

457-521: ⚠️ Potential issue | 🟠 Major

Stream timing breakdown is still wrong — streamStarted / streamComplete are set after withRetry resolves.

timings.streamStarted and timings.streamComplete are both assigned on lines 499–500, i.e. after await withRetry(...) has already consumed the entire stream. That means streamConsumeMs ≈ 0 and streamInitMs silently absorbs the full provider latency — the debug log on lines 513–521 is misleading in exactly the way flagged previously. The fix is to move these assignments inside the withRetry closure (before the first await streamResult.text and after the last await).

📈 Proposed fix
     const result = await withRetry(
       async () => {
         const streamResult = streamText({
           model,
           system,
           prompt,
           ...(hasTools && { tools, stopWhen: stepCountIs(5) }),
           maxTokens,
           providerOptions,
           abortSignal: signal,
           onChunk: ({ chunk }) => { /* ... */ },
         });

+        timings.streamStarted = Date.now();
         const text = await streamResult.text;
         const usage = (await streamResult.totalUsage) ?? (await streamResult.usage) ?? {};
         const finishReason = await streamResult.finishReason;
         const sources = (await streamResult.sources) ?? [];
         const providerMetadata = (await streamResult.providerMetadata) ?? {};
+        timings.streamComplete = Date.now();

         return { text, usage, finishReason, sources, providerMetadata };
       },
       { maxRetries: 3, signal },
     );
-    timings.streamStarted = Date.now();
-    timings.streamComplete = Date.now();

Note: on retry, the closure runs more than once so streamStarted will reflect the last attempt — that is already the behavior you want for "time spent on the successful attempt."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/aiClient.js` around lines 457 - 521, The stream timing markers are
set after withRetry resolves, so streamConsumeMs is effectively zero; move the
assignments for timings.streamStarted and timings.streamComplete into the
withRetry closure around the actual streaming awaits: set timings.streamStarted
= Date.now() immediately before the first await (await streamResult.text) and
set timings.streamComplete = Date.now() after the final awaited properties
(after await streamResult.finishReason / sources / providerMetadata) inside the
async closure passed to withRetry so they reflect the duration of the successful
attempt (keep timings declared as they are so retries update the markers for the
last attempt).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/utils/aiClient.js`:
- Around line 93-98: Replace the use of the global parseFloat when computing
retryAfter with Number.parseFloat to satisfy SonarCloud's preference;
specifically update the expression that computes retryAfter from
(parseFloat(err.headers?.['retry-after']) || 0) * 1000 to use Number.parseFloat
so the retryAfter variable calculation (near the exponential/jitter/delay logic)
uses Number.parseFloat(err.headers?.['retry-after']) || 0.

---

Outside diff comments:
In `@src/modules/triage.js`:
- Around line 699-712: The fallback error path is bypassing the module's caching
helper by calling evalClient.channels.fetch(channelId); replace that call with
the shared helper fetchChannelCached(evalClient, channelId) so the cache is
honored and the same null-on-failure semantics are preserved, then pass the
returned channel into the existing safeSend call (the surrounding code
references fetchChannelCached, evalClient.channels.fetch, and safeSend).

---

Duplicate comments:
In `@src/utils/aiClient.js`:
- Around line 457-521: The stream timing markers are set after withRetry
resolves, so streamConsumeMs is effectively zero; move the assignments for
timings.streamStarted and timings.streamComplete into the withRetry closure
around the actual streaming awaits: set timings.streamStarted = Date.now()
immediately before the first await (await streamResult.text) and set
timings.streamComplete = Date.now() after the final awaited properties (after
await streamResult.finishReason / sources / providerMetadata) inside the async
closure passed to withRetry so they reflect the duration of the successful
attempt (keep timings declared as they are so retries update the markers for the
last attempt).

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: b36346fb-87ba-4e4f-8b8f-421bdc1ff75a

📥 Commits

Reviewing files that changed from the base of the PR and between bfbd078 and 2252aa9.

📒 Files selected for processing (8)
  • .env.example
  • src/modules/triage-parse.js
  • src/modules/triage.js
  • src/utils/aiClient.js
  • src/utils/aiCost.js
  • tests/modules/triage-parse.test.js
  • tests/modules/triage.coverage.test.js
  • tests/utils/aiClient.test.js
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: Greptile Review
  • GitHub Check: Analyze (javascript-typescript)
🧰 Additional context used
📓 Path-based instructions (9)
**/*.{js,ts,jsx,tsx,mjs,mts}

📄 CodeRabbit inference engine (AGENTS.md)

Use ESM only; do not use CommonJS

Files:

  • tests/modules/triage.coverage.test.js
  • tests/modules/triage-parse.test.js
  • src/modules/triage-parse.js
  • src/modules/triage.js
  • src/utils/aiCost.js
  • tests/utils/aiClient.test.js
  • src/utils/aiClient.js
**/*.{js,ts,jsx,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

**/*.{js,ts,jsx,tsx}: Use src/logger.js; do not use console.*
Use the safe Discord messaging helpers in src/utils/safeSend.js instead of raw reply/send/edit calls
Use parameterized SQL only; do not use string concatenation for SQL queries
Community features should be gated behind config.<feature>.enabled; moderation commands are the exception

Files:

  • tests/modules/triage.coverage.test.js
  • tests/modules/triage-parse.test.js
  • src/modules/triage-parse.js
  • src/modules/triage.js
  • src/utils/aiCost.js
  • tests/utils/aiClient.test.js
  • src/utils/aiClient.js
**/*.{js,ts,jsx,tsx,json}

📄 CodeRabbit inference engine (AGENTS.md)

Welcome-message variables use double braces only, like {{user}}; single braces are plain text and should not be documented, inserted, or parsed as variables

Files:

  • tests/modules/triage.coverage.test.js
  • tests/modules/triage-parse.test.js
  • src/modules/triage-parse.js
  • src/modules/triage.js
  • src/utils/aiCost.js
  • tests/utils/aiClient.test.js
  • src/utils/aiClient.js
**/*.{js,cjs}

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

Use ESM syntax (import/export) in all JavaScript files. Database migrations use .cjs extension due to node-pg-migrate ESM incompatibility.

Files:

  • tests/modules/triage.coverage.test.js
  • tests/modules/triage-parse.test.js
  • src/modules/triage-parse.js
  • src/modules/triage.js
  • src/utils/aiCost.js
  • tests/utils/aiClient.test.js
  • src/utils/aiClient.js
**/*.{js,ts,tsx}

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

Use single quotes instead of double quotes (except in JSON files)

Always include semicolons at the end of statements

Use 2-space indentation with spaces, not tabs

Always use trailing commas in multi-line constructs

Keep lines to a maximum of 100 characters

Never use console.* methods (console.log, console.error, etc.). Use Winston logger instead: import logger from '../logger.js'; and call logger.info(), logger.warn(), logger.error(), logger.debug()

Files:

  • tests/modules/triage.coverage.test.js
  • tests/modules/triage-parse.test.js
  • src/modules/triage-parse.js
  • src/modules/triage.js
  • src/utils/aiCost.js
  • tests/utils/aiClient.test.js
  • src/utils/aiClient.js
tests/**/*.test.js

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

Bot test files must be JavaScript (ESM) and match the src/ structure. Use Vitest with vi.mock() for module mocking and vi.fn() for function stubs. Test timeout is 10 seconds.

Files:

  • tests/modules/triage.coverage.test.js
  • tests/modules/triage-parse.test.js
  • tests/utils/aiClient.test.js
**/*.test.{js,ts,tsx}

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

Maintain test coverage thresholds: Bot tests require 85% statements, 82% branches, 85% functions, 85% lines. Web tests require 85% across all metrics. Do not lower thresholds; add tests instead.

Files:

  • tests/modules/triage.coverage.test.js
  • tests/modules/triage-parse.test.js
  • tests/utils/aiClient.test.js
src/**/*.js

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

All non-migration JavaScript files must use ESM with import/export syntax, never require() or module.exports

Import the Winston logger with import logger from '../logger.js'; (adjust relative path as needed) and use methods: logger.info(), logger.warn(), logger.error(), logger.debug()

Use safeReply(), safeSend(), or safeEditReply() instead of raw Discord.js methods for message operations to handle errors gracefully (deleted channels, missing permissions)

Use parameterized queries for all database operations. Never use string interpolation for SQL.

Files:

  • src/modules/triage-parse.js
  • src/modules/triage.js
  • src/utils/aiCost.js
  • src/utils/aiClient.js
src/modules/*.js

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

Feature modules should manage bot functionality and be gated behind config.<feature>.enabled. Moderation commands are always available regardless of config.

Files:

  • src/modules/triage-parse.js
  • src/modules/triage.js
🧠 Learnings (5)
📓 Common learnings
Learnt from: CR
URL: 
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-17T03:02:07.957Z
Learning: Use pnpm workspaces for the monorepo. Run `pnpm mono:dev`, `pnpm mono:build`, `pnpm mono:lint`, `pnpm mono:test` for workspace-wide operations.
Learnt from: CR
URL: 
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-17T03:02:07.957Z
Learning: Follow Conventional Commits for commit messages: `feat:`, `fix:`, `chore:`, `docs:`, `refactor:`, `style:`, `test:`, `ci:`. Use branch naming conventions: `feat/`, `fix/`, `chore/`, `docs/`, `refactor/`, `style/`, `test/`, `ci/`. Always target `main` for PRs.
Learnt from: CR
URL: 
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-17T03:02:07.957Z
Learning: Use `pnpm changeset` to document user-facing changes for release notes
Learnt from: CR
URL: 
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-17T03:02:07.957Z
Learning: Before making changes, run `pnpm lint` and `pnpm test` to see existing state. After changes, run `pnpm lint` and `pnpm test:coverage` to verify nothing breaks. Use `pnpm lint:fix` to auto-fix formatting and lint issues.
Learnt from: CR
URL: 
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-17T03:02:07.957Z
Learning: The project uses Biome 2 as the linter and formatter. Run `pnpm lint` to check both bot (`src/**/*.js`) and web (`web/src/**/*.{ts,tsx}`) code. CI enforces all Biome checks.
Learnt from: CR
URL: 
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-17T03:02:07.957Z
Learning: The tech stack includes: Node.js 22+ ESM JavaScript for the bot, discord.js v14, Vercel AI SDK (`ai`, `ai-sdk/anthropic`), PostgreSQL 17+ with node-pg-migrate, Redis 7+ with ioredis, Express 5 REST API, Winston logging, Next.js 16 TypeScript web dashboard with React 19 and Tailwind CSS 4, Vitest 4 testing, and Turborepo for build orchestration.
Learnt from: CR
URL: 
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-17T03:02:07.957Z
Learning: Database migrations must be created with `pnpm migrate:create -- <name>`, use sequential numbering (`001_`, `002_`, etc.), and use parameterized queries. Use `pnpm migrate` to apply and `pnpm migrate:down` to rollback.
Learnt from: CR
URL: 
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-17T03:02:07.957Z
Learning: For local development, use `docker compose up` to start PostgreSQL, Redis, bot, and web dashboard. Docker Compose sets `DATABASE_URL`, `REDIS_URL`, and `NEXTAUTH_URL` environment variables that override `.env` values.
Learnt from: CR
URL: 
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-17T03:02:07.957Z
Learning: Environment variables are defined in `.env`. Key variables include: `DISCORD_TOKEN`, `ANTHROPIC_API_KEY`, `DATABASE_URL`, `DISCORD_CLIENT_ID` (required); `REDIS_URL`, `BOT_API_SECRET`, `SENTRY_DSN`, `LOG_LEVEL` (optional). Copy from `.env.example`.
📚 Learning: 2026-03-12T02:03:36.493Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-03-12T02:03:36.493Z
Learning: Applies to tests/**/*.test.js : Maintain test coverage thresholds: statements 85%, branches 82%, functions 85%, lines 85%; never lower thresholds—add tests to cover new code instead

Applied to files:

  • tests/modules/triage.coverage.test.js
📚 Learning: 2026-03-12T02:03:36.493Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-03-12T02:03:36.493Z
Learning: Applies to src/**/*.js : Always use `safeReply()`, `safeSend()`, or `safeEditReply()` instead of raw Discord.js methods for safe Discord messaging that handles errors gracefully

Applied to files:

  • tests/modules/triage.coverage.test.js
📚 Learning: 2026-03-12T02:03:36.493Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-03-12T02:03:36.493Z
Learning: Applies to tests/**/*.test.js : Write bot tests using Vitest 4 with the `node` environment, matching the `src/` structure in the `tests/` directory

Applied to files:

  • tests/utils/aiClient.test.js
📚 Learning: 2026-03-12T02:03:36.493Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-03-12T02:03:36.493Z
Learning: Applies to web/tests/**/*.test.{ts,tsx} : Write web dashboard tests using Vitest 4 with the `jsdom` environment and React Testing Library, matching the `web/src/` structure

Applied to files:

  • tests/utils/aiClient.test.js
🪛 GitHub Check: SonarCloud Code Analysis
src/modules/triage.js

[failure] 483-483: Refactor this function to reduce its Cognitive Complexity from 48 to the 15 allowed.

See more on https://sonarcloud.io/project/issues?id=VolvoxLLC_volvox-bot&issues=AZ2EEuVRkud7vHWqFg8x&open=AZ2EEuVRkud7vHWqFg8x&pullRequest=479


[failure] 568-574: Unexpected await of a non-Promise (non-"Thenable") value.

See more on https://sonarcloud.io/project/issues?id=VolvoxLLC_volvox-bot&issues=AZ2EEuVRkud7vHWqFg8y&open=AZ2EEuVRkud7vHWqFg8y&pullRequest=479

src/utils/aiClient.js

[warning] 95-95: Prefer Number.parseFloat over parseFloat.

See more on https://sonarcloud.io/project/issues?id=VolvoxLLC_volvox-bot&issues=AZ2ZZBcFy8hqXC4KQ1QA&open=AZ2ZZBcFy8hqXC4KQ1QA&pullRequest=479


[failure] 166-166: Refactor this function to reduce its Cognitive Complexity from 18 to the 15 allowed.

See more on https://sonarcloud.io/project/issues?id=VolvoxLLC_volvox-bot&issues=AZ2ZZBcFy8hqXC4KQ1QB&open=AZ2ZZBcFy8hqXC4KQ1QB&pullRequest=479


[warning] 209-211: Extract this nested ternary operation into an independent statement.

See more on https://sonarcloud.io/project/issues?id=VolvoxLLC_volvox-bot&issues=AZ2F9_R2sH2ptetjjmqF&open=AZ2F9_R2sH2ptetjjmqF&pullRequest=479


[warning] 259-259: knownTools should be a Set, and use knownTools.has() to check existence or non-existence.

See more on https://sonarcloud.io/project/issues?id=VolvoxLLC_volvox-bot&issues=AZ2EEuS3kud7vHWqFg8r&open=AZ2EEuS3kud7vHWqFg8r&pullRequest=479

🔇 Additional comments (10)
src/utils/aiCost.js (1)

67-96: LGTM — contract and clamp address prior ambiguity.

The explicit "inputTokens is total reported by SDK" contract plus the Math.max(0, …) clamp makes the cache-token accounting deterministic, and unknown models fall back to 0 instead of throwing. The prior ambiguity around regularInputTokens is resolved by the documented caller contract.

src/utils/aiClient.js (1)

278-303: LGTM — pre-aborted external signal now honored.

Synchronously aborting the local controller (and clearing the timer) when externalSignal.aborted is already true fixes the prior bug where a pre-cancelled caller signal would still let the SDK call proceed to timeout. Combined with the externalSignal?.aborted check in the catch blocks (lines 404, 535), caller cancellations are correctly surfaced as reason: 'aborted' rather than 'timeout'.

.env.example (1)

25-52: LGTM — matches new provider-routing contract.

Making MINIMAX_API_KEY required for the default triage models, narrowing the generic <PROVIDER>_* section to Anthropic-compatible endpoints, and explicitly documenting the no-fallback-to-ANTHROPIC_API_KEY rule all line up with the behavior in src/utils/aiClient.js (resolveModel loud-fails on missing provider keys and only sends Anthropic credentials to the Anthropic path).

tests/utils/aiClient.test.js (2)

159-206: LGTM — env-var resolution path now actually exercised.

The codex:some-model tests (lines 193–224) hit the generic <PROVIDER>_API_KEY / <PROVIDER>_BASE_URL path rather than the KNOWN_BASE_URLS shortcut, so a regression in the env-var convention will now fail tests. The loud-fail test on lines 176–191 also verifies that the Anthropic fallback can't leak into foreign providers.


384-417: LGTM — timeout vs external abort are now distinguished in tests.

Assertions on reason: 'aborted'/message: 'Request was cancelled' (lines 398, 416) and reason: 'timeout'/message: 'Request timed out' (line 380) pin down the distinction added in aiClient.js, including the pre-aborted signal case. This directly guards the caller-cancellation path used by triage.js.

tests/modules/triage.coverage.test.js (1)

270-302: LGTM — locks in the silent-on-aborted contract.

The new aborted test (lines 288–301) asserts that evaluateNow resolves cleanly and only emits a structured 'Triage evaluation aborted' warn — guarding the exact regression where a superseded evaluation would otherwise send "trouble thinking" to the channel. Paired with the existing timeout case, this pins down both silent branches in evaluateAndRespond (triage.js L692) and evaluateNow (L995).

tests/modules/triage-parse.test.js (1)

135-200: LGTM — needsThinking / needsSearch normalization is now pinned by tests.

The success case (lines 135–151), the non-boolean coercion case (lines 167–184), and the absent-fields default case (lines 186–200) collectively enforce the strict-boolean normalization in parseClassifyResult (triage-parse.js L98–99). That's the correct regression guard given downstream triage.js uses these flags to decide thinking tokens and WebSearch tool enablement.

src/modules/triage-parse.js (1)

83-102: LGTM — strict-boolean normalization prevents leaking non-boolean values downstream.

Using === true defends against the classifier returning "true"/1/null for needsThinking/needsSearch, which is important because triage.js:368-369 uses these in strict-boolean conditionals (classification.needsThinking ? resolvedConfig.thinkingTokens : 0). Defaulting to false on the truncated-recovery path (L61–62) keeps the contract consistent there too.
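A minimal standalone sketch of that normalization (the function name is hypothetical; the real parseClassifyResult handles more fields):

```javascript
// Coerce classifier output to strict booleans: anything other than literal
// true ("true", 1, null, undefined, missing) normalizes to false.
function normalizeFlags(parsed) {
  return {
    needsThinking: parsed?.needsThinking === true,
    needsSearch: parsed?.needsSearch === true,
  };
}
```

This keeps downstream strict-boolean conditionals honest even when the model returns stringly-typed or missing values.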

src/modules/triage.js (2)

85-116: LGTM — per-guild settings now resolved per evaluation.

Moving classifier/responder config construction into buildClassifierConfig/buildResponderConfig and calling them from runClassification (L169) and runResponder (L367) resolves the prior "frozen snapshot" bug: classifyModel, respondModel, timeout, classifyBaseUrl/respondBaseUrl, classifyApiKey/respondApiKey, and evalConfig.ai?.systemPrompt are all re-read from the live guild config each evaluation, so admin changes take effect without a restart. Holding only classifySystemPrompt / respondSystemDefault / respondJsonSchemaAppend at module scope (static prompt strings) is the right boundary.


688-712: LGTM — aborted and timeout are now both silent no-ops.

Treating AIClientError with reason === 'aborted' the same as 'timeout' at L692 (and mirrored at L995 in evaluateNow) prevents the superseded-evaluation abort from logging as a failure and posting "trouble thinking" to the channel. The structured 'Triage evaluation aborted' warn with { channelId, reason } also gives clean observability for both branches.
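The silent-branch handling described here can be sketched as follows (the AIClientError shape and reason values are taken from this review; the handler name and logger wiring are assumptions):

```javascript
// Minimal stand-in for the error class used by the AI client wrapper.
class AIClientError extends Error {
  constructor(message, reason) {
    super(message);
    this.reason = reason;
  }
}

// Superseded (aborted) and timed-out evaluations end quietly: no user-facing
// "trouble thinking" message, just a structured warn for observability.
function handleEvaluationError(err, { channelId }, warn) {
  if (err instanceof AIClientError && (err.reason === 'aborted' || err.reason === 'timeout')) {
    warn('Triage evaluation aborted', { channelId, reason: err.reason });
    return { handled: true };
  }
  return { handled: false };
}
```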

Comment thread src/utils/aiClient.js
Comment on lines +93 to +98
// Calculate delay: min(1000 * 2^attempt, 15_000) + jitter
const retryAfter =
status === 429 ? (parseFloat(err.headers?.['retry-after']) || 0) * 1000 : 0;
const exponential = Math.min(1000 * 2 ** attempt, 15_000);
const jitter = Math.random() * 1000;
const delay = Math.max(exponential + jitter, retryAfter);
Contributor


🧹 Nitpick | 🔵 Trivial

Nit: prefer Number.parseFloat over the global parseFloat.

SonarCloud flags this; Number.parseFloat is functionally identical (it is the same function as the global), but referencing it via Number avoids relying on the global namespace. Low priority.

♻️ Suggested change
-      const retryAfter =
-        status === 429 ? (parseFloat(err.headers?.['retry-after']) || 0) * 1000 : 0;
+      const retryAfter =
+        status === 429 ? (Number.parseFloat(err.headers?.['retry-after']) || 0) * 1000 : 0;
🧰 Tools
🪛 GitHub Check: SonarCloud Code Analysis

[warning] 95-95: Prefer Number.parseFloat over parseFloat.

See more on https://sonarcloud.io/project/issues?id=VolvoxLLC_volvox-bot&issues=AZ2ZZBcFy8hqXC4KQ1QA&open=AZ2ZZBcFy8hqXC4KQ1QA&pullRequest=479

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/aiClient.js` around lines 93 - 98, Replace the use of the global
parseFloat when computing retryAfter with Number.parseFloat to satisfy
SonarCloud's preference; specifically update the expression that computes
retryAfter from (parseFloat(err.headers?.['retry-after']) || 0) * 1000 to use
Number.parseFloat so the retryAfter variable calculation (near the
exponential/jitter/delay logic) uses
Number.parseFloat(err.headers?.['retry-after']) || 0.

Resolved SDK vs CLI migration conflicts by superseding main's CLI path:
dropped @anthropic-ai/claude-code, kept @ai-sdk/anthropic + ai.

- package.json: took main's dependency bumps (@sentry/node, discord.js,
  dotenv) and pnpm override restructure; regenerated pnpm-lock.yaml.
- docker-compose.yml: preserved bot/web/docs services required for local
  testing. DISABLE_PROMPT_CACHING env var dropped (no-op on SDK path;
  caching is disabled by default since we don't send cacheControl).
- src/modules/aiAutoMod.js: kept Vercel AI SDK generate() call; adopted
  main's expanded JSDoc and guildId/channelId logging fields.
- src/modules/triage.js: adopted main's improved runClassification JSDoc;
  kept our AbortSignal parameter (documented).
- tests/modules/cli-process.test.js: removed (module no longer exists).
- tests/modules/aiAutoMod.test.js: kept aiClient mock (SDK path).
- web/src/components/ui/discord-markdown-editor.tsx and
  web/src/components/dashboard/config-categories/ai-automation.tsx:
  taken from main wholesale (frontend refactors out of scope for this PR).
Copilot AI review requested due to automatic review settings April 17, 2026 04:46
Contributor

Copilot AI left a comment


Pull request overview

Copilot reviewed 38 out of 39 changed files in this pull request and generated 4 comments.

Files not reviewed (1)
  • pnpm-lock.yaml: Language not supported

Comment thread src/utils/aiClient.js
Comment on lines +200 to +214
const baseUrl =
overrides.baseUrl ||
(isAnthropic ? undefined : process.env[`${envPrefix}_BASE_URL`]) ||
KNOWN_BASE_URLS[providerName];

providerCache.set(
cacheKey,
createAnthropic({
...(isAnthropic
? isAnthropicAuthToken(apiKey)
? { authToken: apiKey }
: { apiKey }
: { authToken: apiKey }),
...(baseUrl && { baseURL: baseUrl }),
}),

Copilot AI Apr 17, 2026


For non-Anthropic providers not present in KNOWN_BASE_URLS, baseUrl can be undefined (no _BASE_URL and no override). In that case we still create an Anthropic client and send the provider's authToken to the default Anthropic endpoint, which is confusing and delivers the credential to an unintended destination. Consider failing fast for non-Anthropic providers when no base URL is resolved, with an error that instructs setting <PROVIDER>_BASE_URL or passing a baseUrl override.
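A fail-fast along those lines might look like this (a sketch, not the repo's implementation; the KNOWN_BASE_URLS entry is a placeholder value and the function name is assumed):

```javascript
// Placeholder registry entry for illustration only.
const KNOWN_BASE_URLS = { minimax: 'https://minimax.example/v1' };

// Resolve a base URL, throwing for non-Anthropic providers when nothing is
// configured, so credentials can never be sent to the default endpoint.
function resolveBaseUrl(providerName, overrides, env) {
  const isAnthropic = providerName === 'anthropic';
  const envPrefix = providerName.toUpperCase();
  const baseUrl =
    overrides.baseUrl ||
    (isAnthropic ? undefined : env[`${envPrefix}_BASE_URL`]) ||
    KNOWN_BASE_URLS[providerName];
  if (!isAnthropic && !baseUrl) {
    throw new Error(
      `No base URL for provider "${providerName}": set ${envPrefix}_BASE_URL or pass a baseUrl override`,
    );
  }
  return baseUrl;
}
```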

Copilot uses AI. Check for mistakes.
Comment thread src/utils/aiClient.js
Comment on lines +107 to +116
await new Promise((resolve, reject) => {
const timer = setTimeout(resolve, delay);
if (signal) {
const onAbort = () => {
clearTimeout(timer);
reject(err);
};
signal.addEventListener('abort', onAbort, { once: true });
}
});

Copilot AI Apr 17, 2026


withRetry adds an abort event listener on the provided signal for each retry delay, but if the timer completes normally the listener is never removed (the { once: true } only removes it if abort fires). Over time, repeated retries can accumulate listeners and create a slow memory leak. Consider removing the listener after the delay resolves/rejects, or refactoring the delay to race setTimeout with an abort promise that cleans itself up.
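One way to fix this is a self-cleaning abortable delay (a sketch, not the repo's code; the function name is assumed):

```javascript
// Delay for `ms`, rejecting early if `signal` aborts. The abort listener is
// detached on normal completion, so repeated retries against the same signal
// do not accumulate handlers.
function abortableDelay(ms, signal) {
  return new Promise((resolve, reject) => {
    if (signal?.aborted) {
      reject(signal.reason ?? new Error('aborted'));
      return;
    }
    const onAbort = () => {
      clearTimeout(timer);
      reject(signal.reason ?? new Error('aborted'));
    };
    const timer = setTimeout(() => {
      // Normal completion: remove the listener explicitly, since
      // { once: true } only removes it if abort actually fires.
      signal?.removeEventListener('abort', onAbort);
      resolve();
    }, ms);
    signal?.addEventListener('abort', onAbort, { once: true });
  });
}
```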

Comment thread src/utils/aiClient.js
Comment on lines +499 to +520
timings.streamStarted = Date.now();
timings.streamComplete = Date.now();

const providerMeta = getProviderMetadata(result.providerMetadata, providerName);
const cachedInputTokens = providerMeta.cacheReadInputTokens ?? 0;
const cacheCreationInputTokens = providerMeta.cacheCreationInputTokens ?? 0;
const costUsd = calculateCost(providerName, modelId, {
inputTokens: result.usage.inputTokens,
outputTokens: result.usage.outputTokens,
cachedInputTokens,
cacheCreationInputTokens,
});

const durationMs = Date.now() - timings.start;
debug('aiClient.stream timing breakdown', {
model: modelString,
sdkResolveMs: timings.sdkResolved - timings.start,
preApiSetupMs: timings.preApiCall - timings.sdkResolved,
streamInitMs: timings.streamStarted - timings.preApiCall,
streamConsumeMs: timings.streamComplete - timings.streamStarted,
postProcessMs: Date.now() - timings.streamComplete,
totalMs: durationMs,

Copilot AI Apr 17, 2026


The stream timing instrumentation sets timings.streamStarted and timings.streamComplete back-to-back after withRetry returns, so streamConsumeMs will always be ~0 and streamInitMs will include the entire stream consumption time. If these numbers are used to diagnose latency, they’ll be misleading. Consider capturing streamStarted immediately before/after the streamText(...) call, and streamComplete after awaiting streamResult.text/usage/etc.
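The suggested fix can be sketched as follows. In the Vercel AI SDK, `streamText(...)` returns immediately and exposes the aggregated output as promises (`result.text`, `result.usage`); awaiting those consumes the stream. The wrapper below takes the stream function as a parameter so the sketch stays self-contained (names other than `streamText` are assumptions):

```javascript
// Capture streamStarted immediately around the streamText call, and
// streamComplete only after the aggregated promises resolve, so the two
// intervals measure what their names claim.
async function timedStream(streamText, options) {
  const timings = { start: Date.now() };
  timings.streamStarted = Date.now();
  const streamResult = streamText(options); // returns immediately
  const [text, usage] = await Promise.all([streamResult.text, streamResult.usage]);
  timings.streamComplete = Date.now();
  return {
    text,
    usage,
    streamInitMs: timings.streamStarted - timings.start,
    streamConsumeMs: timings.streamComplete - timings.streamStarted,
  };
}
```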

Copilot uses AI. Check for mistakes.
Comment thread .env.example
Comment on lines +44 to +46
# providers. Missing a provider-specific key is a hard failure at startup —
# this is deliberate so we never ship the Anthropic credential to a
# third-party endpoint.

Copilot AI Apr 17, 2026


This comment says missing a provider-specific key is a “hard failure at startup”, but the current startup path pre-warms providers via warmConnection(...).catch(() => {}) and the actual key check happens when generate/stream are invoked. Consider rewording to reflect that missing <PROVIDER>_API_KEY will fail the first request for that provider (and may be logged), rather than necessarily failing startup.

Suggested change
# providers. Missing a provider-specific key is a hard failure at startup —
# this is deliberate so we never ship the Anthropic credential to a
# third-party endpoint.
# providers. Missing a provider-specific key will cause the first request for
# that provider to fail (and warm-up may log the issue) — this is deliberate
# so we never ship the Anthropic credential to a third-party endpoint.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
src/modules/triage.js (2)

759-765: ⚠️ Potential issue | 🟡 Minor

Stale JSDoc: triage no longer spawns CLI processes.

After the SDK migration, startTriage loads prompts and pre-warms provider connections; it doesn't boot CLI processes.

✏️ Proposed fix
-/**
- * Start the triage module: create and boot classifier + responder CLI processes.
+/**
+ * Start the triage module: load static prompts and best-effort pre-warm the
+ * classifier and responder provider connections via warmConnection().
 *
 * `@param` {import('discord.js').Client} discordClient - Discord client
 * `@param` {Object} botConfig - Bot configuration
 * `@param` {Object} [monitor] - Health monitor instance
 */
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/modules/triage.js` around lines 759 - 765, The JSDoc for startTriage is
stale—remove the line saying it "create and boot classifier + responder CLI
processes" and update the description to state that startTriage now loads
prompts and pre-warms provider connections (and any other current
responsibilities); update parameter docs if needed to match current signature
(keep import('discord.js').Client and botConfig descriptions, and mark monitor
as optional if it still is). Ensure the function name startTriage is referenced
in the JSDoc and that no CLI-related wording remains.

1-8: ⚠️ Potential issue | 🟡 Minor

Header comment still references "Haiku classifier + Sonnet responder" and "Two AI SDK calls handle classification".

With the provider-agnostic switch (now defaulting to MiniMax in config.json), the Haiku/Sonnet specifics and the "~10x lower cost than Sonnet" claim are misleading. Re-word generically.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/modules/triage.js` around lines 1 - 8, Update the header comment at the
top of the Triage Module (src/modules/triage.js) to remove the specific "Haiku
classifier + Sonnet responder" wording and the "~10x lower cost than Sonnet"
claim; instead describe it generically as a provider-agnostic, two-stage triage
that uses a lightweight/cheap classifier call followed by a more expensive
responder call only when needed, and mention that provider selection is
configurable (defaults to MiniMax in config.json). Ensure the updated comment
still conveys the two-stage flow and cost rationale without referencing
Haiku/Sonnet or specific cost multiples.
src/modules/aiAutoMod.js (1)

51-58: ⚠️ Potential issue | 🟡 Minor

Stale "Claude"-specific wording in docstrings and log messages.

After the Vercel AI SDK migration, the analyzer is provider-agnostic. Update the JSDoc on analyzeMessage ("Analyze a message using Claude AI") and the log label at line 105 ("failed to parse Claude response") so they don't lie about the runtime when a non-Anthropic provider is configured.

✏️ Proposed fix
-/**
- * Analyze a message using Claude AI.
+/**
+ * Analyze a message using the configured AI model via the Vercel AI SDK.
...
-    logError('AI auto-mod: failed to parse Claude response', { text });
+    logError('AI auto-mod: failed to parse model response', { text });

Also applies to: 105-105

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/modules/aiAutoMod.js` around lines 51 - 58, Update the stale
Claude-specific wording: change the JSDoc summary for the analyzeMessage
function ("Analyze a message using Claude AI") to a provider-agnostic
description (e.g., "Analyze a message using configured AI provider") and replace
the log label string "failed to parse Claude response" used around the response
parsing code with a neutral message like "failed to parse AI provider response"
so it no longer names a specific provider; ensure the function name
analyzeMessage and the existing log call location (the statement that currently
uses "failed to parse Claude response") are updated to the new wording.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/modules/triage-respond.js`:
- Around line 79-88: When bufferSnapshot is empty the code does two redundant
Discord API calls because historyOptions (used in
channel.messages.fetch(historyOptions)) without a before returns the most recent
messages already covering the RECENT_LIMIT fetch; update the logic in
triage-respond.js to skip the second fetch when oldest is falsy: check
bufferSnapshot[0] (oldest) and if undefined only call
channel.messages.fetch(historyOptions) otherwise perform the Promise.all of both
channel.messages.fetch(historyOptions) and channel.messages.fetch({ limit:
RECENT_LIMIT }); this removes the unnecessary channel.messages.fetch({ limit:
RECENT_LIMIT }) API call on cold-start while keeping the parallel fetch when
oldest exists.
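That conditional-fetch logic can be sketched as follows (a standalone sketch: `fetchMessages` stands in for `channel.messages.fetch`, and the function name is assumed; `historyOptions` and `RECENT_LIMIT` come from the review):

```javascript
// On cold start (empty buffer), one history fetch already returns the most
// recent messages, so the separate RECENT_LIMIT fetch would be redundant.
async function fetchContext(fetchMessages, bufferSnapshot, historyOptions, RECENT_LIMIT) {
  const oldest = bufferSnapshot[0];
  if (!oldest) {
    const history = await fetchMessages(historyOptions);
    return { history, recent: history };
  }
  const [history, recent] = await Promise.all([
    fetchMessages({ ...historyOptions, before: oldest.id }),
    fetchMessages({ limit: RECENT_LIMIT }),
  ]);
  return { history, recent };
}
```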
- Around line 357-365: The provider-extraction logic in triage-respond.js
duplicates aiClient.resolveModel's parsing (using
classifySeparator/respondSeparator on resolved.classifyModel/respondModel);
replace this duplication by importing and using a shared helper from
src/utils/aiClient.js (e.g., export parseProviderModel(modelString) that returns
{ providerName, modelId }) or by having resolveTriageConfig/aiClient return
classifyProvider/respondProvider on the resolved object; update
triage-respond.js to call parseProviderModel(resolved.classifyModel) and
parseProviderModel(resolved.respondModel) (or read
resolved.classifyProvider/resolved.respondProvider) and remove
classifySeparator/respondSeparator logic to avoid drift.
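The shared helper proposed above might look like this (the name and return shape are the review's suggestion; bare names defaulting to 'anthropic' matches resolveModel's documented behavior):

```javascript
// Split a provider-qualified model string ("provider:model") into its parts.
// Bare model names (no colon) default to the 'anthropic' provider.
function parseProviderModel(modelString) {
  const colonIdx = modelString.indexOf(':');
  return colonIdx > 0
    ? {
        providerName: modelString.slice(0, colonIdx),
        modelId: modelString.slice(colonIdx + 1),
      }
    : { providerName: 'anthropic', modelId: modelString };
}
```

Exporting this from aiClient.js and consuming it in triage-respond.js would give both call sites one source of truth for the separator convention.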

---

Outside diff comments:
In `@src/modules/aiAutoMod.js`:
- Around line 51-58: Update the stale Claude-specific wording: change the JSDoc
summary for the analyzeMessage function ("Analyze a message using Claude AI") to
a provider-agnostic description (e.g., "Analyze a message using configured AI
provider") and replace the log label string "failed to parse Claude response"
used around the response parsing code with a neutral message like "failed to
parse AI provider response" so it no longer names a specific provider; ensure
the function name analyzeMessage and the existing log call location (the
statement that currently uses "failed to parse Claude response") are updated to
the new wording.

In `@src/modules/triage.js`:
- Around line 759-765: The JSDoc for startTriage is stale—remove the line saying
it "create and boot classifier + responder CLI processes" and update the
description to state that startTriage now loads prompts and pre-warms provider
connections (and any other current responsibilities); update parameter docs if
needed to match current signature (keep import('discord.js').Client and
botConfig descriptions, and mark monitor as optional if it still is). Ensure the
function name startTriage is referenced in the JSDoc and that no CLI-related
wording remains.
- Around line 1-8: Update the header comment at the top of the Triage Module
(src/modules/triage.js) to remove the specific "Haiku classifier + Sonnet
responder" wording and the "~10x lower cost than Sonnet" claim; instead describe
it generically as a provider-agnostic, two-stage triage that uses a
lightweight/cheap classifier call followed by a more expensive responder call
only when needed, and mention that provider selection is configurable (defaults
to MiniMax in config.json). Ensure the updated comment still conveys the
two-stage flow and cost rationale without referencing Haiku/Sonnet or specific
cost multiples.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: a92e121a-11ba-405c-af86-4a2b0f781b00

📥 Commits

Reviewing files that changed from the base of the PR and between 2252aa9 and 838e8cb.

⛔ Files ignored due to path filters (1)
  • pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (8)
  • config.json
  • package.json
  • src/api/utils/configValidation.js
  • src/modules/aiAutoMod.js
  • src/modules/triage-respond.js
  • src/modules/triage.js
  • web/src/components/dashboard/config-workspace/config-categories.ts
  • web/src/types/config.ts
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (7)
  • GitHub Check: Agent
  • GitHub Check: Greptile Review
  • GitHub Check: E2E Tests (1/2)
  • GitHub Check: E2E Tests (2/2)
  • GitHub Check: Test
  • GitHub Check: Docker Build Validation
  • GitHub Check: Analyze (javascript-typescript)
🧰 Additional context used
📓 Path-based instructions (14)
**/package.json

📄 CodeRabbit inference engine (AGENTS.md)

Use the latest version of ALL package.json dependencies; dependencies should always be upgraded to the latest version before committing

Files:

  • package.json
**/*.{js,ts,jsx,tsx,json}

📄 CodeRabbit inference engine (AGENTS.md)

Welcome-message variables use double braces only, like {{user}}; single braces are plain text and should not be documented, inserted, or parsed as variables

Files:

  • package.json
  • web/src/components/dashboard/config-workspace/config-categories.ts
  • config.json
  • src/modules/aiAutoMod.js
  • src/modules/triage-respond.js
  • src/api/utils/configValidation.js
  • web/src/types/config.ts
  • src/modules/triage.js
**/*.{js,ts,jsx,tsx,mjs,mts}

📄 CodeRabbit inference engine (AGENTS.md)

Use ESM only; do not use CommonJS

Files:

  • web/src/components/dashboard/config-workspace/config-categories.ts
  • src/modules/aiAutoMod.js
  • src/modules/triage-respond.js
  • src/api/utils/configValidation.js
  • web/src/types/config.ts
  • src/modules/triage.js
**/*.{js,ts,jsx,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

**/*.{js,ts,jsx,tsx}: Use src/logger.js; do not use console.*
Use the safe Discord messaging helpers in src/utils/safeSend.js instead of raw reply/send/edit calls
Use parameterized SQL only; do not use string concatenation for SQL queries
Community features should be gated behind config..enabled; moderation commands are the exception

Files:

  • web/src/components/dashboard/config-workspace/config-categories.ts
  • src/modules/aiAutoMod.js
  • src/modules/triage-respond.js
  • src/api/utils/configValidation.js
  • web/src/types/config.ts
  • src/modules/triage.js
web/src/**/*.{ts,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

web/src/**/*.{ts,tsx}: New dashboard routes need title wiring in web/src/lib/page-titles.ts; use createPageMetadata() for SSR and keep DashboardTitleSync aligned for client navigation
Dashboard clients that need the guild list should consume GuildDirectoryProvider; do not stack extra /api/guilds fetch loops in leaf components
Recharts dashboard views should use web/src/components/ui/stable-responsive-container.tsx; raw ResponsiveContainer mounts can spam width(-1)/height(-1) warnings when panels render before layout settles
Any visual dashboard or landing page change must be verified with Chrome DevTools MCP before calling it done; take a screenshot after the change
Check both light and dark themes if colors or theming changed
Check responsive behavior on mobile, tablet, and desktop if layout changed

Files:

  • web/src/components/dashboard/config-workspace/config-categories.ts
  • web/src/types/config.ts
web/src/**/*.{ts,tsx,css,scss}

📄 CodeRabbit inference engine (AGENTS.md)

Follow the design system when making changes to the UI/UX; see DESIGN.md for the design system and color palette

Files:

  • web/src/components/dashboard/config-workspace/config-categories.ts
  • web/src/types/config.ts
**/*.{js,ts,tsx}

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

Use single quotes instead of double quotes for strings

Always include semicolons at the end of statements

Use 2-space indentation with spaces, never tabs

Always use trailing commas in multi-line objects, arrays, and function parameters

Enforce 100-character maximum line width

Never use console.* methods; use Winston logger instead (import logger from '../logger.js' and call logger.info(), logger.warn(), logger.error(), logger.debug())

Use Biome linter and formatter (run pnpm lint to check, pnpm lint:fix to auto-fix)

Files:

  • web/src/components/dashboard/config-workspace/config-categories.ts
  • src/modules/aiAutoMod.js
  • src/modules/triage-respond.js
  • src/api/utils/configValidation.js
  • web/src/types/config.ts
  • src/modules/triage.js
{config.json,src/api/utils/configAllowlist.js}

📄 CodeRabbit inference engine (AGENTS.md)

Config-backed features must be added to config.json and src/api/utils/configAllowlist.js; if a key is missing from SAFE_CONFIG_KEYS, the dashboard cannot save it

Files:

  • config.json
**/*.js

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

Use ESM only (import/export), never use require() or module.exports

Files:

  • src/modules/aiAutoMod.js
  • src/modules/triage-respond.js
  • src/api/utils/configValidation.js
  • src/modules/triage.js
**/{migrations,src/{api,modules}}/**/*.{js,cjs}

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

Always use parameterized queries for database operations; never use string interpolation

Files:

  • src/modules/aiAutoMod.js
  • src/modules/triage-respond.js
  • src/api/utils/configValidation.js
  • src/modules/triage.js
src/**/*.js

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

Always use safe Discord messaging methods: safeReply(), safeSend(), or safeEditReply() instead of raw Discord.js methods

Use config system via src/modules/config.js; call getConfig(guildId) to retrieve merged global + guild-level config

Use Redis caching with in-memory fallback via src/utils/cache.js for generic caching

Files:

  • src/modules/aiAutoMod.js
  • src/modules/triage-respond.js
  • src/api/utils/configValidation.js
  • src/modules/triage.js
src/{commands,modules}/**/*.js

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

Gate all community features behind config..enabled check; moderation commands are always available regardless of config

Files:

  • src/modules/aiAutoMod.js
  • src/modules/triage-respond.js
  • src/modules/triage.js
src/{modules,commands,api}/**/*.js

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

Use src/utils/discordCache.js for Discord API response caching (channels, roles, members)

Files:

  • src/modules/aiAutoMod.js
  • src/modules/triage-respond.js
  • src/api/utils/configValidation.js
  • src/modules/triage.js
src/{modules,api}/**/*.js

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

Use src/utils/reputationCache.js for leaderboard and user reputation data caching

All caches auto-invalidate when config changes; do not manually invalidate them when config is updated

Files:

  • src/modules/aiAutoMod.js
  • src/modules/triage-respond.js
  • src/api/utils/configValidation.js
  • src/modules/triage.js
🧠 Learnings (5)
📓 Common learnings
Learnt from: CR
URL: 
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-17T04:47:09.721Z
Learning: Always use pnpm as the package manager, never npm or yarn
Learnt from: CR
URL: 
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-17T04:47:09.721Z
Learning: Create bot feature modules in src/modules/ and add config section to config.json
Learnt from: CR
URL: 
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-17T04:47:09.721Z
Learning: Run pnpm lint and pnpm test before making changes and after making changes to verify nothing breaks
Learnt from: CR
URL: 
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-17T04:47:09.721Z
Learning: Use Conventional Commits for commit messages (feat:, fix:, chore:, docs:, refactor:, style:, test:, ci:)
Learnt from: CR
URL: 
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-17T04:47:09.721Z
Learning: Use branch naming conventions: feat/, fix/, chore/, docs/, refactor/, style/, test/, ci/
Learnt from: CR
URL: 
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-17T04:47:09.721Z
Learning: Always target PR merges to the main branch
Learnt from: CR
URL: 
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-17T04:47:09.721Z
Learning: Use pnpm changeset to create release notes for user-facing changes
Learnt from: CR
URL: 
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-17T04:47:09.721Z
Learning: Require Node.js 22+ (enforced by engines field in package.json)
Learnt from: CR
URL: 
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-17T04:47:09.721Z
Learning: Require pnpm 10.30.3+ (enforced by engine-strict=true in .npmrc)
Learnt from: CR
URL: 
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-17T04:47:09.721Z
Learning: Run pnpm migrate for database migrations, pnpm migrate:down for rollbacks, pnpm migrate:create for new migrations
Learnt from: CR
URL: 
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-17T04:47:09.721Z
Learning: Never lower code coverage thresholds; add tests to cover new code instead
Learnt from: CR
URL: 
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-17T04:47:09.721Z
Learning: Use pnpm install --frozen-lockfile to install dependencies with exact versions from lock files
Learnt from: CR
URL: 
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-17T04:47:09.721Z
Learning: Bot runtime uses Node.js 22+ with ESM JavaScript and discord.js v14
Learnt from: CR
URL: 
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-17T04:47:09.721Z
Learning: Bot AI features use Vercel AI SDK (ai, ai-sdk/anthropic)
Learnt from: CR
URL: 
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-17T04:47:09.721Z
Learning: Database is PostgreSQL 17+ with node-pg-migrate
Learnt from: CR
URL: 
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-17T04:47:09.721Z
Learning: Caching uses Redis 7+ (ioredis) with in-memory fallback when Redis is unavailable
Learnt from: CR
URL: 
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-17T04:47:09.721Z
Learning: Web dashboard uses Next.js 16, TypeScript, React 19, Tailwind CSS 4, and Radix UI
Learnt from: CR
URL: 
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-17T04:47:09.721Z
Learning: Use the web workspace for web dashboard development: pnpm --prefix web <command>
Learnt from: CR
URL: 
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-17T04:47:09.721Z
Learning: The web/ directory has its own package.json, pnpm-lock.yaml, and node_modules; install with pnpm --prefix web install
Learnt from: CR
URL: 
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-17T04:47:09.721Z
Learning: Biome linter includes scopes for src/**/*.js and web/src/**/*.{ts,tsx}
Learnt from: CR
URL: 
File: .github/copilot-instructions.md:undefined-undefined
Timestamp: 2026-04-17T04:47:09.721Z
Learning: Read the project guidelines file (this file) in every session before making changes
📚 Learning: 2026-04-09T20:31:03.276Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-04-09T20:31:03.276Z
Learning: Applies to **/package.json : Use the latest version of ALL package.json dependencies; dependencies should always be upgraded to the latest version before committing

Applied to files:

  • package.json
📚 Learning: 2026-03-10T23:21:49.730Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-03-10T23:21:49.730Z
Learning: Applies to web/src/components/dashboard/config-workspace/**/*.{ts,tsx} : Web dashboard config editor should use category workspace navigation with reusable SettingsFeatureCard pattern (header + master toggle + Basic/Advanced blocks)

Applied to files:

  • web/src/components/dashboard/config-workspace/config-categories.ts
📚 Learning: 2026-03-12T02:03:36.493Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-03-12T02:03:36.493Z
Learning: Applies to config.json : All community features in `config.json` should be gated behind a `config.<feature>.enabled` flag; moderation commands are always available regardless of config settings

Applied to files:

  • web/src/types/config.ts
📚 Learning: 2026-04-09T20:31:03.276Z
Learnt from: CR
Repo: VolvoxLLC/volvox-bot PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-04-09T20:31:03.276Z
Learning: Applies to **/*.{js,ts,jsx,tsx} : Community features should be gated behind config.<feature>.enabled; moderation commands are the exception

Applied to files:

  • web/src/types/config.ts
🪛 GitHub Check: SonarCloud Code Analysis
src/modules/triage.js

[failure] 491-491: Refactor this function to reduce its Cognitive Complexity from 48 to the 15 allowed.

See more on https://sonarcloud.io/project/issues?id=VolvoxLLC_volvox-bot&issues=AZ2EEuVRkud7vHWqFg8x&open=AZ2EEuVRkud7vHWqFg8x&pullRequest=479


[failure] 576-582: Unexpected await of a non-Promise (non-"Thenable") value.

See more on https://sonarcloud.io/project/issues?id=VolvoxLLC_volvox-bot&issues=AZ2EEuVRkud7vHWqFg8y&open=AZ2EEuVRkud7vHWqFg8y&pullRequest=479

🔇 Additional comments (7)
web/src/components/dashboard/config-workspace/config-categories.ts (1)

142-143: Good cleanup of obsolete triage search metadata.

Line 142 and Line 143 correctly remove stale “streaming” references from debug search content, keeping search indexing aligned with current triage settings.

src/modules/triage.js (1)

576-582: SonarCloud "unexpected await of non-Promise" on runClassification is a false positive — safe to ignore.

runClassification is declared async at line 146 and returns a Promise<{…}|null>, so await is correct. If the SonarCloud noise keeps failing the check, consider annotating/configuring a suppression rather than changing the call site.

package.json (1)

36-38: LGTM — dependency versions match the current latest stable releases.

Prior review feedback on bumping to @ai-sdk/anthropic ^3.0.68 and ai ^6.0.158 has been addressed.

web/src/types/config.ts (1)

237-242: LGTM — TriageConfig now mirrors the validation schema's nullable semantics.

Prior feedback on making statusReactions, dailyBudgetUsd, confidenceThreshold, and responseCooldownMs T | null rather than just optional has been applied, and debugFooterLevel is likewise nullable.

config.json (1)

26-28: No changes needed. minimax:MiniMax-M2.7 is a valid, current MiniMax model identifier, listed on their official API documentation as their top text-generation model with a 204,800-token context window.

src/modules/aiAutoMod.js (1)

17-34: No changes required — the default model string is correct.

The resolveModel() function in src/utils/aiClient.js explicitly supports bare model names (without provider: prefix) by defaulting them to the 'anthropic' provider. The logic on lines 194–195 shows:

const colonIdx = modelString.indexOf(':');
const providerName = colonIdx > 0 ? modelString.slice(0, colonIdx) : 'anthropic';

This design is documented in the header comments: "Bare model names default to Anthropic." The DEFAULTS in src/modules/aiAutoMod.js and widespread usage of unprefixed model names throughout the test suite confirm this is intentional and working as designed. No routing failure or provider resolution error will occur.

			> Likely an incorrect or invalid review comment.
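The bare-name defaulting described above can be sketched as a standalone function. This is an illustrative, hedged version only; the actual parsing lives in src/utils/aiClient.js::resolveModel, and the bare model name in the second example is hypothetical:

```javascript
// Illustrative sketch of the provider:model parsing described above.
// Bare model names (no "provider:" prefix) default to the 'anthropic' provider.
function parseProviderModel(modelString) {
  const colonIdx = modelString.indexOf(':');
  const providerName = colonIdx > 0 ? modelString.slice(0, colonIdx) : 'anthropic';
  const modelId = colonIdx > 0 ? modelString.slice(colonIdx + 1) : modelString;
  return { providerName, modelId };
}

console.log(parseProviderModel('minimax:MiniMax-M2.7'));
// → { providerName: 'minimax', modelId: 'MiniMax-M2.7' }
console.log(parseProviderModel('some-bare-model')); // hypothetical bare name
// → { providerName: 'anthropic', modelId: 'some-bare-model' }
```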
src/api/utils/configValidation.js (1)

197-200: No action required. The triage section is already in SAFE_CONFIG_KEYS, which permits saves to all fields within that section, including the four new keys. The allowlist operates at the section level, not the individual field level; the four new triage fields do not need separate entries.

			> Likely an incorrect or invalid review comment.
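The section-level allowlist behavior described above can be illustrated with a minimal sketch. The section names and function are hypothetical; the real SAFE_CONFIG_KEYS and validation logic live in src/api/utils/configValidation.js:

```javascript
// Minimal sketch of a section-level allowlist: any field nested under an
// allowed top-level section is accepted, so new fields need no extra entries.
const SAFE_CONFIG_KEYS = new Set(['triage', 'welcome']); // hypothetical sections

function isSafeConfigPath(path) {
  const section = path.split('.')[0]; // 'triage.dailyBudgetUsd' -> 'triage'
  return SAFE_CONFIG_KEYS.has(section);
}

console.log(isSafeConfigPath('triage.dailyBudgetUsd')); // true
console.log(isSafeConfigPath('triage.debugFooterLevel')); // true, no per-field entry needed
console.log(isSafeConfigPath('secrets.apiKey')); // false
```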

Comment on lines +79 to +88
    // Fetch historical context + recent messages in parallel
    const oldest = bufferSnapshot[0];
    const historyOptions = { limit };
    if (oldest) historyOptions.before = oldest.messageId;
-   const historyFetched = await channel.messages.fetch(historyOptions);

    // Also fetch the most recent messages (catches replies that arrived after
    // the buffer started accumulating — fixes the "already being helped" blind spot)
    const RECENT_LIMIT = 5;
-   const recentFetched = await channel.messages.fetch({ limit: RECENT_LIMIT });

+   const [historyFetched, recentFetched] = await Promise.all([
+     channel.messages.fetch(historyOptions),
+     channel.messages.fetch({ limit: RECENT_LIMIT }),
+   ]);

🧹 Nitpick | 🔵 Trivial

Redundant fetch when bufferSnapshot is empty.

When oldest is undefined, historyOptions has no before, so channel.messages.fetch({ limit }) returns the most recent limit messages — which fully contains the second fetch({ limit: RECENT_LIMIT }). The parallel call still works (dedup handles it) but wastes a Discord API call on every cold-start/empty-buffer evaluation.

Consider skipping the recent fetch when !oldest:

♻️ Proposed change
-    const [historyFetched, recentFetched] = await Promise.all([
-      channel.messages.fetch(historyOptions),
-      channel.messages.fetch({ limit: RECENT_LIMIT }),
-    ]);
+    const fetches = [channel.messages.fetch(historyOptions)];
+    if (oldest) fetches.push(channel.messages.fetch({ limit: RECENT_LIMIT }));
+    const [historyFetched, recentFetched] = await Promise.all(fetches);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/modules/triage-respond.js` around lines 79 - 88: when bufferSnapshot is empty, the code makes two redundant Discord API calls, because channel.messages.fetch(historyOptions) without a `before` option already returns the most recent messages and fully covers the RECENT_LIMIT fetch. Update the logic in triage-respond.js to skip the second fetch when oldest is falsy: check bufferSnapshot[0] (oldest); if it is undefined, call only channel.messages.fetch(historyOptions); otherwise run Promise.all over both channel.messages.fetch(historyOptions) and channel.messages.fetch({ limit: RECENT_LIMIT }). This removes the unnecessary API call on cold start while keeping the parallel fetch when oldest exists.
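The suggested cold-start guard can be sketched with a stubbed channel that counts API calls (a hedged sketch, not the actual triage-respond.js code; `fetchContext` and the stub are hypothetical, and the real code uses discord.js):

```javascript
// Sketch of the suggested guard: skip the recent-messages fetch when the
// buffer is empty, since the history fetch without `before` already returns
// the newest messages.
const RECENT_LIMIT = 5;

async function fetchContext(channel, bufferSnapshot, limit) {
  const oldest = bufferSnapshot[0];
  const historyOptions = { limit };
  if (oldest) historyOptions.before = oldest.messageId;

  // Cold start: only one fetch is needed; warm buffer: fetch both in parallel.
  const fetches = [channel.messages.fetch(historyOptions)];
  if (oldest) fetches.push(channel.messages.fetch({ limit: RECENT_LIMIT }));
  const [historyFetched, recentFetched = []] = await Promise.all(fetches);
  return { historyFetched, recentFetched };
}

// Stub channel that counts API calls.
let calls = 0;
const channel = { messages: { fetch: async () => { calls += 1; return []; } } };

(async () => {
  await fetchContext(channel, [], 50); // cold start, empty buffer
  console.log(calls); // 1, recent fetch skipped
  await fetchContext(channel, [{ messageId: '123' }], 50); // warm buffer
  console.log(calls); // 3, history + recent in parallel
})();
```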

Comment on lines +357 to +365
const classifySeparator = resolved.classifyModel?.indexOf(':') ?? -1;
const respondSeparator = resolved.respondModel?.indexOf(':') ?? -1;
const classifyProvider = classifySeparator > 0
? resolved.classifyModel.slice(0, classifySeparator)
: 'anthropic';
const respondProvider = respondSeparator > 0
? resolved.respondModel.slice(0, respondSeparator)
: 'anthropic';


🧹 Nitpick | 🔵 Trivial

Duplicates provider-extraction logic from aiClient.resolveModel.

This same provider:model parsing (colon index > 0, default 'anthropic') is implemented in src/utils/aiClient.js::resolveModel (lines 161-171 per the provided snippet). Duplicating it here risks drift if the canonical rules ever change (e.g., allowing schemes, case handling, or additional defaults).

Prefer exporting a small helper from aiClient.js (e.g., parseProviderModel(modelString) returning { providerName, modelId }) and reusing it here, or — better — having resolveTriageConfig/the AI client surface the already-resolved classifyProvider/respondProvider on resolved so callers don't re-parse strings.

♻️ Proposed refactor
-  const classifySeparator = resolved.classifyModel?.indexOf(':') ?? -1;
-  const respondSeparator = resolved.respondModel?.indexOf(':') ?? -1;
-  const classifyProvider = classifySeparator > 0
-    ? resolved.classifyModel.slice(0, classifySeparator)
-    : 'anthropic';
-  const respondProvider = respondSeparator > 0
-    ? resolved.respondModel.slice(0, respondSeparator)
-    : 'anthropic';
+  const { parseProviderModel } = await import('../utils/aiClient.js');
+  const classifyProvider = parseProviderModel(resolved.classifyModel).providerName;
+  const respondProvider = parseProviderModel(resolved.respondModel).providerName;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/modules/triage-respond.js` around lines 357 - 365: the provider-extraction logic duplicates aiClient.resolveModel's parsing (using classifySeparator/respondSeparator on resolved.classifyModel/respondModel). Replace the duplication by importing a shared helper from src/utils/aiClient.js (e.g., export parseProviderModel(modelString) returning { providerName, modelId }), or by having resolveTriageConfig/aiClient expose classifyProvider/respondProvider on the resolved object. Update triage-respond.js to call parseProviderModel(resolved.classifyModel) and parseProviderModel(resolved.respondModel) (or read resolved.classifyProvider/resolved.respondProvider) and remove the classifySeparator/respondSeparator logic to avoid drift.

…on override

- biome.json: bump $schema to 2.4.12 to match @biomejs/biome ^2.4.12 devDep
- package.json: drop brace-expansion override; it forced ^2.0.2 which lacks the
  named `expand` export that minimatch@10.2.5 imports, breaking 32+ test files
  via node-pg-migrate -> glob -> minimatch. Natural pnpm resolution already
  picks post-CVE-2025-5889 versions across the dep tree (1.1.12, 2.1.0, 5.0.5).
- pnpm-lock.yaml: regenerated
- src/modules/triage-respond.js: biome auto-format (single-line ternary)
- tests/utils/aiCost.test.js: drop unused `key` destructuring in for-of
@sonarqubecloud

Quality Gate failed

Failed conditions
6.6% Duplication on New Code (required ≤ 3%)

See analysis details on SonarQube Cloud


Labels

None yet

Projects

Status: In Review

Development

Successfully merging this pull request may close these issues.

4 participants