Make limits runtime configurable #251
Conversation
> ⚠️ **Caution:** Review failed. The pull request is closed.
>
> ℹ️ Recent review info · ⚙️ Configuration used: defaults · Review profile: CHILL · Plan: Pro · 📒 Files selected for processing (3)
📝 **Walkthrough**

Adds a new DB-backed, runtime-tunable limits model and migration, and replaces static compile-time limits with an async cached snapshot loaded from the database.

**Changes**
**Sequence Diagram**

```mermaid
sequenceDiagram
    actor Client
    participant LimitsService as Limits Service
    participant Cache as TTL Cache
    participant DB as Database
    Client->>LimitsService: checkDepositLimits(audience,...)
    LimitsService->>LimitsService: call getLimitConfig(audience)
    LimitsService->>Cache: check snapshot for audience
    alt Cache hit
        Cache-->>LimitsService: return cached snapshot
    else Cache miss/expired
        LimitsService->>DB: query limits_config (findMany)
        DB-->>LimitsService: return scope records
        LimitsService->>LimitsService: merge DB overrides with env defaults
        LimitsService->>Cache: store snapshot with TTL
        Cache-->>LimitsService: return cached snapshot
    end
    LimitsService->>LimitsService: await getCircuitBreakerMinReserveRatio()
    LimitsService->>DB: (if implemented) read circuit_breaker scope OR read from snapshot
    DB-->>LimitsService: return thresholds
    LimitsService->>Client: return limit check decision
```
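The cache-hit / cache-miss flow in the diagram can be sketched in TypeScript. All names below (`loadFromDb`, `TTL_MS`, the snapshot shape) are illustrative stand-ins, not the PR's actual API; the real loader merges `prisma.limitConfig.findMany()` results with env defaults.

```typescript
// Minimal sketch of a TTL-cached snapshot loader, assuming a loadFromDb()
// that queries limits_config. Names are illustrative, not the PR's API.
type Snapshot = Record<string, Record<string, number>>;

const TTL_MS = 60_000; // stand-in for LIMIT_CONFIG_CACHE_TTL_MS

let cachedSnapshot: Snapshot | null = null;
let cacheExpiresAt = 0;

async function loadFromDb(): Promise<Snapshot> {
  // Fixed data for illustration; the real loader queries the DB and
  // merges scope overrides with env defaults.
  return { retail: { depositDailyUsd: 5000 } };
}

export async function getLimitsSnapshot(now = Date.now()): Promise<Snapshot> {
  if (cachedSnapshot && now < cacheExpiresAt) {
    return cachedSnapshot; // cache hit: serve without touching the DB
  }
  cachedSnapshot = await loadFromDb(); // cache miss or expired: reload
  cacheExpiresAt = now + TTL_MS;
  return cachedSnapshot;
}

export function invalidateLimitsConfigCache(): void {
  cachedSnapshot = null;
  cacheExpiresAt = 0;
}
```

Callers always `await getLimitsSnapshot()`; within the TTL window repeated calls return the cached object, so per-request DB load stays flat.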
**Estimated code review effort:** 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 **Pre-merge checks:** ✅ 4 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 1
🧹 Nitpick comments (5)
src/services/limits/limitsService.test.ts (1)
**15-24:** Mock signature mismatch with new async `getLimitConfig`.

`getLimitConfig` is now `async` (returns `Promise<LimitConfig>`), but the mock at Line 16 uses `mockReturnValue(...)`, which returns a plain object. While `await` on a non-thenable still resolves to the value, this is inconsistent with the production type signature and with the sibling `mockResolvedValue` calls on Lines 22–23. Aligning the mock makes the test intent explicit and protects against future changes (e.g., if a caller checks `result instanceof Promise` or uses `.then`).

♻️ Proposed change
```diff
 jest.mock("../../config/limits", () => ({
-  getLimitConfig: jest.fn().mockReturnValue({
+  getLimitConfig: jest.fn().mockResolvedValue({
     depositDailyUsd: 5000,
     depositMonthlyUsd: 50000,
     withdrawalSingleCurrencyDailyUsd: 10000,
     withdrawalSingleCurrencyMonthlyUsd: 80000,
   }),
   getCircuitBreakerReserveWeightThresholdPct: jest.fn().mockResolvedValue(10),
   getCircuitBreakerMinReserveRatio: jest.fn().mockResolvedValue(1.02),
 }));
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/limits/limitsService.test.ts` around lines 15 - 24: the test mock for `getLimitConfig` is inconsistent with its async signature—replace the `mockReturnValue(...)` on `getLimitConfig` with `mockResolvedValue(...)` so it returns a Promise that resolves to the `LimitConfig` object (keeping the same fields), matching the other mocks (`getCircuitBreakerReserveWeightThresholdPct` and `getCircuitBreakerMinReserveRatio`) and ensuring callers that expect a Promise behave correctly.

src/config/limits.ts (1)
**160-162:** `typeof row.values !== "object"` does not exclude arrays or null edge cases.

`typeof []` is `"object"`, so a row whose JSON value happens to be an array bypasses this guard and gets cast to `Record<string, unknown>`. The downstream `numberFromJson(overrides.depositDailyUsd)` is benign (returns `undefined`), so this isn't a runtime bug today — but if values are ever indexed or iterated differently in the future, this could surprise. The leading `!row.values` already covers `null`, but tightening the type guard is cheap.

♻️ Suggested tighten
```diff
-    if (!row.values || typeof row.values !== "object") continue;
+    if (
+      !row.values ||
+      typeof row.values !== "object" ||
+      Array.isArray(row.values)
+    ) {
+      continue;
+    }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/config/limits.ts` around lines 160 - 162: the current guard in the for loop over rows only checks `typeof row.values === "object"`, which allows arrays; change the condition so `row.values` is non-null, not an array, and an object (e.g., check `row.values !== null && !Array.isArray(row.values) && typeof row.values === "object"`) before casting to `Record<string, unknown>` — update the check around the `for (const row of rows)` block that references `row.values` so downstream uses (the cast to `values` and subsequent calls like `numberFromJson`) operate on a proper object.

src/services/limits/limitsService.ts (1)
**173-181:** Stale doc comment after dynamic threshold.

The comment "reserve ratio below 102%" hardcodes the old default, but the threshold is now sourced from `getCircuitBreakerMinReserveRatio()` and is runtime-tunable via `limits_config`. Consider rewording to avoid confusion when ops change the value.

♻️ Proposed change
```diff
 /**
- * Circuit breaker: return true if new minting should be paused (reserve ratio below 102%).
+ * Circuit breaker: return true if new minting should be paused
+ * (reserve ratio below the configured minimum reserve ratio).
  */
 export async function isMintingPaused(): Promise<boolean> {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/limits/limitsService.ts` around lines 173 - 181: update the stale doc comment above `isMintingPaused()`: remove the hardcoded "reserve ratio below 102%" wording and replace it with a dynamic description that references the runtime-tunable threshold returned by `getCircuitBreakerMinReserveRatio()` (and configured via `limits_config`); e.g., state that minting is paused when the reserve ratio falls below the configured circuit-breaker minimum rather than a fixed percent, and keep the comment next to `isMintingPaused` and `reserveTracker.calculateReserveRatio(ReserveTracker.SEGMENT_TRANSACTIONS)` for context.

src/config/limits.test.ts (1)
**22-80:** Solid coverage; consider adding a DB-error fallback case.

The three tests cover override precedence, cache refresh, and env fallback well. One gap: there is no test for what happens when `prisma.limitConfig.findMany` rejects (DB outage / driver error). Currently in `src/config/limits.ts`, `loadLimitsSnapshot` does not catch the error, so callers (`checkDepositLimits`, `isMintingPaused`, etc.) will reject — that may or may not be the intended behavior (vs. falling back to env limits). A test pinning this contract would help avoid regressions either way.

♻️ Suggested test (sketch)
```diff
+  it("propagates DB errors from getLimitConfig (current behavior)", async () => {
+    mockFindMany.mockRejectedValueOnce(new Error("db down"));
+    await expect(getLimitConfig("retail")).rejects.toThrow("db down");
+  });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/config/limits.test.ts` around lines 22 - 80: add a test that verifies behavior when the DB call fails by mocking `mockFindMany` to reject (e.g. `mockFindMany.mockRejectedValueOnce(new Error("DB down"))`) and asserting the current contract: calls like `getLimitConfig("retail")` (and optionally `getCircuitBreakerMinReserveRatio()`) propagate the rejection (i.e. `expect(...).rejects.toThrow("DB down")`). This pins the existing behavior of `loadLimitsSnapshot`/`getLimitConfig` against a prisma error without changing source code; use the same test setup helpers (`mockFindMany`, `invalidateLimitsConfigCache`) and set any needed env vars (`LIMIT_CONFIG_CACHE_TTL_MS`) as in other tests.

prisma/migrations/20260426000000_add_limits_config/migration.sql (1)
**16-21:** Make seed inserts idempotent.

If this migration is ever replayed on a database where any of these scope rows already exist (e.g., manual partial seed in lower environments, branch switching during dev), the `INSERT` will fail on the `limits_config_scope_key` unique constraint and abort the migration. Adding `ON CONFLICT DO NOTHING` makes the seed safe to re-run and avoids blocking deploys.

♻️ Proposed change
```diff
 INSERT INTO "limits_config" ("id", "scope", "values") VALUES
   ('00000000-0000-0000-0000-000000000161', 'retail', '{}'::jsonb),
   ('00000000-0000-0000-0000-000000000162', 'business', '{}'::jsonb),
   ('00000000-0000-0000-0000-000000000163', 'government', '{}'::jsonb),
-  ('00000000-0000-0000-0000-000000000164', 'circuit_breaker', '{}'::jsonb);
+  ('00000000-0000-0000-0000-000000000164', 'circuit_breaker', '{}'::jsonb)
+ON CONFLICT ("scope") DO NOTHING;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@prisma/migrations/20260426000000_add_limits_config/migration.sql` around lines 16 - 21, The INSERT in the migration adds rows to limits_config and will fail if a row with the same scope already exists; update the INSERT statement that inserts the four scopes ('retail','business','government','circuit_breaker') to be idempotent by adding an ON CONFLICT clause (e.g., ON CONFLICT ON CONSTRAINT limits_config_scope_key DO NOTHING or ON CONFLICT (scope) DO NOTHING) so re-running the migration won’t error; locate the INSERT into "limits_config" in migration.sql and append the ON CONFLICT DO NOTHING behavior.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/config/limits.ts`:
- Around line 142-201: loadLimitsSnapshot currently only falls back to env
defaults when the Prisma delegate is absent but lets runtime DB errors bubble
up; additionally getLimitsSnapshot has no single‑flight protection so concurrent
callers can stampede the DB on cache expiry. Fix by wrapping the
delegate.findMany call in a try/catch and on any error log the failure and
return the env snapshot (snapshot from envLimits()) instead of throwing; also
add a shared in‑flight promise (e.g., module scope loadingPromise used by
getLimitsSnapshot and cleared on resolve/reject) so concurrent callers reuse the
same loadLimitsSnapshot() invocation rather than issuing N parallel
delegate.findMany calls; keep invalidateLimitsConfigCache behavior intact (clear
cachedSnapshot/cacheExpiresAt and also clear loadingPromise).
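The single-flight protection described in this prompt can be sketched as follows. This is a minimal model with placeholder names, not the PR's implementation: `loadLimitsSnapshot` here stands in for the real loader, and the counter exists only to make the deduplication observable.

```typescript
// Single-flight guard: concurrent callers hitting an expired cache share
// one in-flight load instead of each issuing its own DB query.
let loadingPromise: Promise<string> | null = null;
let dbCalls = 0;

async function loadLimitsSnapshot(): Promise<string> {
  dbCalls += 1; // stands in for a prisma.limitConfig.findMany() round trip
  return "snapshot";
}

export async function getSnapshotSingleFlight(): Promise<string> {
  if (!loadingPromise) {
    loadingPromise = loadLimitsSnapshot().finally(() => {
      loadingPromise = null; // clear on resolve AND reject
    });
  }
  return loadingPromise; // every concurrent caller awaits the same promise
}

export function getDbCallCount(): number {
  return dbCalls;
}
```

Clearing `loadingPromise` in `finally` matters: a rejected load must not be cached, or every later caller would keep receiving the same stale error.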
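For context on the guard-tightening nitpick above: `typeof` reports `"object"` for arrays and for `null` alike, so a stricter predicate is needed before treating JSON as a plain object. A standalone illustration (the helper name is mine, not the PR's):

```typescript
// typeof [] === "object" and typeof null === "object", so a bare typeof
// check cannot prove a value is a plain JSON object.
function isJsonObject(v: unknown): v is Record<string, unknown> {
  return v !== null && typeof v === "object" && !Array.isArray(v);
}

console.log(isJsonObject({ depositDailyUsd: 2500 })); // true
console.log(isJsonObject([1, 2, 3])); // false: arrays are excluded
console.log(isJsonObject(null)); // false despite typeof null === "object"
```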
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 0bac94aa-40c6-4bc8-a336-f8ddb5bb76a1
📒 Files selected for processing (6)
- prisma/migrations/20260426000000_add_limits_config/migration.sql
- prisma/schema.prisma
- src/config/limits.test.ts
- src/config/limits.ts
- src/services/limits/limitsService.test.ts
- src/services/limits/limitsService.ts
Summary
Fixes #161 / B-046 by making backend limits runtime configurable instead of fixed at module import time.
Changes
- `limits_config` database table for runtime limit overrides, seeded with the scopes `retail`, `business`, `government`, and `circuit_breaker`.
- `src/config/limits.ts` updated to load limits from DB with env/default fallback.
- TTL cache keyed by `LIMIT_CONFIG_CACHE_TTL_MS` so limit updates apply without redeploy.

Operational Impact
Ops can update values in `limits_config.values` and the running backend will pick them up after cache expiry, without redeploying the service.

Example row shape:
```json
{
  "depositDailyUsd": 2500,
  "depositMonthlyUsd": 50000,
  "withdrawalSingleCurrencyDailyUsd": 10000,
  "withdrawalSingleCurrencyMonthlyUsd": 80000
}
```

## Summary by CodeRabbit

* **New Features**
  * Operational limits can now be adjusted at runtime via a database-backed configuration with scope-based overrides (retail, business, government, circuit breaker).
  * Circuit-breaker thresholds are now read at runtime, allowing live tuning without redeploys.
  * In-memory caching with TTL reduces DB load while supporting live refreshes.
* **Tests**
  * Tests added/updated to validate runtime config loading, cache behavior, fallback on failures, and concurrent refresh deduplication.
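The override precedence described in this PR (DB value wins, env default fills the gaps) can be sketched as below. `numberFromJson` echoes a helper named in the review, but the exact signature, field set, and default values here are assumptions for illustration.

```typescript
interface LimitConfig {
  depositDailyUsd: number;
  depositMonthlyUsd: number;
}

// Env-derived defaults that apply when the DB row has no override.
const envDefaults: LimitConfig = {
  depositDailyUsd: 5000,
  depositMonthlyUsd: 50000,
};

// Accept only finite numbers from untrusted JSON; anything else falls back.
function numberFromJson(v: unknown): number | undefined {
  return typeof v === "number" && Number.isFinite(v) ? v : undefined;
}

export function mergeOverrides(row: Record<string, unknown>): LimitConfig {
  return {
    depositDailyUsd:
      numberFromJson(row.depositDailyUsd) ?? envDefaults.depositDailyUsd,
    depositMonthlyUsd:
      numberFromJson(row.depositMonthlyUsd) ?? envDefaults.depositMonthlyUsd,
  };
}

// A partial DB row overrides only the keys it sets:
const merged = mergeOverrides({ depositDailyUsd: 2500 });
// merged.depositDailyUsd === 2500, merged.depositMonthlyUsd === 50000
```

This per-field `??` fallback is what lets ops store a sparse JSON object in `limits_config.values` without restating every limit.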