fix: add migration gate and stabilize CI#256
Conversation
📝 Walkthrough
Adds a CI destructive-migration gate and workflow updates; introduces new Prisma models and migration for user devices, salary, and bulk transfers; implements CSV bulk-transfer parsing and related controller/service changes; expands config, tracing, audit, and many controller/service/type refinements across the backend.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant GitHub as GitHub Actions
    participant Runner as CI runner
    participant Repo as Repository (git)
    participant CI_Script as destructive-migration script
    participant Prisma as Prisma CLI
    GitHub->>Runner: PR event (opened/synchronize/reopened/label changes)
    Runner->>Repo: checkout (fetch-depth: 0)
    Runner->>CI_Script: run destructive-migration gate (env label + base SHA)
    CI_Script->>Repo: git diff --name-only base..head
    CI_Script->>CI_Script: scan changed prisma/migrations/**/migration.sql for destructive SQL
    alt prisma schema changed
        CI_Script->>Prisma: run prisma migrate diff to generate destructive SQL
        Prisma-->>CI_Script: destructive SQL output
        CI_Script->>CI_Script: dedupe findings
    end
    alt destructive statements found and label missing
        CI_Script->>Runner: exit 1 (fail CI)
    else
        Runner->>Prisma: prisma:migrate:deploy step proceeds
        Runner->>Runner: continue lint/test steps
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
@SamuelOlawuyi Great news! 🎉 Based on an automated assessment of this PR, the linked Wave issue(s) no longer count against your application limits. You can now already apply to more issues while waiting for a review of this PR. Keep up the great work! 🚀
Actionable comments posted: 13
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (9)
src/jobs/withdrawalProcessingJob.ts (1)
143-156: ⚠️ Potential issue | 🟠 Major
**Fix the failover logic for single-provider currencies and eliminate double-disbursement risk.**
Two actionable issues:

1. **Failover logic broken for single-provider currencies:** For KES, ZAR, GHS, EGP, MAD, TZS, UGX, XOF (which map only to `flutterwave`), the failover sets `simulateOutageFor: "flutterwave"` via `getPreferredProviderId`, which never returns `null`. The subsequent `getProvider` call skips flutterwave in its loop, then fails to find an alternative since the fallback condition `!providerIds.includes("flutterwave")` is false. This throws an error instead of attempting failover. Either prevent the failover attempt for single-provider currencies or refactor `getPreferredProviderId` to return `null` when there is no alternative.
2. **Real double-disbursement risk from retry without idempotency key:** The three provider implementations (Flutterwave, Paystack, MTN MoMo) generate their own ephemeral references per call (`acbu-${timestamp}` or similar), not tied to `transactionId`. If a network timeout occurs after the provider accepts the transfer but before the response is received, retrying with a different provider will create a distinct reference. The provider cannot detect it's a duplicate of the same logical transaction. Pass `transactionId` to `disburseFunds` and include it as the idempotency key in each provider's request, or confirm all three providers are guaranteed idempotent on the server side.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/jobs/withdrawalProcessingJob.ts` around lines 143 - 156, The failover block can remove/skip unsafe failover for single-provider currencies and make disbursements idempotent: before calling router.getProvider(...) with simulateOutageFor use router.getAvailableProviders(currency) or check router.getPreferredProviderId(currency) against the available provider list and if there is only one provider (e.g., flutterwave-only currencies) skip the failover path to avoid throwing; also change the call to provider.disburseFunds(...) to pass transactionId (e.g., provider.disburseFunds(amount, currency, recipient, transactionId)) and update the provider implementations (Flutterwave, Paystack, MTN MoMo) to accept that idempotency key and include it as the idempotency key/reference in their HTTP requests so retries cannot create double-disbursements. Ensure usedProviderId logic remains correct after these changes.
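A minimal sketch of the idempotency threading described in the second item, assuming a provider class shaped roughly like the three implementations (the `DisburseParams` interface, endpoint payload fields, and `FLW_SECRET_KEY` env name are illustrative assumptions, not the actual provider module):

```typescript
import axios from "axios";

interface DisburseParams {
  amount: string;
  currency: string;
  recipient: { accountNumber: string; bankCode: string };
  /** Stable per-logical-transfer key, e.g. the DB transaction id. */
  transactionId: string;
}

class FlutterwaveProvider {
  async disburseFunds(params: DisburseParams): Promise<unknown> {
    const response = await axios.post(
      "https://api.flutterwave.com/v3/transfers",
      {
        amount: params.amount,
        currency: params.currency,
        account_number: params.recipient.accountNumber,
        account_bank: params.recipient.bankCode,
        // A timestamp-based reference changes on every retry; deriving it
        // from the transaction id keeps it stable, so the provider can
        // recognize a retried call as the same logical transfer.
        reference: `acbu-${params.transactionId}`,
      },
      { headers: { Authorization: `Bearer ${process.env.FLW_SECRET_KEY}` } },
    );
    return response.data;
  }
}
```

The same stable reference would need to be threaded through the Paystack and MTN MoMo implementations for the cross-provider retry case to be safe.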
src/services/transfer/transferService.ts (1)
115-115: ⚠️ Potential issue | 🟠 Major
**`amountInSmallestUnit` truncates Stellar precision and uses float arithmetic — risk of incorrect financial event logs.**
Line 74 validates up to 7 decimal places (Stellar's native precision / stroops), but here the conversion uses `* 100` (cents), and `Math.round` over a float. Two problems:

- Precision loss for amounts < 0.005 ACBU: e.g. `"0.0000001"` → `Number(...) * 100 = 0.00001` → `Math.round = 0`, so a non-zero transfer is logged as `amount: 0` in `transfer.initiated`/`completed`/`failed`. This corrupts audit logs and any downstream reconciliation/metrics keyed off these events.
- Float arithmetic: `Number("0.1") * 100 = 10.000000000000002`-style errors creep in for ACBU values that are otherwise exact `Decimal` strings.

Since `acbuAmount` is already persisted as `Decimal(amount)`, the financial event should use the same precision (either scaled to 1e7 stroops via integer math, or pass the string/`Decimal` through if `logFinancialEvent` accepts non-number amounts).
🛡️ Suggested fix using Decimal-based stroops

```diff
-  const correlationId = options?.correlationId ?? crypto.randomUUID();
-  const amountInSmallestUnit = Math.round(Number(amount) * 100);
+  const correlationId = options?.correlationId ?? crypto.randomUUID();
+  // ACBU/Stellar uses 7-decimal precision (stroops). Use Decimal to avoid
+  // float-rounding artefacts in audit/financial logs.
+  const amountInSmallestUnit = new Decimal(amount)
+    .mul(10_000_000)
+    .toFixed(0);
```

(adjust the type accepted by `logFinancialEvent.amount` if needed — or pass `amount` as a string).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/transfer/transferService.ts` at line 115, The transfer log currently computes amountInSmallestUnit with float math and Math.round which loses precision for Stroop-scale ACBU values; replace this by deriving the smallest-unit amount from the persisted Decimal(acbuAmount) (or from the original amount string) using Decimal-safe integer scaling (multiply by 1e7 stroops via Decimal integer math) and pass that precise integer (or the original Decimal/string) into logFinancialEvent and the transfer.initiated/completed/failed events instead of using Number(...) * 100 and Math.round; if logFinancialEvent.amount's type prevents non-number values, update its type/signature to accept a string/Decimal or emit the integer stroop value computed with Decimal to avoid float rounding/truncation.
src/controllers/onrampController.ts (1)
86-101: ⚠️ Potential issue | 🟡 Minor
**Float precision risk when converting XLM to stroops for the financial event log.**
`Math.round(xlmNum * 1e7)` goes through IEEE-754 doubles. Stroop counts above `Number.MAX_SAFE_INTEGER` (≈9.007×10¹⁵, i.e. ~900M XLM) silently lose precision, and even values below that may round differently than the canonical Stellar representation. Since `xlm_amount` is already a validated decimal string, parse it directly and avoid the float round-trip — important because this value goes into the financial audit log and may be used for downstream reconciliation.
🛡️ Suggested fix using string-based stroop conversion

```diff
-  amount: Math.round(xlmNum * 1e7), // XLM in stroops (7 decimal places)
+  amount: xlmStringToStroops(xlm_amount), // exact 7-decimal stroop conversion
```

Where `xlmStringToStroops` pads/truncates the fractional part to exactly 7 digits and parses as `BigInt` (or returns a string), avoiding any float arithmetic.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/controllers/onrampController.ts` around lines 86 - 101, The financial event currently computes stroops via Math.round(xlmNum * 1e7) which uses floating-point and can lose precision; instead parse the validated decimal string (xlm_amount or the original string used to produce xlmNum) using a string-based conversion (e.g., xlmStringToStroops) that pads/truncates the fractional part to exactly 7 digits and returns a BigInt or canonical string stroop value, then pass that stroop value into logFinancialEvent as the amount (replace the Math.round(...) expression). Locate the call to logFinancialEvent and swap.id/xlmNum references and ensure the code uses the new xlmStringToStroops conversion (or equivalent) so no float arithmetic is used for stroop calculation.
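One possible shape for that helper, sketched under the suggestion's own naming (`xlmStringToStroops` is hypothetical; it does not exist in the codebase yet):

```typescript
// Converts a decimal XLM string to stroops (1 XLM = 10^7 stroops) using only
// string/BigInt operations, so no floating-point rounding can occur.
function xlmStringToStroops(xlm: string): bigint {
  if (!/^\d+(\.\d+)?$/.test(xlm)) {
    throw new Error(`Invalid XLM amount: ${xlm}`);
  }
  const [whole, fraction = ""] = xlm.split(".");
  // Pad or truncate the fractional part to exactly 7 digits (stroop precision).
  const stroopFraction = fraction.padEnd(7, "0").slice(0, 7);
  return BigInt(whole) * 10_000_000n + BigInt(stroopFraction);
}

// xlmStringToStroops("0.0000001") === 1n
// xlmStringToStroops("1.5")       === 15_000_000n
```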
src/controllers/webhookController.ts (1)
64-109: ⚠️ Potential issue | 🟠 Major
**Inconsistent error response shape across the same handler.**
Within `verifyFlutterwaveSignature` (and `verifyPaystackSignature`) the code now mixes two contracts:

- Config error path still `throw new AppError(...)` → goes through `errorHandler` and yields `{ error: { code, message, statusCode, details? } }`.
- Body / header / signature error paths use `res.status(...).json({ error: "..." })` directly, where `error` is a string, not an object.

Clients that introspect `error.code` for the AppError shape will break on these direct responses. Two recommended options:

- Route everything through `next(new AppError(...))` for a uniform shape:

♻️ Unify on AppError

```diff
-    if (!rawBody || !Buffer.isBuffer(rawBody)) {
-      res.status(400).json({ error: "Raw body required for verification" });
-      return;
-    }
+    if (!rawBody || !Buffer.isBuffer(rawBody)) {
+      return next(new AppError("Raw body required for verification", 400, "RAW_BODY_REQUIRED"));
+    }
@@
-    if (!received) {
-      res.status(401).json({ error: "Missing verif-hash header" });
-      return;
-    }
+    if (!received) {
+      return next(new AppError("Missing verif-hash header", 401, "MISSING_SIGNATURE_HEADER"));
+    }
@@
-    logger.warn("Flutterwave webhook signature mismatch");
-    res.status(401).json({ error: "Invalid signature" });
+    logger.warn("Flutterwave webhook signature mismatch");
+    return next(new AppError("Invalid signature", 401, "INVALID_SIGNATURE"));
```

(Apply the same to `verifyPaystackSignature`, including the still-`throw` config block — `next(new AppError(...))` is preferred over `throw` in Express handlers.)

- Or, if the intent is to short-circuit before any auth/log middleware sees the request, at minimum match the project-wide error envelope (object, with `code`/`message`/`statusCode`).

Webhook-source providers will retry on non-2xx, and if a downstream consumer parses these bodies for diagnostics the divergent shapes complicate alerting.
Also applies to: 131-181
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/controllers/webhookController.ts` around lines 64 - 109, The error responses in verifyFlutterwaveSignature (and similarly verifyPaystackSignature) are inconsistent: the config path throws AppError while body/header/signature checks use res.json({ error: "..." }) returning a string; update all failure paths to use next(new AppError(...)) so every error goes through the shared errorHandler and uses the same envelope (include appropriate statusCode and code values when creating AppError), and replace the initial throw with next(new AppError(...)) as well to ensure uniform error shape across the handler.
src/services/enterpriseService.ts (4)
226-307: ⚠️ Potential issue | 🟡 Minor
**No error path for the bulk job — failures leave rows stuck in `processing` with no `errorMessage`.**
If the chunk loop or any `prisma.bulkTransferJob.update` throws, the job row created at L226 stays in `status: "processing"` forever. The schema even reserves a `BulkTransferJob.errorMessage` column, but it's never written. Wrap the loop and final update in a `try { ... } catch (err) { await prisma.bulkTransferJob.update({ where: { id: job.id }, data: { status: "failed", errorMessage: ..., completedAt: new Date() } }); throw; }` so operators can recover, retry, or page on stalled jobs.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/enterpriseService.ts` around lines 226 - 307, Wrap the processing loop and final completion update in a try/catch around the code that runs after prisma.bulkTransferJob.create (the chunk loop that uses prisma.$transaction, updates processedRows via prisma.bulkTransferJob.update, and the final prisma.bulkTransferJob.update that sets status/completedAt); in the catch block call prisma.bulkTransferJob.update({ where: { id: job.id }, data: { status: "failed", errorMessage: (stringify the caught error), completedAt: new Date() } }) and then rethrow the error so failures don’t leave the created job stuck in "processing" and the reserved BulkTransferJob.errorMessage column is populated for operators to inspect.
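A sketch of that wrapper as a standalone helper (names like `runChunks` are illustrative; the real fix would inline this around the existing loop):

```typescript
import { PrismaClient } from "@prisma/client";

// Ensures a thrown error marks the job "failed" with an errorMessage instead
// of leaving it stuck in "processing" forever.
async function runBulkJobSafely(
  prisma: PrismaClient,
  jobId: string,
  runChunks: () => Promise<void>, // stands in for the existing chunk loop
): Promise<void> {
  try {
    await runChunks();
    await prisma.bulkTransferJob.update({
      where: { id: jobId },
      data: { status: "completed", completedAt: new Date() },
    });
  } catch (err) {
    await prisma.bulkTransferJob.update({
      where: { id: jobId },
      data: {
        status: "failed",
        // Truncate so a large stack trace cannot bloat the row.
        errorMessage: String(err instanceof Error ? err.message : err).slice(0, 2000),
        completedAt: new Date(),
      },
    });
    throw err; // rethrow so callers and metrics still observe the failure
  }
}
```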
329-349:⚠️ Potential issue | 🟡 Minor
**`skippedCount` hardcoded to 0 means re-fetching a job loses information.**
`processBulkTransfer` returns a real `skippedCount`, but `getBulkTransferJob` always returns `0` because the count isn't persisted. Either persist `skippedCount` on `BulkTransferJob` (recommended — also tighten `failureReport`/counter parity) or derive it from `failureReport`/results stored on the row. Otherwise client polling endpoints will show divergent values vs. the synchronous response.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/enterpriseService.ts` around lines 329 - 349, getBulkTransferJob currently returns skippedCount: 0 which loses the real skipped value; update the implementation and data model so skippedCount is surfaced consistently: either (preferred) add a skippedCount column to the BulkTransferJob model and set it in processBulkTransfer when the job finishes, then change getBulkTransferJob to return job.skippedCount; or, if you cannot change the schema, compute skippedCount in getBulkTransferJob from the persisted failureReport/row results (e.g., count entries with result === 'skipped') before returning. Ensure the same symbol names are used: BulkTransferJob (schema), processBulkTransfer (where to set skippedCount), getBulkTransferJob (where to read it), and failureReport (fallback source if not persisted).
47-113: ⚠️ Potential issue | 🟡 Minor
**CSV parser does not honor RFC-4180 multi-line quoted fields.**
The parser splits on `\r?\n` before invoking `parseCsvLine`, causing any cell containing an embedded newline inside `"..."` to be mis-parsed as separate rows. This violates RFC-4180, which explicitly permits CR/LF within quoted fields. While unlikely for current use cases (addresses, amounts), notes or memos with newlines will silently fail validation. Either parse the buffer in a single pass tracking `inQuotes` across newlines, or document this limitation.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/enterpriseService.ts` around lines 47 - 113, The CSV parser currently splits the file into physical lines before parsing which breaks RFC‑4180 quoted fields that contain embedded newlines; update parseCsvBuffer to produce logical CSV records by scanning the entire text char-by-char while tracking an inQuotes state (reuse the same inQuotes logic as parseCsvLine) so newline characters inside quotes are appended to the current field instead of splitting into a new record, then feed each completed logical record string into parseCsvLine as before; keep the existing header/required‑header checks and error throws (AppError) but ensure the scanner returns an array of logical lines/records rather than splitting on /\r?\n/ so multi-line quoted cells are handled correctly.
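A minimal single-pass record splitter along those lines, as a sketch (not the service's actual `parseCsvBuffer`):

```typescript
// Splits CSV text into logical records, honoring RFC-4180 quoted fields that
// may contain CR/LF. Escaped quotes ("") work via the quote toggle: the
// second quote immediately re-enters the quoted state.
function splitCsvRecords(text: string): string[] {
  const records: string[] = [];
  let current = "";
  let inQuotes = false;

  for (let i = 0; i < text.length; i += 1) {
    const ch = text[i];
    if (ch === '"') {
      inQuotes = !inQuotes;
      current += ch;
    } else if (!inQuotes && (ch === "\n" || ch === "\r")) {
      if (ch === "\r" && text[i + 1] === "\n") i += 1; // consume CRLF pair
      if (current.length > 0) records.push(current);
      current = "";
    } else {
      current += ch;
    }
  }
  if (current.length > 0) records.push(current);
  return records;
}

// splitCsvRecords('a,"line1\nline2"\nb,c') => ['a,"line1\nline2"', 'b,c']
```

Each logical record can then be fed to the existing `parseCsvLine` unchanged.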
245-280: ⚠️ Potential issue | 🔴 Critical
**Chunk transactions are not actually transactional — atomicity claim is false.**
`prisma.$transaction(async () => { ... })` is invoked without binding the callback's transactional client. `processRow` uses the module-scope `prisma` for `transaction.findUnique` and `transaction.create` (lines 134, 153, 176), so every read/write inside the callback runs outside the transaction. The promise that "each chunk is isolated in its own transaction" does not hold:

- A mid-chunk failure does not roll back rows already inserted in the same chunk.
- The `$transaction` wrapper adds latency/connection pressure with no isolation benefit.
- `P2002` retry logic in `processRow` interacts with the un-rolled-back state.

🔧 Suggested fix: thread the transactional client through

```diff
-  const chunkResults = await prisma.$transaction(async () => {
+  const chunkResults = await prisma.$transaction(async (tx) => {
     const results: BulkTransferRowResult[] = [];
     for (let i = 0; i < chunk.length; i += 1) {
       const rawRow = chunk[i];
       const validation = bulkTransferRowSchema.safeParse(rawRow);
       if (!validation.success) { ... continue; }
       results.push(
         await processRow(
+          tx,
           params.organizationId,
           params.senderUserId,
           validation.data,
           start + i,
         ),
       );
     }
     return results;
   });
```

And in `processRow`, accept `tx: Prisma.TransactionClient` and use `tx.transaction.findUnique`/`tx.transaction.create` instead of the module-scope `prisma`.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/enterpriseService.ts` around lines 245 - 280, The chunk-level prisma.$transaction wrapper is ineffective because processRow uses the module-scoped prisma rather than the transaction client; update processRow to accept a Prisma.TransactionClient parameter (e.g., add tx: Prisma.TransactionClient to processRow signature) and replace all module-scope prisma calls inside processRow (transaction.findUnique, transaction.create, etc.) with the passed tx (tx.transaction.findUnique / tx.transaction.create or tx.<model> calls); then in the chunk loop call processRow with the transactional client returned to the $transaction callback (e.g., await processRow(params.organizationId, params.senderUserId, validation.data, start + i, tx)) so all reads/writes in the callback run inside the same transaction and will roll back on failure.
src/controllers/enterpriseController.ts (1)
34-48: ⚠️ Potential issue | 🟡 Minor
**MIME sniffing accepts any file with `.csv` extension regardless of MIME type.**
The final `|| hasCsvExtension` short-circuits all the MIME checks, so `application/zip` named `evil.csv` passes. Given the downstream parser is text/utf-8 only and size is bounded, exploit risk is low, but if you intend CSV-only uploads the trailing fallback should require either `text/*` or `application/octet-stream` paired with the extension. Minor — not blocking.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/controllers/enterpriseController.ts` around lines 34 - 48, The isCsvUpload function currently accepts any file with a ".csv" extension because of the trailing "|| hasCsvExtension"; change the logic so the extension-only fallback is allowed only when the mimetype indicates text or generic binary (e.g., mimetype startsWith("text/") or includes "application/octet-stream"). Specifically, update the return expression in isCsvUpload to remove the unconditional hasCsvExtension and instead allow hasCsvExtension && (mimetype.startsWith("text/") || mimetype.includes("application/octet-stream")), keeping the existing explicit CSV mimetype checks (text/csv, application/csv, application/vnd.ms-excel) intact.
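A sketch of the tightened check described in the prompt (the file-object shape follows multer's `mimetype`/`originalname` fields, which is an assumption about the controller's upload middleware):

```typescript
// Accepts explicit CSV mimetypes, and falls back to the .csv extension only
// for text/* or application/octet-stream uploads.
function isCsvUpload(file: { mimetype: string; originalname: string }): boolean {
  const mimetype = file.mimetype.toLowerCase();
  const hasCsvExtension = file.originalname.toLowerCase().endsWith(".csv");

  const explicitCsvMime =
    mimetype === "text/csv" ||
    mimetype === "application/csv" ||
    mimetype === "application/vnd.ms-excel";

  // Extension alone is no longer enough: application/zip named evil.csv fails.
  const genericButPlausible =
    mimetype.startsWith("text/") || mimetype === "application/octet-stream";

  return explicitCsvMime || (hasCsvExtension && genericButPlausible);
}
```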
🧹 Nitpick comments (17)
src/services/audit/auditService.test.ts (1)
61-107: Optional: add coverage for `BREAK_GLASS_KEY`.
The new tests only exercise `ADMIN_KEY`. Since `validateAdminAttribution` treats `BREAK_GLASS_KEY` identically and break-glass is the higher-risk path, a parallel rejection test (and a happy-path test) for `BREAK_GLASS_KEY` would lock in the contract and catch any future divergence between the two branches.
🧪 Suggested additional cases

```diff
+  it("should reject break-glass audit entries with missing attribution fields", async () => {
+    mockChannel.sendToQueue.mockReturnValue(true);
+
+    await expect(
+      logAudit({
+        eventType: "auth",
+        action: "break_glass_used",
+        keyType: "BREAK_GLASS_KEY",
+        performedBy: "user-1",
+        // actorType, organizationId, reason missing
+      }),
+    ).rejects.toThrow(
+      "Admin audit entries require performedBy, actorType, organizationId, and reason",
+    );
+
+    expect(mockChannel.sendToQueue).not.toHaveBeenCalled();
+  });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/audit/auditService.test.ts` around lines 61 - 107, Add parallel tests for the BREAK_GLASS_KEY path to mirror the existing ADMIN_KEY cases: create one test that calls logAudit with keyType: "BREAK_GLASS_KEY" missing organizationId and reason and assert it rejects with the same error message and that mockChannel.sendToQueue and fs.appendFileSync are not called, and a second test that supplies performedBy, actorType, organizationId, and reason and asserts mockChannel.sendToQueue was called with QUEUES.AUDIT_LOGS, expect.any(Buffer), { persistent: true }; reference the same helpers used in the file (logAudit, validateAdminAttribution behavior, mockChannel.sendToQueue, QUEUES.AUDIT_LOGS, and the fs spies) so the tests mirror the ADMIN_KEY branch exactly.
src/services/feePolicy/feePolicyService.ts (1)
44-45: Sensible fix, but consider documenting the tolerance.The epsilon-tolerant boundary check correctly absorbs floating-point noise from the
(actualWeight / targetWeight) * 100computation (max relative error ~1e-13 in this regime, well within the chosen 1e-9). Behavior at exact 85.0 / 115.0 is unchanged from before (still maps to the normal tier), but values that drift by sub-epsilon amounts no longer flip tiers — which is the intended stabilization.Two optional follow-ups:
- Update the tier-spec docstring (lines 12–15) and the
getBurnFeeBpsJSDoc (lines 100–104) so the documented bounds match the implemented semantics. Right now they read as strict<85%/>115%but the code now treats[85 − ε, 115 + ε]as normal.- Consider naming/locating the constant alongside the threshold constants and noting that it is an absolute tolerance on a percent value; it is not a relative tolerance, so it would need revisiting if the threshold scale ever changes (e.g., if these became fractions in
[0,1]instead of percents).🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/feePolicy/feePolicyService.ts` around lines 44 - 45, The RESERVE_THRESHOLD_EPSILON constant was added to stabilize tier boundary checks but the public documentation/comments still describe strict "<85%" / ">115%" bounds; update the tier-spec docstring (near the existing tier description) and the JSDoc for getBurnFeeBps to reflect that the implementation treats values within ±RESERVE_THRESHOLD_EPSILON of the percent thresholds as “normal” (i.e., [85 − ε, 115 + ε]); also consider moving/naming RESERVE_THRESHOLD_EPSILON near the other threshold constants and add a short note that it is an absolute tolerance on percent values (not a relative tolerance) so maintainers know to revisit it if thresholds change scale.src/services/feePolicy/__tests__/feePolicyService.test.ts (1)
1-7: Smoke test is fine; consider a slightly tighter assertion.
`getSpreadBps()` is computed once at module load from `Number(process.env.SPREAD_BPS || "25")`. `> 0` will silently pass even if `SPREAD_BPS` is set to a non-numeric string in CI (`Number("abc") === NaN`, and `NaN > 0` is `false` — so that path actually fails, which is fine), but it would also pass for unreasonable values like `1e6`. A tighter assertion adds little cost:
♻️ Optional tightening

```diff
-  it("returns a positive spread basis points value", () => {
-    expect(getSpreadBps()).toBeGreaterThan(0);
-  });
+  it("returns a finite, positive spread in basis points", () => {
+    const bps = getSpreadBps();
+    expect(Number.isFinite(bps)).toBe(true);
+    expect(bps).toBeGreaterThan(0);
+    expect(bps).toBeLessThanOrEqual(500); // matches MAX_SANITY_FEE_BPS
+  });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/feePolicy/__tests__/feePolicyService.test.ts` around lines 1 - 7, The test's single assertion expect(getSpreadBps()).toBeGreaterThan(0) is too loose; update the test in src/services/feePolicy/__tests__/feePolicyService.test.ts to more tightly validate the module-level value returned by getSpreadBps (which is computed from Number(process.env.SPREAD_BPS || "25")) by asserting it's a finite number and within a reasonable range (for example > 0 and < 10000, or a narrower bound like <= 1000) so non-numeric or absurd values fail the test.
src/controllers/burnController.ts (2)
163-167: **Redundant `new Decimal(...)` wrapping.**
`localDecimal` (line 131) and `feeAcbuDecimal` (line 111) are already `Decimal` instances, so `new Decimal(localDecimal)`/`new Decimal(feeAcbuDecimal)` just allocates a copy. Likewise, `new Decimal(acbuNum)` round-trips through a JS `number` — within the validator's 7-decimal limit there's no precision loss today, but `acbuDecimal` is the authoritative value and avoids the boundary conversion entirely.
♻️ Suggested cleanup

```diff
-  acbuAmountBurned: new Decimal(acbuNum),
+  acbuAmountBurned: acbuDecimal,
   localCurrency: currency,
-  localAmount: new Decimal(localDecimal),
+  localAmount: localDecimal,
   recipientAccount: recipient_account as object,
-  fee: new Decimal(feeAcbuDecimal),
+  fee: feeAcbuDecimal,
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/controllers/burnController.ts` around lines 163 - 167, The object construction is creating unnecessary Decimal copies: stop wrapping existing Decimal instances and use the authoritative acbuDecimal and the already-Decimal locals directly; specifically, set acbuAmountBurned to acbuDecimal (not new Decimal(acbuNum)), set localAmount to localDecimal (not new Decimal(localDecimal)), and set fee to feeAcbuDecimal (not new Decimal(feeAcbuDecimal)) while keeping recipientAccount and localCurrency as-is so types remain consistent.
176-196: **Tighten P2002 detection with explicit `meta.target` check.**
The new P2002 catch handler has two issues:

1. **Over-broad error detection:** `isDuplicateBurnHash` matches any `P2002`, not just those triggered by `blockchainTxHash`. If the `transaction` table gains another unique constraint (e.g., a composite key including other fields), a P2002 from that constraint could be caught here. The fallback `findFirst` lookup prevents an incorrect response only by accident (no row matches). Prisma provides `createError.meta.target` to identify which constraint(s) failed — checking it explicitly prevents silent masking of schema changes.
2. **Redundant guard:** The condition `!isDuplicateBurnHash || !blockchain_tx_hash` (line 183) is dead code: `isDuplicateBurnHash` already requires `Boolean(blockchain_tx_hash)`, so the right operand adds no runtime safety, only minor TypeScript narrowing.

The codebase uses Prisma 5.19.0, and test evidence (`tests/burn.idempotency.test.ts` line 150) confirms `meta.target` is an array: `["type", "blockchainTxHash"]`. Check it explicitly:
♻️ Suggested refinement

```diff
   } catch (createError) {
-    const isDuplicateBurnHash =
-      burningEnabled &&
-      Boolean(blockchain_tx_hash) &&
-      createError instanceof Prisma.PrismaClientKnownRequestError &&
-      createError.code === "P2002";
-
-    if (!isDuplicateBurnHash || !blockchain_tx_hash) {
+    if (
+      !burningEnabled ||
+      !blockchain_tx_hash ||
+      !(createError instanceof Prisma.PrismaClientKnownRequestError) ||
+      createError.code !== "P2002"
+    ) {
       throw createError;
     }
+    // Ensure this P2002 is for blockchainTxHash, not another constraint
+    const target = createError.meta?.target;
+    const isBlockchainTxHashConstraint = Array.isArray(target)
+      ? target.includes("blockchainTxHash")
+      : target === "blockchainTxHash";
+    if (!isBlockchainTxHashConstraint) {
+      throw createError;
+    }
     const existing = await prisma.transaction.findFirst({
       where: { type: "burn", blockchainTxHash: blockchain_tx_hash },
     });
     if (!existing) {
       throw createError;
     }
     respondFromExistingBurnTx(res, existing, blockchain_tx_hash);
     return;
   }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/controllers/burnController.ts` around lines 176 - 196, The P2002 handler is too broad and contains a redundant guard; refine it to only treat errors as duplicate-burn-hash when createError is a Prisma.PrismaClientKnownRequestError and its meta?.target array explicitly includes the constraint columns (e.g., "blockchainTxHash" and optionally "type") rather than any P2002; update the isDuplicateBurnHash check to test createError instanceof Prisma.PrismaClientKnownRequestError and Array.isArray(createError.meta?.target) && createError.meta.target.includes("blockchainTxHash") (or both "type" and "blockchainTxHash" if you want exact match), remove the unnecessary `|| !blockchain_tx_hash` branch since the boolean check is duplicated, and keep the existing fallback lookup (prisma.transaction.findFirst) and respondFromExistingBurnTx(res, existing, blockchain_tx_hash) flow unchanged.
src/jobs/withdrawalProcessingJob.ts (1)
113-116: **DRY: re-derive `providerName` from the router instead of duplicating the mapping.**
This inline `{ NGN: "paystack", RWF: "mtn_momo" }` literal is a third source of truth for currency→preferred-provider routing alongside `DEFAULT_CURRENCY_PROVIDERS` and the new `getPreferredProviderId` helper used a few lines below. If the router config ever changes (e.g., KES gets a dedicated provider, or NGN priority flips), `logFinancialEvent` will silently log a stale provider for the financial audit trail while the actual disbursement uses the new one.
Recommend hoisting the router lookup and reusing the helper:
Proposed refactor

```diff
-  const providerName =
-    ({ NGN: "paystack", RWF: "mtn_momo" } as Record<string, string>)[
-      currency
-    ] ?? "flutterwave";
+  const router = getFintechRouter();
+  const providerName = router.getPreferredProviderId(currency) ?? "flutterwave";
```

…and drop the inner `const router = getFintechRouter();` at line 132.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/jobs/withdrawalProcessingJob.ts` around lines 113 - 116, The inline currency→provider mapping that sets providerName should be removed and replaced by deriving the provider from the router via the existing helper; call getFintechRouter() once (reuse the router already used for disbursement) and use getPreferredProviderId(router, currency) (or the same helper used below) to compute providerName so logFinancialEvent and the actual disbursement use the same source of truth (drop the duplicate const router = getFintechRouter() later and reference DEFAULT_CURRENCY_PROVIDERS only via the helper).
src/services/reserve/ReserveTracker.ts (1)
161-162: **Minor: duplicate `getBalance` call per attempt.**
`router.getProvider(currency)` already calls `provider.getBalance(currency)` as its internal health check (see `src/services/fintech/router.ts` L57), so this path issues the same balance request twice per `withRetry` attempt — up to 6 calls in the worst case across the 3-attempt budget. Not a correctness issue, but worth tightening if the underlying providers rate-limit or charge per call.
Possible micro-refactor (only if router exposes balance from health check)
If you'd rather avoid the duplicate call, an option is to expose a `getProviderWithBalance(currency)` on the router that returns the cached balance from its health probe; otherwise, leaving as-is is acceptable.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/reserve/ReserveTracker.ts` around lines 161 - 162, The current withRetry attempt in ReserveTracker calls (await fintechRouter.getProvider(currency)).getBalance(currency) which duplicates the provider.getBalance(currency) already performed inside fintechRouter.getProvider(currency); to fix it, either change ReserveTracker to call a router-level method that returns the already-probed balance (e.g., fintechRouter.getProviderWithBalance(currency) or fintechRouter.getBalance(currency)) instead of invoking provider.getBalance again, or modify fintechRouter.getProvider to return an object { provider, balance } and then use that cached balance in ReserveTracker's async attempt; update the call site in ReserveTracker (the async ()=>... passed to withRetry) to use the router-exposed balance field rather than calling provider.getBalance a second time.
src/services/transfer/transferService.ts (1)
165-232: **Silent no-op when `getKey` returns null leaves an "initiated" event without a terminal one.**
If `getSenderSigningKey` is provided but resolves to `null`/`undefined`, the function:

- emits `transfer.initiated` (line 118),
- skips submission,
- returns `status: "pending"` without emitting any `transfer.failed`/`transfer.deferred` event and without updating the row.

For audit / observability hygiene, consider either logging a `transfer.deferred` (or similar) event in this branch or asserting via logger that the deferral is intentional, so dashboards aren't left with dangling `initiated` events that never resolve in the same request.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/transfer/transferService.ts` around lines 165 - 232, The branch where getSenderSigningKey (referenced as getKey) is provided but returns null/undefined must not be a silent no-op: after calling getKey(senderUserId) and getting a falsy secretKey, update the transaction row using prisma.transaction.update to set a terminal status (e.g., "deferred" or "failed"), emit a corresponding financial event via logFinancialEvent (e.g., event: "transfer.deferred" with transactionId, idempotencyKey, userId, accountId, destinationId, amount, currency, correlationId and a reason), and write a clear log via logger.warn or logger.info indicating the deferral and reason; ensure the new branch sits where getKey returns falsy (the existing getKey/secretKey check) so the function no longer returns status "pending" without an emitted terminal event.
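A sketch of what that branch could emit, using the field names suggested in the prompt (the payload shape, the "deferred" status value, and the surrounding bindings are assumptions, not the service's actual types):

```typescript
// Sketch of the branch inside the transfer service where getKey resolves
// falsy. prisma, logFinancialEvent, logger, transaction and correlationId
// are assumed to be the surrounding service's own bindings.
const secretKey = await getKey(senderUserId);
if (!secretKey) {
  await prisma.transaction.update({
    where: { id: transaction.id },
    data: { status: "deferred" },
  });
  logFinancialEvent({
    event: "transfer.deferred",
    transactionId: transaction.id,
    correlationId,
    reason: "no signing key available; submission skipped",
  });
  logger.warn(
    { transactionId: transaction.id, correlationId },
    "transfer deferred: getSenderSigningKey returned no key",
  );
  return { status: "deferred", transactionId: transaction.id };
}
```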
src/config/tracing.ts (1)
1-3: **No-op `initTracing` placeholder — consider tracking implementation as TODO.**
The function is a deliberate no-op. If real tracing instrumentation (OpenTelemetry, etc.) is planned, please add a TODO referencing the relevant ticket so the placeholder is not silently shipped to production with observability gaps.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/config/tracing.ts` around lines 1 - 3, The no-op initTracing function is missing a TODO so the placeholder isn't silently shipped; update the initTracing() implementation to include a clear TODO comment referencing the tracking ticket or issue (e.g., add "TODO: implement OpenTelemetry tracing - see ISSUE-1234" or your team's equivalent) and a short note describing expected work (e.g., add OpenTelemetry instrumentation, exporters, and init/shutdown hooks) so future reviewers can find the planned tracing implementation; keep the function behavior unchanged until the ticket is implemented.
src/controllers/authController.ts (1)
52-58: **Remove redundant `req.connection` fallback — it's a deprecated Node.js v13+ alias.**
`req.connection` is a deprecated alias for `req.socket` that has been superseded for over a decade. The fallback adds unnecessary type complexity without improving coverage. Simplify to:
♻️ Proposed simplification

```diff
-function getRequestIp(req: AuthRequest): string {
-  const connection = (req as AuthRequest & {
-    connection?: { remoteAddress?: string | null };
-  }).connection;
-
-  return req.ip || req.socket?.remoteAddress || connection?.remoteAddress || "unknown";
-}
+function getRequestIp(req: AuthRequest): string {
+  return req.ip || req.socket?.remoteAddress || "unknown";
+}
```

⚠️ Important: `req.ip` requires `app.set("trust proxy", ...)` to correctly reflect the real client IP when behind a proxy. No such configuration was found in this codebase — ensure it's set in your Express initialization, or IP-based rate limiting and security checks will use the proxy's IP instead.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/controllers/authController.ts` around lines 52 - 58, The getRequestIp function currently uses a deprecated req.connection fallback; remove the connection extraction and simplify the return to only use req.ip, req.socket?.remoteAddress, and a final "unknown" default (update the function that references getRequestIp accordingly), and drop the extra intersection type for connection; also add a note/comment or checklist to ensure Express is configured with app.set("trust proxy", ...) elsewhere so req.ip yields the real client IP when behind proxies.
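For reference, a typical way to set this during Express initialization (the hop count of `1` is an assumption; tune it to the actual proxy topology):

```typescript
import express from "express";

const app = express();
// Trust exactly one reverse-proxy hop (e.g., a single load balancer) so
// req.ip is derived from X-Forwarded-For instead of the proxy's socket address.
app.set("trust proxy", 1);
```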
README.md (1)
246-255: **Clarify whether both `dev` and `develop` branches are intentionally used, or consolidate to a single development branch.**
The workflow file confirms both `dev` and `develop` are configured as CI triggers (lines 5-8 of `.github/workflows/ci.yml`), and the README documentation correctly reflects this. However, supporting two separate development branches is unusual. If only one is canonical, consolidate both the workflow and README to use a single branch name to prevent confusion and drift.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@README.md` around lines 246 - 255, The README and CI workflow currently list both 'dev' and 'develop' branches as CI triggers which is confusing; pick a single canonical development branch name (either 'dev' or 'develop'), update the CI workflow triggers in the workflow configuration to remove the other branch, and update README.md to reflect only the chosen branch; also search for and update any related references (PR templates, contributing docs, branch protection rules, and any scripts that reference 'dev' or 'develop') so all tooling and docs consistently use the single branch name.
prisma/migrations/20260427000000_add_user_devices/migration.sql (1)
18-22: **Redundant single-column index on `user_id`.**
PostgreSQL can use the leading column of the unique composite index `(user_id, fingerprint)` to satisfy lookups by `user_id` alone. The separate `idx_user_devices_user_id` adds write amplification and storage overhead with no read benefit. Recommend dropping it (or removing it from the Prisma schema) unless there is a specific planner choice this index is meant to force.
♻️ Suggested change

```diff
 CREATE UNIQUE INDEX "user_devices_user_id_fingerprint_key"
   ON "user_devices"("user_id", "fingerprint");

-CREATE INDEX "idx_user_devices_user_id"
-  ON "user_devices"("user_id");
-
 CREATE INDEX "idx_user_devices_fingerprint"
   ON "user_devices"("fingerprint");
```

Don't forget to drop the corresponding `@@index([userId])` in `prisma/schema.prisma` so future `prisma migrate` runs don't re-create it.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@prisma/migrations/20260427000000_add_user_devices/migration.sql` around lines 18 - 22, Drop the redundant single-column index "idx_user_devices_user_id" on the "user_devices" table (it's covered by the unique composite index "user_devices_user_id_fingerprint_key") and remove the corresponding @@index([userId]) entry in prisma/schema.prisma so Prisma won't recreate it; update the migration to include DROP INDEX "idx_user_devices_user_id" (or remove the CREATE INDEX line) and ensure the schema change is reflected before running prisma migrate.
src/controllers/enterpriseController.ts (1)
93-98: **Redundant `instanceof AppError` branch — both arms call `next(e)`.**
The two paths are identical. Drop the conditional or, if the intent was to differentiate (e.g., log non-AppError differently), make that explicit.
♻️ Simplification

```diff
-  } catch (e) {
-    if (e instanceof AppError) {
-      return next(e);
-    }
-    next(e);
-  }
+  } catch (e) {
+    next(e);
+  }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/controllers/enterpriseController.ts` around lines 93 - 98, The catch block in enterpriseController.ts redundantly checks "if (e instanceof AppError)" but both branches call next(e); remove the conditional and replace the entire catch body with a single next(e) call, or if you intended different handling for non-AppError, add the explicit behavior (e.g., processLogger.error(e) or wrap in a generic AppError) before calling next(e); refer to the catch block around the controller method where AppError and next are used to locate and update the code.
prisma/schema.prisma (3)
337-340: Emergency-access fields lack invariants — consider DB-level checks.
`emergencyReason` and `emergencyExpiresAt` are independent nullable columns alongside `keyType`. Operationally you almost certainly want:

- `emergencyExpiresAt IS NOT NULL` whenever `keyType = 'BREAK_GLASS_KEY'`.
- `emergencyReason IS NOT NULL` and length ≥ N for the same condition.
- An index on `emergencyExpiresAt` (or `(keyType, emergencyExpiresAt)`) so a sweeper can find expired break-glass keys.

Prisma can't express `CHECK` constraints directly, but you can add a raw SQL `CHECK` in the migration. This is exactly the kind of invariant the destructive-migration gate is meant to protect, so it's worth getting right at introduction time.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@prisma/schema.prisma` around lines 337 - 340, The schema allows emergencyReason and emergencyExpiresAt to be NULL even when keyType = BREAK_GLASS_KEY; add DB-level invariants by creating a migration that injects raw SQL: a CHECK constraint enforcing (key_type != 'BREAK_GLASS_KEY') OR (emergency_expires_at IS NOT NULL AND emergency_reason IS NOT NULL AND length(emergency_reason) >= N) and optionally another CHECK to validate the timestamp semantics, and add an index on emergency_expires_at or a composite index (key_type, emergency_expires_at) so the sweeper can efficiently find expired BREAK_GLASS_KEY rows; target the columns/enum names keyType / ApiKeyType / emergencyExpiresAt / emergencyReason in the migration to ensure the constraints and index apply to the correct table.
255-258: **`AuditTrail` field-name alignment + missing `@db.VarChar(20)` parity for `actorType` between `User`/`Organization` and `AuditTrail`.**
Minor: in `User` and `Organization`, `actorType` is `VarChar(20)`. Here it's also `VarChar(20)` ✓. But `keyType` is `VarChar(32)` while the new `ApiKey.keyType` uses the `ApiKeyType` enum (`USER_KEY`/`ADMIN_KEY`/`BREAK_GLASS_KEY`, max 15 chars). The mismatch isn't a bug, but consider using the enum on `AuditTrail.keyType` too (`ApiKeyType?`) so the values are validated rather than free-form strings, or document the allowed values explicitly.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@prisma/schema.prisma` around lines 255 - 258, AuditTrail currently defines keyType as a free-form String; change it to use the ApiKeyType enum (make it ApiKeyType? named keyType) so values are validated the same way as ApiKey.keyType, and ensure the field still maps correctly (keep `@map`("key_type") and an appropriate `@db.VarChar` length if you want to preserve column type); update the AuditTrail model's keyType declaration and keep actorType parity with User/Organization (actorType String? `@db.VarChar`(20)) to maintain consistency.
28-28: **Field alignment / formatting nit — single space before type drops the column out of alignment with the rest of `Organization`.**

```diff
-  bulkTransferJobs BulkTransferJob[]
+  bulkTransferJobs           BulkTransferJob[]
```

Run `prisma format` to normalize. Cosmetic only.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@prisma/schema.prisma` at line 28, The field declaration for bulkTransferJobs in the Organization model is misaligned due to an extra/missing space; open prisma/schema.prisma, locate the Organization model and the bulkTransferJobs BulkTransferJob[] field, fix the spacing to match the other fields (one consistent space/tab before the type), then run `prisma format` to normalize formatting across the file.
scripts/ci/check-destructive-migrations.js (1)
1-263: **File-wide: this `.js` file violates project Prettier rules (single vs double quotes, trailing commas, line wrapping).**
ESLint reports ~50 prettier errors across the file. Run `npx prettier --write scripts/ci/check-destructive-migrations.js` before merging so the lint job stays green; the rules are mechanical and not worth handling per-line in review.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/ci/check-destructive-migrations.js` around lines 1 - 263, This file fails Prettier formatting across many lines (quotes, trailing commas, wrapping); run the project's Prettier formatter on scripts/ci/check-destructive-migrations.js (npx prettier --write scripts/ci/check-destructive-migrations.js) and commit the result so functions like normalizePath, splitSqlStatements, findDestructiveStatements and main keep their implementations but follow project formatting rules (double quotes, proper trailing commas, line wrapping), ensuring exports at module.exports are preserved.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: e9640d85-6aaa-476a-84ea-949cbb5a75ee
📒 Files selected for processing (46)
- .github/workflows/ci.yml
- README.md
- prisma/migrations/20260427000000_add_user_devices/migration.sql
- prisma/schema.prisma
- scripts/ci/check-destructive-migrations.js
- src/config/env.ts
- src/config/tracing.ts
- src/controllers/authController.ts
- src/controllers/billsController.ts
- src/controllers/burnController.ts
- src/controllers/enterpriseController.ts
- src/controllers/investmentController.ts
- src/controllers/mintController.test.ts
- src/controllers/mintController.ts
- src/controllers/onrampController.ts
- src/controllers/salaryController.ts
- src/controllers/transactionController.ts
- src/controllers/webhookController.ts
- src/index.ts
- src/jobs/usdcConversionJob.ts
- src/jobs/withdrawalProcessingJob.ts
- src/middleware/errorHandler.ts
- src/routes/authRoutes.ts
- src/routes/index.ts
- src/routes/investmentRoutes.ts
- src/routes/recoveryRoutes.ts
- src/services/audit/auditService.test.ts
- src/services/audit/auditService.ts
- src/services/enterpriseService.ts
- src/services/feePolicy/__tests__/feePolicyService.pbt.test.ts
- src/services/feePolicy/__tests__/feePolicyService.test.ts
- src/services/feePolicy/feePolicyService.ts
- src/services/fintech/router.ts
- src/services/investment/allocationService.ts
- src/services/oracle/integrationService.ts
- src/services/reserve/RebalancingEngine.ts
- src/services/reserve/ReserveTracker.ts
- src/services/transfer/transferService.ts
- src/services/wallet/walletService.test.ts
- src/utils/jwt.ts
- tests/destructiveMigrationGate.test.ts
- tests/enterprise/bulkTransfer.test.ts
- tests/openapi-drift.test.ts
- tests/recovery.test.ts
- tests/setup.ts
- tests/transfer.test.ts
💤 Files with no reviewable changes (2)
- src/controllers/mintController.ts
- src/services/wallet/walletService.test.ts
```prisma
  id             String  @id @default(uuid()) @db.Uuid
  userId         String? @map("user_id") @db.Uuid
  organizationId String? @map("organization_id") @db.Uuid
  idempotencyKey String? @unique(map: "idx_transactions_idempotency_key") @map("idempotency_key") @db.VarChar(255)
```
**`Transaction.idempotencyKey` is now globally unique — confirm no historical conflicts before deploy.**
Adding `@unique` to an existing populated column with a destructive migration pathway will fail at deploy time if any duplicates exist (which can happen if the field was previously written by mistake or by a different feature). Make sure the migration includes a backfill/dedup or a guard query, and run `SELECT idempotency_key, COUNT(*) FROM transactions WHERE idempotency_key IS NOT NULL GROUP BY 1 HAVING COUNT(*) > 1` against staging before promoting. Also worth verifying that no other write path (e.g. `salaryItem`, withdrawals) sets a colliding key with the new `deriveIdempotencyKey` in `enterpriseService.ts`.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@prisma/schema.prisma` at line 86, You added `@unique` to
Transaction.idempotencyKey which will fail migration if duplicates exist; before
deploying, run the suggested dedupe query (SELECT idempotency_key, COUNT(*) ...
HAVING COUNT(*) > 1) against staging and include a migration that
backfills/cleans duplicates or a guard that nulls/confirms conflicting rows,
then create a safe migration script to dedupe (merge/remove duplicates or append
suffixes) and add tests; also audit any write paths that generate idempotency
keys (search for deriveIdempotencyKey and usages in enterpriseService.ts,
salaryItem, withdrawals, etc.) to ensure no new collision logic is introduced
and update those code paths to use the canonical deriveIdempotencyKey or to
avoid producing conflicting values before applying the unique constraint.
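Where the prompt suggests a guard query, a sketch of running it through Prisma before promoting (the function name is illustrative; wiring it into the deploy pipeline is left open):

```typescript
import { PrismaClient } from "@prisma/client";

// Fails fast if duplicate idempotency keys exist before the unique
// constraint lands. Table/column names follow the schema excerpt above.
async function assertNoDuplicateIdempotencyKeys(prisma: PrismaClient): Promise<void> {
  const dupes = await prisma.$queryRaw<{ idempotency_key: string; count: bigint }[]>`
    SELECT idempotency_key, COUNT(*) AS count
    FROM transactions
    WHERE idempotency_key IS NOT NULL
    GROUP BY idempotency_key
    HAVING COUNT(*) > 1
  `;
  if (dupes.length > 0) {
    throw new Error(
      `Cannot apply unique constraint: ${dupes.length} duplicate idempotency_key value(s)`,
    );
  }
}
```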
```prisma
model SalaryItem {
  id               String       @id @default(uuid()) @db.Uuid
  batchId          String       @map("batch_id") @db.Uuid
  recipientId      String?      @map("recipient_id") @db.Uuid
  recipientAddress String       @map("recipient_address") @db.VarChar(56)
  amount           Decimal      @db.Decimal(20, 8)
  status           String       @default("pending") @db.VarChar(20)
  transactionId    String?      @unique @map("transaction_id") @db.Uuid
  errorMessage     String?      @map("error_message") @db.Text
  createdAt        DateTime     @default(now()) @map("created_at") @db.Timestamp(6)
  updatedAt        DateTime     @updatedAt @map("updated_at") @db.Timestamp(6)
  batch            SalaryBatch  @relation(fields: [batchId], references: [id], onDelete: Cascade)
  transaction      Transaction? @relation(fields: [transactionId], references: [id], onDelete: SetNull)

  @@index([batchId], map: "idx_salary_item_batch_id")
  @@index([status], map: "idx_salary_item_status")
  @@map("salary_items")
}

model SalarySchedule {
  id             String        @id @default(uuid()) @db.Uuid
  organizationId String?       @map("organization_id") @db.Uuid
  userId         String        @map("user_id") @db.Uuid
  name           String        @db.VarChar(100)
  cron           String        @db.VarChar(100)
  amountConfig   Json          @map("amount_config") @db.JsonB
  currency       String        @default("ACBU") @db.VarChar(10)
  status         String        @default("active") @db.VarChar(20)
  lastRunAt      DateTime?     @map("last_run_at") @db.Timestamp(6)
  nextRunAt      DateTime?     @map("next_run_at") @db.Timestamp(6)
  createdAt      DateTime      @default(now()) @map("created_at") @db.Timestamp(6)
  updatedAt      DateTime      @updatedAt @map("updated_at") @db.Timestamp(6)
  organization   Organization? @relation(fields: [organizationId], references: [id], onDelete: SetNull)
  user           User          @relation(fields: [userId], references: [id])

  @@index([organizationId], map: "idx_salary_schedule_org_id")
  @@index([status], map: "idx_salary_schedule_status")
  @@index([nextRunAt], map: "idx_salary_schedule_next_run")
  @@map("salary_schedules")
}
```
SalaryItem / SalarySchedule referential actions deserve a quick second look.
- `SalaryItem.transaction` uses `onDelete: SetNull` — good (transactions are append-only and shouldn't normally be deleted, but if they are, the salary item shouldn't cascade).
- `SalarySchedule.organization` uses `onDelete: SetNull` while `SalarySchedule.user` is `Restrict` (no explicit action ⇒ default). Mixed semantics: deleting an org orphans schedules (which then run with no org context on next cron tick), while deleting a user blocks org deletion entirely. Make sure `nextRunAt` / cron worker tolerates `organizationId === null` rows, or change to `Cascade`/`Restrict` consistently.
- Consider `@@unique([organizationId, name])` on `SalarySchedule` to prevent duplicate "Monthly payroll" rows from racing API calls.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@prisma/schema.prisma` around lines 570 - 609, The referential actions are
inconsistent and may orphan or block deletes: review and align the onDelete
behavior for SalaryItem.transaction, SalarySchedule.organization and
SalarySchedule.user (e.g., make SalarySchedule.organization use onDelete:
SetNull if the cron/worker tolerates organizationId === null, or change both
organization and user to Restrict/Cascade for consistent semantics) and ensure
the cron worker that uses cron/nextRunAt handles organizationId === null; also
add a uniqueness constraint on SalarySchedule (@@unique([organizationId, name]))
to prevent duplicate schedules like "Monthly payroll" from racing API calls.
```prisma
model BulkTransferJob {
  id             String       @id @default(uuid()) @db.Uuid
  organizationId String       @map("organization_id") @db.Uuid
  status         String       @default("pending") @db.VarChar(20)
  totalRows      Int          @map("total_rows")
  processedRows  Int          @default(0) @map("processed_rows")
  successCount   Int          @default(0) @map("success_count")
  failureCount   Int          @default(0) @map("failure_count")
  failureReport  Json?        @map("failure_report")
  createdAt      DateTime     @default(now()) @map("created_at") @db.Timestamp(6)
  completedAt    DateTime?    @map("completed_at") @db.Timestamp(6)
  errorMessage   String?      @map("error_message") @db.Text
  organization   Organization @relation(fields: [organizationId], references: [id], onDelete: Cascade)

  @@index([organizationId], map: "idx_bulk_transfer_jobs_organization_id")
  @@index([status], map: "idx_bulk_transfer_jobs_status")
  @@index([createdAt], map: "idx_bulk_transfer_jobs_created_at")
  @@map("bulk_transfer_jobs")
}
```
BulkTransferJob cascades on org delete and lacks job-level idempotency / size bounds.
Three operational concerns worth a sanity check:

- `onDelete: Cascade` on `organization` will silently delete bulk-transfer audit history if an org row is removed. Given the per-row `Transaction.idempotencyKey` lives on `transactions` (which retain via `Organization` as `Restrict`-equivalent through the `organizationId?` FK with no cascade), the loss is limited to job metadata, but for audit/regulatory replay you may want `onDelete: Restrict` or `SetNull` on a nullable `organizationId` (and a soft-delete on `Organization`).
- No `idempotencyKey` (or `(organizationId, fileChecksum)` unique) on the job means a retried CSV upload creates a duplicate `BulkTransferJob` row. Per-row dedup still prevents duplicate transactions, but operator dashboards will count phantom jobs.
- `failureReport Json?` is unbounded; on a large failing CSV this could become a multi-MB row. Consider capping to N entries plus a count, or moving overflow to object storage.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@prisma/schema.prisma` around lines 611 - 629, The BulkTransferJob model
currently uses onDelete: Cascade for the organization relation and lacks
job-level dedup/idempotency and bounds on the failureReport; change the relation
from onDelete: Cascade to either onDelete: Restrict or onDelete: SetNull (if you
choose SetNull make organizationId nullable) to prevent accidental wholesale
deletion of job history, add a job-level uniqueness constraint such as a
nullable idempotencyKey column on BulkTransferJob or a unique composite index on
(organizationId, fileChecksum) to prevent duplicate jobs on retries, and
mitigate unbounded failureReport growth by limiting stored entries (e.g., cap to
N entries + a failureCount) or move overflow entries to external object storage
and store only a summarized JSON/URL in failureReport; reference
BulkTransferJob, organizationId, onDelete: Cascade, failureReport,
idempotencyKey and fileChecksum (and Transaction.idempotencyKey for per-row
dedup) when making these changes.
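For the `failureReport` bound, a sketch of the capping approach (the 100-entry cap and field names are illustrative):

```typescript
// Caps the persisted failure report so a pathological CSV cannot produce a
// multi-MB JSON row; overflow is summarized rather than stored.
const MAX_PERSISTED_FAILURES = 100; // illustrative cap

interface RowFailure {
  row: number;
  reason: string;
}

function buildFailureReport(failures: RowFailure[]) {
  return {
    totalFailures: failures.length,
    truncated: failures.length > MAX_PERSISTED_FAILURES,
    // Keep only the first N entries; a full report could be pushed to object
    // storage if overflow handling is added later.
    entries: failures.slice(0, MAX_PERSISTED_FAILURES),
  };
}
```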
```js
const DESTRUCTIVE_PATTERNS = [
  { kind: 'DROP TABLE', regex: /\bDROP\s+TABLE\b/i },
  { kind: 'DROP COLUMN', regex: /\bDROP\s+COLUMN\b/i },
  { kind: 'TRUNCATE', regex: /\bTRUNCATE(?:\s+TABLE)?\b/i },
];
```
**Destructive-pattern set misses several lossy migrations.**
The acceptance criterion for #170 is "fail on destructive migrations." The current matcher only catches `DROP TABLE`, `DROP COLUMN`, `TRUNCATE`. It will silently pass these equally destructive operations:

- `ALTER TABLE … RENAME COLUMN`/`RENAME TO` (orphans existing data references / clients).
- `ALTER TABLE … ALTER COLUMN … TYPE` (potentially lossy / table rewrite).
- `ALTER TABLE … DROP CONSTRAINT` (PK/FK/unique drops; can erase invariants).
- `DROP INDEX` (sometimes intentional but worth labeling on prod tables).
- `DROP SCHEMA`, `DROP DATABASE`, `DROP VIEW`, `DROP MATERIALIZED VIEW`, `DROP SEQUENCE`, `DROP TYPE`, `DROP FUNCTION`.

Recommend extending the pattern list (still allowing the `allow-destructive-migration` label to bypass).
🛡️ Suggested extension
const DESTRUCTIVE_PATTERNS = [
- { kind: 'DROP TABLE', regex: /\bDROP\s+TABLE\b/i },
- { kind: 'DROP COLUMN', regex: /\bDROP\s+COLUMN\b/i },
- { kind: 'TRUNCATE', regex: /\bTRUNCATE(?:\s+TABLE)?\b/i },
+ { kind: 'DROP TABLE', regex: /\bDROP\s+TABLE\b/i },
+ { kind: 'DROP COLUMN', regex: /\bDROP\s+COLUMN\b/i },
+ { kind: 'DROP CONSTRAINT', regex: /\bDROP\s+CONSTRAINT\b/i },
+ { kind: 'DROP INDEX', regex: /\bDROP\s+INDEX\b/i },
+ { kind: 'DROP SCHEMA', regex: /\bDROP\s+SCHEMA\b/i },
+ { kind: 'DROP VIEW', regex: /\bDROP\s+(?:MATERIALIZED\s+)?VIEW\b/i },
+ { kind: 'DROP TYPE', regex: /\bDROP\s+TYPE\b/i },
+ { kind: 'TRUNCATE', regex: /\bTRUNCATE(?:\s+TABLE)?\b/i },
+ { kind: 'RENAME COLUMN', regex: /\bRENAME\s+COLUMN\b/i },
+ { kind: 'RENAME TABLE/TO', regex: /\bRENAME\s+TO\b/i },
+ { kind: 'ALTER COLUMN TYPE', regex: /\bALTER\s+COLUMN\s+\S+\s+(?:SET\s+DATA\s+)?TYPE\b/i },
];📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| const DESTRUCTIVE_PATTERNS = [ | |
| { kind: 'DROP TABLE', regex: /\bDROP\s+TABLE\b/i }, | |
| { kind: 'DROP COLUMN', regex: /\bDROP\s+COLUMN\b/i }, | |
| { kind: 'TRUNCATE', regex: /\bTRUNCATE(?:\s+TABLE)?\b/i }, | |
| ]; | |
| const DESTRUCTIVE_PATTERNS = [ | |
| { kind: 'DROP TABLE', regex: /\bDROP\s+TABLE\b/i }, | |
| { kind: 'DROP COLUMN', regex: /\bDROP\s+COLUMN\b/i }, | |
| { kind: 'DROP CONSTRAINT', regex: /\bDROP\s+CONSTRAINT\b/i }, | |
| { kind: 'DROP INDEX', regex: /\bDROP\s+INDEX\b/i }, | |
| { kind: 'DROP SCHEMA', regex: /\bDROP\s+SCHEMA\b/i }, | |
| { kind: 'DROP VIEW', regex: /\bDROP\s+(?:MATERIALIZED\s+)?VIEW\b/i }, | |
| { kind: 'DROP TYPE', regex: /\bDROP\s+TYPE\b/i }, | |
| { kind: 'TRUNCATE', regex: /\bTRUNCATE(?:\s+TABLE)?\b/i }, | |
| { kind: 'RENAME COLUMN', regex: /\bRENAME\s+COLUMN\b/i }, | |
| { kind: 'RENAME TABLE/TO', regex: /\bRENAME\s+TO\b/i }, | |
| { kind: 'ALTER COLUMN TYPE', regex: /\bALTER\s+COLUMN\s+\S+\s+(?:SET\s+DATA\s+)?TYPE\b/i }, | |
| ]; |
🧰 Tools
🪛 ESLint
[error] 13-13: Replace 'DROP·TABLE' with "DROP·TABLE"
(prettier/prettier)
[error] 14-14: Replace 'DROP·COLUMN' with "DROP·COLUMN"
(prettier/prettier)
[error] 15-15: Replace 'TRUNCATE' with "TRUNCATE"
(prettier/prettier)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/ci/check-destructive-migrations.js` around lines 12 - 16, The
DESTRUCTIVE_PATTERNS list in scripts/ci/check-destructive-migrations.js only
checks DROP TABLE/COLUMN/TRUNCATE and misses many lossy operations; update the
DESTRUCTIVE_PATTERNS constant to include additional entries for ALTER TABLE …
RENAME (match patterns like /\bALTER\s+TABLE\b[\s\S]*\bRENAME\b/i), ALTER TABLE
… ALTER COLUMN … TYPE
(/\bALTER\s+TABLE\b[\s\S]*\bALTER\s+COLUMN\b[\s\S]*\bTYPE\b/i), ALTER TABLE …
DROP CONSTRAINT (/\bALTER\s+TABLE\b[\s\S]*\bDROP\s+CONSTRAINT\b/i), DROP INDEX
(/\bDROP\s+INDEX\b/i), and other DROP kinds (DROP SCHEMA, DROP DATABASE, DROP
VIEW, DROP MATERIALIZED VIEW, DROP SEQUENCE, DROP TYPE, DROP FUNCTION — each
with a simple word-boundary regex like /\bDROP\s+VIEW\b/i etc.); keep the
existing allow-destructive-migration label bypass logic intact so these checks
can still be overridden when explicitly allowed.
| function splitSqlStatements(sql) { | ||
| return sql | ||
| .split(/;\s*(?:\r?\n|$)/) | ||
| .map((statement) => statement.trim()) | ||
| .filter(Boolean) | ||
| .map((statement) => (statement.endsWith(';') ? statement : `${statement};`)); | ||
| } |
splitSqlStatements will mis-split (or coalesce) edge cases.
- Requires `;` followed by `\r?\n` or end-of-string, so `DROP TABLE a; DROP TABLE b;` on a single line collapses into one statement (still flagged here, but the truncated output is misleading).
- It splits inside Postgres dollar-quoted blocks (`DO $$ ... ; ... $$`), producing false positives if a `DO` block legitimately contains the destructive keyword in an `EXECUTE` string.
- Doesn't strip `--` line comments or `/* ... */` block comments before scanning, so a comment like `-- DROP COLUMN was reverted` will trigger the gate.
Prisma-generated migrations are usually well-behaved, but a hand-edited migration could trip these. Consider stripping comments first and either using a real SQL tokenizer (e.g. pgsql-parser) or at least skipping content between $$ markers.
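A minimal sketch of that direction, assuming plain string scanning rather than a full parser (only tag-less `$$` quoting is handled; `$tag$` bodies and string literals containing `;` would still need something like pgsql-parser):

```ts
// Strip SQL comments, then split on semicolons that sit outside
// dollar-quoted ($$ ... $$) bodies. A sketch, not the gate's implementation.
function splitSqlStatements(sql: string): string[] {
  const withoutComments = sql
    .replace(/\/\*[\s\S]*?\*\//g, " ") // /* ... */ block comments
    .replace(/--[^\n]*/g, " "); // -- line comments

  const statements: string[] = [];
  let current = "";
  let inDollarQuote = false;

  for (let i = 0; i < withoutComments.length; i++) {
    if (withoutComments.startsWith("$$", i)) {
      inDollarQuote = !inDollarQuote;
      current += "$$";
      i += 1; // consume both '$' characters
      continue;
    }
    if (withoutComments[i] === ";" && !inDollarQuote) {
      if (current.trim()) statements.push(`${current.trim()};`);
      current = "";
    } else {
      current += withoutComments[i];
    }
  }
  if (current.trim()) statements.push(`${current.trim()};`);
  return statements;
}
```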
🧰 Tools
🪛 ESLint
[error] 33-33: Replace ·(statement.endsWith(';')·?·statement·:·${statement};) with ⏎······statement.endsWith(";")·?·statement·:·${statement};,⏎····
(prettier/prettier)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/ci/check-destructive-migrations.js` around lines 28 - 34, The
splitSqlStatements function mis-splits SQL because it splits simply on
semicolons and doesn't strip comments or respect Postgres dollar-quoted blocks;
update splitSqlStatements to first remove/ignore SQL comments (both -- line
comments and /* ... */ block comments), then parse and skip content inside
dollar-quoted ($$...$$ or $tag$...$tag$) sections when scanning for
statement-terminating semicolons so semicolons inside $$ blocks are not treated
as separators, and finally split on semicolons found only outside those quoted
blocks; alternatively, if you prefer a simpler/safer route, replace the custom
logic with a call to a real SQL tokenizer/parser (e.g., pgsql-parser) and use it
to emit statements—refer to splitSqlStatements for where to apply the change.
| * device_fingerprint: | ||
| * type: object | ||
| * description: Optional client fingerprint metadata used for device verification and recovery risk checks | ||
| * properties: | ||
| * user_agent: | ||
| * type: string | ||
| * ip: | ||
| * type: string | ||
| * accept_language: | ||
| * type: string | ||
| * accept_encoding: | ||
| * type: string | ||
| * timezone: | ||
| * type: string | ||
| * screen_resolution: | ||
| * type: string | ||
| * platform: | ||
| * type: string |
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Verify that recoveryController validates device_fingerprint with a strict schema.
fd -e ts recoveryController
rg -nP "device_fingerprint|deviceFingerprint" --type=ts -C3Repository: Pi-Defi-world/acbu-backend
Length of output: 27679
🏁 Script executed:
cat -n src/controllers/recoveryController.ts | head -50Repository: Pi-Defi-world/acbu-backend
Length of output: 2044
The `device_fingerprint` payload is already constrained at runtime; no validation gap exists. Zod objects strip unknown properties by default, so even without an explicit `.strict()` the 7 documented fields (`user_agent`, `ip`, `accept_language`, `accept_encoding`, `timezone`, `screen_resolution`, `platform`) are the only properties passed through to business logic via the spread operator. However, the OpenAPI documentation lacks `additionalProperties: false`, creating a mismatch: the docs advertise an open schema while the runtime discards extra fields.
For clarity and documentation consistency, consider adding `.strict()` to both Zod schemas in `recoveryController.ts` (lines 14 and 28) and `additionalProperties: false` to the OpenAPI definitions in `recoveryRoutes.ts` (lines 40–57 and 85–102) to explicitly document the closed-schema behavior.
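A minimal sketch of the closed-schema pairing, assuming the field list above (the schema name is hypothetical):

```ts
import { z } from "zod";

// .strict() rejects unknown keys at runtime instead of silently stripping
// them, which matches `additionalProperties: false` in the OpenAPI spec.
const deviceFingerprintSchema = z
  .object({
    user_agent: z.string().optional(),
    ip: z.string().optional(),
    accept_language: z.string().optional(),
    accept_encoding: z.string().optional(),
    timezone: z.string().optional(),
    screen_resolution: z.string().optional(),
    platform: z.string().optional(),
  })
  .strict();

// Extras now surface as a ZodError rather than being dropped:
deviceFingerprintSchema.parse({ ip: "203.0.113.7", foo: "bar" }); // throws
```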
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/routes/recoveryRoutes.ts` around lines 40 - 57, Add explicit
closed-schema documentation and runtime enforcement: update the two Zod schemas
in recoveryController.ts (the request body/device_fingerprint schemas defined at
the two occurrences mentioned) to call .strict() so extra properties are
rejected at runtime, and update the OpenAPI object schemas in recoveryRoutes.ts
(the device_fingerprint definitions at both blocks) to include
"additionalProperties: false" so the generated docs match the runtime behavior;
ensure you target the exact schema objects that list user_agent, ip,
accept_language, accept_encoding, timezone, screen_resolution, and platform so
the changes align with the existing fields.
| getPreferredProviderId(currency: string): FintechProviderId | null { | ||
| return this.currencyProviders[currency]?.[0] ?? "flutterwave"; | ||
| } |
Return type doesn't match implementation; | null branch is unreachable.
Declared as FintechProviderId | null, but the ?? "flutterwave" fallback makes null impossible. As a result, the ?? undefined guard at the failover call site in src/jobs/withdrawalProcessingJob.ts is dead code, and unknown currencies are silently coerced to "flutterwave" instead of being surfaced to callers.
Pick one of the two options below:
Option A — drop the null from the type (current behavior)
- getPreferredProviderId(currency: string): FintechProviderId | null {
- return this.currencyProviders[currency]?.[0] ?? "flutterwave";
- }
+ getPreferredProviderId(currency: string): FintechProviderId {
+ return this.currencyProviders[currency]?.[0] ?? "flutterwave";
+ }Option B — actually return null when unmapped (lets callers decide)
- getPreferredProviderId(currency: string): FintechProviderId | null {
- return this.currencyProviders[currency]?.[0] ?? "flutterwave";
- }
+ getPreferredProviderId(currency: string): FintechProviderId | null {
+ return this.currencyProviders[currency]?.[0] ?? null;
+ }📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| getPreferredProviderId(currency: string): FintechProviderId | null { | |
| return this.currencyProviders[currency]?.[0] ?? "flutterwave"; | |
| } | |
| getPreferredProviderId(currency: string): FintechProviderId { | |
| return this.currencyProviders[currency]?.[0] ?? "flutterwave"; | |
| } |
| getPreferredProviderId(currency: string): FintechProviderId | null { | |
| return this.currencyProviders[currency]?.[0] ?? "flutterwave"; | |
| } | |
| getPreferredProviderId(currency: string): FintechProviderId | null { | |
| return this.currencyProviders[currency]?.[0] ?? null; | |
| } |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/services/fintech/router.ts` around lines 84 - 86, The
getPreferredProviderId function currently always returns a string due to the "??
\"flutterwave\"" fallback while its signature includes | null; remove the
hardcoded fallback so the function truly returns null for unmapped currencies
(change implementation to return this.currencyProviders[currency]?.[0] ?? null)
and keep the return type FintechProviderId | null, then update the failover call
site that currently uses "?? undefined" to properly handle a null provider value
(explicit null-check or failover logic) instead of relying on a silent
"flutterwave" coercion.
| if (delta > driftThreshold) { | ||
| logger.warn('RESERVE DRIFT DETECTED: DB Ledger and On-Chain supply diverge', { | ||
| ledgerSupply, | ||
| onChainSupply, | ||
| delta, | ||
| }); | ||
| } | ||
|
| return onChainSupply; | ||
| } catch (e) { | ||
| logger.error('Failed to query Stellar for ACBU total supply, falling back to ledger', { error: e }); | ||
| return ledgerSupply; | ||
| } | ||
| } | ||
| , | ||
| ); | ||
|
| return this.getTotalAcbuSupplyFromLedger(); | ||
| } | ||
|
| try { | ||
| const server = stellarClient.getServer(); | ||
| const assets = await server | ||
| .assets() | ||
| .forCode(assetCode) | ||
| .forIssuer(issuer) | ||
| .call(); | ||
|
| if (assets.records.length === 0) { | ||
| logger.warn( | ||
| "ACBU asset not found on Stellar; supply is effectively zero.", | ||
| "RESERVE DRIFT DETECTED: DB Ledger and On-Chain supply diverge", | ||
| { | ||
| assetCode, | ||
| issuer, | ||
| ledgerSupply, | ||
| onChainSupply, | ||
| delta, | ||
| }, | ||
| ); | ||
| return 0; | ||
| } | ||
|
| // Assets response contains 'amount' which represents total circulating supply | ||
| const totalSupply = parseFloat((assets.records[0] as any).amount); | ||
| return totalSupply; | ||
| return onChainSupply; |
Reserve drift is silently masked — should escalate, not just warn.
When DB ledger and on-chain supply diverge beyond driftThreshold, the code logs a warn and then returns onChainSupply, which is fed straight into overcollateralizationRatio and the health classification in getReserveStatus(). A genuine accounting drift can therefore flip the system from critical → healthy (or vice versa) with no operator-visible alert beyond a single log line — triggerAlert() is only invoked from trackReserves() based on the ratio, not from drift detection itself.
For a reserve-backed asset this is exactly the failure mode auditors will flag. Recommend publishing to the NOTIFICATIONS queue (mirroring triggerAlert) when drift is detected, and persisting the divergence to a dedicated table so reconciliation can be reviewed historically.
Want me to draft a reserveDrift event type that publishes to QUEUES.NOTIFICATIONS and (optionally) writes a ReserveDriftLog row, so drift is alertable and auditable?
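For illustration, a minimal sketch of such an escalation path. Everything here is an assumption, not an existing API: `publishToQueue`, `QUEUES.NOTIFICATIONS`, and the `ReserveDriftLog` model mirror the shape `triggerAlert()` would use but do not exist yet.

```ts
// Assumed infrastructure; swap for the real queue helper, Prisma client,
// and a new ReserveDriftLog model (none of these exist yet):
declare const QUEUES: { NOTIFICATIONS: string };
declare function publishToQueue(queue: string, payload: unknown): Promise<void>;
declare const prisma: {
  reserveDriftLog: { create(args: { data: object }): Promise<unknown> };
};

interface ReserveDriftEvent {
  type: "reserveDrift";
  ledgerSupply: number;
  onChainSupply: number;
  delta: number;
  detectedAt: string;
}

// Called from the `delta > driftThreshold` branch: operators get an alert
// and an auditable row, while callers still receive onChainSupply.
async function escalateReserveDrift(
  ledgerSupply: number,
  onChainSupply: number,
  delta: number,
): Promise<void> {
  const event: ReserveDriftEvent = {
    type: "reserveDrift",
    ledgerSupply,
    onChainSupply,
    delta,
    detectedAt: new Date().toISOString(),
  };
  await publishToQueue(QUEUES.NOTIFICATIONS, event);
  await prisma.reserveDriftLog.create({ data: event });
}
```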
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/services/reserve/ReserveTracker.ts` around lines 444 - 455, When a
reserve drift (delta > driftThreshold) is detected in ReserveTracker (the branch
currently logging a warn and returning onChainSupply), escalate instead of
silently masking: publish a "reserveDrift" notification to QUEUES.NOTIFICATIONS
(mirroring triggerAlert() payload shape) and insert a persisted entry into a
ReserveDriftLog table with ledgerSupply, onChainSupply, delta, reserveId and
timestamp; perform both actions inside the same if (delta > driftThreshold)
block before returning onChainSupply so downstream
getReserveStatus()/trackReserves() still receive the on-chain value but
operators have an alert and an auditable log entry. Ensure you reuse the
existing triggerAlert/notification schema or create a new event type name
"reserveDrift" and include enough context for reconciliation.
| const { | ||
| findDestructiveStatements, | ||
| getChangedMigrationFiles, | ||
| } = require('../scripts/ci/check-destructive-migrations'); |
Resolve the no-require-imports lint failure.
ESLint will fail CI on the `require()` import. Since the target is a plain `.js` script, switch to a named ESM import and let TS/Jest handle interop, or add a scoped override for this test file.
♻️ Suggested fix
-const {
- findDestructiveStatements,
- getChangedMigrationFiles,
-} = require('../scripts/ci/check-destructive-migrations');
+import {
+ findDestructiveStatements,
+ getChangedMigrationFiles,
+} from "../scripts/ci/check-destructive-migrations";If the script remains CommonJS and ESM interop is not configured, instead add a per-file override in .eslintrc:
overrides: [{ files: ["tests/destructiveMigrationGate.test.ts"], rules: { "@typescript-eslint/no-require-imports": "off" } }]📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| const { | |
| findDestructiveStatements, | |
| getChangedMigrationFiles, | |
| } = require('../scripts/ci/check-destructive-migrations'); | |
| import { | |
| findDestructiveStatements, | |
| getChangedMigrationFiles, | |
| } from "../scripts/ci/check-destructive-migrations"; |
🧰 Tools
🪛 ESLint
[error] 4-4: A require() style import is forbidden.
(@typescript-eslint/no-require-imports)
[error] 4-4: Replace '../scripts/ci/check-destructive-migrations' with "../scripts/ci/check-destructive-migrations"
(prettier/prettier)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/destructiveMigrationGate.test.ts` around lines 1 - 4, Replace the
CommonJS require in tests/destructiveMigrationGate.test.ts by using a default
ESM import for the script so ESLint no-require-imports passes: import the
exported symbols (findDestructiveStatements and getChangedMigrationFiles) from
'../scripts/ci/check-destructive-migrations' instead of using require(), or if
you must keep the script as CommonJS, add a per-file ESLint override in your
config to disable "@typescript-eslint/no-require-imports" for this test file so
the linter no longer errors.
|
| describe("OpenAPI Drift vs Implementation", () => { | ||
| const paths = (swaggerSpec as any).paths || {}; | ||
| const schemaRegistry = routeSchemas as Record<string, any>; |
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Inspect the declared type/shape of routeSchemas
fd -t f 'schemas.ts' src/controllers
rg -nP -C3 '\b(export\s+(const|type|interface)\s+routeSchemas)\b'
# See what zodToJsonSchema expects in the installed version
fd -t f 'index.d.ts' node_modules/zod-to-json-schema 2>/dev/null | head -n 5
rg -nP -C2 'export\s+(declare\s+)?function\s+zodToJsonSchema' --glob 'node_modules/zod-to-json-schema/**'Repository: Pi-Defi-world/acbu-backend
Length of output: 734
🏁 Script executed:
#!/bin/bash
# Check imports in schemas.ts to see where ZodSchema comes from
head -n 20 src/controllers/schemas.ts
# Check the test file to see how schemaRegistry is used
cat -n tests/openapi-drift.test.ts | head -n 75Repository: Pi-Defi-world/acbu-backend
Length of output: 3856
🏁 Script executed:
#!/bin/bash
# Check zod package types
rg 'export.*type.*ZodSchema' node_modules/zod/lib 2>/dev/null | head -5
# Check if ZodTypeAny exists in zod
rg 'export.*ZodTypeAny' node_modules/zod 2>/dev/null | head -5
# Search for zodToJsonSchema type definition in zod-to-json-schema
find node_modules/zod-to-json-schema -name '*.d.ts' -o -name '*.ts' | xargs grep -l 'zodToJsonSchema' 2>/dev/null | head -3
# Get the function signature if we can find it
rg -A5 'function zodToJsonSchema' node_modules/zod-to-json-schema 2>/dev/null | head -20Repository: Pi-Defi-world/acbu-backend
Length of output: 137
🏁 Script executed:
#!/bin/bash
# Check if zod-to-json-schema is in package.json
rg 'zod-to-json-schema' package.json
# Check zod types directly
rg 'export.*type.*ZodSchema|export.*ZodTypeAny' node_modules/zod --type ts | head -20
# See what's actually imported in the test file and schemas file
echo "=== Test file imports ==="
head -3 tests/openapi-drift.test.ts
echo ""
echo "=== Schemas file imports ==="
head -10 src/controllers/schemas.tsRepository: Pi-Defi-world/acbu-backend
Length of output: 677
🌐 Web query:
ZodSchema vs ZodTypeAny zod library difference
💡 Result:
`ZodSchema` and `ZodTypeAny` are both TypeScript types from the Zod library used for typing Zod schemas generically. `ZodTypeAny` is a utility type defined as `ZodType<any, any>`, serving as a broad type that matches any Zod schema. It is recommended for generic functions that accept any schema while preserving type inference for the specific schema subclass (e.g., `ZodString`). Example usage:

function inferSchema<T extends z.ZodTypeAny>(schema: T) { return schema; }
inferSchema(z.string()); // => ZodString

To infer the output type correctly in such functions, use `z.infer<T>`, since TypeScript may otherwise infer `any`. `ZodSchema` appears to be an older or alternative alias, often used as `ZodSchema<A>` to specify the inferred output type (e.g., `ZodSchema<A>` for a schema parsing to type `A`). It was common in Zod v3 examples, such as typing functions returning specific schemas:

function AParser(): z.ZodSchema<A> { return z.object({ foo: z.any() }); }

However, due to TypeScript assignability issues (e.g., with `z.any()` making properties optional), it could cause type errors. In Zod v4 (stable as of 2025), the primary base class is `ZodType<Output, Input>`, extending core `$ZodType`. All schemas extend `ZodType`, and `ZodTypeAny` remains `ZodType<any, any>` (with discussions to change `any` to `unknown` in v4, now using `z.ZodType` as shorthand for `ZodType`). Key differences:

- `ZodTypeAny`: opaque "any schema" type for generics; avoids `any` in ESLint-strict projects by inferring subclasses.
- `ZodSchema`: likely a legacy alias for `ZodType<any, any, any>` or similar; less emphasized in recent docs, which favor `ZodTypeAny` or generics like `<T extends z.ZodTypeAny>`.

For generic schema parameters, prefer `T extends z.ZodTypeAny` over `z.ZodSchema` for better inference. Official docs prioritize `ZodType`/`ZodTypeAny` for library authors and generics.
Citations:
- 1: ZodTypeAny unnecessarily uses any instead of unknown colinhacks/zod#4361
- 2: Type Error when used any() and ZodSchema colinhacks/zod#884
- 3: https://github.com/trevorsmith/zod
- 4: https://zod.dev/packages/zod
- 5: Type that allows any type of schema? colinhacks/zod#2462
- 6: https://github.com/colinhacks/zod/blob/676d619b4c05ca98ccfffbfe7e69ec23c4e639d2/src/types/base.ts
- 7: https://github.com/colinhacks/zod/blob/main/packages/zod/src/v4/classic/schemas.ts
- 8: https://github.com/colinhacks/zod
🌐 Web query:
zodToJsonSchema function signature zod-to-json-schema v3
💡 Result:
The `zodToJsonSchema` function from the zod-to-json-schema package (v3 and later) has the following signature:

zodToJsonSchema(schema: ZodSchema, nameOrOptions?: string | Options): JsonSchema

- schema: a Zod schema (`ZodTypeAny`).
- nameOrOptions: either a string (schema name for `$ref` in definitions) or an `Options` object with properties like `name`, `target` (`"jsonSchema7"` default), `$refStrategy`, etc.

Example:

import { z } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";

const mySchema = z.object({ ... });
const jsonSchema = zodToJsonSchema(mySchema, "mySchema"); // or { name: "mySchema" }

Note: Zod core has `z.object({}).toJSONSchema(params?)` in recent versions, but the query specifies `zodToJsonSchema` from zod-to-json-schema v3.
Citations:
- 1: https://www.npmjs.com/package/zod-to-json-schema
- 2: https://github.com/StefanTerdell/zod-to-json-schema/
🏁 Script executed:
#!/bin/bash
# Check if zodToJsonSchema actually requires the cast by looking at how it's called
# and what the error would be without the cast
# First, let's see the actual implementation or type stubs
find . -name 'package-lock.json' -o -name 'yarn.lock' | head -1
# Check if there's any type error context by looking at tsconfig
cat tsconfig.json 2>/dev/null || echo "No tsconfig.json"
# Check what version of zod is installed
rg '"zod"' package.json -A1Repository: Pi-Defi-world/acbu-backend
Length of output: 1347
🏁 Script executed:
#!/bin/bash
# Let's check the actual Zod version 3.23.8 type definitions
# to see if ZodSchema and ZodTypeAny are the same in v3
# Check how routeSchemas is actually used - see if removing the cast would cause issues
cat -n tests/openapi-drift.test.ts | sed -n '55,72p'
# Verify the exact type exported from schemas
grep -n "export const routeSchemas" src/controllers/schemas.ts -A 3Repository: Pi-Defi-world/acbu-backend
Length of output: 1007
🏁 Script executed:
#!/bin/bash
# Let's see the complete usage context to understand if the cast is truly necessary
# Check if there are any other usages of schemaRegistry that might explain the cast
cat -n tests/openapi-drift.test.ts | sed -n '1,80p' | tail -30
# Also check the Zod v3.23.8 type definitions to confirm ZodSchema compatibility
# by looking at what's actually exported
rg 'export.*ZodSchema|export.*ZodTypeAny' --type ts node_modules/zod/lib 2>/dev/null | head -10Repository: Pi-Defi-world/acbu-backend
Length of output: 1398
Remove unnecessary type casts that discard the correct ZodSchema type.
The schemaRegistry cast on line 7 is unnecessary—routeSchemas is already typed as Record<string, ZodSchema>, which zodToJsonSchema accepts directly. This cast to Record<string, any> discards the type information and forces the zodSchema as any cast on line 67. Simply remove the cast on line 7 and the downstream cast on line 67 will no longer be needed.
If a cast is required for compatibility, Record<string, ZodTypeAny> is safer than any, but the optimal solution is no cast at all.
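A minimal sketch of the cast-free version, assuming the registry shape described above (the import path follows the stated file layout):

```ts
import type { ZodSchema } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";
import { routeSchemas } from "../src/controllers/schemas";

// No cast: the Record<string, ZodSchema> type flows through untouched,
// so each schema can be handed to zodToJsonSchema directly.
const schemaRegistry: Record<string, ZodSchema> = routeSchemas;

for (const [name, zodSchema] of Object.entries(schemaRegistry)) {
  const jsonSchema = zodToJsonSchema(zodSchema, name); // no `as any` needed
  console.log(name, jsonSchema);
}
```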
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/openapi-drift.test.ts` at line 7, Remove the unnecessary cast so
schemaRegistry preserves Zod types: change the declaration so schemaRegistry is
assigned directly from routeSchemas (no "as Record<string, any>"), which lets
zodToJsonSchema accept each zodSchema without needing "as any" later; remove the
downstream "as any" cast on the zodSchema usage as well (or if a cast is
unavoidable, use Record<string, ZodTypeAny>), ensuring references to
schemaRegistry, routeSchemas, and the zodSchema passed into zodToJsonSchema
remain strongly typed.
Actionable comments posted: 7
♻️ Duplicate comments (1)
src/config/env.ts (1)
124-150: ⚠️ Potential issue | 🟠 Major
Fail fast on S3 and bulk-transfer misconfiguration.
These new settings still bypass `envSchema`, so malformed numeric values become `NaN`, missing bucket/credentials only fail on first S3 use, and `scanWebhookSecret` can silently become `""`. In `src/controllers/kycController.ts`, an empty secret skips webhook signature verification entirely.

Suggested fix

 const envSchema = z.object({
   NODE_ENV: z.string().default("development"),
+  AWS_REGION: z.string().optional(),
+  S3_REGION: z.string().optional(),
+  S3_BUCKET: z.string().optional(),
+  S3_ENDPOINT: z.string().optional(),
+  AWS_ACCESS_KEY_ID: z.string().optional(),
+  AWS_SECRET_ACCESS_KEY: z.string().optional(),
+  S3_ACCESS_KEY_ID: z.string().optional(),
+  S3_SECRET_ACCESS_KEY: z.string().optional(),
+  S3_UPLOAD_URL_TTL_SECONDS: z.coerce.number().int().positive().default(900),
+  S3_DOWNLOAD_URL_TTL_SECONDS: z.coerce.number().int().positive().default(300),
+  S3_SCAN_WEBHOOK_SECRET: z.string().optional(),
+  BULK_TRANSFER_CHUNK_SIZE: z.coerce.number().int().positive().default(100),
+  BULK_TRANSFER_MAX_FILE_SIZE_BYTES: z.coerce.number().int().positive().default(10 * 1024 * 1024),
   ...
 });
@@
 if (
   parsed.data.NODE_ENV === "production" &&
   !parsed.data.PRISMA_ACCELERATE_URL
 ) {
@@
 }
+
+if (parsed.data.NODE_ENV === "production") {
+  if (!parsed.data.S3_BUCKET || !parsed.data.S3_SCAN_WEBHOOK_SECRET) {
+    throw new Error("Missing required S3 configuration");
+  }
+}
@@
   s3: {
-    region: process.env.AWS_REGION || process.env.S3_REGION || "us-east-1",
-    bucket: process.env.S3_BUCKET || "",
+    region: env.AWS_REGION || env.S3_REGION || "us-east-1",
+    bucket: env.S3_BUCKET || "",
     ...
-    scanWebhookSecret: process.env.S3_SCAN_WEBHOOK_SECRET || "",
+    scanWebhookSecret: env.S3_SCAN_WEBHOOK_SECRET || "",
   },
   bulkTransfer: {
-    chunkSize: parseInt(process.env.BULK_TRANSFER_CHUNK_SIZE || "100", 10),
-    maxFileSizeBytes: parseInt(
-      process.env.BULK_TRANSFER_MAX_FILE_SIZE_BYTES || "10485760",
-      10,
-    ),
+    chunkSize: env.BULK_TRANSFER_CHUNK_SIZE,
+    maxFileSizeBytes: env.BULK_TRANSFER_MAX_FILE_SIZE_BYTES,
   },

#!/bin/bash
# Verify that S3 and bulk-transfer envs still bypass envSchema and that an empty webhook secret disables HMAC verification.
sed -n '6,150p' src/config/env.ts
sed -n '280,290p' src/controllers/kycController.ts
sed -n '39,65p' src/services/storage/s3Service.ts
🧹 Nitpick comments (3)
src/services/rates/currencyConverter.test.ts (1)
200-209: Test name and input type are inconsistent. The case says “decimal input strings” but passes a number. Rename the test or pass a string input in the precision variant test to avoid confusion.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/rates/currencyConverter.test.ts` around lines 200 - 209, The test title says "decimal input strings" but calls convertLocalToUsd with a number; update the test to pass a string decimal (e.g. "100000.5") to match the title or alternatively rename the test to reflect numeric input; specifically modify the spec in currencyConverter.test.ts that calls convertLocalToUsd so it uses the string "100000.5" (or change the it(...) description) while keeping the mocked prisma.acbuRate and the expect(...).toBeCloseTo assertion unchanged.

src/services/rates/currencyConverter.ts (2)
80-127: Consider extracting shared rate-fetch + conversion steps. Both exports repeat nearly identical validation/query/conversion blocks; extracting a shared internal helper will reduce drift and make future fixes safer (a sketch follows the agent prompt below).
Also applies to: 153-203
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/rates/currencyConverter.ts` around lines 80 - 127, Extract the duplicated rate-fetching and validation logic into a shared helper (e.g., getValidatedRatesForCurrency) that calls prisma.acbuRate.findFirst, checks for presence, resolves the rate field via CURRENCY_TO_RATE_FIELD, validates localToAcbuRate and latestRate.acbuUsd, and returns Decimal-safe values (localToAcbuRate and acbuUsd) or throws the same AppError messages; then update both conversion exports to call this helper and perform only the math (creating Decimal(localAmount), dividing by localToAcbuRate and multiplying by acbuUsd, returning usdAmount.toNumber()), reusing symbols localToAcbuRate, acbuUsdRate, Decimal, and AppError to keep behavior identical.
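A minimal sketch of that helper, assuming the names used in the prompt above (`CURRENCY_TO_RATE_FIELD`, `AppError`, the `prisma.acbuRate` model, and decimal.js; error messages are illustrative):

```ts
import { Decimal } from "decimal.js";

// Assumed module-level names from currencyConverter.ts:
declare const CURRENCY_TO_RATE_FIELD: Record<string, string>;
declare class AppError extends Error {
  constructor(message: string, status: number);
}
declare const prisma: {
  acbuRate: {
    findFirst(args: {
      orderBy: { timestamp: "desc" };
    }): Promise<(Record<string, Decimal.Value> & { acbuUsd: Decimal.Value }) | null>;
  };
};

// Single place for lookup + validation; both converters then do only the math.
async function getValidatedRatesForCurrency(
  currency: string,
): Promise<{ localToAcbu: Decimal; acbuUsd: Decimal }> {
  const rateFieldName = CURRENCY_TO_RATE_FIELD[currency];
  if (!rateFieldName) {
    throw new AppError(`Unsupported currency: ${currency}`, 400);
  }

  const latestRate = await prisma.acbuRate.findFirst({
    orderBy: { timestamp: "desc" },
  });
  if (!latestRate) {
    throw new AppError("No exchange rate available.", 503);
  }

  const localToAcbu = new Decimal(latestRate[rateFieldName]);
  const acbuUsd = new Decimal(latestRate.acbuUsd);
  if (localToAcbu.lte(0) || acbuUsd.lte(0)) {
    throw new AppError("Invalid exchange rate data.", 500);
  }
  return { localToAcbu, acbuUsd };
}
```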
73-78: Normalize currency input once before lookup.
A simple `trim().toUpperCase()` step avoids needless rejects from casing/whitespace and aligns behavior with common API expectations.

♻️ Suggested normalization patch

 export async function convertLocalToUsd(
   localAmount: number,
   currency: string,
 ): Promise<number> {
+  const normalizedCurrency = currency.trim().toUpperCase();
+
   // Validate currency is supported
-  if (!CURRENCY_TO_RATE_FIELD[currency]) {
+  if (!CURRENCY_TO_RATE_FIELD[normalizedCurrency]) {
@@
-  const rateFieldName = CURRENCY_TO_RATE_FIELD[currency];
+  const rateFieldName = CURRENCY_TO_RATE_FIELD[normalizedCurrency];
@@
 export async function convertLocalToUsdWithPrecision(
   localAmount: string | number,
   currency: string,
 ): Promise<{
@@
+  const normalizedCurrency = currency.trim().toUpperCase();
+
   // Validate currency is supported
-  if (!CURRENCY_TO_RATE_FIELD[currency]) {
+  if (!CURRENCY_TO_RATE_FIELD[normalizedCurrency]) {
@@
-  const rateFieldName = CURRENCY_TO_RATE_FIELD[currency];
+  const rateFieldName = CURRENCY_TO_RATE_FIELD[normalizedCurrency];

Also applies to: 93-93, 146-151, 166-166
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/rates/currencyConverter.ts` around lines 73 - 78, Normalize the incoming currency string once (e.g. assign a normalizedCurrency = currency.trim().toUpperCase()) and use that normalized value for all lookups and comparisons against CURRENCY_TO_RATE_FIELD and any currency equality checks instead of the raw input; update the error messages to reference the normalizedCurrency as well. Apply this change to the lookup at the current CURRENCY_TO_RATE_FIELD check and the other occurrences called out (the subsequent checks/usages around lines 93, 146-151, and 166) so all conversions and validations consistently use the trimmed/uppercased currency variable.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/config/env.ts`:
- Around line 22-25: The schema entries for AUTH_RATE_LIMIT_WINDOW_MS,
AUTH_RATE_LIMIT_MAX_REQUESTS, MAX_SIGNIN_ATTEMPTS, and
SIGNIN_LOCKOUT_DURATION_MS currently allow zero, negatives and fractions; update
their Zod validators to coerce to numbers, require integers and enforce
positivity (e.g., use .int() and .positive() or .min(1) for counts, and
.int().positive() for durations) while keeping the same default values so the
environment will reject 0, negative, and fractional inputs and only accept
positive integers for these keys.
In `@src/services/auth/authService.ts`:
- Around line 348-358: The code increments failedSigninAttempts and calls
prisma.user.update for every resolveUserByIdentifier result, which updates the
synthetic dummy user and causes 500s; fix by skipping persistence for the dummy
lookup record: wrap the failedAttempts computation and the prisma.user.update
call in a guard that checks the resolved user is real (e.g., if (!user.isDummy)
or if (user.id !== 'dummy-id')) so failedSigninAttempts and lockoutUntil are
only computed/updated for real users; leave the uniform "Invalid credentials"
response logic unchanged.
- Around line 476-482: The Prisma user select is missing actorType and
organizationId, but those fields are later read from the returned user and
passed to logAudit; update the select object in the query (the block that
currently selects id, twoFaMethod, totpSecretEncrypted, lockoutUntil,
failedSigninAttempts) to also include actorType: true and organizationId: true
so the returned user has those properties and TypeScript exact-select typing and
audit logging succeed.
In `@src/services/rates/currencyConverter.test.ts`:
- Around line 121-130: The test "should handle case-insensitive currency codes"
currently only calls convertLocalToUsd with "NGN"; update the test to also call
convertLocalToUsd with lowercase and mixed-case variants (e.g., "ngn" and "NgN")
and assert the same numeric result (toBeCloseTo 50, 5) for each call; keep the
existing prisma.acbuRate.findFirst mock setup and timestamp/Decimal values
unchanged so the three calls verify convertLocalToUsd handles case-insensitive
currency codes.
In `@src/services/rates/currencyConverter.ts`:
- Around line 68-71: Add upfront validation in convertLocalToUsd to ensure
localAmount is a finite number greater than 0 and otherwise throw an AppError
with status 400; specifically check Number.isFinite(localAmount) and localAmount
> 0 (reject NaN, Infinity, null, undefined, 0, negative values). Apply the same
guard to the other currency conversion functions in this module that accept
amounts (the USD->local converter and any helper like the per-rate converter),
and use the same AppError(…, 400) pattern so all invalid inputs consistently
return a 400 Bad Request.
In `@src/services/treasury/TreasuryService.ts`:
- Around line 239-248: In reconcileTotals, handle the ledgerTotal === 0 edge
case explicitly: instead of forcing discrepancyPercentage to 0, check if
ledgerTotal === 0 and then set isReconciled = true only when calculatedTotal ===
0, otherwise mark as not reconciled and set discrepancyPercentage to 100 (or
another sentinel) so the health check fails; update the logic around
discrepancy, discrepancyPercentage, and isReconciled in the reconcileTotals
function (referencing ledgerTotal, calculatedTotal, discrepancyPercentage,
isReconciled, and DEFAULT_TOLERANCE_PERCENTAGE) and ensure the returned
ReconciliationResult reflects this special-case behavior.
- Around line 331-345: organizationId is currently logged but not used, so
getEnterpriseTreasury returns system-wide data; update the data fetches to use
the organization scope by passing organizationId into the calls (or calling the
organization-scoped variants) — e.g., change getLatestReservesBySegment() and
aggregateTransactionsBySegment() invocations inside getEnterpriseTreasury to
getLatestReservesBySegment(organizationId) and
aggregateTransactionsBySegment(organizationId) (or call
getLatestReservesBySegmentForOrganization /
aggregateTransactionsBySegmentForOrganization if those scoped helpers exist),
and handle undefined organizationId appropriately (fallback to system-wide only
when intended).
---
Nitpick comments:
In `@src/services/rates/currencyConverter.test.ts`:
- Around line 200-209: The test title says "decimal input strings" but calls
convertLocalToUsd with a number; update the test to pass a string decimal (e.g.
"100000.5") to match the title or alternatively rename the test to reflect
numeric input; specifically modify the spec in currencyConverter.test.ts that
calls convertLocalToUsd so it uses the string "100000.5" (or change the it(...)
description) while keeping the mocked prisma.acbuRate and the
expect(...).toBeCloseTo assertion unchanged.
In `@src/services/rates/currencyConverter.ts`:
- Around line 80-127: Extract the duplicated rate-fetching and validation logic
into a shared helper (e.g., getValidatedRatesForCurrency) that calls
prisma.acbuRate.findFirst, checks for presence, resolves the rate field via
CURRENCY_TO_RATE_FIELD, validates localToAcbuRate and latestRate.acbuUsd, and
returns Decimal-safe values (localToAcbuRate and acbuUsd) or throws the same
AppError messages; then update both conversion exports to call this helper and
perform only the math (creating Decimal(localAmount), dividing by
localToAcbuRate and multiplying by acbuUsd, returning usdAmount.toNumber()),
reusing symbols localToAcbuRate, acbuUsdRate, Decimal, and AppError to keep
behavior identical.
- Around line 73-78: Normalize the incoming currency string once (e.g. assign a
normalizedCurrency = currency.trim().toUpperCase()) and use that normalized
value for all lookups and comparisons against CURRENCY_TO_RATE_FIELD and any
currency equality checks instead of the raw input; update the error messages to
reference the normalizedCurrency as well. Apply this change to the lookup at the
current CURRENCY_TO_RATE_FIELD check and the other occurrences called out (the
subsequent checks/usages around lines 93, 146-151, and 166) so all conversions
and validations consistently use the trimmed/uppercased currency variable.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: ed70527b-7c75-4747-a131-501e72f73815
⛔ Files ignored due to path filters (1)
- `pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (17)
- README.md
- src/config/env.ts
- src/config/mongodb.ts
- src/controllers/complianceController.ts
- src/controllers/enterpriseController.ts
- src/controllers/salaryController.ts
- src/controllers/userController.ts
- src/middleware/auth.ts
- src/routes/authRoutes.ts
- src/services/auth/authService.ts
- src/services/rates/currencyConverter.test.ts
- src/services/rates/currencyConverter.ts
- src/services/transfer/transferService.ts
- src/services/treasury/TreasuryService.test.ts
- src/services/treasury/TreasuryService.ts
- src/types/permissions.ts
- tests/transfer.test.ts
✅ Files skipped from review due to trivial changes (8)
- tests/transfer.test.ts
- src/middleware/auth.ts
- src/routes/authRoutes.ts
- src/controllers/complianceController.ts
- src/controllers/userController.ts
- src/types/permissions.ts
- src/config/mongodb.ts
- src/services/treasury/TreasuryService.test.ts
🚧 Files skipped from review as they are similar to previous changes (3)
- README.md
- src/controllers/salaryController.ts
- src/services/transfer/transferService.ts
| AUTH_RATE_LIMIT_WINDOW_MS: z.coerce.number().default(15 * 60 * 1000), | ||
| AUTH_RATE_LIMIT_MAX_REQUESTS: z.coerce.number().default(10), | ||
| MAX_SIGNIN_ATTEMPTS: z.coerce.number().default(5), | ||
| SIGNIN_LOCKOUT_DURATION_MS: z.coerce.number().default(15 * 60 * 1000), |
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Inspect the auth env schema and where the values are consumed in lockout logic.
sed -n '18,30p' src/config/env.ts
rg -n -C2 'maxSigninAttempts|signinLockoutDurationMs|authRateLimitWindowMs|authRateLimitMaxRequests' src/services/auth/authService.ts src/utils/authBruteGuard.tsRepository: Pi-Defi-world/acbu-backend
Length of output: 2119
🏁 Script executed:
#!/bin/bash
# Check the full env.ts config to understand the schema-to-config mapping
cat -n src/config/env.ts | head -50
# Check how env vars map to config object properties
rg -n 'export.*config|export.*env' src/config/env.ts -A 2
# Search for usage of all the auth rate limit config variables
rg -n 'authRateLimitWindowMs|authRateLimitMaxRequests' src/Repository: Pi-Defi-world/acbu-backend
Length of output: 2561
🏁 Script executed:
#!/bin/bash
# Check the full config export with all mapped properties
sed -n '59,90p' src/config/env.ts
# Check how these values are used in the rate limiter
sed -n '215,230p' src/middleware/rateLimiter.tsRepository: Pi-Defi-world/acbu-backend
Length of output: 1671
🏁 Script executed:
#!/bin/bash
# Check the createRateLimiter function to see if it validates the values
rg -n -A 10 'function createRateLimiter|const createRateLimiter' src/middleware/rateLimiter.ts
# Check if there's any validation of config values in the auth service lockout logic
sed -n '345,365p' src/services/auth/authService.tsRepository: Pi-Defi-world/acbu-backend
Length of output: 1012
Constrain the new auth envs to positive integers.
These values feed directly into rate-limit and lockout math. The schema currently accepts 0, negatives, and fractions, which causes security failures: MAX_SIGNIN_ATTEMPTS=0 locks every account on the first failure (since 1 >= 0 is true), and negative SIGNIN_LOCKOUT_DURATION_MS sets a past timestamp, effectively disabling the lockout window.
Suggested fix
- AUTH_RATE_LIMIT_WINDOW_MS: z.coerce.number().default(15 * 60 * 1000),
- AUTH_RATE_LIMIT_MAX_REQUESTS: z.coerce.number().default(10),
- MAX_SIGNIN_ATTEMPTS: z.coerce.number().default(5),
- SIGNIN_LOCKOUT_DURATION_MS: z.coerce.number().default(15 * 60 * 1000),
+ AUTH_RATE_LIMIT_WINDOW_MS: z.coerce.number().int().positive().default(15 * 60 * 1000),
+ AUTH_RATE_LIMIT_MAX_REQUESTS: z.coerce.number().int().positive().default(10),
+ MAX_SIGNIN_ATTEMPTS: z.coerce.number().int().positive().default(5),
+ SIGNIN_LOCKOUT_DURATION_MS: z.coerce.number().int().positive().default(15 * 60 * 1000),🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/config/env.ts` around lines 22 - 25, The schema entries for
AUTH_RATE_LIMIT_WINDOW_MS, AUTH_RATE_LIMIT_MAX_REQUESTS, MAX_SIGNIN_ATTEMPTS,
and SIGNIN_LOCKOUT_DURATION_MS currently allow zero, negatives and fractions;
update their Zod validators to coerce to numbers, require integers and enforce
positivity (e.g., use .int() and .positive() or .min(1) for counts, and
.int().positive() for durations) while keeping the same default values so the
environment will reject 0, negative, and fractional inputs and only accept
positive integers for these keys.
| const failedAttempts = user.failedSigninAttempts + 1; | ||
| const isLockout = failedAttempts >= config.maxSigninAttempts; | ||
|
| await prisma.user.update({ | ||
| where: { id: user.id }, | ||
| data: { | ||
| failedSigninAttempts: failedAttempts, | ||
| lockoutUntil: isLockout | ||
| lockoutUntil: isLockout | ||
| ? new Date(Date.now() + config.signinLockoutDurationMs) | ||
| : null, | ||
| }, |
Do not persist lockout state for the dummy lookup record.
resolveUserByIdentifier() returns a synthetic user for misses, but this branch still computes failedSigninAttempts and calls prisma.user.update() with id: "dummy-id". A typoed username/password will therefore 500 instead of returning the intended uniform "Invalid credentials" response.
Suggested fix
const match = await bcrypt.compare(passcode, user.passcodeHash);
if (!match) {
- const failedAttempts = user.failedSigninAttempts + 1;
+ if (user.isDummy) {
+ logger.warn("Signin: invalid credentials", { identifier: "***" });
+ throw new Error("Invalid credentials");
+ }
+
+ const failedAttempts = (user.failedSigninAttempts ?? 0) + 1;
const isLockout = failedAttempts >= config.maxSigninAttempts;
await prisma.user.update({🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/services/auth/authService.ts` around lines 348 - 358, The code increments
failedSigninAttempts and calls prisma.user.update for every
resolveUserByIdentifier result, which updates the synthetic dummy user and
causes 500s; fix by skipping persistence for the dummy lookup record: wrap the
failedAttempts computation and the prisma.user.update call in a guard that
checks the resolved user is real (e.g., if (!user.isDummy) or if (user.id !==
'dummy-id')) so failedSigninAttempts and lockoutUntil are only computed/updated
for real users; leave the uniform "Invalid credentials" response logic
unchanged.
| select: { | ||
| id: true, | ||
| twoFaMethod: true, | ||
| totpSecretEncrypted: true, | ||
| lockoutUntil: true, | ||
| failedSigninAttempts: true | ||
| failedSigninAttempts: true, | ||
| }, |
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Verify the selected fields in verify2fa() against the later audit payload.
sed -n '474,561p' src/services/auth/authService.tsRepository: Pi-Defi-world/acbu-backend
Length of output: 2714
🏁 Script executed:
# Check Prisma schema and configuration to understand select typing strictness
find . -name "prisma.schema" -o -name "schema.prisma" | head -5Repository: Pi-Defi-world/acbu-backend
Length of output: 93
🏁 Script executed:
# Find the logAudit function to verify what fields it expects
rg "logAudit\s*=" -A 10 | head -50Repository: Pi-Defi-world/acbu-backend
Length of output: 52
🏁 Script executed:
# Check if there are TypeScript errors in the file or any type suppressions
rg "@ts-" src/services/auth/authService.tsRepository: Pi-Defi-world/acbu-backend
Length of output: 52
🏁 Script executed:
# Find where logAudit is imported from
rg "import.*logAudit|from.*logAudit" src/services/auth/authService.tsRepository: Pi-Defi-world/acbu-backend
Length of output: 107
🏁 Script executed:
# Find the logAudit function definition
rg "function logAudit|const logAudit|export.*logAudit" -A 15Repository: Pi-Defi-world/acbu-backend
Length of output: 1207
🏁 Script executed:
# Check the Prisma schema for User model to confirm actorType and organizationId exist
grep -A 50 "model User" prisma/schema.prisma | head -60Repository: Pi-Defi-world/acbu-backend
Length of output: 3481
🏁 Script executed:
# Find the AuditEntry type definition to see if actorType and organizationId are required
rg "type AuditEntry|interface AuditEntry" -A 20Repository: Pi-Defi-world/acbu-backend
Length of output: 2633
🏁 Script executed:
# Check if there's a tsconfig or any TypeScript strict mode configuration
find . -name "tsconfig.json" -o -name "tsconfig*.json" | xargs cat 2>/dev/null | head -100Repository: Pi-Defi-world/acbu-backend
Length of output: 2703
🏁 Script executed:
# Look for any recent TypeScript compilation or build logs
ls -la | grep -E "dist|build|\.tsbuildinfo"Repository: Pi-Defi-world/acbu-backend
Length of output: 142
Add actorType and organizationId to the Prisma select statement.
The query omits these fields, but they're accessed immediately after on the user object and passed to logAudit(). With strict TypeScript enabled, accessing non-selected properties violates Prisma's exact-select typing and should cause a compile error. Even if it doesn't fail at build time, those audit fields will be undefined on every successful 2FA login, compromising the audit trail.
Suggested fix
const user = await prisma.user.findUnique({
where: { id: payload.userId },
select: {
id: true,
twoFaMethod: true,
+ actorType: true,
+ organizationId: true,
totpSecretEncrypted: true,
lockoutUntil: true,
failedSigninAttempts: true,
},
});📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| select: { | |
| id: true, | |
| twoFaMethod: true, | |
| totpSecretEncrypted: true, | |
| lockoutUntil: true, | |
| failedSigninAttempts: true | |
| failedSigninAttempts: true, | |
| }, | |
| select: { | |
| id: true, | |
| twoFaMethod: true, | |
| actorType: true, | |
| organizationId: true, | |
| totpSecretEncrypted: true, | |
| lockoutUntil: true, | |
| failedSigninAttempts: true, | |
| }, |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/services/auth/authService.ts` around lines 476 - 482, The Prisma user
select is missing actorType and organizationId, but those fields are later read
from the returned user and passed to logAudit; update the select object in the
query (the block that currently selects id, twoFaMethod, totpSecretEncrypted,
lockoutUntil, failedSigninAttempts) to also include actorType: true and
organizationId: true so the returned user has those properties and TypeScript
exact-select typing and audit logging succeed.
| it("should handle case-insensitive currency codes", async () => { | ||
| (prisma.acbuRate.findFirst as jest.Mock).mockResolvedValue({ | ||
| acbuNgn: new Decimal("1000.00"), | ||
| acbuUsd: new Decimal("0.50"), | ||
| timestamp: new Date(), | ||
| }); | ||
|
| // NGN should work | ||
| const result = await convertLocalToUsd(100000, "NGN"); | ||
| expect(result).toBeCloseTo(50, 5); |
Case-insensitive test currently validates only uppercase input.
The test title claims case-insensitivity, but it only uses "NGN". Add lowercase/mixed-case inputs to actually exercise that behavior.
✅ Suggested test adjustment
- // NGN should work
- const result = await convertLocalToUsd(100000, "NGN");
- expect(result).toBeCloseTo(50, 5);
+ const lower = await convertLocalToUsd(100000, "ngn");
+ const mixed = await convertLocalToUsd(100000, "NgN");
+ expect(lower).toBeCloseTo(50, 5);
+ expect(mixed).toBeCloseTo(50, 5);🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/services/rates/currencyConverter.test.ts` around lines 121 - 130, The
test "should handle case-insensitive currency codes" currently only calls
convertLocalToUsd with "NGN"; update the test to also call convertLocalToUsd
with lowercase and mixed-case variants (e.g., "ngn" and "NgN") and assert the
same numeric result (toBeCloseTo 50, 5) for each call; keep the existing
prisma.acbuRate.findFirst mock setup and timestamp/Decimal values unchanged so
the three calls verify convertLocalToUsd handles case-insensitive currency
codes.
| export async function convertLocalToUsd( | ||
| localAmount: number, | ||
| currency: string, | ||
| ): Promise<number> { |
Validate localAmount before conversion.
Currently, non-finite/invalid/non-positive values can slip into conversion and yield invalid outputs or non-AppError failures. Guard upfront and return a 400 consistently.
🛡️ Suggested validation patch
export async function convertLocalToUsd(
localAmount: number,
currency: string,
): Promise<number> {
+ if (!Number.isFinite(localAmount) || localAmount <= 0) {
+ throw new AppError("localAmount must be a positive number.", 400);
+ }
+
// Validate currency is supported
if (!CURRENCY_TO_RATE_FIELD[currency]) {
@@
export async function convertLocalToUsdWithPrecision(
localAmount: string | number,
currency: string,
): Promise<{
@@
- // Convert using high-precision Decimal arithmetic
- const localAmountDecimal = new Decimal(localAmount);
+ let localAmountDecimal: Decimal;
+ try {
+ localAmountDecimal = new Decimal(localAmount);
+ } catch {
+ throw new AppError("localAmount must be a valid positive number.", 400);
+ }
+ if (localAmountDecimal.lte(0)) {
+ throw new AppError("localAmount must be a positive number.", 400);
+ }
+
+ // Convert using high-precision Decimal arithmetic
const rateDecimal = new Decimal(localToAcbuRate);Also applies to: 106-107, 137-140, 179-180
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/services/rates/currencyConverter.ts` around lines 68 - 71, Add upfront
validation in convertLocalToUsd to ensure localAmount is a finite number greater
than 0 and otherwise throw an AppError with status 400; specifically check
Number.isFinite(localAmount) and localAmount > 0 (reject NaN, Infinity, null,
undefined, 0, negative values). Apply the same guard to the other currency
conversion functions in this module that accept amounts (the USD->local
converter and any helper like the per-rate converter), and use the same
AppError(…, 400) pattern so all invalid inputs consistently return a 400 Bad
Request.
| function reconcileTotals( | ||
| ledgerTotal: number, | ||
| calculatedTotal: number, | ||
| tolerancePercentage: number = DEFAULT_TOLERANCE_PERCENTAGE, | ||
| ): ReconciliationResult { | ||
| const discrepancy = Math.abs(ledgerTotal - calculatedTotal); | ||
| const discrepancyPercentage = | ||
| ledgerTotal > 0 ? (discrepancy / ledgerTotal) * 100 : 0; | ||
|
| const isReconciled = discrepancyPercentage <= tolerancePercentage; |
Treat ledgerTotal === 0 as a real reconciliation edge case.
When the ledger total is zero, discrepancyPercentage is forced to 0, so ledgerTotal = 0 and calculatedTotal > 0 still reports as reconciled. That hides missing reserve snapshots and can make the health check go green on a broken ledger state.
Suggested fix
function reconcileTotals(
ledgerTotal: number,
calculatedTotal: number,
tolerancePercentage: number = DEFAULT_TOLERANCE_PERCENTAGE,
): ReconciliationResult {
const discrepancy = Math.abs(ledgerTotal - calculatedTotal);
- const discrepancyPercentage =
- ledgerTotal > 0 ? (discrepancy / ledgerTotal) * 100 : 0;
+ const discrepancyPercentage =
+ ledgerTotal === 0
+ ? calculatedTotal === 0
+ ? 0
+ : Number.POSITIVE_INFINITY
+ : (discrepancy / ledgerTotal) * 100;🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/services/treasury/TreasuryService.ts` around lines 239 - 248, In
reconcileTotals, handle the ledgerTotal === 0 edge case explicitly: instead of
forcing discrepancyPercentage to 0, check if ledgerTotal === 0 and then set
isReconciled = true only when calculatedTotal === 0, otherwise mark as not
reconciled and set discrepancyPercentage to 100 (or another sentinel) so the
health check fails; update the logic around discrepancy, discrepancyPercentage,
and isReconciled in the reconcileTotals function (referencing ledgerTotal,
calculatedTotal, discrepancyPercentage, isReconciled, and
DEFAULT_TOLERANCE_PERCENTAGE) and ensure the returned ReconciliationResult
reflects this special-case behavior.
| export async function getEnterpriseTreasury( | ||
| organizationId?: string, | ||
| tolerancePercentage: number = DEFAULT_TOLERANCE_PERCENTAGE, | ||
| ): Promise<EnterpriseTreasuryResult> { | ||
| logger.info("Starting treasury calculation", { | ||
| organizationId, | ||
| tolerancePercentage, | ||
| }); | ||
|
| try { | ||
| // Fetch all data in parallel | ||
| const [reservesBySegment, txAggregates] = await Promise.all([ | ||
| getLatestReservesBySegment(), | ||
| aggregateTransactionsBySegment(), | ||
| ]); |
organizationId is a no-op right now.
The method accepts and logs an organization id, but the actual reads still come from unscoped getLatestReservesBySegment() and aggregateTransactionsBySegment(). Any caller expecting enterprise-scoped treasury data will get system-wide totals instead.
Suggested fix
-async function aggregateTransactionsBySegment(): Promise<
+async function aggregateTransactionsBySegment(organizationId?: string): Promise<
Map<string, TransactionAggregate>
> {
const transactions = await prisma.transaction.findMany({
where: {
+ ...(organizationId ? { organizationId } : {}),
status: { in: ["completed", "processing"] },
type: { in: ["mint", "burn", "transfer"] },
},
@@
-async function getLatestReservesBySegment(): Promise<
+async function getLatestReservesBySegment(organizationId?: string): Promise<
Map<string, ReserveSnapshot>
> {
const reserves = await prisma.reserve.findMany({
+ where: organizationId ? { organizationId } : undefined,
orderBy: { timestamp: "desc" },
@@
- const [reservesBySegment, txAggregates] = await Promise.all([
- getLatestReservesBySegment(),
- aggregateTransactionsBySegment(),
+ const [reservesBySegment, txAggregates] = await Promise.all([
+ getLatestReservesBySegment(organizationId),
+ aggregateTransactionsBySegment(organizationId),
]);🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/services/treasury/TreasuryService.ts` around lines 331 - 345,
organizationId is currently logged but not used, so getEnterpriseTreasury
returns system-wide data; update the data fetches to use the organization scope
by passing organizationId into the calls (or calling the organization-scoped
variants) — e.g., change getLatestReservesBySegment() and
aggregateTransactionsBySegment() invocations inside getEnterpriseTreasury to
getLatestReservesBySegment(organizationId) and
aggregateTransactionsBySegment(organizationId) (or call
getLatestReservesBySegmentForOrganization /
aggregateTransactionsBySegmentForOrganization if those scoped helpers exist),
and handle undefined organizationId appropriately (fallback to system-wide only
when intended).
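For completeness, a minimal call-site sketch once the scoped variants above land (org_123 is a made-up id):
// Enterprise callers thread the org id through; omitting it intentionally
// falls back to system-wide totals.
const orgTreasury = await getEnterpriseTreasury("org_123");
const systemTreasury = await getEnterpriseTreasury();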
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/controllers/mintController.ts (1)
276-283: ⚠️ Potential issue | 🔴 Critical

Fix undefined variable amountUsdPlaceholder and missing import for convertLocalToUsd.

Line 276 calls convertLocalToUsd() without importing it, and line 280 references the undefined variable amountUsdPlaceholder instead of the variable amountUsd defined on line 276. Both will cause runtime failures.

Add the missing import:

+import { convertLocalToUsd } from "../services/rates";

And fix the variable reference:

  const amountUsd = await convertLocalToUsd(amountNum, currency);
  await checkDepositLimits(
    audience,
-   amountUsdPlaceholder,
+   amountUsd,
    userId,
    req.apiKey?.organizationId ?? null,
  );

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/controllers/mintController.ts` around lines 276 - 283, Import the
convertLocalToUsd function where other helpers are imported and replace the
undefined amountUsdPlaceholder with the actual amountUsd returned from
convertLocalToUsd in the call to checkDepositLimits; specifically, add an
import for convertLocalToUsd (from the module that defines it) and change the
argument in the checkDepositLimits call to amountUsd so it uses the variable
declared on the previous line.
🧹 Nitpick comments (3)
.github/workflows/snyk.yml (1)
51-53: Pin Snyk CLI version to make CI deterministic.

Line 53 installs the latest snyk version without pinning, which is inconsistent with the pinned versions for pnpm (v9) and Node (v20) elsewhere in this workflow. Unversioned installations can introduce unexpected CI breakage from major updates. Pin to a tested version, e.g., npm install -g snyk@1.1304.1.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/snyk.yml around lines 51 - 53, The "Install Snyk CLI"
GitHub Actions step currently runs "npm install -g snyk" without a pinned
version; update that step (the Install Snyk CLI block / the run command) to
install a specific, tested Snyk release (for example change to "npm install -g
snyk@1.1304.1") so CI is deterministic and consistent with other pinned tools.

src/controllers/mintController.ts (2)
222-228: Inconsistent error handling: direct response vs AppError.

This validation error path uses direct res.status().json() while the equivalent in mintFromUsdc (line 62) throws an AppError. This inconsistency bypasses centralized error handling/logging.

♻️ Proposed fix to align with AppError pattern

  const parsed = depositBodySchema.safeParse(req.body);
  if (!parsed.success) {
-   res
-     .status(400)
-     .json({ error: "Invalid request", details: parsed.error.flatten() });
-   return;
+   throw new AppError("Invalid request", 400, "VALIDATION_ERROR", parsed.error.flatten());
  }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/controllers/mintController.ts` around lines 222 - 228, Replace the
direct response for validation failures with the centralized AppError pattern
used elsewhere: after calling depositBodySchema.safeParse(req.body) (the parsed
variable), remove the res.status(400).json(...) branch and instead throw a new
AppError with status 400 and include the validation details
(parsed.error.flatten()) in the error payload/message; mirror how mintFromUsdc
throws AppError so the controller's validation path is handled by the global
error middleware and logging.
99-99: Consider clarifying Decimal type coercion.

Multiple places wrap a Decimal (from decimalUtils) with Prisma's Decimal constructor. If both use compatible decimal.js-based implementations, this may be redundant but harmless. If they differ, this coercion is necessary.

A brief inline comment or a dedicated helper like toPrismaDecimal() would clarify intent and reduce repetition.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/controllers/mintController.ts` at line 99, The usdcAmount assignment
wraps a Decimal from decimalUtils with Prisma's Decimal constructor (usdcAmount:
new Decimal(usdcDecimal)), which is ambiguous; add a small helper function
(e.g., toPrismaDecimal(value)) or an inline comment in mintController.ts near
usdcAmount explaining whether the double-wrapping is required (compatibility
conversion) and replace repeated new Decimal(...) usages with that helper to
clarify intent and reduce repetition; ensure the helper clearly accepts the
decimalUtils value and returns the Prisma Decimal instance used throughout the
controller (reference: usdcAmount, Decimal, decimalUtils).
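For illustration, a minimal sketch of such a helper (toPrismaDecimal is the hypothetical name floated above, and the accepted input shape is an assumption):
import { Prisma } from "@prisma/client";

// Hypothetical helper: funnels any decimal-like value (including a decimal.js-based
// Decimal from decimalUtils) through its string form into Prisma's Decimal type.
function toPrismaDecimal(value: string | number | { toString(): string }): Prisma.Decimal {
  return new Prisma.Decimal(value.toString());
}

// At the flagged call site this would read: usdcAmount: toPrismaDecimal(usdcDecimal)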
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/snyk.yml:
- Around line 23-29: The skip logic emits a misleading reason because the
SNYK_TOKEN-empty check runs before the fork-PR check; reorder the conditionals
so the fork pull request gate (checking github.event_name == "pull_request" and
github.event.pull_request.head.repo.fork == "true") is evaluated before the
SNYK_TOKEN presence check, ensuring fork-specific messages are emitted for
forked PRs; update the if/elif sequence in the workflow to check the fork PR
condition first, then fall back to the SNYK_TOKEN empty check, keeping the same
outputs (writing should_run=false and the appropriate echo message).
---
Outside diff comments:
In `@src/controllers/mintController.ts`:
- Around line 276-283: Import the convertLocalToUsd function where other helpers
are imported and replace the undefined amountUsdPlaceholder with the actual
amountUsd returned from convertLocalToUsd in the call to checkDepositLimits;
specifically, add an import for convertLocalToUsd (from the module that defines
it) and change the argument in the checkDepositLimits call to amountUsd so it
uses the variable declared on the previous line.
---
Nitpick comments:
In @.github/workflows/snyk.yml:
- Around line 51-53: The "Install Snyk CLI" GitHub Actions step currently runs
"npm install -g snyk" without a pinned version; update that step (the Install
Snyk CLI block / the run command) to install a specific, tested Snyk release
(for example change to "npm install -g snyk@1.1304.1") so CI is deterministic
and consistent with other pinned tools.
In `@src/controllers/mintController.ts`:
- Around line 222-228: Replace the direct response for validation failures with
the centralized AppError pattern used elsewhere: after calling
depositBodySchema.safeParse(req.body) (the parsed variable), remove the
res.status(400).json(...) branch and instead throw a new AppError with status
400 and include the validation details (parsed.error.flatten()) in the error
payload/message; mirror how mintFromUsdc throws AppError so the controller's
validation path is handled by the global error middleware and logging.
- Line 99: The usdcAmount assignment wraps a Decimal from decimalUtils with
Prisma's Decimal constructor (usdcAmount: new Decimal(usdcDecimal)), which is
ambiguous; add a small helper function (e.g., toPrismaDecimal(value)) or an
inline comment in mintController.ts near usdcAmount explaining whether the
double-wrapping is required (compatibility conversion) and replace repeated new
Decimal(...) usages with that helper to clarify intent and reduce repetition;
ensure the helper clearly accepts the decimalUtils value and returns the Prisma
Decimal instance used throughout the controller (reference: usdcAmount, Decimal,
decimalUtils).
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: b835ec3b-affe-462e-abc6-b3124fdb5e44
⛔ Files ignored due to path filters (1)
pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (4)
- .github/workflows/snyk.yml
- src/controllers/mintController.ts
- src/index.ts
- src/routes/index.ts
🚧 Files skipped from review as they are similar to previous changes (2)
- src/routes/index.ts
- src/index.ts
if [[ -z "${SNYK_TOKEN}" ]]; then
  echo "should_run=false" >> "$GITHUB_OUTPUT"
  echo "Skipping Snyk scan because SNYK_TOKEN is not configured."
elif [[ "${{ github.event_name }}" == "pull_request" && "${{ github.event.pull_request.head.repo.fork }}" == "true" ]]; then
  echo "should_run=false" >> "$GITHUB_OUTPUT"
  echo "Skipping Snyk scan because secrets are unavailable to fork pull requests."
else
Reorder gate checks to keep skip reasons accurate.
On fork PRs, secrets are unavailable, so the empty-token check at line 23 short-circuits first and the fork-specific message at line 28 is never reached. Functionality is fine, but the emitted reason is misleading.
Suggested adjustment
- if [[ -z "${SNYK_TOKEN}" ]]; then
- echo "should_run=false" >> "$GITHUB_OUTPUT"
- echo "Skipping Snyk scan because SNYK_TOKEN is not configured."
- elif [[ "${{ github.event_name }}" == "pull_request" && "${{ github.event.pull_request.head.repo.fork }}" == "true" ]]; then
+ if [[ "${{ github.event_name }}" == "pull_request" && "${{ github.event.pull_request.head.repo.fork }}" == "true" ]]; then
echo "should_run=false" >> "$GITHUB_OUTPUT"
echo "Skipping Snyk scan because secrets are unavailable to fork pull requests."
+ elif [[ -z "${SNYK_TOKEN}" ]]; then
+ echo "should_run=false" >> "$GITHUB_OUTPUT"
+ echo "Skipping Snyk scan because SNYK_TOKEN is not configured."
else
echo "should_run=true" >> "$GITHUB_OUTPUT"
fi
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
- if [[ -z "${SNYK_TOKEN}" ]]; then
-   echo "should_run=false" >> "$GITHUB_OUTPUT"
-   echo "Skipping Snyk scan because SNYK_TOKEN is not configured."
- elif [[ "${{ github.event_name }}" == "pull_request" && "${{ github.event.pull_request.head.repo.fork }}" == "true" ]]; then
-   echo "should_run=false" >> "$GITHUB_OUTPUT"
-   echo "Skipping Snyk scan because secrets are unavailable to fork pull requests."
- else
+ if [[ "${{ github.event_name }}" == "pull_request" && "${{ github.event.pull_request.head.repo.fork }}" == "true" ]]; then
+   echo "should_run=false" >> "$GITHUB_OUTPUT"
+   echo "Skipping Snyk scan because secrets are unavailable to fork pull requests."
+ elif [[ -z "${SNYK_TOKEN}" ]]; then
+   echo "should_run=false" >> "$GITHUB_OUTPUT"
+   echo "Skipping Snyk scan because SNYK_TOKEN is not configured."
+ else
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/snyk.yml around lines 23 - 29, The skip logic emits a
misleading reason because the SNYK_TOKEN-empty check runs before the fork-PR
check; reorder the conditionals so the fork pull request gate (checking
github.event_name == "pull_request" and github.event.pull_request.head.repo.fork
== "true") is evaluated before the SNYK_TOKEN presence check, ensuring
fork-specific messages are emitted for forked PRs; update the if/elif sequence
in the workflow to check the fork PR condition first, then fall back to the
SNYK_TOKEN empty check, keeping the same outputs (writing should_run=false and
the appropriate echo message).
Use pnpm 10.33.1 in CI to match package.json and prevent ERR_PNPM_BAD_PM_VERSION during setup, unblocking lint-and-test.

Made-with: Cursor
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/ci.yml:
- Around line 4-8: The workflow currently triggers on push and pull_request but
only runs the destructive-migration gate on pull_request, allowing direct pushes
to run the prisma:migrate:deploy step without the migration-diff guard; to fix,
either add the destructive-migration gate check to run on push events as well
(ensure the same job/step that performs the migration-diff check is included in
the push path) or remove push triggers for protected branches so migrations can
only run via pull_request; update references to the destructive-migration gate
job/step and the prisma:migrate:deploy step to ensure the gate executes before
prisma:migrate:deploy on all event types.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
  push:
-   branches: [main, develop]
+   branches: [main, dev, develop]
  pull_request:
-   branches: [main, develop]
+   branches: [main, dev, develop]
+   types: [opened, synchronize, reopened, labeled, unlabeled]
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
printf '\n== Workflow trigger + migration gate ==\n'
rg -n -C 2 'pull_request|push:|check-destructive-migrations|prisma:migrate:deploy|DESTRUCTIVE_MIGRATION_LABEL' .github/workflows/ci.yml
printf '\n== Gate script ==\n'
script="$(fd -a 'check-destructive-migrations\.js$' scripts . -t f | head -n 1 || true)"
if [[ -n "${script:-}" ]]; then
sed -n '1,260p' "$script"
else
echo "check-destructive-migrations.js not found"
fi
Repository: Pi-Defi-world/acbu-backend
Length of output: 7367
Run the destructive-migration gate on the push path too.
The workflow triggers on push to main, dev, and develop branches, but the destructive migration gate (line 75) only runs on pull_request events. The gate script intentionally skips when triggered by push events, yet the prisma:migrate:deploy step (line 88) runs unconditionally. This creates a bypass where direct pushes can reach the migration deployment without the migration-diff guard.
Either run the gate check on push events as well or remove the push trigger for branches that should only receive merges via protected pull requests.
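If the gate is extended rather than the push triggers removed, a sketch of how the script could resolve a diff range for both event types (resolveDiffRange, BASE_SHA, and PUSH_BASE_SHA are hypothetical names; GITHUB_EVENT_NAME is a built-in Actions env var, and github.event.before is the standard pre-push SHA on push payloads):
import { execSync } from "node:child_process";

// Sketch: choose a base..head range so the destructive-migration scan also
// covers direct pushes instead of silently skipping them.
function resolveDiffRange(): string {
  if (process.env.GITHUB_EVENT_NAME === "pull_request") {
    return `${process.env.BASE_SHA}..HEAD`; // base SHA already passed in by the workflow
  }
  // On push events, github.event.before is the pre-push commit; it is all
  // zeros for brand-new branches, so fall back to the parent commit.
  const before = process.env.PUSH_BASE_SHA ?? "";
  return before && !/^0+$/.test(before) ? `${before}..HEAD` : "HEAD~1..HEAD";
}

const changedMigrations = execSync(
  `git diff --name-only ${resolveDiffRange()}`,
  { encoding: "utf8" },
)
  .split("\n")
  .filter((f) => /^prisma\/migrations\/.+\/migration\.sql$/.test(f));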
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/ci.yml around lines 4 - 8, The workflow currently triggers
on push and pull_request but only runs the destructive-migration gate on
pull_request, allowing direct pushes to run the prisma:migrate:deploy step
without the migration-diff guard; to fix, either add the destructive-migration
gate check to run on push events as well (ensure the same job/step that performs
the migration-diff check is included in the push path) or remove push triggers
for protected branches so migrations can only run via pull_request; update
references to the destructive-migration gate job/step and the
prisma:migrate:deploy step to ensure the gate executes before
prisma:migrate:deploy on all event types.
closes #170
Summary by CodeRabbit
New Features
Improvements
Bug Fixes
Tests