fix: type permissions as Zod enum instead of loose strings (#163) #253
Conversation
…ary (B-052) Add logFinancialEvent instrumentation to the three previously uninstrumented money-movement paths:

- mintFromUsdcInternal: emits mint.initiated / mint.completed / mint.failed with amount in cents (USDC), correlationId, and providerRef (blockchain hash)
- burnAcbu: emits burn.initiated / burn.processing / burn.failed with amount in stroops (ACBU), correlationId sourced from the x-request-id header
- salaryService: emits salary.batch.initiated on creation and salary.batch.completed on finish, with idempotencyKey propagated from the batch

All events carry the full FinancialLogPayload schema: transactionId, userId, accountId, amount, currency, idempotencyKey, correlationId, status, timestamp, environment. Closes Pi-Defi-world#167.
…aryItem/SalarySchedule/UserDevice models, add organizationId to Transaction
…age guardrails

- B-056: Add startup assertions in database.ts that detect when DATABASE_URL is mistakenly set to a Prisma Accelerate URL (prisma://), and emit clear boot logs indicating which connection type is active at runtime vs migration. Document the DATABASE_URL vs PRISMA_ACCELERATE_URL matrix in README.md.
- B-058: Introduce an AES-256-GCM field-level encryption utility (src/utils/piiEncryption.ts) with encrypt/decrypt helpers for string and JSON PII fields, plus a deterministic HMAC search-token function for fields that need indexed lookups (phone, email). Add the PII_ENCRYPTION_KEY env var (64-char hex, 256-bit) to config and .env.example.
- B-063: Add an OpenAI usage guardrails service (src/services/ai/openaiGuard.ts) that enforces auth context (orgId required), a per-org monthly spend budget backed by MongoDB, and a prompt injection allowlist before every OpenAI API call. Spend is tracked per-org per-month so budget caps survive restarts. Add OPENAI_API_KEY, OPENAI_ORG_MONTHLY_BUDGET_USD, and OPENAI_MAX_TOKENS_PER_REQUEST to env config.

Closes Pi-Defi-world#171 Closes Pi-Defi-world#173 Closes Pi-Defi-world#178
…orld#163)

- Add PermissionScopeEnum and PermissionsArraySchema to src/types/permissions.ts
- Update validatePermissions in auth.ts to use Zod safeParse (rejects unknown scopes at read time)
- Tighten the generateApiKey permissions param from string[] to PermissionScope[]
- Replace validateAdminScopes in authService.ts to throw on invalid scopes instead of silently filtering
📝 Walkthrough

This PR introduces multi-faceted enhancements spanning environment configuration, currency conversion infrastructure, Prisma schema updates, controller refactoring with financial event logging, typed permission validation, new AI/encryption services, and supporting documentation. Changes include Prisma Accelerate pooling configuration, AES-256-GCM PII encryption, OpenAI budget-aware chat integration, local-to-USD currency conversion for deposit limits, new database models (SalaryItem, SalarySchedule, UserDevice), and numeric arithmetic updates in monetary controllers.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant MintCtrl as Mint Controller
    participant CurrencyConverter as Currency Converter
    participant LimitsService as Limits Service
    participant Prisma as Prisma (DB)
    participant Response
    Client->>MintCtrl: POST depositFromBasketCurrency<br/>(amount, currency)
    MintCtrl->>MintCtrl: Parse & validate amount (Number)
    MintCtrl->>CurrencyConverter: convertLocalToUsd(amount, currency)
    CurrencyConverter->>Prisma: fetch acbuRate record
    Prisma-->>CurrencyConverter: rate data
    CurrencyConverter->>CurrencyConverter: Decimal math:<br/>acbuAmount = localAmount / localRate<br/>usdAmount = acbuAmount * usdRate
    CurrencyConverter-->>MintCtrl: usdAmount
    MintCtrl->>LimitsService: checkDepositLimits(orgId, userId,<br/>usdAmount)
    LimitsService-->>MintCtrl: OK or 429 error
    MintCtrl->>Prisma: transaction.create<br/>(localAmount, rateSnapshot, ...)
    Prisma-->>MintCtrl: transaction record
    MintCtrl->>MintCtrl: logFinancialEvent<br/>(mint.completed, correlationId)
    MintCtrl-->>Response: 200 {acbuAmount, ...}
```

```mermaid
sequenceDiagram
    participant Client
    participant Controller as KYC/Chat Controller
    participant OpenAIGuard as OpenAI Guard
    participant MongoDB as MongoDB (Spend)
    participant OpenAI as OpenAI API
    participant Response
    Client->>Controller: POST chat<br/>(orgId, messages, ...)
    Controller->>OpenAIGuard: guardedChat(orgId, messages, ...)
    OpenAIGuard->>MongoDB: fetch org spend<br/>(current month)
    MongoDB-->>OpenAIGuard: currentSpend
    OpenAIGuard->>OpenAIGuard: Check budget:<br/>currentSpend + est. > budget?
    alt Budget Exceeded
        OpenAIGuard-->>Response: 429 {code: BUDGET_EXCEEDED}
    else Budget OK
        OpenAIGuard->>OpenAI: chat.completions.create(...)
        OpenAI-->>OpenAIGuard: {content, usage}
        OpenAIGuard->>OpenAIGuard: Estimate cost from<br/>model-pricing map
        OpenAIGuard->>MongoDB: upsert spend record<br/>(org, month, $inc)
        MongoDB-->>OpenAIGuard: updated spend
        OpenAIGuard-->>Response: 200 {content,<br/>usage, estimatedCostUsd}
    end
```
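The budget gate in the second sequence diagram can be sketched without the MongoDB layer. The Map below stands in for the per-org per-month spend collection, and all names here (checkBudget, recordSpend, SpendRecord) are illustrative, not the guard's actual API.

```typescript
interface SpendRecord {
  orgId: string;
  month: string; // e.g. "2024-06"
  spendUsd: number;
}

// In-memory stand-in for the MongoDB spend collection in the diagram.
const spendStore = new Map<string, SpendRecord>();

const keyFor = (orgId: string, month: string) => `${orgId}:${month}`;

export function checkBudget(
  orgId: string,
  month: string,
  estimatedCostUsd: number,
  monthlyBudgetUsd: number,
): { allowed: boolean; currentSpendUsd: number } {
  const current = spendStore.get(keyFor(orgId, month))?.spendUsd ?? 0;
  // Mirrors the diagram's gate: currentSpend + estimate must stay within budget.
  return {
    allowed: current + estimatedCostUsd <= monthlyBudgetUsd,
    currentSpendUsd: current,
  };
}

export function recordSpend(orgId: string, month: string, costUsd: number): void {
  const k = keyFor(orgId, month);
  const prev = spendStore.get(k)?.spendUsd ?? 0;
  // Equivalent of the $inc upsert against MongoDB in the diagram.
  spendStore.set(k, { orgId, month, spendUsd: prev + costUsd });
}
```

Because the real store is keyed on (org, month) and persisted, the cap survives process restarts, which an in-process counter would not.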
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 17
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
src/controllers/burnController.ts (1)
153-193: ⚠️ Potential issue | 🔴 Critical

Critical: this block is structurally broken and will not compile.

Several issues compound here:

- L153: `const tx = await prisma.transaction.create({` opens a function call that is never closed — there's no matching `})`.
- L168-171: `}` closes the `data` object, then `blockchainTxHash:` is added at the wrong scope (a sibling of `data`, not a column on the Transaction model), then `};` ends what looks like a plain object literal — which doesn't match the `prisma.transaction.create({...})` opener. The Biome parser flags this at L171: expected ',' but instead found ';'.
- L173: `let tx: Transaction;` redeclares `tx` (already declared `const` at L153) — a TypeScript duplicate-binding error.
- L175: references `createData`, which is never defined anywhere in this file.

The intent appears to be: extract the transaction payload into a `createData` variable, then attempt `prisma.transaction.create({ data: createData })` inside `try/catch` for P2002 idempotency. The L153 line was simply never converted.

🛠 Proposed fix
```diff
-    const tx = await prisma.transaction.create({
-      data: {
-        userId: req.apiKey?.userId ?? undefined,
-        organizationId: req.apiKey?.organizationId ?? undefined,
-        type: "burn",
-        status: "pending",
-        acbuAmountBurned: new Decimal(acbuNum),
-        localCurrency: currency,
-        localAmount: new Decimal(localNum),
-        recipientAccount: recipient_account as object,
-        fee: new Decimal(feeAcbu),
-        rateSnapshot: {
-          acbu_ngn: null,
-          timestamp: new Date().toISOString(),
-        },
-      },
-      blockchainTxHash:
-        burningEnabled && blockchain_tx_hash ? blockchain_tx_hash : undefined,
-    };
-
-    let tx: Transaction;
+    const createData: Prisma.TransactionUncheckedCreateInput = {
+      userId: req.apiKey?.userId ?? undefined,
+      organizationId: req.apiKey?.organizationId ?? undefined,
+      type: "burn",
+      status: "pending",
+      acbuAmountBurned: new Decimal(acbuNum),
+      localCurrency: currency,
+      localAmount: new Decimal(localNum),
+      recipientAccount: recipient_account as object,
+      fee: new Decimal(feeAcbu),
+      rateSnapshot: {
+        acbu_ngn: null,
+        timestamp: new Date().toISOString(),
+      },
+      blockchainTxHash:
+        burningEnabled && blockchain_tx_hash ? blockchain_tx_hash : undefined,
+    };
+
+    let tx: Transaction;
     try {
       tx = await prisma.transaction.create({ data: createData });
     } catch (err) {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/controllers/burnController.ts` around lines 153-193: the transaction creation block is malformed: you opened a direct prisma.transaction.create(...) at the top but then intended to build a createData object and call prisma.transaction.create({ data: createData }) inside the try/catch; fix by removing the erroneous initial const tx = await prisma.transaction.create(...) call, construct a createData object (matching the fields currently inside the misplaced data object, and include blockchainTxHash as a field on that createData), then in the try block call tx = await prisma.transaction.create({ data: createData }); ensure tx is declared once (let tx: Transaction) before the try, and keep the existing P2002 handling that calls respondFromExistingBurnTx(res, existing, blockchain_tx_hash) when burningEnabled && blockchain_tx_hash and err.code === "P2002".

src/controllers/mintController.ts (1)
301-309: ⚠️ Potential issue | 🔴 Critical

Critical: deposit limits are still checked against the raw local-currency amount — `convertLocalToUsd` is never called.

The PR ships `convertLocalToUsd` in src/services/rates/currencyConverter.ts, and mintController.test.ts asserts it is invoked before `checkDepositLimits`, but the production code here still has the placeholder:

```ts
const amountUsdPlaceholder = amountNum; // TODO: convert via rate to USD for accurate limit
await checkDepositLimits(audience, amountUsdPlaceholder, userId, ...);
```

There is also no `convertLocalToUsd` import in this file. Net effect:

- Tests in mintController.test.ts will fail at `expect(mockConvertLocalToUsd).toHaveBeenCalledWith(...)`.
- Production limit enforcement is wrong: 100,000 NGN (~$60 USD) is checked against the $5,000/day retail USD cap as if it were 100000 USD — over-restricting low-value deposits and, depending on currency, potentially under-restricting higher-value ones (e.g., 1,000 EUR ≈ ~$1,070 also checked as 1000).

🛠 Proposed fix

```diff
+import { convertLocalToUsd } from "../services/rates/currencyConverter";
 ...
-  // Apply deposit limits - use retail as default if no audience is set
-  const audience = req.audience || "retail";
-  const amountUsdPlaceholder = amountNum; // TODO: convert via rate to USD for accurate limit
-  await checkDepositLimits(
-    audience,
-    amountUsdPlaceholder,
-    userId,
-    req.apiKey?.organizationId ?? null,
-  );
+  // Apply deposit limits in USD (not raw local currency) so caps remain consistent.
+  const audience = req.audience || "retail";
+  const amountUsd = await convertLocalToUsd(amountNum, currency);
+  await checkDepositLimits(
+    audience,
+    amountUsd,
+    userId,
+    req.apiKey?.organizationId ?? null,
+  );
```

You may also want `rateSnapshot` to capture both `amountNum` and `amountUsd` for audit clarity.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/controllers/mintController.ts` around lines 301 - 309, The code is still passing the raw local amount to checkDepositLimits; import and call convertLocalToUsd (from src/services/rates/currencyConverter.ts) with the local amount and the request currency before invoking checkDepositLimits, replace amountUsdPlaceholder with the returned USD amount, and ensure convertLocalToUsd is called prior to checkDepositLimits so mintController.test.ts's mock assertion passes; optionally capture and include a rateSnapshot (containing amountNum and amountUsd) for auditing alongside the audience, userId and organizationId when enforcing limits.
🧹 Nitpick comments (2)
src/services/rates/index.ts (1)
1-5: Prettier/CRLF will fail lint in CI.

ESLint reports CRLF (␍) on every line and that the named export should be wrapped onto multiple lines. Re-save with LF line endings (configure .gitattributes or your editor) and let `pnpm format` rewrite the export shape.

♻️ Proposed fix (LF endings + multi-line export)

```diff
-/**
- * Rates service exports.
- */
-
-export { convertLocalToUsd, convertLocalToUsdWithPrecision } from "./currencyConverter";
+/**
+ * Rates service exports.
+ */
+
+export {
+  convertLocalToUsd,
+  convertLocalToUsdWithPrecision,
+} from "./currencyConverter";
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/services/rates/index.ts` around lines 1-5: convert the file to LF line endings and reformat the export to a multi-line named export so CI/prettier lint passes: change CRLF to LF (update .gitattributes or your editor to save as LF), then rewrite the export of convertLocalToUsd and convertLocalToUsdWithPrecision into a multi-line form and run pnpm format to apply the project style.

src/services/rates/currencyConverter.ts (1)
16-47: Optional: simplify the rate-field type plumbing.

The `rateField` helper object exists only so `Record<string, keyof typeof rateField>` works; it's never used at runtime. A const-asserted map gives the same type safety with less indirection:

```diff
-const CURRENCY_TO_RATE_FIELD: Record<string, keyof typeof rateField> = {
-  NGN: "acbuNgn",
-  ZAR: "acbuZar",
-  ...
-  USD: "acbuUsd",
-};
-
-// Helper object to ensure type safety
-const rateField = {
-  acbuNgn: "acbuNgn",
-  ...
-  acbuUsd: "acbuUsd",
-};
+const CURRENCY_TO_RATE_FIELD = {
+  NGN: "acbuNgn",
+  ZAR: "acbuZar",
+  KES: "acbuKes",
+  EGP: "acbuEgp",
+  GHS: "acbuGhs",
+  RWF: "acbuRwf",
+  XOF: "acbuXof",
+  MAD: "acbuMad",
+  TZS: "acbuTzs",
+  UGX: "acbuUgx",
+  EUR: "acbuEur",
+  GBP: "acbuGbp",
+  USD: "acbuUsd",
+} as const satisfies Record<string, keyof Prisma.AcbuRateGetPayload<{}>>;
```

This also removes ~14 lines of duplication.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/rates/currencyConverter.ts` around lines 16 - 47, The rateField helper object is only used for typing and duplicates keys; replace it with a single const-asserted map so CURRENCY_TO_RATE_FIELD gets strong literal types without the extra runtime object. Specifically, remove the separate rateField object and declare CURRENCY_TO_RATE_FIELD using a const assertion (e.g., const CURRENCY_TO_RATE_FIELD = { ... } as const) so the compiler infers the exact string literal value types and maintains type safety for lookups that currently rely on keyof typeof rateField.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@prisma/schema.prisma`:
- Around line 504-505: The InvestmentStrategy model was removed from
schema.prisma but code still references prisma.investmentStrategy and related
logic (e.g., allocationService, yieldAccountingService.accrueFromStrategies,
prisma/seedStrategies.ts, and allocation.test.ts), causing compile/runtime
failures; either restore the InvestmentStrategy model definition in
schema.prisma and run prisma migrate/generate so prisma.investmentStrategy is
available (then regenerate client), or remove/update all dependent code (delete
or refactor references in src/services/investment/allocationService.ts,
src/services/investment/yieldAccountingService.ts — specifically
accrueFromStrategies — prisma/seedStrategies.ts, and tests/allocation.test.ts)
and add a proper Prisma migration that drops the investment_strategies table
before running pnpm prisma generate; ensure the change set in this PR includes
both schema + migration + regenerated client or the code deletions/refactors so
types and runtime behavior remain consistent.
In `@src/config/env.ts`:
- Line 377: Revert the risky default change for corsOrigin: change the config
key corsOrigin so it defaults to an empty array ([]) instead of ["*"] and do not
treat "*" as a wildcard; additionally, add a runtime check (similar to the
existing PRISMA_ACCELERATE_URL production check) that throws or logs an error
when process.env.CORS_ORIGIN is missing in production to force explicit
configuration; update any references in the CORS middleware (cors.ts) to
continue using config.corsOrigin.includes(origin) without special-casing "*" so
the deny-by-default behavior is preserved.
- Around line 21-32: The config currently marks PII_ENCRYPTION_KEY and
OPENAI_API_KEY as .optional(), letting production start without them; change
both to be required when running in production similar to the existing
PRISMA_ACCELERATE_URL production guard: detect production (the same
isProduction/ NODE_ENV check used in this file) and conditionally require
PII_ENCRYPTION_KEY (keep the 64-char hex validation) and OPENAI_API_KEY in that
branch, otherwise keep them optional for non-prod; update the schema creation
for PII_ENCRYPTION_KEY and OPENAI_API_KEY (and their error messages) so missing
keys in production fail fast at startup rather than at runtime in
encryptField/decryptField or guardedChat paths.
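The "fail fast at startup" guard described above can be sketched as a standalone check, independent of the Zod schema. The helper name (assertProductionSecrets) and error wording are illustrative, not the PR's actual code; only the rule — both keys required, and PII_ENCRYPTION_KEY constrained to 64 hex chars, when NODE_ENV is production — comes from the comment.

```typescript
// Illustrative fail-fast guard: in production, refuse to boot without the
// secrets that encryptField/decryptField and guardedChat depend on.
export function assertProductionSecrets(
  env: Record<string, string | undefined>,
): void {
  if (env.NODE_ENV !== "production") return; // optional outside production

  const missing: string[] = [];
  if (
    !env.PII_ENCRYPTION_KEY ||
    !/^[0-9a-fA-F]{64}$/.test(env.PII_ENCRYPTION_KEY)
  ) {
    missing.push("PII_ENCRYPTION_KEY (must be 64 hex characters)");
  }
  if (!env.OPENAI_API_KEY) {
    missing.push("OPENAI_API_KEY");
  }
  if (missing.length > 0) {
    throw new Error(
      `Missing required production env vars: ${missing.join(", ")}`,
    );
  }
}
```

Running this once at startup moves the failure from a mid-request crash inside an encryption or chat path to a clear boot-time error.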
In `@src/controllers/burnController.ts`:
- Around line 203-216: Add an explicit import for Node's crypto randomUUID and
stop relying on a global: import the function (e.g., randomUUID from 'crypto')
and use randomUUID() when computing burnCorrelationId instead of
crypto.randomUUID(); then update the logFinancialEvent call so userId and
accountId do not fall back to tx.id — use null or a clear sentinel like
"anonymous" (referencing req.apiKey?.userId, userId, accountId,
logFinancialEvent, and burnCorrelationId) to avoid writing transaction IDs into
user/account fields.
In `@src/controllers/mintController.ts`:
- Line 126: Add an explicit import for Node's crypto module and ensure both
controllers use it: add "import * as crypto from 'crypto';" at the top of the
file containing the const correlationId = crypto.randomUUID(); in
mintController.ts, and make the identical change in burnController.ts where the
financial-event correlationId is generated so that crypto.randomUUID() is backed
by the imported crypto module.
In `@src/middleware/auth.ts`:
- Around line 43-53: validatePermissions currently fails the whole array when
any element is invalid because it calls PermissionsArraySchema.safeParse on the
entire payload; change it to validate each element individually by iterating
over the input, validating each item against the single-scope schema (use the
same underlying schema used by PermissionsArraySchema), collect and return only
the successfully parsed PermissionScope values, and log invalid entries with
logger.warn (including the raw bad values and parse errors) so one bad scope
doesn't drop all permissions; also ensure PermissionScopeEnum is exported from
../types/permissions so you can validate against it if needed.
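The per-element strategy the prompt asks for — keep the valid scopes, warn on the invalid ones, never drop the whole array — can be sketched without Zod. The scope set and function name here are illustrative stand-ins for PermissionScopeEnum and validatePermissions.

```typescript
// Illustrative stand-in for the real scope enum.
const KNOWN_SCOPES = new Set(["mint:write", "burn:write", "transactions:read"]);

// Validate each element individually so one bad scope does not
// wipe out all permissions on the key.
export function filterValidScopes(
  raw: unknown[],
  warn: (msg: string) => void = console.warn,
): string[] {
  const valid: string[] = [];
  for (const item of raw) {
    if (typeof item === "string" && KNOWN_SCOPES.has(item)) {
      valid.push(item);
    } else {
      // Log the raw bad value instead of failing the whole array.
      warn(`Dropping invalid permission scope: ${JSON.stringify(item)}`);
    }
  }
  return valid;
}
```

With Zod, the same loop would run the single-scope schema's safeParse per element rather than PermissionsArraySchema over the whole payload.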
In `@src/services/ai/openaiGuard.ts`:
- Around line 62-75: assertPromptSafe currently treats non-string
ChatCompletionMessageParam.content as "" so array-form (multi-part) user
messages bypass pattern checks; update the function to handle array content by
normalizing content into a single searchable string (e.g., if content is an
array, map/flat to extract text segments from string items and any object parts
with text-like properties, join them into one string) and then test that
combined text against DISALLOWED_PROMPT_PATTERNS just as for string content;
keep the same throw behavior (statusCode: 400, isOperational: true) and
reference assertPromptSafe and DISALLOWED_PROMPT_PATTERNS when making the
change.
- Around line 104-121: The pricing lookup in estimateCostUsd is matching the
wrong model because Object.keys(pricing).find((k) => model.startsWith(k))
returns the first insertion-order match (so "gpt-4o" matches "gpt-4o-mini");
change the lookup to prefer the longest matching prefix (or exact match) to
avoid mis-pricing: compute modelKey by iterating pricing keys sorted by
descending length and selecting the first key where model === key ||
model.startsWith(key), then use pricing[modelKey ?? "gpt-4o-mini"] for { input,
output } so gpt-4o-mini resolves to its correct rates.
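The longest-prefix fix described above is small enough to show in full. The pricing numbers below are placeholders, not real OpenAI rates; only the lookup logic matters.

```typescript
// Placeholder per-1K-token rates; real values live in the guard's pricing map.
const PRICING_PER_1K: Record<string, { input: number; output: number }> = {
  "gpt-4o-mini": { input: 0.00015, output: 0.0006 },
  "gpt-4o": { input: 0.0025, output: 0.01 },
};

export function resolvePricing(model: string): { input: number; output: number } {
  // Sort keys by descending length so the most specific prefix wins:
  // "gpt-4o-mini-2024-07-18" matches "gpt-4o-mini", not "gpt-4o".
  const key = Object.keys(PRICING_PER_1K)
    .sort((a, b) => b.length - a.length)
    .find((k) => model === k || model.startsWith(k));
  return PRICING_PER_1K[key ?? "gpt-4o-mini"];
}
```

With insertion-order `find`, whichever key happened to be declared first could shadow a more specific model name; sorting by length makes the result independent of declaration order.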
In `@src/services/auth/authService.ts`:
- Around line 141-154: The validateAdminScopes function currently parses scopes
with PermissionsArraySchema then silently filters out non-admin items; change it
so after safeParse succeeds you check for any values not included in
ADMIN_SCOPES (using parsed.data) and throw an Error listing the offending scopes
instead of dropping them, returning parsed admin scopes only when all inputs are
valid admin scopes; update error messages to be descriptive. Also remove the
now-redundant permissions.length === 0 guard inside issueAdminKey because
validateAdminScopes will enforce at least one admin scope. Ensure references to
PermissionsArraySchema, ADMIN_SCOPES, validateAdminScopes and issueAdminKey are
used to locate and change the logic.
In `@src/services/rates/currencyConverter.test.ts`:
- Around line 99-109: The test title claims case-insensitivity but only uses
"NGN"; fix by normalizing currency codes in convertLocalToUsd before lookup
(e.g., transform the currency parameter to uppercase) so it can use
CURRENCY_TO_RATE_FIELD safely, and update the test in currencyConverter.test.ts
to include a lowercase assertion (call convertLocalToUsd(100000, "ngn") and
expect same result) or alternatively rename the test if you prefer not to change
implementation; ensure modifications reference convertLocalToUsd and
CURRENCY_TO_RATE_FIELD so lookups no longer fail for lowercase inputs.
- Around line 76-88: The test's expected USD value is wrong by ~1000x; update
the assertion in the "should preserve decimal precision with high-value
conversions" test that calls convertLocalToUsd so it expects ~507.81 instead of
507880.64 (e.g., toBeCloseTo(507.81, 2)) to match the computed acbuAmount =
999999.99 / 1000.12345 and usdAmount = acbuAmount * 0.50789123; keep the rest of
the mock (prisma.acbuRate.findFirst) and precision assertion unchanged.
In `@src/services/salary/salaryService.ts`:
- Around line 195-205: The completion event is generating a new correlationId
which breaks correlation with createSalaryBatch's salaryCorrelationId; update
the logFinancialEvent call in the salary.batch.completed path to use the
persisted batch correlation value (e.g. batch.salaryCorrelationId or fall back
to batch.idempotencyKey or batchId) instead of crypto.randomUUID() so initiation
and completion share the same correlationId, and adjust the status logic so
partial success isn't masked — use allSucceeded ? "success" : anySucceeded ?
"partially_completed" : "failed" (or keep "failed" and add a
finalStatus/fromPartial metadata field) so logFinancialEvent reflects partial
completions clearly.
- Line 84: The amount calculation for FinancialLogPayload uses
Math.round(calculatedTotal.toNumber() * 100) which truncates ACBU precision;
update the logic in salaryService.ts where calculatedTotal is turned into amount
(and similarly in burn/mint controllers) to multiply by the asset's native
precision (1e8 for ACBU) instead of 100, e.g. use the asset precision multiplier
when computing amount for FinancialLogPayload (or serialize as integer
string/BigInt to avoid Number.MAX_SAFE_INTEGER issues), and ensure the currency
code remains "ACBU" when using the native precision.
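The native-precision conversion the prompt describes can be sketched with BigInt to sidestep Number.MAX_SAFE_INTEGER. The helper name and the non-negative assumption are illustrative; the 1e8 smallest-unit factor for ACBU comes from the comment above.

```typescript
const ACBU_DECIMALS = 8; // 1 ACBU = 1e8 smallest units (per the review comment)

// Convert a non-negative decimal amount string (e.g. Decimal.toString())
// into smallest-unit bigint, without going through a float.
export function toNativeUnits(amount: string): bigint {
  if (amount.startsWith("-")) {
    throw new Error(`Negative amounts not supported: ${amount}`);
  }
  const [whole, frac = ""] = amount.split(".");
  if (frac.length > ACBU_DECIMALS) {
    throw new Error(`More than ${ACBU_DECIMALS} decimal places: ${amount}`);
  }
  const padded = frac.padEnd(ACBU_DECIMALS, "0");
  return BigInt(whole) * BigInt(10) ** BigInt(ACBU_DECIMALS) + BigInt(padded);
}
```

Serializing the result as a string (or keeping it as BigInt) in FinancialLogPayload avoids the precision loss that `Math.round(value * 100)` introduces for sub-cent ACBU amounts.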
In `@src/types/permissions.ts`:
- Line 20: The file ends without a trailing newline which causes Prettier/ESLint
to fail; open the file that defines PermissionsArraySchema and add a single
trailing newline character at the end of the file so the export line (export
const PermissionsArraySchema = z.array(PermissionScopeEnum);) is followed by a
newline.
- Around line 3-16: PermissionScopeEnum currently omits many scopes used across
the codebase, causing PermissionsArraySchema.safeParse in validatePermissions to
drop unknown scopes and return an empty list; fix by making PermissionScopeEnum
match the authoritative list in SEGMENT_SCOPES (or importing and re-exporting
SEGMENT_SCOPES into this module) and ensure the enum also includes the admin
variants for p2p, sme, gateway, and enterprise so existing API keys validate;
alternatively, if this enum was intended only for admin issuance, rename
PermissionScopeEnum to AdminPermissionScopeEnum and update validatePermissions
to use the broader SEGMENT_SCOPES (or a merged enum) so PermissionsArraySchema
validation no longer strips legitimate scopes.
In `@src/utils/piiEncryption.ts`:
- Around line 12-20: The getPiiKey function currently only checks hexKey length
but can silently accept non-hex chars; update getPiiKey to validate the string
is valid hex before creating the Buffer (e.g., assert hexKey matches a
64-character hex regex like /^[0-9a-fA-F]{64}$/) and throw a descriptive Error
if it fails, then continue to call Buffer.from(hexKey, "hex") to return the
Buffer; reference the exported function name getPiiKey and the parameter hexKey
when implementing this validation.
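The validation asked for above fits in a few lines. The function name getPiiKey matches the review; the error message wording is illustrative.

```typescript
// Validate before decoding: Buffer.from(s, "hex") silently stops at the
// first invalid character instead of throwing, so a bad key would yield
// a short (wrong-length) key buffer rather than an error.
export function getPiiKey(hexKey: string): Buffer {
  if (!/^[0-9a-fA-F]{64}$/.test(hexKey)) {
    throw new Error(
      "PII_ENCRYPTION_KEY must be exactly 64 hex characters (32 bytes)",
    );
  }
  return Buffer.from(hexKey, "hex");
}
```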
- Around line 48-67: The decryptField function is concatenating UTF-8-decoded
strings from decipher.update(...) and decipher.final(...), which can corrupt
multi-byte codepoints split across the two chunks; instead, call
decipher.update(...) and decipher.final(...) to get Buffer objects, concatenate
them with Buffer.concat, then call .toString("utf8") once to decode; update the
implementation in decryptField (which uses createDecipheriv,
decipher.setAuthTag(tag.slice(0, TAG_BYTES)), and variables iv/tag/ciphertext)
to return Buffer.concat([decipher.update(ciphertext),
decipher.final()]).toString("utf8") rather than decoding each piece separately.
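A roundtrip sketch of the safe pattern: collect Buffers from update()/final() and decode once, so a multi-byte UTF-8 codepoint split across the two chunks cannot be corrupted. The iv|tag|ciphertext layout and helper names here are illustrative and assumed, not necessarily the exact wire format of src/utils/piiEncryption.ts.

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "crypto";

const IV_BYTES = 12; // recommended GCM nonce size
const TAG_BYTES = 16;

// Assumed layout for this sketch: iv | authTag | ciphertext.
export function encryptField(plain: string, key: Buffer): Buffer {
  const iv = randomBytes(IV_BYTES);
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ct = Buffer.concat([cipher.update(plain, "utf8"), cipher.final()]);
  return Buffer.concat([iv, cipher.getAuthTag(), ct]);
}

export function decryptField(blob: Buffer, key: Buffer): string {
  const iv = blob.subarray(0, IV_BYTES);
  const tag = blob.subarray(IV_BYTES, IV_BYTES + TAG_BYTES);
  const ciphertext = blob.subarray(IV_BYTES + TAG_BYTES);
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag);
  // Decode once after concatenating, never per-chunk: a codepoint split
  // across update() and final() would otherwise become U+FFFD garbage.
  return Buffer.concat([decipher.update(ciphertext), decipher.final()])
    .toString("utf8");
}
```

The per-chunk version (`decipher.update(ct, undefined, "utf8") + decipher.final("utf8")`) only misbehaves on inputs where a multi-byte sequence straddles the chunk boundary, which makes the bug easy to miss in ASCII-only tests.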
---
Outside diff comments:
In `@src/controllers/burnController.ts`:
- Around line 153-193: The transaction creation block is malformed: you opened a
direct prisma.transaction.create(...) at the top but then intended to build a
createData object and call prisma.transaction.create({ data: createData })
inside the try/catch; fix by removing the erroneous initial const tx = await
prisma.transaction.create(...) call, construct a createData object (matching the
fields currently inside the misplaced data/object and include blockchainTxHash
as a field on that createData), then in the try block call tx = await
prisma.transaction.create({ data: createData }); ensure tx is declared once (let
tx: Transaction) before the try, and keep the existing P2002 handling that calls
respondFromExistingBurnTx(res, existing, blockchain_tx_hash) when burningEnabled
&& blockchain_tx_hash and err.code === "P2002".
In `@src/controllers/mintController.ts`:
- Around line 301-309: The code is still passing the raw local amount to
checkDepositLimits; import and call convertLocalToUsd (from
src/services/rates/currencyConverter.ts) with the local amount and the request
currency before invoking checkDepositLimits, replace amountUsdPlaceholder with
the returned USD amount, and ensure convertLocalToUsd is called prior to
checkDepositLimits so mintController.test.ts's mock assertion passes; optionally
capture and include a rateSnapshot (containing amountNum and amountUsd) for
auditing alongside the audience, userId and organizationId when enforcing
limits.
---
Nitpick comments:
In `@src/services/rates/currencyConverter.ts`:
- Around line 16-47: The rateField helper object is only used for typing and
duplicates keys; replace it with a single const-asserted map so
CURRENCY_TO_RATE_FIELD gets strong literal types without the extra runtime
object. Specifically, remove the separate rateField object and declare
CURRENCY_TO_RATE_FIELD using a const assertion (e.g., const
CURRENCY_TO_RATE_FIELD = { ... } as const) so the compiler infers the exact
string literal value types and maintains type safety for lookups that currently
rely on keyof typeof rateField.
In `@src/services/rates/index.ts`:
- Around line 1-5: Convert the file to LF line endings and reformat the export
to a multi-line named export so CI/prettier lint passes: change CRLF to LF
(update .gitattributes or your editor to save as LF), then rewrite the export of
convertLocalToUsd and convertLocalToUsdWithPrecision into a multi-line form and
run pnpm format to apply the project style.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 509d5c2d-e123-4fae-8e24-1e6e6cfa87f7
⛔ Files ignored due to path filters (1)
`pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (20)
- .env.example
- .github/workflows/ci.yml
- CURRENCY_FIX_IMPLEMENTATION.md
- CURRENCY_FIX_QUICK_REFERENCE.md
- README.md
- prisma/schema.prisma
- src/config/database.ts
- src/config/env.ts
- src/controllers/burnController.ts
- src/controllers/mintController.test.ts
- src/controllers/mintController.ts
- src/middleware/auth.ts
- src/services/ai/openaiGuard.ts
- src/services/auth/authService.ts
- src/services/rates/currencyConverter.test.ts
- src/services/rates/currencyConverter.ts
- src/services/rates/index.ts
- src/services/salary/salaryService.ts
- src/types/permissions.ts
- src/utils/piiEncryption.ts
💤 Files with no reviewable changes (1)
- .github/workflows/ci.yml
prisma/schema.prisma (diff context near `model SalaryBatch {`)
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
# Find all references to InvestmentStrategy / investmentStrategy across the codebase.
rg -nP --type=ts -C2 '\b(investmentStrategy|InvestmentStrategy)\b'
# Confirm the model is indeed gone from the schema.
rg -nP -C2 '^\s*model\s+InvestmentStrategy\b' prisma/schema.prisma
# Check whether a migration drops the table.
fd -t f . prisma/migrations | xargs rg -nP -l 'investment_strategies|InvestmentStrategy' 2>/dev/null
```

Repository: Pi-Defi-world/acbu-backend

Length of output: 14013
🏁 Script executed:

```shell
#!/bin/bash
# 1. Confirm InvestmentStrategy model is completely gone from schema
echo "=== Checking schema for InvestmentStrategy model ==="
rg '^\s*model\s+InvestmentStrategy' prisma/schema.prisma || echo "✓ Model not found in schema"
# 2. List all migration files to confirm no DROP migration exists
echo -e "\n=== All migration files ==="
fd -t f 'migration\.sql$' prisma/migrations | sort
# 3. Check content of migrations for drops/deletes of investment_strategies table
echo -e "\n=== Checking migration content for DROP table ==="
fd -t f 'migration\.sql$' prisma/migrations -x grep -l 'DROP.*investment_strategies\|DELETE.*investment_strategies' {} \; 2>/dev/null || echo "✓ No DROP migrations found"
# 4. Verify the actual schema file size and check for any reference to investmentStrategy
echo -e "\n=== Schema file check ==="
wc -l prisma/schema.prisma
grep -i 'investmentstrategy\|investment_strategy' prisma/schema.prisma || echo "✓ No InvestmentStrategy references in schema"
```

Repository: Pi-Defi-world/acbu-backend

Length of output: 1293
Removing InvestmentStrategy model breaks dependent services and seed scripts.
The InvestmentStrategy model has been removed from the schema, but no migration exists to drop the investment_strategies table. More critically, the codebase still contains active references to this model:
- src/services/investment/allocationService.ts: 6 references (aggregate, findUnique, update operations)
- src/services/investment/yieldAccountingService.ts: accrueFromStrategies() queries active strategies
- prisma/seedStrategies.ts: uses findUnique and create operations
- tests/allocation.test.ts: 20+ references in allocation tests
After `pnpm prisma generate`, Prisma client will not expose `prisma.investmentStrategy`, causing immediate TypeScript compilation failures and runtime errors in allocation service, yield accounting, and the seed script.
Either:
- Restore the model if still in use, or
- Delete the dependent services, tests, and seed code in the same PR, then add a migration to drop the `investment_strategies` table.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@prisma/schema.prisma` around lines 504 - 505, The InvestmentStrategy model
was removed from schema.prisma but code still references
prisma.investmentStrategy and related logic (e.g., allocationService,
yieldAccountingService.accrueFromStrategies, prisma/seedStrategies.ts, and
allocation.test.ts), causing compile/runtime failures; either restore the
InvestmentStrategy model definition in schema.prisma and run prisma
migrate/generate so prisma.investmentStrategy is available (then regenerate
client), or remove/update all dependent code (delete or refactor references in
src/services/investment/allocationService.ts,
src/services/investment/yieldAccountingService.ts — specifically
accrueFromStrategies — prisma/seedStrategies.ts, and tests/allocation.test.ts)
and add a proper Prisma migration that drops the investment_strategies table
before running pnpm prisma generate; ensure the change set in this PR includes
both schema + migration + regenerated client or the code deletions/refactors so
types and runtime behavior remain consistent.
```ts
// B-058: 64-char hex key (32 bytes) for AES-256-GCM PII field encryption.
// Generate with: node -e "console.log(require('crypto').randomBytes(32).toString('hex'))"
PII_ENCRYPTION_KEY: z
  .string()
  .length(64, "PII_ENCRYPTION_KEY must be exactly 64 hex characters (32 bytes)")
  .regex(/^[0-9a-fA-F]+$/, "PII_ENCRYPTION_KEY must be a hex string")
  .optional(),
// B-063: OpenAI integration config.
OPENAI_API_KEY: z.string().optional(),
OPENAI_ORG_MONTHLY_BUDGET_USD: z.coerce.number().default(50),
OPENAI_MAX_TOKENS_PER_REQUEST: z.coerce.number().default(2000),
});
```
Consider requiring PII_ENCRYPTION_KEY in production.
PII_ENCRYPTION_KEY is .optional(), so production can boot without it. Any code path that calls encryptField/decryptField without a key will throw at request time, but a misconfigured fleet could ship without anyone noticing until the first KYC write. The same goes for OPENAI_API_KEY if guardedChat is on critical paths. Match the existing PRISMA_ACCELERATE_URL production guard (L43-50) for these:
```diff
+if (parsed.data.NODE_ENV === "production" && !parsed.data.PII_ENCRYPTION_KEY) {
+  throw new Error("Missing required environment variable: PII_ENCRYPTION_KEY");
+}
```

🧰 Tools
🪛 ESLint
[error] 25-25: Replace 64,·"PII_ENCRYPTION_KEY·must·be·exactly·64·hex·characters·(32·bytes)" with ⏎······64,⏎······"PII_ENCRYPTION_KEY·must·be·exactly·64·hex·characters·(32·bytes)",⏎····
(prettier/prettier)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/config/env.ts` around lines 21 - 32, The config currently marks
PII_ENCRYPTION_KEY and OPENAI_API_KEY as .optional(), letting production start
without them; change both to be required when running in production similar to
the existing PRISMA_ACCELERATE_URL production guard: detect production (the same
isProduction/ NODE_ENV check used in this file) and conditionally require
PII_ENCRYPTION_KEY (keep the 64-char hex validation) and OPENAI_API_KEY in that
branch, otherwise keep them optional for non-prod; update the schema creation
for PII_ENCRYPTION_KEY and OPENAI_API_KEY (and their error messages) so missing
keys in production fail fast at startup rather than at runtime in
encryptField/decryptField or guardedChat paths.
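The conditional requirement can be sketched as a post-parse boot guard in the style of the existing PRISMA_ACCELERATE_URL check. The env shape and function name below are assumed for illustration, not the actual env.ts code:

```typescript
// Hypothetical sketch: env shape and guard placement are assumed, not the
// project's actual env.ts. Runs once at boot, after zod parsing succeeds.
type ParsedEnv = {
  NODE_ENV: string;
  PII_ENCRYPTION_KEY?: string;
  OPENAI_API_KEY?: string;
};

function assertProdSecrets(env: ParsedEnv): void {
  if (env.NODE_ENV !== "production") return; // keys stay optional outside prod
  const missing = (["PII_ENCRYPTION_KEY", "OPENAI_API_KEY"] as const).filter(
    (k) => !env[k],
  );
  if (missing.length > 0) {
    // Fail fast at startup instead of at the first encryptField/guardedChat call.
    throw new Error(
      `Missing required environment variable(s): ${missing.join(", ")}`,
    );
  }
}
```

Keeping the guard outside the zod schema preserves the existing 64-char hex validation while matching the file's current pattern of post-parse production checks.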
```diff
 // CORS
-corsOrigin: process.env.CORS_ORIGIN?.split(",") || [],
+corsOrigin: process.env.CORS_ORIGIN?.split(",") || ["*"],
```
Risky change: default corsOrigin from [] to ["*"].
Today this is mostly harmless because src/middleware/cors.ts does config.corsOrigin.includes(origin), which on ["*"] only matches a literal "*" origin string and rejects everything else (the same effective behavior as []). But the intent of ["*"] is universally read as "allow all origins" — the moment anyone refactors the middleware to treat * as a wildcard (a common change), every deployment without CORS_ORIGIN set instantly becomes wide-open CORS.
Recommend either:
- Keep the default `[]` (deny by default), or
- Require `CORS_ORIGIN` in production (similar to the existing `PRISMA_ACCELERATE_URL` check at L43-50).
```diff
-  corsOrigin: process.env.CORS_ORIGIN?.split(",") || ["*"],
+  corsOrigin: process.env.CORS_ORIGIN?.split(",") || [],
```

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```ts
  corsOrigin: process.env.CORS_ORIGIN?.split(",") || [],
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/config/env.ts` at line 377, Revert the risky default change for
corsOrigin: change the config key corsOrigin so it defaults to an empty array
([]) instead of ["*"] and do not treat "*" as a wildcard; additionally, add a
runtime check (similar to the existing PRISMA_ACCELERATE_URL production check)
that throws or logs an error when process.env.CORS_ORIGIN is missing in
production to force explicit configuration; update any references in the CORS
middleware (cors.ts) to continue using config.corsOrigin.includes(origin)
without special-casing "*" so the deny-by-default behavior is preserved.
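The second option can be sketched as a small parse helper that keeps deny-by-default and refuses to boot in production without an explicit value. The function name and placement are illustrative, not the actual env.ts code:

```typescript
// Hypothetical helper: deny-by-default CORS origin parsing with a
// production guard, in the spirit of the PRISMA_ACCELERATE_URL check.
function parseCorsOrigins(raw: string | undefined, nodeEnv: string): string[] {
  if (!raw) {
    if (nodeEnv === "production") {
      throw new Error("CORS_ORIGIN must be set explicitly in production.");
    }
    return []; // empty list = no cross-origin callers allowed
  }
  return raw
    .split(",")
    .map((origin) => origin.trim())
    .filter((origin) => origin.length > 0);
}
```

Trimming each entry also guards against `"https://a.example, https://b.example"` style values, where the naive `split(",")` would leave a leading space that defeats the `includes(origin)` check.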
```ts
const burnCorrelationId =
  (req.headers["x-request-id"] as string | undefined) ?? crypto.randomUUID();

logFinancialEvent({
  event: "burn.initiated",
  status: "pending",
  transactionId: tx.id,
  userId: req.apiKey?.userId ?? tx.id,
  accountId: req.apiKey?.userId ?? tx.id,
  idempotencyKey: blockchain_tx_hash ?? tx.id,
  amount: Math.round(acbuNum * DECIMALS_7), // stroops
  currency: "ACBU",
  correlationId: burnCorrelationId,
});
```
🧩 Analysis chain
🌐 Web query:
What Node.js version makes crypto.randomUUID available as a global without import?
💡 Result:
Node.js version 20 and later. In Node.js 20+, the Web Crypto API crypto object (which includes crypto.randomUUID) is exposed on the global scope via globalThis.crypto, allowing use without importing the node:crypto module. Prior versions required importing from 'node:crypto' to access crypto.randomUUID.
Citations:
- 1: fix!: expect `crypto` to be global everywhere (requires node@20+) — uuidjs/uuid#935
- 2: uuidjs/uuid@f2c235f
- 3: https://nodejs.org/download/release/v20.0.0/docs/api/webcrypto.html
- 4: http://nodejs.org/docs/latest/api/webcrypto.html
- 5: nodejs/node@c9747f1140
- 6: nodejs/node@6068291
- 7: https://nodejs.org/download/release/v20.0.0/docs/api/globals.html
🏁 Script executed:

```shell
# Check the burnController.ts file imports and context around the flagged lines
head -50 src/controllers/burnController.ts | cat -n
```

🏁 Script executed:

```shell
# Check package.json for Node.js version requirements
cat package.json | grep -A 5 -B 5 "engines\|node"
```

🏁 Script executed:

```shell
# Check tsconfig.json for strict settings
cat tsconfig.json 2>/dev/null || echo "tsconfig.json not found"
```

🏁 Script executed:

```shell
# Verify the actual code at lines 203-216 in burnController.ts
sed -n '200,220p' src/controllers/burnController.ts | cat -n
```
Add explicit import for crypto.randomUUID() for Node.js version portability.
crypto is not imported in this file. Although it works at runtime in Node 20+ via the global crypto, it fails on Node <19 and is unsafe under strict TypeScript. Add an explicit import:
```diff
+import { randomUUID } from "node:crypto";
 import { logAudit } from "../services/audit";
 // ...
-  (req.headers["x-request-id"] as string | undefined) ?? crypto.randomUUID();
+  (req.headers["x-request-id"] as string | undefined) ?? randomUUID();
```

Additionally, userId and accountId (lines 210–211) both fall back to tx.id when req.apiKey?.userId is unavailable. This corrupts the financial event stream by recording transaction IDs as user/account identifiers. Use null or a dedicated field like "anonymous" instead.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/controllers/burnController.ts` around lines 203 - 216, Add an explicit
import for Node's crypto randomUUID and stop relying on a global: import the
function (e.g., randomUUID from 'crypto') and use randomUUID() when computing
burnCorrelationId instead of crypto.randomUUID(); then update the
logFinancialEvent call so userId and accountId do not fall back to tx.id — use
null or a clear sentinel like "anonymous" (referencing req.apiKey?.userId,
userId, accountId, logFinancialEvent, and burnCorrelationId) to avoid writing
transaction IDs into user/account fields.
```diff
-const usdcAmount7 = decimalToContractNumber(usdcDecimal).toString();
 const feeUsdc = (usdcAmount * MINT_FEE_BPS) / 10000;
+const usdcAmount7 = Math.round(usdcAmount * DECIMALS_7).toString();
+const correlationId = crypto.randomUUID();
```
crypto.randomUUID() used without import.
Same issue as burnController.ts: no crypto import. Works on Node ≥19 via global, but explicit imports are safer:
```diff
+import { randomUUID } from "node:crypto";
 ...
-const correlationId = crypto.randomUUID();
+const correlationId = randomUUID();
```

Apply the same change to the financial-event correlationId in burnController.ts.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/controllers/mintController.ts` at line 126, Add an explicit import for
Node's crypto module and ensure both controllers use it: add "import * as crypto
from 'crypto';" at the top of the file containing the const correlationId =
crypto.randomUUID(); in mintController.ts, and make the identical change in
burnController.ts where the financial-event correlationId is generated so that
crypto.randomUUID() is backed by the imported crypto module.
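Either import style works; a named import keeps call sites short and avoids relying on a global `crypto` being present at all:

```typescript
// Explicit import: no reliance on globalThis.crypto existing on this Node version.
import { randomUUID } from "node:crypto";

const correlationId: string = randomUUID();
// RFC 4122 shape: 8-4-4-4-12 lowercase hex digits.
const uuidShape = /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/;
console.log(uuidShape.test(correlationId)); // true
```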
```ts
logFinancialEvent({
  event: "salary.batch.completed",
  status: allSucceeded ? "success" : anySucceeded ? "success" : "failed",
  transactionId: batchId,
  userId: batch.userId ?? batchId,
  accountId: batch.organizationId ?? batch.userId ?? batchId,
  idempotencyKey: batch.idempotencyKey ?? batchId,
  amount: Math.round(batch.totalAmount.toNumber() * 100),
  currency: batch.currency,
  correlationId: crypto.randomUUID(),
});
```
correlationId is regenerated at completion — breaks lifecycle correlation.
createSalaryBatch (line 76) sets salaryCorrelationId = idempotencyKey ?? crypto.randomUUID() and emits salary.batch.initiated with it. Here, salary.batch.completed mints a fresh crypto.randomUUID(), so the two events for the same batch have different correlationIds and cannot be tied together in logs/traces — defeating the purpose of correlation IDs as documented in src/types/logging.ts ("used to correlate related log entries… across multiple lifecycle events for the same request or job execution").
Derive the correlationId deterministically from data on the persisted batch row so initiation and completion match.
🛠️ Proposed fix

```diff
 logFinancialEvent({
   event: "salary.batch.completed",
-  status: allSucceeded ? "success" : anySucceeded ? "success" : "failed",
+  status: allSucceeded ? "success" : "failed",
   transactionId: batchId,
   userId: batch.userId ?? batchId,
   accountId: batch.organizationId ?? batch.userId ?? batchId,
   idempotencyKey: batch.idempotencyKey ?? batchId,
   amount: Math.round(batch.totalAmount.toNumber() * 100),
   currency: batch.currency,
-  correlationId: crypto.randomUUID(),
+  // Match the correlationId produced in createSalaryBatch so initiation
+  // and completion events can be joined in logs.
+  correlationId: batch.idempotencyKey ?? batchId,
 });
```

Additionally, line 197 collapses allSucceeded and anySucceeded (partial completion) both to "success", which masks partial failure from alerting and dashboards. Either map partial completion to "failed" (as above) or carry the granular finalStatus ("partially_completed") on a separate metadata field while keeping the status enum honest.
📝 Committable suggestion

```ts
logFinancialEvent({
  event: "salary.batch.completed",
  status: allSucceeded ? "success" : "failed",
  transactionId: batchId,
  userId: batch.userId ?? batchId,
  accountId: batch.organizationId ?? batch.userId ?? batchId,
  idempotencyKey: batch.idempotencyKey ?? batchId,
  amount: Math.round(batch.totalAmount.toNumber() * 100),
  currency: batch.currency,
  // Match the correlationId produced in createSalaryBatch so initiation
  // and completion events can be joined in logs.
  correlationId: batch.idempotencyKey ?? batchId,
});
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/services/salary/salaryService.ts` around lines 195 - 205, The completion
event is generating a new correlationId which breaks correlation with
createSalaryBatch's salaryCorrelationId; update the logFinancialEvent call in
the salary.batch.completed path to use the persisted batch correlation value
(e.g. batch.salaryCorrelationId or fall back to batch.idempotencyKey or batchId)
instead of crypto.randomUUID() so initiation and completion share the same
correlationId, and adjust the status logic so partial success isn't masked — use
allSucceeded ? "success" : anySucceeded ? "partially_completed" : "failed" (or
keep "failed" and add a finalStatus/fromPartial metadata field) so
logFinancialEvent reflects partial completions clearly.
```ts
export function getPiiKey(hexKey: string): Buffer {
  if (hexKey.length !== 64) {
    throw new Error(
      "PII_ENCRYPTION_KEY must be a 64-character hex string (32 bytes / 256 bits). " +
        `Got ${hexKey.length} characters.`,
    );
  }
  return Buffer.from(hexKey, "hex");
}
```
Validate hex format, not just length.
Buffer.from(hexKey, "hex") silently stops at the first non-hex character and returns a shorter buffer, which will then derive an undersized AES key with no error. Env validation guards the configured value, but this helper is exported and callable from elsewhere. Add a regex check before constructing the buffer:
```diff
 if (hexKey.length !== 64) {
   throw new Error(
     "PII_ENCRYPTION_KEY must be a 64-character hex string (32 bytes / 256 bits). " +
       `Got ${hexKey.length} characters.`,
   );
 }
+if (!/^[0-9a-fA-F]{64}$/.test(hexKey)) {
+  throw new Error("PII_ENCRYPTION_KEY must be a hex string.");
+}
 return Buffer.from(hexKey, "hex");
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/utils/piiEncryption.ts` around lines 12 - 20, The getPiiKey function
currently only checks hexKey length but can silently accept non-hex chars;
update getPiiKey to validate the string is valid hex before creating the Buffer
(e.g., assert hexKey matches a 64-character hex regex like /^[0-9a-fA-F]{64}$/)
and throw a descriptive Error if it fails, then continue to call
Buffer.from(hexKey, "hex") to return the Buffer; reference the exported function
name getPiiKey and the parameter hexKey when implementing this validation.
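The silent truncation is easy to demonstrate: Node's Buffer documentation specifies that invalid hex input truncates the result rather than throwing, which is exactly how an undersized key could slip through. `getPiiKeySafe` below is a hypothetical name for the patched helper, not the project's code:

```typescript
// Buffer.from(..., "hex") truncates at the first invalid byte pair, no error.
const goodKey = "ab".repeat(32);                  // 64 valid hex chars
const badKey = "ab".repeat(16) + "zz".repeat(16); // 64 chars, non-hex tail

console.log(Buffer.from(goodKey, "hex").length); // 32, a full AES-256 key
console.log(Buffer.from(badKey, "hex").length);  // 16, silently undersized

// Hypothetical patched helper: validate shape before decoding.
function getPiiKeySafe(hexKey: string): Buffer {
  if (!/^[0-9a-fA-F]{64}$/.test(hexKey)) {
    throw new Error(
      "PII_ENCRYPTION_KEY must be a 64-character hex string (32 bytes / 256 bits).",
    );
  }
  return Buffer.from(hexKey, "hex");
}
```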
```ts
export function decryptField(encrypted: string, key: Buffer): string {
  if (!encrypted.startsWith(VERSION_PREFIX)) {
    throw new Error("Unsupported PII encryption version or unencrypted value.");
  }
  const parts = encrypted.slice(VERSION_PREFIX.length).split(":");
  if (parts.length !== 3) {
    throw new Error("Malformed encrypted PII field.");
  }
  const [ivHex, tagHex, ciphertextB64] = parts;
  const iv = Buffer.from(ivHex, "hex");
  const tag = Buffer.from(tagHex, "hex");
  const ciphertext = Buffer.from(ciphertextB64, "base64");

  const decipher = createDecipheriv(ALGORITHM, key, iv);
  decipher.setAuthTag(tag.slice(0, TAG_BYTES));
  return (
    decipher.update(ciphertext).toString("utf8") +
    decipher.final().toString("utf8")
  );
}
```
UTF-8 corruption on chunk boundary in decryptField.
decipher.update(ciphertext).toString("utf8") + decipher.final().toString("utf8") decodes each chunk independently. If a multi-byte UTF-8 codepoint straddles the boundary between update's output and final's output, each .toString("utf8") will substitute U+FFFD on its half of the codepoint, silently corrupting the plaintext. This is a realistic risk for KYC PII (accented names, non-Latin scripts).
Concatenate the buffers first, then decode once:
🛠 Proposed fix

```diff
-const decipher = createDecipheriv(ALGORITHM, key, iv);
-decipher.setAuthTag(tag.slice(0, TAG_BYTES));
-return (
-  decipher.update(ciphertext).toString("utf8") +
-  decipher.final().toString("utf8")
-);
+if (tag.length !== TAG_BYTES) {
+  throw new Error("Invalid PII auth tag length.");
+}
+const decipher = createDecipheriv(ALGORITHM, key, iv);
+decipher.setAuthTag(tag);
+const plaintext = Buffer.concat([decipher.update(ciphertext), decipher.final()]);
+return plaintext.toString("utf8");
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/utils/piiEncryption.ts` around lines 48 - 67, The decryptField function
is concatenating UTF-8-decoded strings from decipher.update(...) and
decipher.final(...), which can corrupt multi-byte codepoints split across the
two chunks; instead, call decipher.update(...) and decipher.final(...) to get
Buffer objects, concatenate them with Buffer.concat, then call .toString("utf8")
once to decode; update the implementation in decryptField (which uses
createDecipheriv, decipher.setAuthTag(tag.slice(0, TAG_BYTES)), and variables
iv/tag/ciphertext) to return Buffer.concat([decipher.update(ciphertext),
decipher.final()]).toString("utf8") rather than decoding each piece separately.
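The corruption mode is reproducible in isolation: split any multi-byte codepoint across two buffers and decode each half separately. This is a standalone demonstration, not the actual decryptField code:

```typescript
// "é" is two bytes in UTF-8 (0xC3 0xA9). Cutting between them simulates a
// cipher chunk boundary landing mid-codepoint.
const plaintext = Buffer.from("Renée", "utf8"); // 6 bytes: R e n 0xC3 0xA9 e
const a = plaintext.subarray(0, 4);             // ends with the lone 0xC3
const b = plaintext.subarray(4);                // starts with the lone 0xA9

const perChunk = a.toString("utf8") + b.toString("utf8");   // what the bug does
const concatFirst = Buffer.concat([a, b]).toString("utf8"); // the fix

console.log(perChunk.includes("\uFFFD")); // true: both halves become U+FFFD
console.log(concatFirst);                 // "Renée"
```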
What's the problem?

`validatePermissions` in `auth.ts` was only checking `typeof p === "string"` — so a typo like `"p2p:ADMIN"` or `"p2p:rea"` coming from the DB JSON field would pass through silently and either grant wrong access or lock someone out with no useful error.

Same issue in `validateAdminScopes` — it was filtering against a hardcoded Set and silently dropping anything it didn't recognise instead of rejecting it.

What I changed

- `src/types/permissions.ts` (new file)
  - `PermissionScopeEnum` Zod enum as the single source of truth for all 12 valid permission scopes
  - `PermissionScope` type derived from it and `PermissionsArraySchema` for array validation
- `src/middleware/auth.ts`
  - `validatePermissions` now uses `safeParse` — invalid scopes get rejected and logged instead of passing through
  - `generateApiKey` param tightened from `string[]` to `PermissionScope[]` so bad values fail at compile time
- `src/services/auth/authService.ts`
  - `validateAdminScopes` now throws `"Invalid permission scope(s): ..."` on unrecognised input instead of silently filtering — this is the write-time enforcement at the admin API

Acceptance check

Invalid permission is now rejected at the admin key issuance endpoints (`issueAdminKey`, `issueBreakGlassKey`) before anything reaches `generateApiKey` or the DB.

Note: `src/middleware/errorHandler.ts` has a duplicate `details` property that appears to be a pre-existing merge conflict — unrelated to this PR.

Closes #163
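The write-time enforcement described above can be sketched without the Zod dependency: a const-asserted scope list plus a reject-and-report validator. The scope names here are illustrative placeholders; the real `PermissionScopeEnum` carries the actual 12 scopes:

```typescript
// Dependency-free sketch of the PermissionScopeEnum idea. Scope names are
// placeholders, not the project's real scope list.
const PERMISSION_SCOPES = ["p2p:read", "p2p:write", "admin:keys"] as const;
type PermissionScope = (typeof PERMISSION_SCOPES)[number];
const SCOPE_SET: ReadonlySet<string> = new Set(PERMISSION_SCOPES);

// Reject and name the offenders instead of silently filtering, matching the
// new validateAdminScopes behavior.
function validateScopes(input: unknown): PermissionScope[] {
  if (!Array.isArray(input) || !input.every((p) => typeof p === "string")) {
    throw new Error("Permissions must be an array of strings.");
  }
  const invalid = input.filter((p) => !SCOPE_SET.has(p));
  if (invalid.length > 0) {
    throw new Error(`Invalid permission scope(s): ${invalid.join(", ")}`);
  }
  return input as PermissionScope[];
}
```

The `as const` tuple gives the same compile-time narrowing as `z.enum`, so a call site passing `["p2p:ADMIN"]` to a `PermissionScope[]` parameter fails type-checking, while the runtime Set check catches untyped values arriving from the DB JSON field.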
src/middleware/errorHandler.tshas a duplicatedetailsproperty that appears to be a pre-existing merge conflict — unrelated to this PR.Closes #163