Pi-Dev#252
@abdullahilateefat03-boop Great news! 🎉 Based on an automated assessment of this PR, the linked Wave issue(s) no longer count against your application limits. You can now already apply to more issues while waiting for a review of this PR. Keep up the great work! 🚀
📝 Walkthrough

A multi-layered Stellar address integrity hardening is introduced, combining application-level placeholder detection and strict validation in the wallet service with database-level constraints, pre-migration validation scripts in multiple languages, schema documentation updates, and deployment guidance to prevent invalid addresses from reaching production.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~35 minutes
🚥 Pre-merge checks: ✅ 3 passed | ❌ 2 failed (1 warning, 1 inconclusive)
Actionable comments posted: 11
🧹 Nitpick comments (7)
scripts/validateWithPrisma.ts (1)
41-41: Avoid `any[]` for the invalid-records buffer.

A small typed interface (id, username, stellarAddress, createdAt, issues) keeps the per-user `console.log` accesses below type-checked and prevents future refactors from silently breaking the report.

♻️ Suggested change
```diff
- const invalid: any[] = [];
+ type InvalidRecord = {
+   id: string;
+   username: string | null;
+   stellarAddress: string | null;
+   createdAt: Date;
+   issues: string[];
+ };
+ const invalid: InvalidRecord[] = [];
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/validateWithPrisma.ts` at line 41: the buffer `invalid` is typed as `any[]`; define a small interface (e.g. `InvalidRecord` with fields id, username, stellarAddress, createdAt, issues) and change `const invalid: any[] = [];` to `const invalid: InvalidRecord[] = [];` in scripts/validateWithPrisma.ts (inside the validateWithPrisma flow), then update all places that push into `invalid` to construct objects matching `InvalidRecord` so subsequent per-user `console.log` and property accesses are type-checked and safe.

prisma/schema.prisma (1)
37-38: Tighten the comment phrasing.

The comment reads as if it trails off ("validated at application level and database constraint"). Consider clarifying that validation happens at both layers.
📝 Proposed wording fix
```diff
- // B-013: stellarAddress is validated at application level and database constraint
- // Valid addresses: 56 chars, starts with 'G', base32 encoded, not a placeholder
+ // B-013: stellarAddress is validated at the application level and enforced by a
+ // database CHECK constraint (chk_valid_stellar_address). Valid addresses are
+ // 56 chars, start with 'G', use base32 (A-Z,2-7), and must not be placeholders.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@prisma/schema.prisma` around lines 37 - 38: update the trailing/ambiguous comment for the stellarAddress field so it clearly states that validation occurs both in the application and via a database constraint; edit the first line to something like "B-013: stellarAddress is validated at the application level and enforced by a database constraint" and keep the second line that documents the allowed format ("Valid addresses: 56 chars, starts with 'G', base32 encoded, not a placeholder") to preserve the format requirements for stellarAddress.

B-013_PR_CHECKLIST.md (1)
43-43: Use "sign in" (verb) consistently.

"signin" isn't a standard English compound; the verb form is "sign in" (and the noun is usually "sign-in"). Same nit at the post-deployment monitoring section in `B-013_FIX_SUMMARY.md`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@B-013_PR_CHECKLIST.md` at line 43: replace the incorrect compound "signin" with the verb form "sign in" across the checklist and summary; specifically update the checklist line "Test signin with existing user" to "Test sign in with existing user" and also fix the matching occurrences of "signin" in B-013_FIX_SUMMARY.md (and any other instances in these docs) to use "sign in" (or "sign-in" only where used as a noun) so terminology is consistent.

scripts/validateStellarAddresses.js (2)
55-84: Redundant checks produce noisy multi-issue reports.

`isValidStellarAddress` already returns `false` for any of: non-string, length ≠ 56, missing `G` prefix, or failing `StrKey.isValidEd25519PublicKey` (lines 22–27). Re-running `length`/`startsWith` checks separately (lines 59–65) means a single bad address is reported with three overlapping issues (`Invalid length`, `Does not start with 'G'`, `Fails StrKey.isValidEd25519PublicKey validation`), making the operator output harder to triage. Either drop the granular checks and rely on `isValidStellarAddress`, or invert: only run the granular checks when `isValidStellarAddress` returns false to yield a single, descriptive reason.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/validateStellarAddresses.js` around lines 55 - 84, The loop currently reports overlapping issues because it always runs granular checks as well as isValidStellarAddress; change the logic so you first call isValidStellarAddress(address) and only when that returns false run the granular checks (length, startsWith, placeholder) to collect specific reasons; keep the isPlaceholderAddress(address) check inside that invalid branch and then push a single invalidAddresses entry when issues.length>0; reference the usersWithAddress loop, isValidStellarAddress, isPlaceholderAddress, and invalidAddresses to locate where to reorder/remove the redundant standalone length/startsWith checks.
1-126: Duplicate of `scripts/validateWithPrisma.ts`.

This `.js` script reimplements the same validation flow as `scripts/validateWithPrisma.ts` (Prisma `findMany` → length/prefix/StrKey/placeholder checks → diagnostic report) and also re-defines `isValidStellarAddress`/placeholder regexes that already live in `src/utils/stellar.ts` and `src/services/wallet/walletService.ts`. Maintaining four parallel implementations (here, the TS variant, the SQL script, and the migration's `DO` block) makes drift between them very likely — and the SQL/migration mismatch flagged elsewhere in this review is exactly that kind of drift.

Recommendation: keep one canonical validator (the TS one, since it can `import` from `src/utils/stellar.ts`) and delete this `.js` duplicate. If a no-build entrypoint is needed, document `npx tsx scripts/validateWithPrisma.ts` instead.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/validateStellarAddresses.js` around lines 1 - 126: this script duplicates logic that already exists in src/utils/stellar.ts and scripts/validateWithPrisma.ts; remove the redundant file scripts/validateStellarAddresses.js and any CI/docs references to it, and update README or developer docs to recommend running the canonical TS validator with "npx tsx scripts/validateWithPrisma.ts" (or import and reuse isValidStellarAddress and PLACEHOLDER_PATTERNS from src/utils/stellar.ts in scripts/validateWithPrisma.ts if needed) so there is a single source of truth instead of maintaining isValidStellarAddress, PLACEHOLDER_PATTERNS, and validateStellarAddresses in multiple places.

scripts/validateStellarAddresses.ts (1)
30-44: Consider streaming/pagination for large `users` tables.

`findMany` without pagination loads every row with a non-null `stellarAddress` into memory. For typical user-counts this is fine, but if this is run against a production-scale DB during deployment, consider chunking (e.g., `findMany` with `take`/`cursor` or a raw cursor) so the script doesn't OOM the deploy host.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/validateStellarAddresses.ts` around lines 30 - 44: the current prisma.user.findMany call in scripts/validateStellarAddresses.ts (producing usersWithAddress) can OOM for large tables; change it to stream/paginate by querying in chunks (use findMany with take and cursor or an explicit cursor loop) and process each page before fetching the next, e.g., loop using a cursor based on the last fetched id or createdAt until no more rows; ensure you replace the single findMany call that populates usersWithAddress with a paginated iterator and preserve the selected fields (id, stellarAddress, username, createdAt) and existing validation logic.

src/services/wallet/walletService.ts (1)
79-98: Extract `PLACEHOLDER_PATTERNS` and `isPlaceholderAddress` to a shared module to avoid drift.

The exact same regex list and helper is duplicated in `scripts/validateStellarAddresses.ts` (lines 13-25). When new placeholder patterns get discovered in the wild, they'll need to be added in both places — and any divergence means the pre-migration validator will accept rows the runtime guard would reject (or vice versa).

Consider colocating with `assertValidStellarAddress` in `src/utils/stellar.ts`:

♻️ Proposed refactor
In `src/utils/stellar.ts`:

```typescript
const PLACEHOLDER_PATTERNS: readonly RegExp[] = [
  /^GA{55}$/,
  /^GB{55}$/,
  /^G0{55}$/,
  /^GTEST/,
  /^GDUMMY/,
  /^GPLACEHOLDER/,
  /^GXXXXXXXX/,
];

export function isPlaceholderStellarAddress(address: string): boolean {
  if (!address || address.length !== 56) return true;
  return PLACEHOLDER_PATTERNS.some((p) => p.test(address));
}
```

Then in
`walletService.ts`:

```diff
-import { assertValidStellarAddress, isValidStellarAddress } from "../../utils/stellar";
+import {
+  assertValidStellarAddress,
+  isValidStellarAddress,
+  isPlaceholderStellarAddress,
+} from "../../utils/stellar";
```

And drop the local
`isPlaceholderAddress` (lines 79-98), updating both callsites.

Side note: Line 84 returns `true` when `length !== 56`, which conflates "invalid length" with "is a placeholder". It's safe given current call order (always after `assertValidStellarAddress`), but worth either renaming the function to reflect that or removing the length branch since the assertion already covers it.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/wallet/walletService.ts` around lines 79 - 98, Extract the regex list and helper into a shared util: move PLACEHOLDER_PATTERNS and an exported isPlaceholderStellarAddress(address: string) into the existing stellar utility alongside assertValidStellarAddress, then replace local isPlaceholderAddress usages (and the duplicate in scripts/validateStellarAddresses.ts) to call the new isPlaceholderStellarAddress; ensure isPlaceholderStellarAddress either drops the length check (since assertValidStellarAddress already enforces 56 chars) or rename the helper to reflect it returns true for invalid-length inputs so behavior remains consistent with current callers.
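The side note about the length branch is easy to exercise. A runnable sketch of the proposed helper, using the same pattern list as the refactor above (helper name and patterns are the reviewer's proposal, not existing code):

```typescript
const PLACEHOLDER_PATTERNS: readonly RegExp[] = [
  /^GA{55}$/,
  /^GB{55}$/,
  /^G0{55}$/,
  /^GTEST/,
  /^GDUMMY/,
  /^GPLACEHOLDER/,
  /^GXXXXXXXX/,
];

// Mirrors the proposed helper; note the length gate means "true" can also
// mean "not 56 characters", not only "known placeholder".
function isPlaceholderStellarAddress(address: string): boolean {
  if (!address || address.length !== 56) return true;
  return PLACEHOLDER_PATTERNS.some((p) => p.test(address));
}
```

A short string like `"GABC"` returns `true` here even though it is not a placeholder, which is exactly the conflation the side note flags.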
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@B-013_FIX_SUMMARY.md`:
- Around line 112-118: Update the incorrect documentation paths that reference
prisma/validateStellarAddresses.ts and validateWithPrisma.ts to point to the
actual scripts directory: use scripts/validateStellarAddresses.ts (and
scripts/validateStellarAddresses.js where applicable) and
scripts/validateWithPrisma.ts; search the markdown for the three incorrect
occurrences (the Validation Script section around the heading and the two other
mentions noted at lines ~159 and the Files Modified section) and replace them so
the commands (e.g., npx ts-node scripts/validateStellarAddresses.ts) and file
lists accurately reflect the new locations.
In
`@prisma/migrations/20260426000000_add_stellar_address_validation/migration.sql`:
- Around line 39-57: The ALTER TABLE ADD CONSTRAINT statement
(chk_valid_stellar_address on table users) will take an ACCESS EXCLUSIVE lock
and scan the whole table; replace the single-step ADD CONSTRAINT with the
two-step pattern: add the constraint as NOT VALID so it is enforced for new
INSERT/UPDATE without scanning existing rows, then run ALTER TABLE users
VALIDATE CONSTRAINT chk_valid_stellar_address to perform the full validation
(the existing pre-flight DO block means this validate step should be quick and
use a less disruptive lock).
- Around line 16-36: The pre-check in the DO block currently counts invalid rows
into invalid_count but doesn’t mirror the CHECK constraint’s base32 rule; update
the WHERE used to compute invalid_count (on table users and column
stellar_address) to also treat any value that does NOT match the CHECK’s base32
charset as invalid (i.e., add a predicate such as stellar_address !~
'^[A-Z2-7]{56}$' or equivalent so the SELECT COUNT(*) INTO invalid_count flags
rows that would fail the subsequent ALTER TABLE ... ADD CONSTRAINT); keep the
existing length, leading 'G', and placeholder checks and raise the same RAISE
EXCEPTION if invalid_count > 0.
- Around line 60-62: Migration adds a partial index
idx_users_stellar_address_not_null on users(stellar_address) with WHERE
stellar_address IS NOT NULL which will cause Prisma schema drift because
schema.prisma declares `@unique` on stellarAddress and @@index([stellarAddress],
map: "idx_stellar_address") but Prisma 5.19 does not support partial indexes;
fix by either (A) remove this partial index from the migration.sql and rely on
the existing B-tree index declared by @@index/@unique on stellarAddress, (B) if
you must keep it, convert the migration to create the index CONCURRENTLY and
document that this index is unmanaged by Prisma (manual maintenance required),
or (C) add the partialIndexes preview feature to schema.prisma and declare the
partial index there (only if your Prisma version supports it); reference
idx_users_stellar_address_not_null, stellar_address, the migration.sql CREATE
INDEX statement, and the schema.prisma annotations `@unique` on stellarAddress and
@@index(..., map: "idx_stellar_address") when making the change.
In `@scripts/run_validation.bat`:
- Line 1: The batch script run_validation.bat currently uses LF endings (the
file starts with "@echo off") which can break cmd.exe parsing on Windows;
convert/save the file with CRLF line endings (or run a tool like
unix2dos/dos2unix -i or set git config core.autocrlf true and re-add the file)
so the "@echo off" and any labels/GOTO/parentheses are parsed correctly on
Windows before merging.
- Around line 64-68: The script sets PGPASSWORD globally which leaks the secret
to the parent shell; scope or clear it: wrap the batch logic around setlocal and
endlocal (or alternatively after the psql invocation explicitly clear PGPASSWORD
with set "PGPASSWORD=") so the password is not persisted—apply this change
around the PGPASSWORD assignment and the psql call (the PGPASSWORD variable and
the psql -U %DB_USER% ... -f scripts\validate_stellar_addresses.sql invocation).
- Around line 40-51: The DATABASE_URL parsing is broken: remove the incorrect
substring replacement ("set DATABASE_URL=%DATABASE_URL:DATABASE_URL=%") and stop
manually slicing the URL with fragile FOR/token logic; instead update the FOR
statement to request five tokens (use tokens=1,2,3,4,5) if you keep this
approach, and correctly strip a leading "DATABASE_URL=" or "postgresql://"
prefix before tokenizing so DB_USER, DB_PASS, DB_HOST, DB_PORT, and DB_NAME are
assigned properly; better yet, replace this block entirely by delegating parsing
to PowerShell or by passing DATABASE_URL directly to psql/PG* env vars to
robustly handle optional ports and URL-encoded characters (affecting DB_PASS and
token boundaries).
In `@scripts/validate_stellar_addresses.sql`:
- Around line 23-50: The validator queries are missing the base32 character-set
check used by the migration, so add the same predicate (stellar_address NOT ~
'^[A-Z2-7]{56}$' or equivalent) to the diagnostic WHERE, the COUNT(*) WHERE, and
the CASE that classifies invalid addresses so rows with invalid base32 chars
(lowercase or disallowed digits) are flagged; update the same predicate in the
first WHERE block (the diagnostic SELECT), the second COUNT(*) SELECT, and the
CASE expression referencing stellar_address to mirror the migration's regex
'^[A-Z2-7]{56}$'.
In `@scripts/validateStellarAddresses.ts`:
- Around line 1-6: Update the script usage documentation to point to the correct
path: change the header line that currently says "Usage: npx ts-node
prisma/validateStellarAddresses.ts" to reference the actual location
"scripts/validateStellarAddresses.ts" so operators run "npx ts-node
scripts/validateStellarAddresses.ts"; ensure any README or comment block in
validateStellarAddresses.ts is updated accordingly to avoid "Cannot find module"
errors.
In `@scripts/validateWithPrisma.ts`:
- Around line 25-87: The script currently returns normally when invalid
addresses are found; update main() so it fails the process when invalid.length >
0 (instead of falling through) by calling process.exit(1) or returning a
rejected promise; specifically, inside the block that logs invalid addresses
(the else branch that builds the invalid array and prints the report) add a
non-zero exit (e.g., process.exit(1)) after printing the summary so the CI gate
fails; ensure prisma.$disconnect() still runs (keep or move the exit to after
finally or call process.exitCode = 1 before returning) to mirror the behavior
implemented in scripts/validateStellarAddresses.js.
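The exit-status handling the prompt describes can be sketched as follows. Setting `process.exitCode` instead of calling `process.exit()` lets any `finally` cleanup (such as `prisma.$disconnect()`) still run before Node terminates with a failing status; the function name here is illustrative.

```typescript
// Flag failure without killing the process immediately: Node exits with
// this code once the event loop drains, after cleanup has run.
function reportInvalid(invalid: string[]): void {
  if (invalid.length > 0) {
    console.error(`Found ${invalid.length} invalid address(es)`);
    process.exitCode = 1; // CI gate fails, but finally blocks still execute
  }
}
```

Calling this from `main()`'s invalid-addresses branch gives the CI gate a non-zero exit while keeping the `prisma.$disconnect()` path intact.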
In `@src/services/wallet/walletService.ts`:
- Around line 100-122: setStellarAddressForUser currently overwrites an existing
stellarAddress (and may leave an encryptedStellarSecret stale) and also can
surface Prisma P2025; change it to guard and be atomic by attempting a
conditional update that only succeeds if the user has no
stellarAddress/encryptedStellarSecret, e.g. use prisma.user.updateMany({ where:
{ id: userId, stellarAddress: null, encryptedStellarSecret: null }, data: {
stellarAddress } }) and if the returned count is 0 throw an AppError indicating
the user already has a wallet (or not found), ensuring you still validate via
assertValidStellarAddress/isPlaceholderAddress beforehand and log on success;
this avoids P2025 and prevents overwriting an existing wallet.
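The guard's semantics (update only when no wallet exists yet, and treat a zero row-count as a conflict) can be sketched against an in-memory stand-in for `prisma.user.updateMany`. All names below are illustrative; the real implementation would call Prisma, not this stub.

```typescript
type UserRow = { id: string; stellarAddress: string | null };

// In-memory stand-in for prisma.user.updateMany: apply data to rows matching
// every where-field and report how many rows were touched.
function updateMany(
  rows: UserRow[],
  where: Partial<UserRow>,
  data: Partial<UserRow>,
): { count: number } {
  let count = 0;
  for (const row of rows) {
    const matches = (Object.keys(where) as (keyof UserRow)[]).every(
      (k) => row[k] === where[k],
    );
    if (matches) {
      Object.assign(row, data);
      count += 1;
    }
  }
  return { count };
}

// Conditional write: succeeds only if the user exists and has no wallet yet,
// so an existing address is never silently overwritten (and no P2025 is thrown).
function setStellarAddressForUser(rows: UserRow[], userId: string, addr: string): void {
  const { count } = updateMany(rows, { id: userId, stellarAddress: null }, { stellarAddress: addr });
  if (count === 0) {
    throw new Error("User not found or wallet already set");
  }
}
```

The key property is that the existence check and the write are one operation, so two concurrent callers cannot both succeed.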
---
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 9f202316-316e-4dc8-a422-c029774c6cfc
📒 Files selected for processing (10)
- B-013_FIX_SUMMARY.md
- B-013_PR_CHECKLIST.md
- prisma/migrations/20260426000000_add_stellar_address_validation/migration.sql
- prisma/schema.prisma
- scripts/run_validation.bat
- scripts/validateStellarAddresses.js
- scripts/validateStellarAddresses.ts
- scripts/validateWithPrisma.ts
- scripts/validate_stellar_addresses.sql
- src/services/wallet/walletService.ts
> ### 3. Validation Script (`prisma/validateStellarAddresses.ts`)
>
> A standalone script to validate existing data before applying migration:
>
> ```bash
> npx ts-node prisma/validateStellarAddresses.ts
> ```
Validation script path is wrong.
The doc references prisma/validateStellarAddresses.ts, but the actual files added in this PR live under scripts/ (scripts/validateStellarAddresses.ts, scripts/validateStellarAddresses.js, scripts/validateWithPrisma.ts). The same incorrect path appears at line 159 and in the "Files Modified" section at line 233. Update all three to scripts/... so operators running the documented commands don't get a "module not found" error.
```sql
SELECT COUNT(*) INTO invalid_count
FROM users
WHERE stellar_address IS NOT NULL
  AND (
    -- Invalid format: wrong length or doesn't start with G
    LENGTH(stellar_address) != 56
    OR stellar_address NOT LIKE 'G%'
    -- Placeholder patterns
    OR stellar_address ~ '^G[A]{55}$'
    OR stellar_address ~ '^G[B]{55}$'
    OR stellar_address ~ '^G[0]{55}$'
    OR stellar_address LIKE 'GTEST%'
    OR stellar_address LIKE 'GDUMMY%'
    OR stellar_address LIKE 'GPLACEHOLDER%'
    OR stellar_address LIKE 'GXXXXXXXX%'
  );

IF invalid_count > 0 THEN
  RAISE EXCEPTION 'Found % users with invalid stellar_address format. Please clean up data before applying constraint.', invalid_count;
END IF;
END $$;
```
Pre-check predicate must match the CHECK constraint exactly.
The pre-flight scan (lines 19–31) does not include the ^[A-Z2-7]{56}$ base32 charset rule that the CHECK adds on line 47. If a row is 56 chars, starts with G, isn't a known placeholder, but contains a non-base32 character (e.g., lowercase, 0, 1, 8, 9), the DO block will pass and the next statement (ALTER TABLE ... ADD CONSTRAINT) will fail with a generic check-constraint violation that does not enumerate the offending row. Mirror the constraint's predicates here so the friendlier RAISE EXCEPTION message wins.
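The charset rule in question is easy to exercise directly. A quick TypeScript check of the same regex (constant and function names here are illustrative):

```typescript
// Stellar public keys are base32 (RFC 4648): A-Z and 2-7 only, so the
// digits 0, 1, 8, 9 and lowercase letters can never appear.
const BASE32_56 = /^[A-Z2-7]{56}$/;

const passesCharsetRule = (s: string): boolean => BASE32_56.test(s);
```

Note that the `G0…0` placeholder already fails this rule on its own, since `0` is outside the base32 alphabet.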
🛠️ Suggested addition
```diff
 FROM users
 WHERE stellar_address IS NOT NULL
   AND (
     -- Invalid format: wrong length or doesn't start with G
     LENGTH(stellar_address) != 56
     OR stellar_address NOT LIKE 'G%'
+    OR stellar_address !~ '^[A-Z2-7]{56}$'
     -- Placeholder patterns
     OR stellar_address ~ '^G[A]{55}$'
     ...
```

📝 Committable suggestion
‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```sql
SELECT COUNT(*) INTO invalid_count
FROM users
WHERE stellar_address IS NOT NULL
  AND (
    -- Invalid format: wrong length or doesn't start with G
    LENGTH(stellar_address) != 56
    OR stellar_address NOT LIKE 'G%'
    OR stellar_address !~ '^[A-Z2-7]{56}$'
    -- Placeholder patterns
    OR stellar_address ~ '^G[A]{55}$'
    OR stellar_address ~ '^G[B]{55}$'
    OR stellar_address ~ '^G[0]{55}$'
    OR stellar_address LIKE 'GTEST%'
    OR stellar_address LIKE 'GDUMMY%'
    OR stellar_address LIKE 'GPLACEHOLDER%'
    OR stellar_address LIKE 'GXXXXXXXX%'
  );

IF invalid_count > 0 THEN
  RAISE EXCEPTION 'Found % users with invalid stellar_address format. Please clean up data before applying constraint.', invalid_count;
END IF;
END $$;
```
```sql
ALTER TABLE users
ADD CONSTRAINT chk_valid_stellar_address
CHECK (
  stellar_address IS NULL
  OR (
    -- Valid format: 56 characters, starts with G, base32 characters only
    LENGTH(stellar_address) = 56
    AND stellar_address LIKE 'G%'
    AND stellar_address ~ '^[A-Z2-7]{56}$'
    -- Not a placeholder
    AND stellar_address !~ '^G[A]{55}$'
    AND stellar_address !~ '^G[B]{55}$'
    AND stellar_address !~ '^G[0]{55}$'
    AND stellar_address NOT LIKE 'GTEST%'
    AND stellar_address NOT LIKE 'GDUMMY%'
    AND stellar_address NOT LIKE 'GPLACEHOLDER%'
    AND stellar_address NOT LIKE 'GXXXXXXXX%'
  )
);
```
ADD CONSTRAINT will take an ACCESS EXCLUSIVE lock and scan the full table.
In production, ALTER TABLE users ADD CONSTRAINT … CHECK (…) validates every existing row while holding ACCESS EXCLUSIVE on users, blocking all reads and writes for the duration of the scan. On a large users table this can be a noticeable outage. Consider the standard two-step pattern:
```sql
ALTER TABLE users
ADD CONSTRAINT chk_valid_stellar_address
CHECK (...) NOT VALID;

ALTER TABLE users VALIDATE CONSTRAINT chk_valid_stellar_address;
```

NOT VALID adds the constraint without scanning existing rows (it still applies to new INSERT/UPDATE), and VALIDATE CONSTRAINT only takes a SHARE UPDATE EXCLUSIVE lock during the scan, allowing concurrent reads and writes. The pre-flight DO block above already guarantees existing rows comply, so validation should be quick and non-blocking.
```sql
CREATE INDEX IF NOT EXISTS idx_users_stellar_address_not_null
ON users (stellar_address)
WHERE stellar_address IS NOT NULL;
```
🧩 Analysis chain
🌐 Web query:
Does Prisma 5.19 support partial indexes (WHERE clauses) in @@index without preview features?
💡 Result:
No. Partial (filtered) indexes in Prisma’s model-level @@index / @@unique support require the partialIndexes preview feature; without it, Prisma does not support where predicates for @@index in schema, and migrations/diffing will treat partial indexes as unsupported drift. Prisma’s docs state that the where argument on @unique, @@unique, and @@index “requires the partialIndexes Preview feature.” So in Prisma 5.19 specifically: there is no @@index(..., where: ...) support without enabling previewFeatures = ["partialIndexes"] (and in fact the feature was introduced later, in Prisma 7.4.0 behind that preview flag).
Citations:
- 1: https://www.prisma.io/docs/orm/prisma-schema/data-model/indexes
- 2: https://prisma.io/docs/orm/prisma-schema/data-model/indexes
- 3: https://github.com/prisma/prisma/releases/tag/7.4.0
- 4: https://www.prisma.io/changelog/2026-02-11
- 5: Prisma does not recognize partial index at all and wants to recreate it everytime prisma/prisma#23734
- 6: generates migration drop index for manually-created partial indexes prisma/prisma#29289
🏁 Script executed:
```shell
# First, let's check if schema.prisma exists and examine its contents
fd -t f "schema.prisma" && wc -l prisma/schema.prisma
```

Repository: Pi-Defi-world/acbu-backend
Length of output: 116
🏁 Script executed:
```shell
# Search for stellar address related definitions in schema.prisma
rg -n "stellarAddress|stellar_address" prisma/schema.prisma -A 2 -B 2
```

Repository: Pi-Defi-world/acbu-backend
Length of output: 1056
🏁 Script executed:
```shell
# Also check the migration file context to understand what's being created
cat -n prisma/migrations/20260426000000_add_stellar_address_validation/migration.sql | head -70
```

Repository: Pi-Defi-world/acbu-backend
Length of output: 2647
Unmanaged partial index will cause schema drift; partial indexes not supported in Prisma 5.19 without preview features.
schema.prisma declares @unique on stellarAddress (line 36) and @@index([stellarAddress], map: "idx_stellar_address") (line 74). The migration adds a partial index (WHERE stellar_address IS NOT NULL), which filters the index to non-NULL values—a legitimate query optimization distinct from the regular index. However, because Prisma 5.19 does not support partial indexes in @@index without the partialIndexes preview feature, this index cannot be declared in schema.prisma and will be treated as drift on the next prisma migrate dev or prisma db pull.
Options:
- Drop the partial index if query performance on non-NULL `stellarAddress` is acceptable with the existing B-tree index.
- Accept the index and document the drift: keep this raw SQL and exclude it from Prisma's schema management (note that it must be manually maintained).
- Enable the `partialIndexes` preview feature in `schema.prisma` and declare the index there (requires `previewFeatures = ["partialIndexes"]`), though this feature was added in Prisma 7.4.0 and may not apply to Prisma 5.19 depending on backport status.
If retained, use CREATE INDEX CONCURRENTLY to avoid locking the users table during writes.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@prisma/migrations/20260426000000_add_stellar_address_validation/migration.sql`
around lines 60 - 62, Migration adds a partial index
idx_users_stellar_address_not_null on users(stellar_address) with WHERE
stellar_address IS NOT NULL which will cause Prisma schema drift because
schema.prisma declares `@unique` on stellarAddress and @@index([stellarAddress],
map: "idx_stellar_address") but Prisma 5.19 does not support partial indexes;
fix by either (A) remove this partial index from the migration.sql and rely on
the existing B-tree index declared by @@index/@unique on stellarAddress, (B) if
you must keep it, convert the migration to create the index CONCURRENTLY and
document that this index is unmanaged by Prisma (manual maintenance required),
or (C) add the partialIndexes preview feature to schema.prisma and declare the
partial index there (only if your Prisma version supports it); reference
idx_users_stellar_address_not_null, stellar_address, the migration.sql CREATE
INDEX statement, and the schema.prisma annotations `@unique` on stellarAddress and
@@index(..., map: "idx_stellar_address") when making the change.
```diff
@@ -0,0 +1,80 @@
+@echo off
```
Convert file to CRLF line endings.
The batch file uses Unix line endings (LF), which can cause Windows cmd.exe to misparse labels, parentheses, and GOTO targets at 512-byte block boundaries. Save the file with CRLF line endings (or set git config core.autocrlf true and re-add) before merging.
🧰 Tools
🪛 Blinter (1.0.112)
[error] 1-1: Unix line endings detected. Explanation: Batch file uses Unix line endings (LF-only) which can cause GOTO/CALL label parsing failures and script malfunction due to Windows batch parser 512-byte boundary bugs. Recommendation: Convert file to Windows line endings (CRLF). Use tools like dos2unix, notepad++, or configure git with 'git config core.autocrlf true'. Context: File uses Unix line endings (LF-only) - 80 LF sequences found
(E018)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/run_validation.bat` at line 1, The batch script run_validation.bat
currently uses LF endings (the file starts with "@echo off") which can break
cmd.exe parsing on Windows; convert/save the file with CRLF line endings (or run
a tool like unix2dos/dos2unix -i or set git config core.autocrlf true and re-add
the file) so the "@echo off" and any labels/GOTO/parentheses are parsed
correctly on Windows before merging.
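For teams that prefer to automate the fix rather than rely on editor settings, the LF→CRLF conversion can be scripted in Node; a minimal TypeScript sketch (the `lfToCrlf` and `convertFile` helpers are illustrative, not part of this PR):

```typescript
import { readFileSync, writeFileSync } from "node:fs";

// Convert a batch file's line endings to CRLF so cmd.exe parses
// labels, parentheses, and GOTO targets reliably.
function lfToCrlf(text: string): string {
  // Normalize any existing CRLF first so the conversion is idempotent.
  return text.replace(/\r\n/g, "\n").replace(/\n/g, "\r\n");
}

function convertFile(path: string): void {
  writeFileSync(path, lfToCrlf(readFileSync(path, "utf8")));
}
```

A call such as `convertFile("scripts/run_validation.bat")` before committing would do it, though `git config core.autocrlf true` remains the simpler repo-wide option.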
```bat
REM Set password for psql
set PGPASSWORD=%DB_PASS%

REM Run validation script
psql -U %DB_USER% -h %DB_HOST% -p %DB_PORT% -d %DB_NAME% -f scripts\validate_stellar_addresses.sql
```
PGPASSWORD leaks into the parent shell.
Setting PGPASSWORD without setlocal causes the password to persist in the user's environment after the script exits. Wrap the script in setlocal/endlocal (or explicitly set "PGPASSWORD=" after the psql call) so the secret is scoped to this run only.
🔒 Suggested change
```diff
 @echo off
+setlocal
 REM B-013: Validate Stellar Addresses
@@
-pause
+set "PGPASSWORD="
+pause
+endlocal
```
+endlocal🧰 Tools
🪛 Blinter (1.0.112)
[error] 65-65: Plain text credentials detected. Explanation: Hardcoded passwords and credentials in scripts pose serious security risks. Recommendation: Use secure credential storage or prompt for credentials at runtime. Context: Potential hardcoded credentials detected
(SEC008)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/run_validation.bat` around lines 64 - 68, The script sets PGPASSWORD
globally which leaks the secret to the parent shell; scope or clear it: wrap the
batch logic around setlocal and endlocal (or alternatively after the psql
invocation explicitly clear PGPASSWORD with set "PGPASSWORD=") so the password
is not persisted—apply this change around the PGPASSWORD assignment and the psql
call (the PGPASSWORD variable and the psql -U %DB_USER% ... -f
scripts\validate_stellar_addresses.sql invocation).
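Beyond `setlocal`, the password can be kept out of the shell entirely by launching `psql` from Node with a per-child environment; a sketch assuming the same `DB_*` variables are set (the `buildPsqlEnv`/`runValidation` helper names are hypothetical):

```typescript
import { spawnSync } from "node:child_process";

// Build the child environment without mutating process.env,
// so PGPASSWORD never persists in the parent shell.
function buildPsqlEnv(password: string): NodeJS.ProcessEnv {
  return { ...process.env, PGPASSWORD: password };
}

function runValidation(password: string): number {
  const result = spawnSync(
    "psql",
    [
      "-U", process.env.DB_USER ?? "postgres",
      "-h", process.env.DB_HOST ?? "localhost",
      "-f", "scripts/validate_stellar_addresses.sql",
    ],
    { env: buildPsqlEnv(password), stdio: "inherit" },
  );
  return result.status ?? 1;
}
```

Because the spread copies `process.env` into a new object, the secret lives only in the spawned `psql` process and is garbage-collected with it.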
```sql
WHERE stellar_address IS NOT NULL
  AND (
    LENGTH(stellar_address) != 56
    OR stellar_address NOT LIKE 'G%'
    OR stellar_address ~ '^G[A]{55}$'
    OR stellar_address ~ '^G[B]{55}$'
    OR stellar_address ~ '^G[0]{55}$'
    OR stellar_address LIKE 'GTEST%'
    OR stellar_address LIKE 'GDUMMY%'
    OR stellar_address LIKE 'GPLACEHOLDER%'
    OR stellar_address LIKE 'GXXXXXXXX%'
  );

-- Count of invalid addresses
SELECT COUNT(*) as invalid_address_count
FROM users
WHERE stellar_address IS NOT NULL
  AND (
    LENGTH(stellar_address) != 56
    OR stellar_address NOT LIKE 'G%'
    OR stellar_address ~ '^G[A]{55}$'
    OR stellar_address ~ '^G[B]{55}$'
    OR stellar_address ~ '^G[0]{55}$'
    OR stellar_address LIKE 'GTEST%'
    OR stellar_address LIKE 'GDUMMY%'
    OR stellar_address LIKE 'GPLACEHOLDER%'
    OR stellar_address LIKE 'GXXXXXXXX%'
  );
```
Validator under-reports vs. migration's CHECK constraint.
prisma/migrations/20260426000000_add_stellar_address_validation/migration.sql (line 47) enforces stellar_address ~ '^[A-Z2-7]{56}$', but neither the diagnostic WHERE (lines 23–34) nor the count WHERE (lines 39–50) include that base32 character-set check. A row that is 56 chars, starts with G, isn't a known placeholder, but contains lowercase letters or 0/1/8/9 will be reported as valid here yet still cause ALTER TABLE … ADD CONSTRAINT to fail. Operators following the documented "run validator → expect 0 → apply migration" workflow will be surprised in production.
🛠️ Add the base32 charset predicate to both queries (and the CASE)
```diff
 CASE
   WHEN LENGTH(stellar_address) != 56 THEN 'Invalid length: ' || LENGTH(stellar_address) || ' (expected 56)'
   WHEN stellar_address NOT LIKE 'G%' THEN 'Does not start with G'
+  WHEN stellar_address !~ '^[A-Z2-7]{56}$' THEN 'Contains non-base32 characters'
   WHEN stellar_address ~ '^G[A]{55}$' THEN 'Placeholder pattern: All As'
   ...
 END as issue
 FROM users
 WHERE stellar_address IS NOT NULL
   AND (
     LENGTH(stellar_address) != 56
     OR stellar_address NOT LIKE 'G%'
+    OR stellar_address !~ '^[A-Z2-7]{56}$'
     OR stellar_address ~ '^G[A]{55}$'
     ...
   );
```

Apply the same addition to the COUNT(*) query at lines 39–50.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/validate_stellar_addresses.sql` around lines 23 - 50, The validator
queries are missing the base32 character-set check used by the migration, so add
the same predicate (stellar_address NOT ~ '^[A-Z2-7]{56}$' or equivalent) to the
diagnostic WHERE, the COUNT(*) WHERE, and the CASE that classifies invalid
addresses so rows with invalid base32 chars (lowercase or disallowed digits) are
flagged; update the same predicate in the first WHERE block (the diagnostic
SELECT), the second COUNT(*) SELECT, and the CASE expression referencing
stellar_address to mirror the migration's regex '^[A-Z2-7]{56}$'.
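To keep the app-level validator and the DB constraint from diverging again, the migration's predicates can be mirrored in TypeScript and unit-tested; a sketch (`matchesDbConstraint` is an illustrative name, and it deliberately skips the StrKey checksum, which the CHECK constraint cannot verify either):

```typescript
// Mirrors the migration's CHECK constraint, including the base32
// character-set rule '^[A-Z2-7]{56}$' that the SQL validator omitted.
const BASE32_56 = /^[A-Z2-7]{56}$/;
const PLACEHOLDER_EXACT = [/^GA{55}$/, /^GB{55}$/, /^G0{55}$/];
const PLACEHOLDER_PREFIXES = ["GTEST", "GDUMMY", "GPLACEHOLDER", "GXXXXXXXX"];

function matchesDbConstraint(addr: string): boolean {
  if (addr.length !== 56 || !addr.startsWith("G")) return false;
  if (!BASE32_56.test(addr)) return false; // lowercase, 0/1/8/9, etc.
  if (PLACEHOLDER_EXACT.some((re) => re.test(addr))) return false;
  return !PLACEHOLDER_PREFIXES.some((p) => addr.startsWith(p));
}
```

Running this predicate over the same rows as the SQL validator would surface exactly the rows that will fail `ALTER TABLE … ADD CONSTRAINT`.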
```ts
/**
 * B-013: Validate existing stellar addresses in the database
 * Run this script BEFORE applying the migration to ensure no invalid addresses exist
 *
 * Usage: npx ts-node prisma/validateStellarAddresses.ts
 */
```
Doc usage path is wrong.
The header says npx ts-node prisma/validateStellarAddresses.ts, but this file lives under scripts/. Operators following the documented command will get "Cannot find module".
📝 Proposed fix
```diff
- * Usage: npx ts-node prisma/validateStellarAddresses.ts
+ * Usage: npx ts-node scripts/validateStellarAddresses.ts
```
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
/**
 * B-013: Validate existing stellar addresses in the database
 * Run this script BEFORE applying the migration to ensure no invalid addresses exist
 *
 * Usage: npx ts-node scripts/validateStellarAddresses.ts
 */
```
🧰 Tools
🪛 ESLint
[error] 4-4: Delete ·
(prettier/prettier)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/validateStellarAddresses.ts` around lines 1 - 6, Update the script
usage documentation to point to the correct path: change the header line that
currently says "Usage: npx ts-node prisma/validateStellarAddresses.ts" to
reference the actual location "scripts/validateStellarAddresses.ts" so operators
run "npx ts-node scripts/validateStellarAddresses.ts"; ensure any README or
comment block in validateStellarAddresses.ts is updated accordingly to avoid
"Cannot find module" errors.
```ts
async function main() {
  console.log("🔍 B-013: Validating Stellar Addresses\n");

  try {
    const users = await prisma.user.findMany({
      where: { stellarAddress: { not: null } },
      select: {
        id: true,
        username: true,
        stellarAddress: true,
        createdAt: true,
      },
    });

    console.log(`Found ${users.length} users with stellar addresses\n`);

    const invalid: any[] = [];

    for (const user of users) {
      const issues: string[] = [];
      const addr = user.stellarAddress!;

      if (addr.length !== 56) issues.push(`Wrong length: ${addr.length}`);
      if (!addr.startsWith("G")) issues.push("Doesn't start with G");
      if (!isValidStellarAddress(addr)) issues.push("Invalid StrKey checksum");
      if (isPlaceholder(addr)) issues.push("Placeholder pattern");

      if (issues.length > 0) {
        invalid.push({ ...user, issues });
      }
    }

    if (invalid.length === 0) {
      console.log("✅ All stellar addresses are valid!");
      console.log("✅ Safe to apply migration\n");
    } else {
      console.log(`❌ Found ${invalid.length} invalid address(es):\n`);
      invalid.forEach((u) => {
        console.log(`User: ${u.id}`);
        console.log(`Username: ${u.username || "(none)"}`);
        console.log(`Address: ${u.stellarAddress}`);
        console.log(`Issues: ${u.issues.join(", ")}`);
        console.log("---\n");
      });
      console.log("⚠️ Fix these before deploying migration\n");
    }

    console.log("Summary:");
    console.log(`  Total with addresses: ${users.length}`);
    console.log(`  Valid: ${users.length - invalid.length}`);
    console.log(`  Invalid: ${invalid.length}`);
  } catch (error: any) {
    console.error("❌ Error:", error.message);
    if (error.message.includes("DATABASE_URL")) {
      console.error("\nPlease set DATABASE_URL in .env file");
    }
    process.exit(1);
  } finally {
    await prisma.$disconnect();
  }
}

main();
```
Script exits 0 even when invalid addresses are found.
main() only calls process.exit(1) from the catch block (line 81); when invalid.length > 0 it prints the report and falls through to a normal return, so the script exits with code 0. That defeats its purpose as a pre-deployment gate — CI / run_validation.bat / a &&-chained prisma migrate deploy will happily proceed despite invalid rows. The sibling scripts/validateStellarAddresses.js correctly uses the boolean return to set the exit code (lines 118–121); please mirror that here.
🛠️ Suggested fix
```diff
-    if (invalid.length === 0) {
+    let hasInvalid = false;
+    if (invalid.length === 0) {
       console.log("✅ All stellar addresses are valid!");
       console.log("✅ Safe to apply migration\n");
     } else {
+      hasInvalid = true;
       console.log(`❌ Found ${invalid.length} invalid address(es):\n`);
       ...
       console.log("⚠️ Fix these before deploying migration\n");
     }
     console.log("Summary:");
     console.log(`  Total with addresses: ${users.length}`);
     console.log(`  Valid: ${users.length - invalid.length}`);
     console.log(`  Invalid: ${invalid.length}`);
+
+    if (hasInvalid) process.exit(1);
   } catch (error: any) {
     ...
   } finally {
     await prisma.$disconnect();
   }
 }
-main();
+main().catch((e) => {
+  console.error(e);
+  process.exit(1);
+});
```
+});📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
async function main() {
  console.log("🔍 B-013: Validating Stellar Addresses\n");

  try {
    const users = await prisma.user.findMany({
      where: { stellarAddress: { not: null } },
      select: {
        id: true,
        username: true,
        stellarAddress: true,
        createdAt: true,
      },
    });

    console.log(`Found ${users.length} users with stellar addresses\n`);

    const invalid: any[] = [];

    for (const user of users) {
      const issues: string[] = [];
      const addr = user.stellarAddress!;

      if (addr.length !== 56) issues.push(`Wrong length: ${addr.length}`);
      if (!addr.startsWith("G")) issues.push("Doesn't start with G");
      if (!isValidStellarAddress(addr)) issues.push("Invalid StrKey checksum");
      if (isPlaceholder(addr)) issues.push("Placeholder pattern");

      if (issues.length > 0) {
        invalid.push({ ...user, issues });
      }
    }

    let hasInvalid = false;
    if (invalid.length === 0) {
      console.log("✅ All stellar addresses are valid!");
      console.log("✅ Safe to apply migration\n");
    } else {
      hasInvalid = true;
      console.log(`❌ Found ${invalid.length} invalid address(es):\n`);
      invalid.forEach((u) => {
        console.log(`User: ${u.id}`);
        console.log(`Username: ${u.username || "(none)"}`);
        console.log(`Address: ${u.stellarAddress}`);
        console.log(`Issues: ${u.issues.join(", ")}`);
        console.log("---\n");
      });
      console.log("⚠️ Fix these before deploying migration\n");
    }

    console.log("Summary:");
    console.log(`  Total with addresses: ${users.length}`);
    console.log(`  Valid: ${users.length - invalid.length}`);
    console.log(`  Invalid: ${invalid.length}`);

    if (hasInvalid) process.exit(1);
  } catch (error: any) {
    console.error("❌ Error:", error.message);
    if (error.message.includes("DATABASE_URL")) {
      console.error("\nPlease set DATABASE_URL in .env file");
    }
    process.exit(1);
  } finally {
    await prisma.$disconnect();
  }
}

main().catch((e) => {
  console.error(e);
  process.exit(1);
});
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/validateWithPrisma.ts` around lines 25 - 87, The script currently
returns normally when invalid addresses are found; update main() so it fails the
process when invalid.length > 0 (instead of falling through) by calling
process.exit(1) or returning a rejected promise; specifically, inside the block
that logs invalid addresses (the else branch that builds the invalid array and
prints the report) add a non-zero exit (e.g., process.exit(1)) after printing
the summary so the CI gate fails; ensure prisma.$disconnect() still runs (keep
or move the exit to after finally or call process.exitCode = 1 before returning)
to mirror the behavior implemented in scripts/validateStellarAddresses.js.
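The gating logic is easiest to test when separated from process control; a small sketch of that pattern (names illustrative — the script could assign `process.exitCode` from the returned value instead of calling `process.exit` inside `main`):

```typescript
interface InvalidUser {
  id: string;
  issues: string[];
}

// Pure summary of a validation run; the caller maps exitCode onto
// process.exitCode so a `validate && prisma migrate deploy` chain
// stops when invalid rows remain.
function summarize(total: number, invalid: InvalidUser[]) {
  return {
    total,
    valid: total - invalid.length,
    invalid: invalid.length,
    exitCode: invalid.length > 0 ? 1 : 0,
  };
}

// Call-site sketch:
// process.exitCode = summarize(users.length, invalid).exitCode;
```

Setting `process.exitCode` rather than calling `process.exit` also lets the `finally` block's `prisma.$disconnect()` complete before the process terminates.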
closes #128
- **Impact:** Non-G placeholders can break downstream Stellar validation and UX.
- **Fix direction:** Defer DB constraint until wallet created, or use valid funded temp account pattern.
- **Acceptance check:** No user row ships with invalid stellarAddress format in prod.
Summary by CodeRabbit
Release Notes
New Features
Chores