
Pi-Dev#252

Merged
Junman140 merged 2 commits into Pi-Defi-world:dev from abdullahilateefat03-boop:Pi-Dev
Apr 28, 2026

Conversation

Contributor

@abdullahilateefat03-boop abdullahilateefat03-boop commented Apr 26, 2026

closes #128
Impact: non-'G' placeholder addresses can break downstream Stellar validation and UX. Fix direction: defer the DB constraint until the wallet is created, or use a valid funded temporary-account pattern. Acceptance check: no user row ships to prod with an invalid stellarAddress format.
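The acceptance check above can be sketched as a format-only guard. This is a hypothetical helper, not code from the PR; a full check would additionally verify the StrKey checksum, e.g. via stellar-sdk's StrKey.isValidEd25519PublicKey, as the PR's validation scripts do:

```typescript
// Hypothetical format-only check mirroring the PR's rules: 56 chars, leading
// 'G', base32 charset (A-Z, 2-7), and none of the known placeholder shapes.
// Note '^G0{55}$' from the migration is already rejected by the charset rule.
const STELLAR_FORMAT = /^G[A-Z2-7]{55}$/;
const PLACEHOLDER_HINTS: RegExp[] = [
  /^GA{55}$/, /^GB{55}$/, /^GTEST/, /^GDUMMY/, /^GPLACEHOLDER/, /^GXXXXXXXX/,
];

function looksLikeRealStellarAddress(address: string): boolean {
  if (!STELLAR_FORMAT.test(address)) return false;        // length/prefix/charset
  return !PLACEHOLDER_HINTS.some((p) => p.test(address)); // known placeholders
}
```

A checksum-less guard like this is what a database CHECK constraint can enforce; the application layer remains responsible for full StrKey validation.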

Summary by CodeRabbit

Release Notes

  • New Features

    • Enhanced Stellar address validation during wallet creation and sign-in operations
    • Stricter format validation to reject invalid or placeholder addresses
  • Chores

    • Added database constraints to enforce Stellar address format requirements
    • Introduced validation tools for verifying existing address data integrity


drips-wave Bot commented Apr 26, 2026

@abdullahilateefat03-boop Great news! 🎉 Based on an automated assessment of this PR, the linked Wave issue(s) no longer count against your application limits.

You can already apply to more issues while waiting for a review of this PR. Keep up the great work! 🚀

Learn more about application limits


coderabbitai Bot commented Apr 26, 2026

Warning

Rate limit exceeded

@Junman140 has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 44 minutes and 20 seconds before requesting another review.

To keep reviews running without waiting, you can enable the usage-based add-on for your organization, which allows additional reviews beyond the hourly cap. Account admins can enable it under billing.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 67237e89-a538-485e-81da-1c5eef04ee38

📥 Commits

Reviewing files that changed from the base of the PR and between deb64dd and 5edac9a.

📒 Files selected for processing (1)
  • prisma/schema.prisma
📝 Walkthrough

Walkthrough

This PR introduces multi-layered Stellar address integrity hardening: application-level placeholder detection and strict validation in the wallet service, database-level constraints, pre-migration validation scripts in multiple languages, schema documentation updates, and deployment guidance, all aimed at preventing invalid addresses from reaching production.

Changes

  • Documentation & Specification (B-013_FIX_SUMMARY.md, B-013_PR_CHECKLIST.md, prisma/schema.prisma): specification of the multi-layer hardening approach, schema comment updates documenting format/validation requirements, and pre-submission/deployment checklists.
  • Database Migration & Constraints (prisma/migrations/20260426000000_add_stellar_address_validation/migration.sql): pre-migration validation counting invalid addresses; a CHECK constraint enforcing 56-char length, 'G' prefix, and base32 charset while excluding placeholder patterns; a partial index on non-null addresses.
  • Validation Scripts (scripts/validate_stellar_addresses.sql, scripts/validateStellarAddresses.js, scripts/validateStellarAddresses.ts, scripts/validateWithPrisma.ts, scripts/run_validation.bat): multi-language validation implementations checking address format, StrKey validity, and placeholder patterns; a batch orchestration script for local DB validation pre-migration.
  • Application-Level Validation (src/services/wallet/walletService.ts): defense-in-depth address validation on assertion and generation; a new exported setStellarAddressForUser function with strict validation, placeholder rejection, and logging; a centralized isPlaceholderAddress helper.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~35 minutes

Possibly related PRs

  • PR #90: Removes placeholder injection during signup, complementing this PR's downstream validation and prevention of invalid addresses entering the system.
  • PR #198: Adds exported wallet/Stellar address validation helpers in walletService, aligning with the new setStellarAddressForUser function introduced here.
  • PR #98: Implements isValidStellarAddress and validation guards in walletService to prevent persisting invalid addresses, foundational to this PR's defense-in-depth strategy.

Poem

🐰 A stellar fix hops into place,
Constraints guard each address space,
No placeholders shall pass the gate,
Validation strong at every state,
Production safe—let's celebrate! ✨

🚥 Pre-merge checks | ✅ 3 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)

  • Docstring Coverage — ⚠️ Warning: docstring coverage is 27.27%, below the required 80.00% threshold. Resolution: write docstrings for the functions missing them.
  • Title check — ❓ Inconclusive: the title 'Pi-Dev' is vague and generic; it does not convey the actual change, which is hardening Stellar address validation and preventing invalid addresses. Resolution: use a more descriptive title, such as 'Harden Stellar address validation with DB constraints and pre-migration checks'.
✅ Passed checks (3 passed)
  • Description Check — ✅ Passed: check skipped; CodeRabbit's high-level summary is enabled.
  • Linked Issues check — ✅ Passed: the PR addresses the core requirement from #128 by implementing multi-layer Stellar address validation (application-level checks, database constraints, and pre-migration validation scripts).
  • Out of Scope Changes check — ✅ Passed: all changes relate directly to Stellar address validation and prevention of invalid addresses, the stated objective in #128. No unrelated or out-of-scope modifications detected.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
🧪 Generate unit tests (beta)
  • Create PR with unit tests

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

❤️ Share

Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 11

🧹 Nitpick comments (7)
scripts/validateWithPrisma.ts (1)

41-41: Avoid any[] for the invalid-records buffer.

A small typed interface (id, username, stellarAddress, createdAt, issues) keeps the per-user console.log accesses below type-checked and prevents future refactors from silently breaking the report.

♻️ Suggested change
-    const invalid: any[] = [];
+    type InvalidRecord = {
+      id: string;
+      username: string | null;
+      stellarAddress: string | null;
+      createdAt: Date;
+      issues: string[];
+    };
+    const invalid: InvalidRecord[] = [];
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/validateWithPrisma.ts` at line 41, The buffer `invalid` is typed as
`any[]`; define a small interface (e.g. `InvalidRecord` with fields id,
username, stellarAddress, createdAt, issues) and change `const invalid: any[] =
[];` to `const invalid: InvalidRecord[] = [];` in scripts/validateWithPrisma.ts
(inside the validateWithPrisma flow), then update all places that push into
`invalid` to construct objects matching `InvalidRecord` so subsequent per-user
`console.log` and property accesses are type-checked and safe.
prisma/schema.prisma (1)

37-38: Tighten the comment phrasing.

The comment reads as if it trails off ("validated at application level and database constraint"). Consider clarifying that validation happens at both layers.

📝 Proposed wording fix
-  // B-013: stellarAddress is validated at application level and database constraint
-  // Valid addresses: 56 chars, starts with 'G', base32 encoded, not a placeholder
+  // B-013: stellarAddress is validated at the application level and enforced by a
+  // database CHECK constraint (chk_valid_stellar_address). Valid addresses are
+  // 56 chars, start with 'G', use base32 (A-Z,2-7), and must not be placeholders.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@prisma/schema.prisma` around lines 37 - 38, Update the trailing/ambiguous
comment for the stellarAddress field so it clearly states that validation occurs
both in the application and via a database constraint; edit the first line to
something like "B-013: stellarAddress is validated at the application level and
enforced by a database constraint" and keep the second line that documents the
allowed format ("Valid addresses: 56 chars, starts with 'G', base32 encoded, not
a placeholder") to preserve the format requirements for stellarAddress.
B-013_PR_CHECKLIST.md (1)

43-43: Use "sign in" (verb) consistently.

"signin" isn't a standard English compound; the verb form is "sign in" (and the noun is usually "sign-in"). Same nit at the post-deployment monitoring section in B-013_FIX_SUMMARY.md.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@B-013_PR_CHECKLIST.md` at line 43, Replace the incorrect compound "signin"
with the verb form "sign in" across the checklist and summary; specifically
update the checklist line "Test signin with existing user" to "Test sign in with
existing user" and also fix the matching occurrences of "signin" in
B-013_FIX_SUMMARY.md (and any other instances in these docs) to use "sign in"
(or "sign-in" only where used as a noun) so terminology is consistent.
scripts/validateStellarAddresses.js (2)

55-84: Redundant checks produce noisy multi-issue reports.

isValidStellarAddress already returns false for any of: non-string, length ≠ 56, missing G prefix, or failing StrKey.isValidEd25519PublicKey (lines 22–27). Re-running length/startsWith checks separately (lines 59–65) means a single bad address is reported with three overlapping issues (Invalid length, Does not start with 'G', Fails StrKey.isValidEd25519PublicKey validation), making the operator output harder to triage. Either drop the granular checks and rely on isValidStellarAddress, or invert: only run the granular checks when isValidStellarAddress returns false to yield a single, descriptive reason.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/validateStellarAddresses.js` around lines 55 - 84, The loop currently
reports overlapping issues because it always runs granular checks as well as
isValidStellarAddress; change the logic so you first call
isValidStellarAddress(address) and only when that returns false run the granular
checks (length, startsWith, placeholder) to collect specific reasons; keep the
isPlaceholderAddress(address) check inside that invalid branch and then push a
single invalidAddresses entry when issues.length>0; reference the
usersWithAddress loop, isValidStellarAddress, isPlaceholderAddress, and
invalidAddresses to locate where to reorder/remove the redundant standalone
length/startsWith checks.

1-126: Duplicate of scripts/validateWithPrisma.ts.

This .js script reimplements the same validation flow as scripts/validateWithPrisma.ts (Prisma findMany → length/prefix/StrKey/placeholder checks → diagnostic report) and also re-defines isValidStellarAddress/placeholder regexes that already live in src/utils/stellar.ts and src/services/wallet/walletService.ts. Maintaining four parallel implementations (here, the TS variant, the SQL script, and the migration's DO block) makes drift between them very likely — and the SQL/migration mismatch flagged elsewhere in this review is exactly that kind of drift.

Recommendation: keep one canonical validator (the TS one, since it can import from src/utils/stellar.ts) and delete this .js duplicate. If a no-build entrypoint is needed, document npx tsx scripts/validateWithPrisma.ts instead.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/validateStellarAddresses.js` around lines 1 - 126, This script
duplicates logic that already exists in src/utils/stellar.ts and
scripts/validateWithPrisma.ts; remove the redundant file
scripts/validateStellarAddresses.js and any CI/docs references to it, and update
README or developer docs to recommend running the canonical TS validator with
"npx tsx scripts/validateWithPrisma.ts" (or import and reuse
isValidStellarAddress and PLACEHOLDER_PATTERNS from src/utils/stellar.ts in
scripts/validateWithPrisma.ts if needed) so there is a single source of truth
instead of maintaining isValidStellarAddress, PLACEHOLDER_PATTERNS, and
validateStellarAddresses in multiple places.
scripts/validateStellarAddresses.ts (1)

30-44: Consider streaming/pagination for large users tables.

findMany without pagination loads every row with a non-null stellarAddress into memory. For typical user-counts this is fine, but if this is run against a production-scale DB during deployment, consider chunking (e.g., findMany with take/cursor or a raw cursor) so the script doesn't OOM the deploy host.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/validateStellarAddresses.ts` around lines 30 - 44, The current
prisma.user.findMany call in scripts/validateStellarAddresses.ts (producing
usersWithAddress) can OOM for large tables; change it to stream/paginate by
querying in chunks (use findMany with take and cursor or an explicit cursor
loop) and process each page before fetching the next, e.g., loop using a cursor
based on the last fetched id or createdAt until no more rows; ensure you replace
the single findMany call that populates usersWithAddress with a paginated
iterator and preserve the selected fields (id, stellarAddress, username,
createdAt) and existing validation logic.
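The cursor loop described above can be sketched independently of Prisma. This is kept synchronous for illustration; with Prisma, `fetchPage` would be an awaited findMany call using take plus a cursor on the last seen id (with skip: 1 to exclude the cursor row itself):

```typescript
// Chunked iteration over a large table: fetch one page at a time, remember the
// last id, and resume after it, so the whole result set is never in memory.
type Row = { id: string; stellarAddress: string | null };

function forEachRow(
  fetchPage: (cursor: string | null, take: number) => Row[],
  handle: (row: Row) => void,
  pageSize = 2,
): number {
  let cursor: string | null = null;
  let seen = 0;
  for (;;) {
    const page = fetchPage(cursor, pageSize);
    if (page.length === 0) break;
    for (const row of page) handle(row);
    seen += page.length;
    cursor = page[page.length - 1].id; // next page starts after this id
    if (page.length < pageSize) break; // short page means we reached the end
  }
  return seen;
}
```

This requires a stable, unique ordering column (here, id) so that "resume after the cursor" never skips or repeats rows.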
src/services/wallet/walletService.ts (1)

79-98: Extract PLACEHOLDER_PATTERNS and isPlaceholderAddress to a shared module to avoid drift.

The exact same regex list and helper is duplicated in scripts/validateStellarAddresses.ts (lines 13-25). When new placeholder patterns get discovered in the wild, they'll need to be added in both places — and any divergence means the pre-migration validator will accept rows the runtime guard would reject (or vice versa).

Consider colocating with assertValidStellarAddress in src/utils/stellar.ts:

♻️ Proposed refactor

In src/utils/stellar.ts:

const PLACEHOLDER_PATTERNS: readonly RegExp[] = [
  /^GA{55}$/,
  /^GB{55}$/,
  /^G0{55}$/,
  /^GTEST/,
  /^GDUMMY/,
  /^GPLACEHOLDER/,
  /^GXXXXXXXX/,
];

export function isPlaceholderStellarAddress(address: string): boolean {
  if (!address || address.length !== 56) return true;
  return PLACEHOLDER_PATTERNS.some((p) => p.test(address));
}

Then in walletService.ts:

-import { assertValidStellarAddress, isValidStellarAddress } from "../../utils/stellar";
+import {
+  assertValidStellarAddress,
+  isValidStellarAddress,
+  isPlaceholderStellarAddress,
+} from "../../utils/stellar";

And drop the local isPlaceholderAddress (lines 79-98), updating both callsites.

Side note: Line 84 returns true when length !== 56, which conflates "invalid length" with "is a placeholder". It's safe given current call order (always after assertValidStellarAddress), but worth either renaming the function to reflect that or removing the length branch since the assertion already covers it.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/wallet/walletService.ts` around lines 79 - 98, Extract the regex
list and helper into a shared util: move PLACEHOLDER_PATTERNS and an exported
isPlaceholderStellarAddress(address: string) into the existing stellar utility
alongside assertValidStellarAddress, then replace local isPlaceholderAddress
usages (and the duplicate in scripts/validateStellarAddresses.ts) to call the
new isPlaceholderStellarAddress; ensure isPlaceholderStellarAddress either drops
the length check (since assertValidStellarAddress already enforces 56 chars) or
rename the helper to reflect it returns true for invalid-length inputs so
behavior remains consistent with current callers.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@B-013_FIX_SUMMARY.md`:
- Around line 112-118: Update the incorrect documentation paths that reference
prisma/validateStellarAddresses.ts and validateWithPrisma.ts to point to the
actual scripts directory: use scripts/validateStellarAddresses.ts (and
scripts/validateStellarAddresses.js where applicable) and
scripts/validateWithPrisma.ts; search the markdown for the three incorrect
occurrences (the Validation Script section around the heading and the two other
mentions noted at lines ~159 and the Files Modified section) and replace them so
the commands (e.g., npx ts-node scripts/validateStellarAddresses.ts) and file
lists accurately reflect the new locations.

In
`@prisma/migrations/20260426000000_add_stellar_address_validation/migration.sql`:
- Around line 39-57: The ALTER TABLE ADD CONSTRAINT statement
(chk_valid_stellar_address on table users) will take an ACCESS EXCLUSIVE lock
and scan the whole table; replace the single-step ADD CONSTRAINT with the
two-step pattern: add the constraint as NOT VALID so it is enforced for new
INSERT/UPDATE without scanning existing rows, then run ALTER TABLE users
VALIDATE CONSTRAINT chk_valid_stellar_address to perform the full validation
(the existing pre-flight DO block means this validate step should be quick and
use a less disruptive lock).
- Around line 16-36: The pre-check in the DO block currently counts invalid rows
into invalid_count but doesn’t mirror the CHECK constraint’s base32 rule; update
the WHERE used to compute invalid_count (on table users and column
stellar_address) to also treat any value that does NOT match the CHECK’s base32
charset as invalid (i.e., add a predicate such as stellar_address !~
'^[A-Z2-7]{56}$' or equivalent so the SELECT COUNT(*) INTO invalid_count flags
rows that would fail the subsequent ALTER TABLE ... ADD CONSTRAINT); keep the
existing length, leading 'G', and placeholder checks and raise the same RAISE
EXCEPTION if invalid_count > 0.
- Around line 60-62: Migration adds a partial index
idx_users_stellar_address_not_null on users(stellar_address) with WHERE
stellar_address IS NOT NULL which will cause Prisma schema drift because
schema.prisma declares `@unique` on stellarAddress and @@index([stellarAddress],
map: "idx_stellar_address") but Prisma 5.19 does not support partial indexes;
fix by either (A) remove this partial index from the migration.sql and rely on
the existing B-tree index declared by @@index/@unique on stellarAddress, (B) if
you must keep it, convert the migration to create the index CONCURRENTLY and
document that this index is unmanaged by Prisma (manual maintenance required),
or (C) add the partialIndexes preview feature to schema.prisma and declare the
partial index there (only if your Prisma version supports it); reference
idx_users_stellar_address_not_null, stellar_address, the migration.sql CREATE
INDEX statement, and the schema.prisma annotations `@unique` on stellarAddress and
@@index(..., map: "idx_stellar_address") when making the change.

In `@scripts/run_validation.bat`:
- Line 1: The batch script run_validation.bat currently uses LF endings (the
file starts with "@echo off") which can break cmd.exe parsing on Windows;
convert/save the file with CRLF line endings (or run a tool like
unix2dos/dos2unix -i or set git config core.autocrlf true and re-add the file)
so the "@echo off" and any labels/GOTO/parentheses are parsed correctly on
Windows before merging.
- Around line 64-68: The script sets PGPASSWORD globally which leaks the secret
to the parent shell; scope or clear it: wrap the batch logic around setlocal and
endlocal (or alternatively after the psql invocation explicitly clear PGPASSWORD
with set "PGPASSWORD=") so the password is not persisted—apply this change
around the PGPASSWORD assignment and the psql call (the PGPASSWORD variable and
the psql -U %DB_USER% ... -f scripts\validate_stellar_addresses.sql invocation).
- Around line 40-51: The DATABASE_URL parsing is broken: remove the incorrect
substring replacement ("set DATABASE_URL=%DATABASE_URL:DATABASE_URL=%") and stop
manually slicing the URL with fragile FOR/token logic; instead update the FOR
statement to request five tokens (use tokens=1,2,3,4,5) if you keep this
approach, and correctly strip a leading "DATABASE_URL=" or "postgresql://"
prefix before tokenizing so DB_USER, DB_PASS, DB_HOST, DB_PORT, and DB_NAME are
assigned properly; better yet, replace this block entirely by delegating parsing
to PowerShell or by passing DATABASE_URL directly to psql/PG* env vars to
robustly handle optional ports and URL-encoded characters (affecting DB_PASS and
token boundaries).

In `@scripts/validate_stellar_addresses.sql`:
- Around line 23-50: The validator queries are missing the base32 character-set
check used by the migration, so add the same predicate (stellar_address NOT ~
'^[A-Z2-7]{56}$' or equivalent) to the diagnostic WHERE, the COUNT(*) WHERE, and
the CASE that classifies invalid addresses so rows with invalid base32 chars
(lowercase or disallowed digits) are flagged; update the same predicate in the
first WHERE block (the diagnostic SELECT), the second COUNT(*) SELECT, and the
CASE expression referencing stellar_address to mirror the migration's regex
'^[A-Z2-7]{56}$'.

In `@scripts/validateStellarAddresses.ts`:
- Around line 1-6: Update the script usage documentation to point to the correct
path: change the header line that currently says "Usage: npx ts-node
prisma/validateStellarAddresses.ts" to reference the actual location
"scripts/validateStellarAddresses.ts" so operators run "npx ts-node
scripts/validateStellarAddresses.ts"; ensure any README or comment block in
validateStellarAddresses.ts is updated accordingly to avoid "Cannot find module"
errors.

In `@scripts/validateWithPrisma.ts`:
- Around line 25-87: The script currently returns normally when invalid
addresses are found; update main() so it fails the process when invalid.length >
0 (instead of falling through) by calling process.exit(1) or returning a
rejected promise; specifically, inside the block that logs invalid addresses
(the else branch that builds the invalid array and prints the report) add a
non-zero exit (e.g., process.exit(1)) after printing the summary so the CI gate
fails; ensure prisma.$disconnect() still runs (keep or move the exit to after
finally or call process.exitCode = 1 before returning) to mirror the behavior
implemented in scripts/validateStellarAddresses.js.

In `@src/services/wallet/walletService.ts`:
- Around line 100-122: setStellarAddressForUser currently overwrites an existing
stellarAddress (and may leave an encryptedStellarSecret stale) and also can
surface Prisma P2025; change it to guard and be atomic by attempting a
conditional update that only succeeds if the user has no
stellarAddress/encryptedStellarSecret, e.g. use prisma.user.updateMany({ where:
{ id: userId, stellarAddress: null, encryptedStellarSecret: null }, data: {
stellarAddress } }) and if the returned count is 0 throw an AppError indicating
the user already has a wallet (or not found), ensuring you still validate via
assertValidStellarAddress/isPlaceholderAddress beforehand and log on success;
this avoids P2025 and prevents overwriting an existing wallet.
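The guarded, atomic write this comment asks for can be sketched like so. `updateMany` here stands in for prisma.user.updateMany (which returns a `{ count }`), and the sketch is synchronous for illustration; the real call would be awaited:

```typescript
// Set stellarAddress only if the user exists AND has no address yet. A count
// of 0 means "not found or wallet already set": no overwrite, no Prisma P2025.
type UpdateArgs = {
  where: { id: string; stellarAddress: null };
  data: { stellarAddress: string };
};

function setAddressOnce(
  updateMany: (args: UpdateArgs) => { count: number },
  userId: string,
  stellarAddress: string,
): void {
  const { count } = updateMany({
    where: { id: userId, stellarAddress: null },
    data: { stellarAddress },
  });
  if (count === 0) {
    throw new Error("User not found or wallet already set");
  }
}
```

Using a conditional updateMany instead of update makes the existence check and the write a single statement, so no concurrent request can slip in between.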

🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 9f202316-316e-4dc8-a422-c029774c6cfc

📥 Commits

Reviewing files that changed from the base of the PR and between 26e9b8c and deb64dd.

📒 Files selected for processing (10)
  • B-013_FIX_SUMMARY.md
  • B-013_PR_CHECKLIST.md
  • prisma/migrations/20260426000000_add_stellar_address_validation/migration.sql
  • prisma/schema.prisma
  • scripts/run_validation.bat
  • scripts/validateStellarAddresses.js
  • scripts/validateStellarAddresses.ts
  • scripts/validateWithPrisma.ts
  • scripts/validate_stellar_addresses.sql
  • src/services/wallet/walletService.ts

Comment thread B-013_FIX_SUMMARY.md
Comment on lines +112 to +118
### 3. Validation Script (`prisma/validateStellarAddresses.ts`)

A standalone script to validate existing data before applying migration:

```bash
npx ts-node prisma/validateStellarAddresses.ts
```

⚠️ Potential issue | 🟡 Minor

Validation script path is wrong.

The doc references prisma/validateStellarAddresses.ts, but the actual files added in this PR live under scripts/ (scripts/validateStellarAddresses.ts, scripts/validateStellarAddresses.js, scripts/validateWithPrisma.ts). The same incorrect path appears at line 159 and in the "Files Modified" section at line 233. Update all three to scripts/... so operators running the documented commands don't get a "module not found" error.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@B-013_FIX_SUMMARY.md` around lines 112 - 118, Update the incorrect
documentation paths that reference prisma/validateStellarAddresses.ts and
validateWithPrisma.ts to point to the actual scripts directory: use
scripts/validateStellarAddresses.ts (and scripts/validateStellarAddresses.js
where applicable) and scripts/validateWithPrisma.ts; search the markdown for the
three incorrect occurrences (the Validation Script section around the heading
and the two other mentions noted at lines ~159 and the Files Modified section)
and replace them so the commands (e.g., npx ts-node
scripts/validateStellarAddresses.ts) and file lists accurately reflect the new
locations.

Comment on lines +16 to +36
  SELECT COUNT(*) INTO invalid_count
  FROM users
  WHERE stellar_address IS NOT NULL
    AND (
      -- Invalid format: wrong length or doesn't start with G
      LENGTH(stellar_address) != 56
      OR stellar_address NOT LIKE 'G%'
      -- Placeholder patterns
      OR stellar_address ~ '^G[A]{55}$'
      OR stellar_address ~ '^G[B]{55}$'
      OR stellar_address ~ '^G[0]{55}$'
      OR stellar_address LIKE 'GTEST%'
      OR stellar_address LIKE 'GDUMMY%'
      OR stellar_address LIKE 'GPLACEHOLDER%'
      OR stellar_address LIKE 'GXXXXXXXX%'
    );

  IF invalid_count > 0 THEN
    RAISE EXCEPTION 'Found % users with invalid stellar_address format. Please clean up data before applying constraint.', invalid_count;
  END IF;
END $$;

⚠️ Potential issue | 🟠 Major

Pre-check predicate must match the CHECK constraint exactly.

The pre-flight scan (lines 19–31) does not include the ^[A-Z2-7]{56}$ base32 charset rule that the CHECK adds on line 47. If a row is 56 chars, starts with G, isn't a known placeholder, but contains a non-base32 character (e.g., lowercase, 0, 1, 8, 9), the DO block will pass and the next statement (ALTER TABLE ... ADD CONSTRAINT) will fail with a generic check-constraint violation that does not enumerate the offending row. Mirror the constraint's predicates here so the friendlier RAISE EXCEPTION message wins.

🛠️ Suggested addition
   FROM users
   WHERE stellar_address IS NOT NULL
     AND (
       -- Invalid format: wrong length or doesn't start with G
       LENGTH(stellar_address) != 56
       OR stellar_address NOT LIKE 'G%'
+      OR stellar_address !~ '^[A-Z2-7]{56}$'
       -- Placeholder patterns
       OR stellar_address ~ '^G[A]{55}$'
       ...
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
SELECT COUNT(*) INTO invalid_count
FROM users
WHERE stellar_address IS NOT NULL
  AND (
    -- Invalid format: wrong length or doesn't start with G
    LENGTH(stellar_address) != 56
    OR stellar_address NOT LIKE 'G%'
    OR stellar_address !~ '^[A-Z2-7]{56}$'
    -- Placeholder patterns
    OR stellar_address ~ '^G[A]{55}$'
    OR stellar_address ~ '^G[B]{55}$'
    OR stellar_address ~ '^G[0]{55}$'
    OR stellar_address LIKE 'GTEST%'
    OR stellar_address LIKE 'GDUMMY%'
    OR stellar_address LIKE 'GPLACEHOLDER%'
    OR stellar_address LIKE 'GXXXXXXXX%'
  );

IF invalid_count > 0 THEN
  RAISE EXCEPTION 'Found % users with invalid stellar_address format. Please clean up data before applying constraint.', invalid_count;
END IF;
END $$;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@prisma/migrations/20260426000000_add_stellar_address_validation/migration.sql`
around lines 16 - 36, The pre-check in the DO block currently counts invalid
rows into invalid_count but doesn’t mirror the CHECK constraint’s base32 rule;
update the WHERE used to compute invalid_count (on table users and column
stellar_address) to also treat any value that does NOT match the CHECK’s base32
charset as invalid (i.e., add a predicate such as stellar_address !~
'^[A-Z2-7]{56}$' or equivalent so the SELECT COUNT(*) INTO invalid_count flags
rows that would fail the subsequent ALTER TABLE ... ADD CONSTRAINT); keep the
existing length, leading 'G', and placeholder checks and raise the same RAISE
EXCEPTION if invalid_count > 0.
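The combined predicate can also be mirrored in application code. A minimal TypeScript sketch of the same format rules (function and constant names here are illustrative, not from the PR; it checks format only and is not a substitute for a StrKey checksum check):

```typescript
// Illustrative mirror of the migration's format rules (names are hypothetical).
const PLACEHOLDER_PREFIXES = ["GTEST", "GDUMMY", "GPLACEHOLDER", "GXXXXXXXX"];

function isPlaceholderAddress(addr: string): boolean {
  // All-As, all-Bs, or all-zeros after the leading G, plus known prefixes.
  if (/^G[A]{55}$/.test(addr) || /^G[B]{55}$/.test(addr) || /^G[0]{55}$/.test(addr)) {
    return true;
  }
  return PLACEHOLDER_PREFIXES.some((p) => addr.startsWith(p));
}

function hasValidStellarFormat(addr: string): boolean {
  return (
    addr.length === 56 &&
    addr.startsWith("G") &&
    /^[A-Z2-7]{56}$/.test(addr) && // base32 charset -- the predicate the pre-check was missing
    !isPlaceholderAddress(addr)
  );
}
```

As in the SQL, a 56-character string starting with G but containing lowercase letters or 0/1/8/9 fails only because of the base32 charset test, which is exactly why the pre-check needs that predicate too.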

Comment on lines +39 to +57
ALTER TABLE users
ADD CONSTRAINT chk_valid_stellar_address
CHECK (
  stellar_address IS NULL
  OR (
    -- Valid format: 56 characters, starts with G, base32 characters only
    LENGTH(stellar_address) = 56
    AND stellar_address LIKE 'G%'
    AND stellar_address ~ '^[A-Z2-7]{56}$'
    -- Not a placeholder
    AND stellar_address !~ '^G[A]{55}$'
    AND stellar_address !~ '^G[B]{55}$'
    AND stellar_address !~ '^G[0]{55}$'
    AND stellar_address NOT LIKE 'GTEST%'
    AND stellar_address NOT LIKE 'GDUMMY%'
    AND stellar_address NOT LIKE 'GPLACEHOLDER%'
    AND stellar_address NOT LIKE 'GXXXXXXXX%'
  )
);

⚠️ Potential issue | 🟠 Major

ADD CONSTRAINT will take an ACCESS EXCLUSIVE lock and scan the full table.

In production, ALTER TABLE users ADD CONSTRAINT … CHECK (…) validates every existing row while holding ACCESS EXCLUSIVE on users, blocking all reads and writes for the duration of the scan. On a large users table this can be a noticeable outage. Consider the standard two-step pattern:

ALTER TABLE users
  ADD CONSTRAINT chk_valid_stellar_address
  CHECK (...) NOT VALID;

ALTER TABLE users VALIDATE CONSTRAINT chk_valid_stellar_address;

NOT VALID adds the constraint without scanning existing rows (it still applies to new INSERT/UPDATE), and VALIDATE CONSTRAINT only takes a SHARE UPDATE EXCLUSIVE lock during the scan, allowing concurrent reads and writes. The pre-flight DO block above already guarantees existing rows comply, so validation should be quick and non-blocking.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@prisma/migrations/20260426000000_add_stellar_address_validation/migration.sql`
around lines 39 - 57, The ALTER TABLE ADD CONSTRAINT statement
(chk_valid_stellar_address on table users) will take an ACCESS EXCLUSIVE lock
and scan the whole table; replace the single-step ADD CONSTRAINT with the
two-step pattern: add the constraint as NOT VALID so it is enforced for new
INSERT/UPDATE without scanning existing rows, then run ALTER TABLE users
VALIDATE CONSTRAINT chk_valid_stellar_address to perform the full validation
(the existing pre-flight DO block means this validate step should be quick and
use a less disruptive lock).

Comment on lines +60 to +62
CREATE INDEX IF NOT EXISTS idx_users_stellar_address_not_null
ON users (stellar_address)
WHERE stellar_address IS NOT NULL;

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

Does Prisma 5.19 support partial indexes (WHERE clauses) in @@index without preview features?

💡 Result:

No. Partial (filtered) indexes in Prisma’s model-level @@index / @@unique support require the partialIndexes preview feature; without it, Prisma does not support where predicates for @@index in schema, and migrations/diffing will treat partial indexes as unsupported drift. Prisma’s docs state that the where argument on @unique, @@unique, and @@index “requires the partialIndexes Preview feature.” So in Prisma 5.19 specifically: there is no @@index(..., where: ...) support without enabling previewFeatures = ["partialIndexes"] (and in fact the feature was introduced later, in Prisma 7.4.0 behind that preview flag).


🏁 Script executed:

# First, let's check if schema.prisma exists and examine its contents
fd -t f "schema.prisma" && wc -l prisma/schema.prisma

Repository: Pi-Defi-world/acbu-backend

Length of output: 116


🏁 Script executed:

# Search for stellar address related definitions in schema.prisma
rg -n "stellarAddress|stellar_address" prisma/schema.prisma -A 2 -B 2

Repository: Pi-Defi-world/acbu-backend

Length of output: 1056


🏁 Script executed:

# Also check the migration file context to understand what's being created
cat -n prisma/migrations/20260426000000_add_stellar_address_validation/migration.sql | head -70

Repository: Pi-Defi-world/acbu-backend

Length of output: 2647


Unmanaged partial index will cause schema drift; partial indexes not supported in Prisma 5.19 without preview features.

schema.prisma declares @unique on stellarAddress (line 36) and @@index([stellarAddress], map: "idx_stellar_address") (line 74). The migration adds a partial index (WHERE stellar_address IS NOT NULL), which filters the index to non-NULL values—a legitimate query optimization distinct from the regular index. However, because Prisma 5.19 does not support partial indexes in @@index without the partialIndexes preview feature, this index cannot be declared in schema.prisma and will be treated as drift on the next prisma migrate dev or prisma db pull.

Options:

  • Drop the partial index if query performance on non-NULL stellarAddress is acceptable with the existing B-tree index.
  • Accept the index and document the drift: Keep this raw SQL and exclude it from Prisma's schema management (note that it must be manually maintained).
  • Enable partialIndexes preview feature in schema.prisma and declare the index there (requires previewFeatures = ["partialIndexes"]), though this feature was added in Prisma 7.4.0 and may not apply to Prisma 5.19 depending on backport status.

If retained, use CREATE INDEX CONCURRENTLY to avoid locking the users table during writes.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@prisma/migrations/20260426000000_add_stellar_address_validation/migration.sql`
around lines 60 - 62, Migration adds a partial index
idx_users_stellar_address_not_null on users(stellar_address) with WHERE
stellar_address IS NOT NULL which will cause Prisma schema drift because
schema.prisma declares `@unique` on stellarAddress and @@index([stellarAddress],
map: "idx_stellar_address") but Prisma 5.19 does not support partial indexes;
fix by either (A) remove this partial index from the migration.sql and rely on
the existing B-tree index declared by @@index/@unique on stellarAddress, (B) if
you must keep it, convert the migration to create the index CONCURRENTLY and
document that this index is unmanaged by Prisma (manual maintenance required),
or (C) add the partialIndexes preview feature to schema.prisma and declare the
partial index there (only if your Prisma version supports it); reference
idx_users_stellar_address_not_null, stellar_address, the migration.sql CREATE
INDEX statement, and the schema.prisma annotations `@unique` on stellarAddress and
@@index(..., map: "idx_stellar_address") when making the change.

@@ -0,0 +1,80 @@
@echo off

⚠️ Potential issue | 🟠 Major

Convert file to CRLF line endings.

The batch file uses Unix line endings (LF), which can cause Windows cmd.exe to misparse labels, parentheses, and GOTO targets at 512-byte block boundaries. Save the file with CRLF line endings (or set git config core.autocrlf true and re-add) before merging.

🧰 Tools
🪛 Blinter (1.0.112)

[error] 1-1: Unix line endings detected. Explanation: Batch file uses Unix line endings (LF-only) which can cause GOTO/CALL label parsing failures and script malfunction due to Windows batch parser 512-byte boundary bugs. Recommendation: Convert file to Windows line endings (CRLF). Use tools like dos2unix, notepad++, or configure git with 'git config core.autocrlf true'. Context: File uses Unix line endings (LF-only) - 80 LF sequences found

(E018)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/run_validation.bat` at line 1, The batch script run_validation.bat
currently uses LF endings (the file starts with "@echo off") which can break
cmd.exe parsing on Windows; convert/save the file with CRLF line endings (or run
a tool like unix2dos/dos2unix -i or set git config core.autocrlf true and re-add
the file) so the "@echo off" and any labels/GOTO/parentheses are parsed
correctly on Windows before merging.
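Beyond editor or git settings, the conversion itself is a one-line transform. A hedged TypeScript sketch (helper name is illustrative) that normalizes LF or mixed endings to CRLF without doubling pairs that are already correct:

```typescript
// Illustrative helper: convert LF or mixed line endings to CRLF.
// The /\r?\n/ pattern matches both "\n" and "\r\n", so existing CRLF
// pairs stay a single "\r\n" rather than becoming "\r\r\n".
function toCrlf(text: string): string {
  return text.replace(/\r?\n/g, "\r\n");
}
```

Applied to a file (e.g. `fs.writeFileSync(path, toCrlf(fs.readFileSync(path, "utf8")))`), this is the same result unix2dos or a CRLF re-save would produce, and it is idempotent, so running it on an already-converted file is harmless.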

Comment on lines +64 to +68
REM Set password for psql
set PGPASSWORD=%DB_PASS%

REM Run validation script
psql -U %DB_USER% -h %DB_HOST% -p %DB_PORT% -d %DB_NAME% -f scripts\validate_stellar_addresses.sql

⚠️ Potential issue | 🟡 Minor

PGPASSWORD leaks into the parent shell.

Setting PGPASSWORD without setlocal causes the password to persist in the user's environment after the script exits. Wrap the script in setlocal/endlocal (or explicitly set "PGPASSWORD=" after the psql call) so the secret is scoped to this run only.

🔒 Suggested change
 @echo off
+setlocal
 REM B-013: Validate Stellar Addresses
@@
-pause
+set "PGPASSWORD="
+pause
+endlocal
🧰 Tools
🪛 Blinter (1.0.112)

[error] 65-65: Plain text credentials detected. Explanation: Hardcoded passwords and credentials in scripts pose serious security risks. Recommendation: Use secure credential storage or prompt for credentials at runtime. Context: Potential hardcoded credentials detected

(SEC008)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/run_validation.bat` around lines 64 - 68, The script sets PGPASSWORD
globally which leaks the secret to the parent shell; scope or clear it: wrap the
batch logic around setlocal and endlocal (or alternatively after the psql
invocation explicitly clear PGPASSWORD with set "PGPASSWORD=") so the password
is not persisted—apply this change around the PGPASSWORD assignment and the psql
call (the PGPASSWORD variable and the psql -U %DB_USER% ... -f
scripts\validate_stellar_addresses.sql invocation).

Comment on lines +23 to +50
WHERE stellar_address IS NOT NULL
  AND (
    LENGTH(stellar_address) != 56
    OR stellar_address NOT LIKE 'G%'
    OR stellar_address ~ '^G[A]{55}$'
    OR stellar_address ~ '^G[B]{55}$'
    OR stellar_address ~ '^G[0]{55}$'
    OR stellar_address LIKE 'GTEST%'
    OR stellar_address LIKE 'GDUMMY%'
    OR stellar_address LIKE 'GPLACEHOLDER%'
    OR stellar_address LIKE 'GXXXXXXXX%'
  );

-- Count of invalid addresses
SELECT COUNT(*) as invalid_address_count
FROM users
WHERE stellar_address IS NOT NULL
  AND (
    LENGTH(stellar_address) != 56
    OR stellar_address NOT LIKE 'G%'
    OR stellar_address ~ '^G[A]{55}$'
    OR stellar_address ~ '^G[B]{55}$'
    OR stellar_address ~ '^G[0]{55}$'
    OR stellar_address LIKE 'GTEST%'
    OR stellar_address LIKE 'GDUMMY%'
    OR stellar_address LIKE 'GPLACEHOLDER%'
    OR stellar_address LIKE 'GXXXXXXXX%'
  );

⚠️ Potential issue | 🟠 Major

Validator under-reports vs. migration's CHECK constraint.

prisma/migrations/20260426000000_add_stellar_address_validation/migration.sql (line 47) enforces stellar_address ~ '^[A-Z2-7]{56}$', but neither the diagnostic WHERE (lines 23–34) nor the count WHERE (lines 39–50) include that base32 character-set check. A row that is 56 chars, starts with G, isn't a known placeholder, but contains lowercase letters or 0/1/8/9 will be reported as valid here yet still cause ALTER TABLE … ADD CONSTRAINT to fail. Operators following the documented "run validator → expect 0 → apply migration" workflow will be surprised in production.

🛠️ Add the base32 charset predicate to both queries (and the CASE)
   CASE 
     WHEN LENGTH(stellar_address) != 56 THEN 'Invalid length: ' || LENGTH(stellar_address) || ' (expected 56)'
     WHEN stellar_address NOT LIKE 'G%' THEN 'Does not start with G'
+    WHEN stellar_address !~ '^[A-Z2-7]{56}$' THEN 'Contains non-base32 characters'
     WHEN stellar_address ~ '^G[A]{55}$' THEN 'Placeholder pattern: All As'
     ...
   END as issue
 FROM users
 WHERE stellar_address IS NOT NULL
   AND (
     LENGTH(stellar_address) != 56
     OR stellar_address NOT LIKE 'G%'
+    OR stellar_address !~ '^[A-Z2-7]{56}$'
     OR stellar_address ~ '^G[A]{55}$'
     ...
   );

Apply the same addition to the COUNT(*) query at lines 39–50.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/validate_stellar_addresses.sql` around lines 23 - 50, The validator
queries are missing the base32 character-set check used by the migration, so add
the same predicate (stellar_address NOT ~ '^[A-Z2-7]{56}$' or equivalent) to the
diagnostic WHERE, the COUNT(*) WHERE, and the CASE that classifies invalid
addresses so rows with invalid base32 chars (lowercase or disallowed digits) are
flagged; update the same predicate in the first WHERE block (the diagnostic
SELECT), the second COUNT(*) SELECT, and the CASE expression referencing
stellar_address to mirror the migration's regex '^[A-Z2-7]{56}$'.

Comment on lines +1 to +6
/**
* B-013: Validate existing stellar addresses in the database
* Run this script BEFORE applying the migration to ensure no invalid addresses exist
*
* Usage: npx ts-node prisma/validateStellarAddresses.ts
*/

⚠️ Potential issue | 🟡 Minor

Doc usage path is wrong.

The header says npx ts-node prisma/validateStellarAddresses.ts, but this file lives under scripts/. Operators following the documented command will get "Cannot find module".

📝 Proposed fix
- * Usage: npx ts-node prisma/validateStellarAddresses.ts
+ * Usage: npx ts-node scripts/validateStellarAddresses.ts
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
/**
 * B-013: Validate existing stellar addresses in the database
 * Run this script BEFORE applying the migration to ensure no invalid addresses exist
 *
 * Usage: npx ts-node scripts/validateStellarAddresses.ts
 */
🧰 Tools
🪛 ESLint

[error] 4-4: Delete ·

(prettier/prettier)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/validateStellarAddresses.ts` around lines 1 - 6, Update the script
usage documentation to point to the correct path: change the header line that
currently says "Usage: npx ts-node prisma/validateStellarAddresses.ts" to
reference the actual location "scripts/validateStellarAddresses.ts" so operators
run "npx ts-node scripts/validateStellarAddresses.ts"; ensure any README or
comment block in validateStellarAddresses.ts is updated accordingly to avoid
"Cannot find module" errors.

Comment on lines +25 to +87
async function main() {
  console.log("🔍 B-013: Validating Stellar Addresses\n");

  try {
    const users = await prisma.user.findMany({
      where: { stellarAddress: { not: null } },
      select: {
        id: true,
        username: true,
        stellarAddress: true,
        createdAt: true,
      },
    });

    console.log(`Found ${users.length} users with stellar addresses\n`);

    const invalid: any[] = [];

    for (const user of users) {
      const issues: string[] = [];
      const addr = user.stellarAddress!;

      if (addr.length !== 56) issues.push(`Wrong length: ${addr.length}`);
      if (!addr.startsWith("G")) issues.push("Doesn't start with G");
      if (!isValidStellarAddress(addr)) issues.push("Invalid StrKey checksum");
      if (isPlaceholder(addr)) issues.push("Placeholder pattern");

      if (issues.length > 0) {
        invalid.push({ ...user, issues });
      }
    }

    if (invalid.length === 0) {
      console.log("✅ All stellar addresses are valid!");
      console.log("✅ Safe to apply migration\n");
    } else {
      console.log(`❌ Found ${invalid.length} invalid address(es):\n`);
      invalid.forEach((u) => {
        console.log(`User: ${u.id}`);
        console.log(`Username: ${u.username || "(none)"}`);
        console.log(`Address: ${u.stellarAddress}`);
        console.log(`Issues: ${u.issues.join(", ")}`);
        console.log("---\n");
      });
      console.log("⚠️ Fix these before deploying migration\n");
    }

    console.log("Summary:");
    console.log(`  Total with addresses: ${users.length}`);
    console.log(`  Valid: ${users.length - invalid.length}`);
    console.log(`  Invalid: ${invalid.length}`);
  } catch (error: any) {
    console.error("❌ Error:", error.message);
    if (error.message.includes("DATABASE_URL")) {
      console.error("\nPlease set DATABASE_URL in .env file");
    }
    process.exit(1);
  } finally {
    await prisma.$disconnect();
  }
}

main();

⚠️ Potential issue | 🔴 Critical

Script exits 0 even when invalid addresses are found.

main() only calls process.exit(1) from the catch block (line 81); when invalid.length > 0 it prints the report and falls through to a normal return, so the script exits with code 0. That defeats its purpose as a pre-deployment gate — CI / run_validation.bat / a &&-chained prisma migrate deploy will happily proceed despite invalid rows. The sibling scripts/validateStellarAddresses.js correctly uses the boolean return to set the exit code (lines 118–121); please mirror that here.

🛠️ Suggested fix
-    if (invalid.length === 0) {
+    let hasInvalid = false;
+    if (invalid.length === 0) {
       console.log("✅ All stellar addresses are valid!");
       console.log("✅ Safe to apply migration\n");
     } else {
+      hasInvalid = true;
       console.log(`❌ Found ${invalid.length} invalid address(es):\n`);
       ...
       console.log("⚠️  Fix these before deploying migration\n");
     }
 
     console.log("Summary:");
     console.log(`  Total with addresses: ${users.length}`);
     console.log(`  Valid: ${users.length - invalid.length}`);
     console.log(`  Invalid: ${invalid.length}`);
+
+    if (hasInvalid) process.exit(1);
   } catch (error: any) {
     ...
   } finally {
     await prisma.$disconnect();
   }
 }
 
-main();
+main().catch((e) => {
+  console.error(e);
+  process.exit(1);
+});
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
async function main() {
  console.log("🔍 B-013: Validating Stellar Addresses\n");

  try {
    const users = await prisma.user.findMany({
      where: { stellarAddress: { not: null } },
      select: {
        id: true,
        username: true,
        stellarAddress: true,
        createdAt: true,
      },
    });

    console.log(`Found ${users.length} users with stellar addresses\n`);

    const invalid: any[] = [];

    for (const user of users) {
      const issues: string[] = [];
      const addr = user.stellarAddress!;

      if (addr.length !== 56) issues.push(`Wrong length: ${addr.length}`);
      if (!addr.startsWith("G")) issues.push("Doesn't start with G");
      if (!isValidStellarAddress(addr)) issues.push("Invalid StrKey checksum");
      if (isPlaceholder(addr)) issues.push("Placeholder pattern");

      if (issues.length > 0) {
        invalid.push({ ...user, issues });
      }
    }

    let hasInvalid = false;
    if (invalid.length === 0) {
      console.log("✅ All stellar addresses are valid!");
      console.log("✅ Safe to apply migration\n");
    } else {
      hasInvalid = true;
      console.log(`❌ Found ${invalid.length} invalid address(es):\n`);
      invalid.forEach((u) => {
        console.log(`User: ${u.id}`);
        console.log(`Username: ${u.username || "(none)"}`);
        console.log(`Address: ${u.stellarAddress}`);
        console.log(`Issues: ${u.issues.join(", ")}`);
        console.log("---\n");
      });
      console.log("⚠️ Fix these before deploying migration\n");
    }

    console.log("Summary:");
    console.log(`  Total with addresses: ${users.length}`);
    console.log(`  Valid: ${users.length - invalid.length}`);
    console.log(`  Invalid: ${invalid.length}`);

    if (hasInvalid) process.exit(1);
  } catch (error: any) {
    console.error("❌ Error:", error.message);
    if (error.message.includes("DATABASE_URL")) {
      console.error("\nPlease set DATABASE_URL in .env file");
    }
    process.exit(1);
  } finally {
    await prisma.$disconnect();
  }
}

main().catch((e) => {
  console.error(e);
  process.exit(1);
});
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/validateWithPrisma.ts` around lines 25 - 87, The script currently
returns normally when invalid addresses are found; update main() so it fails the
process when invalid.length > 0 (instead of falling through) by calling
process.exit(1) or returning a rejected promise; specifically, inside the block
that logs invalid addresses (the else branch that builds the invalid array and
prints the report) add a non-zero exit (e.g., process.exit(1)) after printing
the summary so the CI gate fails; ensure prisma.$disconnect() still runs (keep
or move the exit to after finally or call process.exitCode = 1 before returning)
to mirror the behavior implemented in scripts/validateStellarAddresses.js.
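Whichever variant lands, the gate semantics reduce to mapping the invalid count onto the process exit status. A minimal sketch (the helper name is hypothetical, not from the PR) of that mapping; assigning the result to `process.exitCode` rather than calling `process.exit(1)` is the variant that lets `finally` cleanup such as `prisma.$disconnect()` complete:

```typescript
// Hypothetical helper: a validator used as a CI gate
// ("validate && prisma migrate deploy") must exit non-zero when any
// invalid row was found, so the chained migrate step does not run.
// Assign process.exitCode = exitCodeForReport(invalid.length) after the
// summary instead of calling process.exit(1) mid-function, so pending
// cleanup (e.g. prisma.$disconnect() in a finally block) still runs.
function exitCodeForReport(invalidCount: number): number {
  return invalidCount > 0 ? 1 : 0;
}
```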

Comment thread src/services/wallet/walletService.ts
@Junman140 Junman140 merged commit 4ad13f2 into Pi-Defi-world:dev Apr 28, 2026
1 check passed
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

B-013 — Placeholder / invalid Stellar address on signup

2 participants