Comments

Dev -> main #89

Open

Prometheo wants to merge 543 commits into main from dev

Conversation

@Prometheo (Collaborator) commented Oct 21, 2024

Summary by CodeRabbit

  • New Features

    • Major new v1 API surface: campaigns, grantpicks (rounds/voting), pots, lists and many on-demand sync endpoints.
    • New production deploy workflow and several management commands plus indexer/background tasks.
  • Improvements

    • Accounts now surface NEAR social/profile data and multi‑chain awareness; richer API docs with examples and caching.
    • Pagination moved to page/page_size (default 30); anonymous rate limit increased to 500/min.
    • Admin list views and pot/list UIs enhanced.
  • Bug Fixes

    • More robust transaction/donation handling, token consistency, and migration updates.


@coderabbitai bot left a comment


Actionable comments posted: 4

🤖 Fix all issues with AI agents
In `@campaigns/sync.py`:
- Around line 263-296: The sync endpoints are missing authentication; add
permission_classes = [IsAuthenticated] to the CampaignSyncAPI and
CampaignDonationSyncAPI class definitions so only authenticated users can call
post; also import IsAuthenticated from rest_framework.permissions if not already
imported and ensure the class-level attribute is placed alongside other DRF view
attributes (above the post method) for both sync_campaign_from_chain usage in
CampaignSyncAPI and the corresponding donation sync handler in
CampaignDonationSyncAPI.
- Around line 109-130: The function parse_donation_from_tx currently can raise
binascii.Error when base64 decoding invalid SuccessValue and is annotated to
return dict while returning None; update the signature to -> Optional[dict]
(import Optional from typing) and expand the exception handling in the decoding
block to catch binascii.Error in addition to json.JSONDecodeError and
UnicodeDecodeError, so invalid Base64 payloads are skipped and the function can
safely return None when no donation data is found; ensure callers handle the
nullable return.
- Around line 294-296: Replace the current except handlers that call
logger.error(...) and return str(e) with handlers that call
logger.exception(...) to log full stack traces, and return a generic
Response({"error": "RPC failed"}, status=502) to clients; update both the except
block handling campaign sync (the block referencing campaign_id around the
existing logger.error(f"Error syncing campaign {campaign_id}: {e}") /
Response(...) code) and the similar handler later (the one around lines
~397-399) to use logger.exception(...) and the generic "RPC failed" message.
- Around line 172-175: The defaults dict currently always includes "created_at"
(set to created_ms-derived time or datetime.now()), which causes
update_or_create() to reset creation timestamps when created_ms is missing;
instead, remove "created_at" from the defaults by default and only add it to the
defaults dict when data.get("created_ms") is truthy so that update_or_create()
preserves the existing created_at on updates—modify the code building defaults
in campaigns/sync.py (the dict used with update_or_create()) to conditionally
set "created_at" only when created_ms exists.
🧹 Nitpick comments (3)
api/urls.py (1)

165-186: Inconsistent trailing slash on pot detail endpoint.

Line 166 (v1/pots/<str:pot_id>/) has a trailing slash while other detail endpoints (e.g., v1/lists/<int:list_id>, v1/campaigns/<int:campaign_id>) and pot sub-resource endpoints don't. This inconsistency can cause 301 redirects or 404 errors depending on Django's APPEND_SLASH setting.

-    path("v1/pots/<str:pot_id>/", PotDetailAPI.as_view(), name="pots_api_by_id"),
+    path("v1/pots/<str:pot_id>", PotDetailAPI.as_view(), name="pots_api_by_id"),
campaigns/sync.py (2)

35-35: Use explicit Optional annotations for nullable parameters.

Ruff flags implicit Optional on these signatures; prefer T | None.

♻️ Proposed fix
-def fetch_from_rpc(method_name: str, args: dict = None, contract_id: str = None):
+def fetch_from_rpc(method_name: str, args: dict | None = None, contract_id: str | None = None):
@@
-def sync_donation_from_data(campaign: Campaign, donation_data: dict, tx_hash: str = None) -> CampaignDonation:
+def sync_donation_from_data(campaign: Campaign, donation_data: dict, tx_hash: str | None = None) -> CampaignDonation:

Also applies to: 197-197


250-255: Silence unused created from update_or_create.

Ruff flags the unused variable; rename to _created to avoid lint noise.

♻️ Proposed fix
-    donation, created = CampaignDonation.objects.update_or_create(
+    donation, _created = CampaignDonation.objects.update_or_create(
         on_chain_id=donation_data["id"],
         campaign=campaign,
         donor=donor,
         defaults=donation_defaults,
     )

Comment on lines +109 to +130
def parse_donation_from_tx(tx_result: dict) -> dict:
    """
    Parse donation data from transaction execution result.
    Looks through receipts_outcome to find the SuccessValue containing donation data.
    """
    receipts_outcome = tx_result.get("receipts_outcome", [])

    for outcome in receipts_outcome:
        status = outcome.get("outcome", {}).get("status", {})
        if isinstance(status, dict) and "SuccessValue" in status:
            success_value = status["SuccessValue"]
            if success_value:
                try:
                    decoded = base64.b64decode(success_value).decode()
                    data = json.loads(decoded)
                    # Check if this looks like donation data
                    if isinstance(data, dict) and "donor_id" in data and "total_amount" in data:
                        return data
                except (json.JSONDecodeError, UnicodeDecodeError):
                    continue

    return None

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Scripts executed:

# First, let's check the current state of the file and its imports
head -150 campaigns/sync.py | tail -100

# Also get the imports section
head -20 campaigns/sync.py

# Check if binascii is already imported anywhere
rg "import binascii" campaigns/sync.py

Repository: PotLock/django-indexer
🌐 Web query:

Python base64.b64decode binascii.Error exception handling

💡 Result:

base64.b64decode() can raise binascii.Error mainly for two input problems:

  • Incorrect/invalid padding (e.g., missing/extra =). binascii.Error is raised if the input is incorrectly padded. [1]
  • Invalid characters when you enable strict mode. With validate=True, any non‑Base64 alphabet character causes a binascii.Error; with the default validate=False, such characters are discarded before the padding check. [1]

It may also raise:

  • ValueError if altchars length isn’t exactly 2 (implementation-dependent wording, but documented). [1]
  • TypeError for wrong input types / non-bytes-like altchars. [1]

Typical handling pattern:

import base64, binascii

def safe_b64decode(s: str | bytes) -> bytes | None:
    try:
        return base64.b64decode(s, validate=True)  # strict: fail on invalid chars
    except (binascii.Error, ValueError, TypeError):
        return None

If you want to attempt recovery from missing padding, you can normalize padding before decoding, but still catch binascii.Error because the content may be invalid:

import base64, binascii

def b64decode_with_padding(s: str) -> bytes:
    s = s.strip()
    s += "=" * (-len(s) % 4)  # pad to multiple of 4
    try:
        return base64.b64decode(s, validate=True)
    except binascii.Error as e:
        raise ValueError(f"Invalid base64: {e}") from e

Sources: Python base64.b64decode docs (exceptions, validate behavior, altchars errors). [1]

[1] (docs.python.org)


Handle invalid SuccessValue payloads and reflect nullable return type.

base64.b64decode can raise binascii.Error for invalid padding or non-Base64 characters, which currently bubbles and aborts parsing. Additionally, the function returns None on line 128 but is annotated as -> dict.

🛠️ Proposed fix
+import binascii
@@
-def parse_donation_from_tx(tx_result: dict) -> dict:
+def parse_donation_from_tx(tx_result: dict) -> dict | None:
@@
-                except (json.JSONDecodeError, UnicodeDecodeError):
+                except (json.JSONDecodeError, UnicodeDecodeError, binascii.Error):
                     continue
🤖 Prompt for AI Agents
In `@campaigns/sync.py` around lines 109 - 130, The function
parse_donation_from_tx currently can raise binascii.Error when base64 decoding
invalid SuccessValue and is annotated to return dict while returning None;
update the signature to -> Optional[dict] (import Optional from typing) and
expand the exception handling in the decoding block to catch binascii.Error in
addition to json.JSONDecodeError and UnicodeDecodeError, so invalid Base64
payloads are skipped and the function can safely return None when no donation
data is found; ensure callers handle the nullable return.
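Taken together, the two changes above amount to the following sketch of the hardened parser. Field names and the receipt shape are taken from the snippet under review; treat it as illustrative rather than a drop-in replacement:

```python
import base64
import binascii
import json
from typing import Optional


def parse_donation_from_tx(tx_result: dict) -> Optional[dict]:
    """Return donation data from a tx result, or None if absent or invalid."""
    for outcome in tx_result.get("receipts_outcome", []):
        status = outcome.get("outcome", {}).get("status", {})
        if isinstance(status, dict) and status.get("SuccessValue"):
            try:
                decoded = base64.b64decode(status["SuccessValue"]).decode()
                data = json.loads(decoded)
            except (binascii.Error, json.JSONDecodeError, UnicodeDecodeError):
                # Invalid Base64 padding/characters or a non-JSON payload:
                # skip this receipt instead of letting the error bubble up.
                continue
            if isinstance(data, dict) and "donor_id" in data and "total_amount" in data:
                return data
    return None
```

With this shape, callers must treat the result as nullable, which the `Optional[dict]` annotation now makes explicit.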

Comment on lines +172 to +175
        "created_at": datetime.fromtimestamp(data["created_ms"] / 1000, tz=timezone.utc)
        if data.get("created_ms")
        else datetime.now(tz=timezone.utc),
        "target_amount": str(data["target_amount"]),

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Scripts executed:

find . -type f -name "sync.py" | head -20
cat -n ./campaigns/sync.py | head -200
cat -n ./campaigns/sync.py | sed -n '160,195p'
find . -type f -name "models.py" -path "*/campaigns/*" | head -5
cat -n ./campaigns/models.py | grep -A 100 "class Campaign"
rg "created_ms" --type py -B 2 -A 2
cat -n ./indexer_app/utils.py | sed -n '1,150p'
rg "get_campaign" --type py -B 3 -A 10
rg "created_ms" --type py -B 5 -A 5 | grep -A 10 -B 10 "get_or_create\|update_or_create" | head -60
find . -type f -name "test*.py" -o -name "*test.py" | head -10
cat -n ./campaigns/tests.py
rg "created_ms" --type py -C 2 | grep -E "(get\(|if|missing|None|default)" -B 2 -A 2
cat -n ./campaigns/management/commands/populatecampaigndata.py | sed -n '1,150p'
cat -n ./campaigns/management/commands/populatecampaigndata.py | sed -n '150,230p'
rg "created_ms" --type py -B 10 | grep -E "(Optional|nullable|can be|might|may|optional)" -i

python3 << 'EOF'
# Let's check Django's update_or_create behavior with defaults
# and understand how it handles missing fields

# Key behaviors to understand:
# 1. update_or_create(on_chain_id=X, defaults={...})
# 2. If object exists (UPDATE): only fields in defaults are updated
# 3. If object doesn't exist (CREATE): defaults dict + lookup params create the object
# 4. For null=False fields on CREATE: they must be in defaults or have a model default

print("Django update_or_create behavior:")
print("==============================")
print("UPDATE case:")
print("  - If created_at is in defaults: it will be UPDATED to datetime.now()")
print("  - If created_at is NOT in defaults: it will be PRESERVED (not changed)")
print()
print("CREATE case:")
print("  - If created_at is in defaults: it will use that value")
print("  - If created_at is NOT in defaults and null=False: FAIL (no value for required field)")
print()
print("Review concern VALID:")
print("  - Current code ALWAYS includes created_at in defaults")
print("  - On UPDATE without created_ms: sets to datetime.now() (overwrites historical data)")
print("  - Proposed fix: Conditionally include created_at only when provided")
print("    * On UPDATE without created_ms: field is preserved ✓")
print("    * On CREATE without created_ms: field would fail (needs model default or guarantee) ⚠")
EOF


Don't reset created_at to "now" on every sync—preserve the historical creation timestamp.

The current code always sets created_at in the defaults dict, which means update_or_create() will overwrite it to datetime.now() on every update if created_ms is missing. This erases historical data. Only include created_at in defaults when created_ms is provided, allowing the field to remain unchanged during updates:

🛠️ Proposed fix
-        "created_at": datetime.fromtimestamp(data["created_ms"] / 1000, tz=timezone.utc)
-        if data.get("created_ms")
-        else datetime.now(tz=timezone.utc),
         "target_amount": str(data["target_amount"]),

Then conditionally set it after the dict:

+    if data.get("created_ms"):
+        campaign_defaults["created_at"] = datetime.fromtimestamp(
+            data["created_ms"] / 1000, tz=timezone.utc
+        )
+    else:
+        # On create, set to now; on update, field is preserved
+        campaign_defaults["created_at"] = datetime.now(tz=timezone.utc)
🤖 Prompt for AI Agents
In `@campaigns/sync.py` around lines 172 - 175, The defaults dict currently always
includes "created_at" (set to created_ms-derived time or datetime.now()), which
causes update_or_create() to reset creation timestamps when created_ms is
missing; instead, remove "created_at" from the defaults by default and only add
it to the defaults dict when data.get("created_ms") is truthy so that
update_or_create() preserves the existing created_at on updates—modify the code
building defaults in campaigns/sync.py (the dict used with update_or_create())
to conditionally set "created_at" only when created_ms exists.
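As a sketch, the conditional defaults could be built by a small helper like the hypothetical `build_campaign_defaults` below (field names come from the diff above; everything else is illustrative). Note that a create without `created_ms` then relies on a model-level default such as `auto_now_add`:

```python
from datetime import datetime, timezone


def build_campaign_defaults(data: dict) -> dict:
    """Hypothetical helper building the defaults dict for update_or_create().

    "created_at" is only included when the chain reports created_ms, so
    update_or_create() on an existing row leaves the stored timestamp alone.
    """
    defaults = {
        "target_amount": str(data["target_amount"]),
        # ... other synced fields elided ...
    }
    if data.get("created_ms"):
        defaults["created_at"] = datetime.fromtimestamp(
            data["created_ms"] / 1000, tz=timezone.utc
        )
    return defaults
```

On update, omitting the key preserves the historical value; on create, the model default fills the gap.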

Comment on lines +263 to +296
class CampaignSyncAPI(APIView):
    """
    Sync a campaign from blockchain to database.

    Called by frontend after user creates or updates a campaign.
    Fetches current state from RPC via get_campaign, creates/updates in DB.
    """

    @extend_schema(
        summary="Sync campaign from blockchain",
        responses={
            200: OpenApiResponse(description="Campaign synced"),
            404: OpenApiResponse(description="Campaign not found on chain"),
            502: OpenApiResponse(description="RPC failed"),
        },
    )
    def post(self, request, campaign_id: int):
        try:
            campaign, created = sync_campaign_from_chain(campaign_id)

            if not campaign:
                return Response({"error": "Campaign not found on chain"}, status=404)

            return Response(
                {
                    "success": True,
                    "message": "Campaign created" if created else "Campaign updated",
                    "on_chain_id": campaign.on_chain_id,
                }
            )

        except Exception as e:
            logger.error(f"Error syncing campaign {campaign_id}: {e}")
            return Response({"error": str(e)}, status=502)

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Scripts executed:

#!/bin/bash
# First, let's explore the repository structure
find . -name "sync.py" -type f | head -20

# Check the campaigns/sync.py file - first get its size
wc -l campaigns/sync.py 2>/dev/null || find . -name "sync.py" -path "*campaigns*"

# Read the campaigns/sync.py file to examine the endpoints
cat -n campaigns/sync.py 2>/dev/null | head -400

# Find urls.py files that might register these endpoints
find . -name "urls.py" -type f | xargs grep -l "CampaignSyncAPI\|CampaignDonationSyncAPI" 2>/dev/null

# Search for where these API views are used/registered
rg "CampaignSyncAPI|CampaignDonationSyncAPI" --type py -A 3 -B 3

# Check for REST_FRAMEWORK settings that might define default authentication
find . -name "settings.py" -type f -exec grep -A 20 "REST_FRAMEWORK" {} +


Add authentication to sync endpoints.

These endpoints write to the DB and hit external RPCs. While throttling is already configured (500 req/min for anonymous users), they lack authentication. Add permission_classes = [IsAuthenticated] to restrict access to authenticated users only.

+from rest_framework.permissions import IsAuthenticated
+
 class CampaignSyncAPI(APIView):
+    permission_classes = [IsAuthenticated]

Also applies to: CampaignDonationSyncAPI (lines 299-399)

🧰 Tools
🪛 Ruff (0.14.14)

[warning] 279-279: Unused method argument: request

(ARG002)


[warning] 294-294: Do not catch blind exception: Exception

(BLE001)


[warning] 295-295: Use logging.exception instead of logging.error

Replace with exception

(TRY400)

🤖 Prompt for AI Agents
In `@campaigns/sync.py` around lines 263 - 296, The sync endpoints are missing
authentication; add permission_classes = [IsAuthenticated] to the
CampaignSyncAPI and CampaignDonationSyncAPI class definitions so only
authenticated users can call post; also import IsAuthenticated from
rest_framework.permissions if not already imported and ensure the class-level
attribute is placed alongside other DRF view attributes (above the post method)
for both sync_campaign_from_chain usage in CampaignSyncAPI and the corresponding
donation sync handler in CampaignDonationSyncAPI.

Comment on lines +294 to +296
        except Exception as e:
            logger.error(f"Error syncing campaign {campaign_id}: {e}")
            return Response({"error": str(e)}, status=502)

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Scripts executed:

cd /root && find . -name "sync.py" -path "*/campaigns/*" -type f
find . -name "sync.py" -path "*/campaigns/*" -type f
wc -l ./campaigns/sync.py
sed -n '290,300p' ./campaigns/sync.py
sed -n '393,399p' ./campaigns/sync.py
sed -n '1,50p' ./campaigns/sync.py
sed -n '270,310p' ./campaigns/sync.py
sed -n '100,200p' ./campaigns/sync.py
grep -n "logger.error\|logger.exception" ./campaigns/sync.py
sed -n '200,250p' ./campaigns/sync.py
sed -n '35,110p' ./campaigns/sync.py
sed -n '360,390p' ./campaigns/sync.py
grep -n "except\|return Response" ./campaigns/sync.py | head -30
sed -n '35,50p' ./campaigns/sync.py


Use logger.exception() and return a generic error message to avoid leaking RPC internals to API clients.

The current code returns str(e) to external clients, which exposes internal RPC error details. The OpenAPI schema documents a generic "RPC failed" response, but the implementation contradicts this by leaking error details (e.g., "RPC error: ..." from fetch_from_rpc()).

Use logger.exception() to capture the full stack trace server-side and return "RPC failed" to the client instead.

Proposed fix
-        except Exception as e:
-            logger.error(f"Error syncing campaign {campaign_id}: {e}")
-            return Response({"error": str(e)}, status=502)
+        except Exception as e:
+            logger.exception(f"Error syncing campaign {campaign_id}")
+            return Response({"error": "RPC failed"}, status=502)
@@
-        except Exception as e:
-            logger.error(f"Error syncing donation for campaign {campaign_id}: {e}")
-            return Response({"error": str(e)}, status=502)
+        except Exception as e:
+            logger.exception(f"Error syncing donation for campaign {campaign_id}")
+            return Response({"error": "RPC failed"}, status=502)

Also applies to: 397-399
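The practical difference between the two logging calls can be demonstrated with the stdlib alone. The `sync_demo` logger and the in-memory buffer here are illustrative scaffolding, not code from the PR:

```python
import io
import logging

# Route log records to a string buffer so the output can be inspected.
logger = logging.getLogger("sync_demo")
buf = io.StringIO()
logger.addHandler(logging.StreamHandler(buf))

try:
    raise RuntimeError("RPC error: upstream timeout")
except RuntimeError:
    logger.error("Error syncing campaign 42")      # message only, no traceback
    logger.exception("Error syncing campaign 42")  # message plus full traceback

output = buf.getvalue()
```

`logger.exception()` logs at ERROR level and appends the active traceback, so the buffer contains the `RuntimeError` and its stack trace exactly once, alongside two copies of the message.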

🧰 Tools
🪛 Ruff (0.14.14)

[warning] 294-294: Do not catch blind exception: Exception

(BLE001)


[warning] 295-295: Use logging.exception instead of logging.error

Replace with exception

(TRY400)

🤖 Prompt for AI Agents
In `@campaigns/sync.py` around lines 294 - 296, Replace the current except
handlers that call logger.error(...) and return str(e) with handlers that call
logger.exception(...) to log full stack traces, and return a generic
Response({"error": "RPC failed"}, status=502) to clients; update both the except
block handling campaign sync (the block referencing campaign_id around the
existing logger.error(f"Error syncing campaign {campaign_id}: {e}") /
Response(...) code) and the similar handler later (the one around lines
~397-399) to use logger.exception(...) and the generic "RPC failed" message.


@coderabbitai bot left a comment


Actionable comments posted: 9

🤖 Fix all issues with AI agents
In `@api/sync.py`:
- Around line 120-198: ListSyncAPI and other unauthenticated sync endpoints
(e.g., DirectDonationSyncAPI and every APIView in this file that accepts
anonymous POSTs) currently allow anonymous RPC calls and DB writes; add
authentication and throttling to prevent abuse: apply appropriate
authentication_classes and permission_classes (e.g., TokenAuthentication and
IsAuthenticated or a service-level permission) on ListSyncAPI and the other sync
APIView classes, and attach throttle_classes (e.g., ScopedRateThrottle or a
custom throttle) with a defined rate scope; additionally, ensure critical DB
writes are wrapped in atomic transactions (use transaction.atomic in methods
like ListSyncAPI.post) and validate the authenticated caller/role before
performing RPC or write operations.
- Line 154: The code uses datetime.fromtimestamp(...) which produces naive
datetimes (e.g., assigning to existing_list.updated_at), causing issues when
USE_TZ=True; change those calls to create timezone-aware datetimes by calling
datetime.fromtimestamp(value / 1000, tz=timezone.utc) and ensure timezone is
imported (e.g., from datetime import timezone) or use the same pattern as in
donations/sync.py; apply the same change to all other datetime.fromtimestamp
usages in this module so DateTimeField values are timezone-aware.
- Around line 169-183: The List is being created with owner_id=data["owner"]
before ensuring the related Account exists, which can violate FK constraints;
move the Account.objects.get_or_create call to before List.objects.create so the
owner Account is guaranteed to exist first (update the order around
Account.objects.get_or_create(id=data["owner"]) and List.objects.create(...),
keeping the same fields and timestamps).

In `@api/urls.py`:
- Line 179: The pot detail route string is inconsistent with other routes;
update the path in api/urls.py from "v1/pots/<str:pot_id>/" to
"v1/pots/<str:pot_id>" so the route without a trailing slash matches the rest of
the API, keeping the view PotDetailAPI.as_view() and name "pots_api_by_id"
unchanged; run URL tests or lint to confirm no other endpoints expect the
trailing slash.

In `@donations/sync.py`:
- Around line 84-205: The endpoint DirectDonationSyncAPI.post is unauthenticated
and unthrottled, allowing anyone to trigger fetch_tx_result and upsert
Account/Token/Donation; add protection by attaching appropriate DRF throttling
and authentication on the view (e.g., set throttle_classes and
permission_classes or authentication_classes on DirectDonationSyncAPI) and
validate requests (e.g., require a signed HMAC or API key parameter and verify
it before calling fetch_tx_result), and ensure expensive RPC calls
(fetch_tx_result) only run after signature/auth success to mitigate abuse and
DoS.
- Line 183: The current dict sets "referrer_fee" using a falsy check which turns
0 into None; update the assignment for "referrer_fee" to only treat
missing/undefined as None (e.g., use an explicit None check) so that
referrer_fee == 0 becomes "0" like protocol_fee; locate the dict entry that uses
the variables referrer_fee and protocol_fee (the line with "referrer_fee":
str(referrer_fee) if referrer_fee else None) and replace the conditional with an
explicit None check (or always stringify) to match protocol_fee's behavior.
- Around line 117-205: The broad except Exception in the sync block (surrounding
fetch_tx_result, parse_donation_from_tx, and Donation.objects.update_or_create)
is treating client/input problems as 502; instead, explicitly catch
input/validation exceptions (e.g., KeyError, ValueError, TypeError) raised while
parsing donation_data (fields like "donated_at_ms", "total_amount", "id") and
while converting ints, and return a 400 Response with the error message; keep a
separate broad except Exception to log unexpected infra/runtime errors and
return 502. Locate the try/except around parse_donation_from_tx, the int()
conversions and update_or_create call, and split the handlers accordingly (catch
KeyError/ValueError/TypeError first -> Response(status=400), then a final
generic except -> logger.error + Response(status=502)).

In `@lists/api.py`:
- Around line 213-217: The code interpolates user-controlled category_param
directly into a regex used in the registrations.filter call
(registrant__near_social_profile_data__plCategories__iregex), which risks ReDoS;
fix by escaping category_param with re.escape() before building
category_regex_pattern so the pattern becomes safe, e.g., compute escaped =
re.escape(category_param) and use that when constructing category_regex_pattern
in the block that checks if category_param (the code handling category_param and
registrations.filter).
- Around line 271-279: The code currently materializes the entire QuerySet via
list(registrations) and uses random.choice, which loads all rows into memory;
update the logic in the view handling "registrations" to let the DB pick a
random row by replacing the list()/random.choice pattern with a DB-level random
selection like registrations.order_by('?').first(), then check for None (if no
registration found) and return the same 404 Response; ensure you update
references to the chosen object (registration) accordingly so subsequent code
uses the object returned by .first().
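The ReDoS fix for lists/api.py boils down to `re.escape()`. The helper below is a hypothetical stand-in for the filter-building code; the exact pattern wrapper placed around the escaped value before it reaches the `__iregex` lookup is not reproduced here:

```python
import re


def safe_category_regex(category_param: str) -> str:
    """Escape a user-supplied category so the iregex filter matches it literally.

    re.escape() neutralises regex metacharacters, so input like "(a+)+$" is
    matched as plain text instead of becoming a nested-quantifier pattern
    with catastrophic-backtracking potential.
    """
    return re.escape(category_param)


# A hostile parameter full of regex metacharacters now matches only literally:
hostile = "(a+)+$"
found = re.search(safe_category_regex(hostile), '["(a+)+$"]', re.IGNORECASE)
```

Without escaping, the same input would be compiled as a nested quantifier, the classic shape exploited for ReDoS.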
🧹 Nitpick comments (5)
donations/sync.py (2)

26-26: Unused constant DONATION_CONTRACT.

DONATION_CONTRACT is defined but never referenced anywhere in this file.

🧹 Proposed fix
-DONATION_CONTRACT = f"donate.{settings.POTLOCK_TLA}"
-
-
 def fetch_tx_result(tx_hash: str, sender_id: str):

29-57: No retry/fallback logic in fetch_tx_result, unlike fetch_from_rpc.

api/sync.py has a well-structured fetch_from_rpc with session retries, multiple RPC fallback endpoints, and detailed logging. fetch_tx_result here uses a bare requests.post against a single endpoint with no retry. Consider reusing the retry/fallback pattern for consistency and resilience.

api/sync.py (2)

36-117: requests.Session is never closed.

The Session created on Line 51 is used across multiple RPC calls but never explicitly closed (no session.close() or with block). While it will eventually be garbage-collected, in a request-heavy environment this can leak connections.

♻️ Proposed fix — use a context manager
-    session = requests.Session()
-    retries = Retry(total=2, backoff_factor=0.5, status_forcelist=[502, 503, 504])
-    session.mount("https://", HTTPAdapter(max_retries=retries))
+    with requests.Session() as session:
+        retries = Retry(total=2, backoff_factor=0.5, status_forcelist=[502, 503, 504])
+        session.mount("https://", HTTPAdapter(max_retries=retries))
+        # ... rest of the function body indented under `with`

196-198: str(e) in error responses may leak sensitive internals.

All exception handlers return {"error": str(e)}. For DB or RPC errors, str(e) can expose connection strings, table names, or RPC URLs. Return a generic message to the client and log the detail server-side.

♻️ Proposed fix (example for ListSyncAPI; apply to all)
         except Exception as e:
             logger.error(f"Error syncing list {list_id}: {e}")
-            return Response({"error": str(e)}, status=502)
+            return Response({"error": "Internal error while syncing list"}, status=502)

Also applies to: 262-264, 331-333, 397-399

lists/api.py (1)

200-203: Remove commented-out code.

Lines 200, 203, 259, and 262 contain commented-out query code. This is dead code that adds noise; if needed later, it can be recovered from version control.

Also applies to: 259-262

Comment on lines +120 to +198
class ListSyncAPI(APIView):
    """
    Sync a list from blockchain to database.

    Called by frontend after user creates a list.
    Fetches current state from RPC, creates/updates in DB.
    """

    @extend_schema(
        summary="Sync list from blockchain",
        responses={
            200: OpenApiResponse(description="List synced"),
            404: OpenApiResponse(description="List not found on chain"),
            502: OpenApiResponse(description="RPC failed"),
        }
    )
    def post(self, request, list_id: int):
        try:
            # Fetch from RPC
            data = fetch_from_rpc("get_list", {"list_id": int(list_id)})

            if not data:
                return Response({"error": "List not found on chain"}, status=404)

            # Check if already exists
            existing_list = List.objects.filter(on_chain_id=int(list_id)).first()

            if existing_list:
                # Update existing list
                existing_list.name = data["name"]
                existing_list.description = data.get("description", "")
                existing_list.cover_image_url = data.get("cover_image_url")
                existing_list.admin_only_registrations = data.get("admin_only_registrations", False)
                existing_list.default_registration_status = data.get("default_registration_status", "Pending")
                existing_list.updated_at = datetime.fromtimestamp(data["updated_at"] / 1000)
                existing_list.save()

                # Update admins
                existing_list.admins.clear()
                for admin_id in data.get("admins", []):
                    admin, _ = Account.objects.get_or_create(id=admin_id)
                    existing_list.admins.add(admin)

                return Response({
                    "success": True,
                    "message": "List updated",
                    "on_chain_id": list_id
                })

            # Create list (on_chain_id is the blockchain ID, id is auto-generated)
            list_obj = List.objects.create(
                on_chain_id=data["id"],
                owner_id=data["owner"],
                name=data["name"],
                description=data.get("description", ""),
                cover_image_url=data.get("cover_image_url"),
                admin_only_registrations=data.get("admin_only_registrations", False),
                default_registration_status=data.get("default_registration_status", "Pending"),
                created_at=datetime.fromtimestamp(data["created_at"] / 1000),
                updated_at=datetime.fromtimestamp(data["updated_at"] / 1000),
            )

            # Create owner account
            Account.objects.get_or_create(id=data["owner"])

            # Add admins
            for admin_id in data.get("admins", []):
                admin, _ = Account.objects.get_or_create(id=admin_id)
                list_obj.admins.add(admin)

            return Response({
                "success": True,
                "message": "List created",
                "on_chain_id": list_obj.on_chain_id
            })

        except Exception as e:
            logger.error(f"Error syncing list {list_id}: {e}")
            return Response({"error": str(e)}, status=502)

⚠️ Potential issue | 🟠 Major

All sync endpoints are unauthenticated — same concern as DirectDonationSyncAPI.

Every APIView in this file accepts anonymous POST requests that trigger outbound RPC calls and DB writes. This enables abuse (DoS via RPC fan-out, data pollution). At minimum, apply throttling; ideally add authentication.

Also applies to: 201-264, 267-333, 336-399
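
The recommendation above can be wired up with DRF's standard classes. This is a configuration sketch, not code from the PR; the `chain_sync` scope name and the `10/min` rate are placeholders to illustrate the shape.

```python
from rest_framework.authentication import TokenAuthentication
from rest_framework.permissions import IsAuthenticated
from rest_framework.throttling import ScopedRateThrottle
from rest_framework.views import APIView

class ListSyncAPI(APIView):
    # Require an authenticated caller and throttle by a named scope
    authentication_classes = [TokenAuthentication]
    permission_classes = [IsAuthenticated]
    throttle_classes = [ScopedRateThrottle]
    throttle_scope = "chain_sync"
    ...

# settings.py — define the rate for the scope
REST_FRAMEWORK = {
    "DEFAULT_THROTTLE_RATES": {
        "chain_sync": "10/min",
    },
}
```

Applying the same attributes to the other sync views in the file covers the whole abuse surface at once.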

🤖 Prompt for AI Agents
In `@api/sync.py` around lines 120 - 198, ListSyncAPI and other unauthenticated
sync endpoints (e.g., DirectDonationSyncAPI and every APIView in this file that
accepts anonymous POSTs) currently allow anonymous RPC calls and DB writes; add
authentication and throttling to prevent abuse: apply appropriate
authentication_classes and permission_classes (e.g., TokenAuthentication and
IsAuthenticated or a service-level permission) on ListSyncAPI and the other sync
APIView classes, and attach throttle_classes (e.g., ScopedRateThrottle or a
custom throttle) with a defined rate scope; additionally, ensure critical DB
writes are wrapped in atomic transactions (use transaction.atomic in methods
like ListSyncAPI.post) and validate the authenticated caller/role before
performing RPC or write operations.

Comment on lines +169 to +183
            # Create list (on_chain_id is the blockchain ID, id is auto-generated)
            list_obj = List.objects.create(
                on_chain_id=data["id"],
                owner_id=data["owner"],
                name=data["name"],
                description=data.get("description", ""),
                cover_image_url=data.get("cover_image_url"),
                admin_only_registrations=data.get("admin_only_registrations", False),
                default_registration_status=data.get("default_registration_status", "Pending"),
                created_at=datetime.fromtimestamp(data["created_at"] / 1000),
                updated_at=datetime.fromtimestamp(data["updated_at"] / 1000),
            )

            # Create owner account
            Account.objects.get_or_create(id=data["owner"])

⚠️ Potential issue | 🔴 Critical

Owner Account is created after the List — FK integrity error if constraint is enforced.

Line 170 sets owner_id=data["owner"] during List.objects.create(...), but the corresponding Account.objects.get_or_create(id=data["owner"]) only happens on Line 183. If the account doesn't already exist and the DB enforces the foreign key, this will raise an IntegrityError.

Move the account creation before the list creation.

🐛 Proposed fix
-            # Create list (on_chain_id is the blockchain ID, id is auto-generated)
+            # Create owner account first (needed for FK)
+            Account.objects.get_or_create(id=data["owner"])
+
+            # Create list (on_chain_id is the blockchain ID, id is auto-generated)
             list_obj = List.objects.create(
                 on_chain_id=data["id"],
                 owner_id=data["owner"],
@@ ...
             )

-            # Create owner account
-            Account.objects.get_or_create(id=data["owner"])
-
             # Add admins
🤖 Prompt for AI Agents
In `@api/sync.py` around lines 169 - 183, The List is being created with
owner_id=data["owner"] before ensuring the related Account exists, which can
violate FK constraints; move the Account.objects.get_or_create call to before
List.objects.create so the owner Account is guaranteed to exist first (update
the order around Account.objects.get_or_create(id=data["owner"]) and
List.objects.create(...), keeping the same fields and timestamps).

-    path("v1/pots", PotsAPI.as_view(), name="pots_api"),
-    path("v1/pots/<str:pot_id>/", PotsAPI.as_view(), name="pots_api_by_id"),
+    path("v1/pots", PotsListAPI.as_view(), name="pots_api"),
+    path("v1/pots/<str:pot_id>/", PotDetailAPI.as_view(), name="pots_api_by_id"),

⚠️ Potential issue | 🟡 Minor

Inconsistent trailing slash on pot detail route.

"v1/pots/<str:pot_id>/" has a trailing slash while almost all other routes do not. This inconsistency may cause 404s if consumers are unaware, unless APPEND_SLASH is enabled.

🤖 Prompt for AI Agents
In `@api/urls.py` at line 179, The pot detail route string is inconsistent with
other routes; update the path in api/urls.py from "v1/pots/<str:pot_id>/" to
"v1/pots/<str:pot_id>" so the route without a trailing slash matches the rest of
the API, keeping the view PotDetailAPI.as_view() and name "pots_api_by_id"
unchanged; run URL tests or lint to confirm no other endpoints expect the
trailing slash.

Comment on lines +84 to +205
class DirectDonationSyncAPI(APIView):
    """
    Sync a direct donation from blockchain to database.

    Called by frontend after a user makes a direct donation.
    Frontend passes the transaction hash, backend parses the donation from tx result.
    """

    @extend_schema(
        summary="Sync a direct donation",
        description="Sync a single direct donation using the transaction hash from the donation response.",
        parameters=[
            OpenApiParameter(
                name="tx_hash",
                description="Transaction hash from the donation transaction",
                required=True,
                type=str,
            ),
            OpenApiParameter(
                name="sender_id",
                description="Account ID of the transaction sender (donor)",
                required=True,
                type=str,
            ),
        ],
        responses={
            200: OpenApiResponse(description="Donation synced"),
            400: OpenApiResponse(description="Missing required parameters"),
            404: OpenApiResponse(description="Donation not found in transaction"),
            502: OpenApiResponse(description="RPC failed"),
        },
    )
    def post(self, request):
        try:
            # Get required parameters
            tx_hash = request.data.get("tx_hash") or request.query_params.get("tx_hash")
            sender_id = request.data.get("sender_id") or request.query_params.get("sender_id")

            if not tx_hash or not sender_id:
                return Response(
                    {"error": "tx_hash and sender_id are required"},
                    status=400,
                )

            # Fetch transaction result and parse donation data
            tx_result = fetch_tx_result(tx_hash, sender_id)
            if not tx_result:
                return Response({"error": "Transaction not found"}, status=404)

            donation_data = parse_donation_from_tx(tx_result)
            if not donation_data:
                return Response(
                    {"error": "Could not parse donation from transaction result"},
                    status=404,
                )

            # Upsert accounts
            donor, _ = Account.objects.get_or_create(
                defaults={"chain_id": 1}, id=donation_data["donor_id"]
            )
            recipient, _ = Account.objects.get_or_create(
                defaults={"chain_id": 1}, id=donation_data["recipient_id"]
            )

            referrer = None
            if donation_data.get("referrer_id"):
                referrer, _ = Account.objects.get_or_create(
                    defaults={"chain_id": 1}, id=donation_data["referrer_id"]
                )

            # Get or create token
            token_id = donation_data.get("ft_id") or "near"
            token_acct, _ = Account.objects.get_or_create(defaults={"chain_id": 1}, id=token_id)
            token, _ = Token.objects.get_or_create(account=token_acct, defaults={"decimals": 24})

            # Parse timestamp
            donated_at = datetime.fromtimestamp(
                donation_data["donated_at_ms"] / 1000, tz=timezone.utc
            )

            # Calculate net_amount if not provided (total - protocol_fee - referrer_fee)
            total_amount = int(donation_data["total_amount"])
            protocol_fee = int(donation_data.get("protocol_fee", 0))
            referrer_fee = int(donation_data.get("referrer_fee", 0) or 0)
            net_amount = donation_data.get("net_amount")
            if net_amount is None:
                net_amount = total_amount - protocol_fee - referrer_fee

            # Create or update donation
            donation_defaults = {
                "donor": donor,
                "recipient": recipient,
                "token": token,
                "total_amount": str(total_amount),
                "net_amount": str(net_amount),
                "message": donation_data.get("message"),
                "donated_at": donated_at,
                "protocol_fee": str(protocol_fee),
                "referrer": referrer,
                "referrer_fee": str(referrer_fee) if referrer_fee else None,
                "matching_pool": False,
                "tx_hash": tx_hash,
            }

            donation, created = Donation.objects.update_or_create(
                on_chain_id=donation_data["id"],
                pot__isnull=True,  # Direct donations have no pot
                defaults=donation_defaults,
            )

            return Response(
                {
                    "success": True,
                    "message": "Donation synced",
                    "donation_id": donation.on_chain_id,
                    "created": created,
                }
            )

        except Exception as e:
            logger.error(f"Error syncing direct donation: {e}")
            return Response({"error": str(e)}, status=502)

⚠️ Potential issue | 🟠 Major

Unauthenticated POST endpoint that writes to the database — potential abuse vector.

DirectDonationSyncAPI has no authentication or rate-limiting. Any anonymous caller can POST arbitrary tx_hash/sender_id pairs, causing the server to make outbound RPC calls and upsert Account, Token, and Donation records. This opens the door to:

  • Denial-of-service via RPC fan-out (each request triggers an outbound RPC call).
  • Data pollution if a valid but unrelated transaction happens to match the donor_id/recipient_id heuristic.

At a minimum, consider adding throttle_classes (DRF throttling) and, ideally, some form of authentication or HMAC verification.
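
A minimal HMAC check could look like the following. This is a sketch under stated assumptions: the `X-Sync-Signature` header name and the shared-secret setting are illustrative, not part of this PR.

```python
import hashlib
import hmac

def verify_signature(shared_secret: bytes, body: bytes, signature_hex: str) -> bool:
    """Return True if signature_hex is a valid HMAC-SHA256 of body under shared_secret."""
    expected = hmac.new(shared_secret, body, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking information through timing differences
    return hmac.compare_digest(expected, signature_hex)

# Hypothetical usage at the top of DirectDonationSyncAPI.post, before any RPC call:
# if not verify_signature(settings.SYNC_SECRET, request.body,
#                         request.headers.get("X-Sync-Signature", "")):
#     return Response({"error": "invalid signature"}, status=403)
```

Rejecting unsigned requests before `fetch_tx_result` also keeps the outbound RPC call behind the check, which addresses the fan-out concern.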

🤖 Prompt for AI Agents
In `@donations/sync.py` around lines 84 - 205, The endpoint
DirectDonationSyncAPI.post is unauthenticated and unthrottled, allowing anyone
to trigger fetch_tx_result and upsert Account/Token/Donation; add protection by
attaching appropriate DRF throttling and authentication on the view (e.g., set
throttle_classes and permission_classes or authentication_classes on
DirectDonationSyncAPI) and validate requests (e.g., require a signed HMAC or API
key parameter and verify it before calling fetch_tx_result), and ensure
expensive RPC calls (fetch_tx_result) only run after signature/auth success to
mitigate abuse and DoS.

Comment on lines +117 to +205
        try:
            # Get required parameters
            tx_hash = request.data.get("tx_hash") or request.query_params.get("tx_hash")
            sender_id = request.data.get("sender_id") or request.query_params.get("sender_id")

            if not tx_hash or not sender_id:
                return Response(
                    {"error": "tx_hash and sender_id are required"},
                    status=400,
                )

            # Fetch transaction result and parse donation data
            tx_result = fetch_tx_result(tx_hash, sender_id)
            if not tx_result:
                return Response({"error": "Transaction not found"}, status=404)

            donation_data = parse_donation_from_tx(tx_result)
            if not donation_data:
                return Response(
                    {"error": "Could not parse donation from transaction result"},
                    status=404,
                )

            # Upsert accounts
            donor, _ = Account.objects.get_or_create(
                defaults={"chain_id": 1}, id=donation_data["donor_id"]
            )
            recipient, _ = Account.objects.get_or_create(
                defaults={"chain_id": 1}, id=donation_data["recipient_id"]
            )

            referrer = None
            if donation_data.get("referrer_id"):
                referrer, _ = Account.objects.get_or_create(
                    defaults={"chain_id": 1}, id=donation_data["referrer_id"]
                )

            # Get or create token
            token_id = donation_data.get("ft_id") or "near"
            token_acct, _ = Account.objects.get_or_create(defaults={"chain_id": 1}, id=token_id)
            token, _ = Token.objects.get_or_create(account=token_acct, defaults={"decimals": 24})

            # Parse timestamp
            donated_at = datetime.fromtimestamp(
                donation_data["donated_at_ms"] / 1000, tz=timezone.utc
            )

            # Calculate net_amount if not provided (total - protocol_fee - referrer_fee)
            total_amount = int(donation_data["total_amount"])
            protocol_fee = int(donation_data.get("protocol_fee", 0))
            referrer_fee = int(donation_data.get("referrer_fee", 0) or 0)
            net_amount = donation_data.get("net_amount")
            if net_amount is None:
                net_amount = total_amount - protocol_fee - referrer_fee

            # Create or update donation
            donation_defaults = {
                "donor": donor,
                "recipient": recipient,
                "token": token,
                "total_amount": str(total_amount),
                "net_amount": str(net_amount),
                "message": donation_data.get("message"),
                "donated_at": donated_at,
                "protocol_fee": str(protocol_fee),
                "referrer": referrer,
                "referrer_fee": str(referrer_fee) if referrer_fee else None,
                "matching_pool": False,
                "tx_hash": tx_hash,
            }

            donation, created = Donation.objects.update_or_create(
                on_chain_id=donation_data["id"],
                pot__isnull=True,  # Direct donations have no pot
                defaults=donation_defaults,
            )

            return Response(
                {
                    "success": True,
                    "message": "Donation synced",
                    "donation_id": donation.on_chain_id,
                    "created": created,
                }
            )

        except Exception as e:
            logger.error(f"Error syncing direct donation: {e}")
            return Response({"error": str(e)}, status=502)

⚠️ Potential issue | 🟠 Major

Broad except Exception swallows client-caused errors as 502.

KeyError (e.g., missing "donated_at_ms", "total_amount", or "id" in donation_data) and ValueError (e.g., non-numeric total_amount) are legitimate input-validation failures and should return 400, not 502. The blanket catch on Line 203 masks these as RPC/server errors.

🐛 Proposed fix — separate input errors from infra errors
+        except KeyError as e:
+            return Response(
+                {"error": f"Missing required field in donation data: {e}"},
+                status=400,
+            )
+        except ValueError as e:
+            return Response(
+                {"error": f"Invalid donation data: {e}"},
+                status=400,
+            )
         except Exception as e:
             logger.error(f"Error syncing direct donation: {e}")
             return Response({"error": str(e)}, status=502)
🤖 Prompt for AI Agents
In `@donations/sync.py` around lines 117 - 205, The broad except Exception in the
sync block (surrounding fetch_tx_result, parse_donation_from_tx, and
Donation.objects.update_or_create) is treating client/input problems as 502;
instead, explicitly catch input/validation exceptions (e.g., KeyError,
ValueError, TypeError) raised while parsing donation_data (fields like
"donated_at_ms", "total_amount", "id") and while converting ints, and return a
400 Response with the error message; keep a separate broad except Exception to
log unexpected infra/runtime errors and return 502. Locate the try/except around
parse_donation_from_tx, the int() conversions and update_or_create call, and
split the handlers accordingly (catch KeyError/ValueError/TypeError first ->
Response(status=400), then a final generic except -> logger.error +
Response(status=502)).

                "donated_at": donated_at,
                "protocol_fee": str(protocol_fee),
                "referrer": referrer,
                "referrer_fee": str(referrer_fee) if referrer_fee else None,

⚠️ Potential issue | 🟡 Minor

referrer_fee stores None when the fee is 0, inconsistent with protocol_fee.

Line 183: "referrer_fee": str(referrer_fee) if referrer_fee else None — when referrer_fee is 0, this evaluates to None. Meanwhile protocol_fee on Line 182 always stores str(protocol_fee) (which would be "0"). This inconsistency could confuse downstream consumers.

Proposed fix
-                "referrer_fee": str(referrer_fee) if referrer_fee else None,
+                "referrer_fee": str(referrer_fee),
🤖 Prompt for AI Agents
In `@donations/sync.py` at line 183, The current dict sets "referrer_fee" using a
falsy check which turns 0 into None; update the assignment for "referrer_fee" to
only treat missing/undefined as None (e.g., use an explicit None check) so that
referrer_fee == 0 becomes "0" like protocol_fee; locate the dict entry that uses
the variables referrer_fee and protocol_fee (the line with "referrer_fee":
str(referrer_fee) if referrer_fee else None) and replace the conditional with an
explicit None check (or always stringify) to match protocol_fee's behavior.

Comment on lines +213 to +217
        if category_param:
            category_regex_pattern = rf'\[.*?"{category_param}".*?\]'
            registrations = registrations.filter(
                registrant__near_social_profile_data__plCategories__iregex=category_regex_pattern
            )

⚠️ Potential issue | 🟠 Major

User-supplied category_param injected directly into regex — potential ReDoS.

category_param comes straight from request.query_params and is interpolated into an iregex pattern without sanitization. A crafted input like (a+)+$ can cause catastrophic backtracking in the regex engine.

Escape the input with re.escape() before interpolation.

🐛 Proposed fix
+import re
 ...
         if category_param:
-            category_regex_pattern = rf'\[.*?"{category_param}".*?\]'
+            category_regex_pattern = rf'\[.*?"{re.escape(category_param)}".*?\]'
             registrations = registrations.filter(
🤖 Prompt for AI Agents
In `@lists/api.py` around lines 213 - 217, The code interpolates user-controlled
category_param directly into a regex used in the registrations.filter call
(registrant__near_social_profile_data__plCategories__iregex), which risks ReDoS;
fix by escaping category_param with re.escape() before building
category_regex_pattern so the pattern becomes safe, e.g., compute escaped =
re.escape(category_param) and use that when constructing category_regex_pattern
in the block that checks if category_param (the code handling category_param and
registrations.filter).

Comment on lines +271 to +279
        # Get a random registration
        registrations_list = list(registrations)
        if not registrations_list:
            return Response(
                {"message": "No registrations found for the given criteria."},
                status=404,
            )

        registration = random.choice(registrations_list)

⚠️ Potential issue | 🟠 Major

Loading entire queryset into memory for random selection — use DB-level random instead.

list(registrations) on Line 272 materializes every matching registration into Python memory. For large lists this is a significant memory and latency hit, only to pick one element.

Use .order_by('?').first() to let the DB handle the random selection with constant memory.

🐛 Proposed fix
-        # Get a random registration
-        registrations_list = list(registrations)
-        if not registrations_list:
+        # Get a random registration (DB-level)
+        registration = registrations.order_by('?').first()
+        if not registration:
             return Response(
                 {"message": "No registrations found for the given criteria."},
                 status=404,
             )

-        registration = random.choice(registrations_list)
         serializer = ListRegistrationSerializer(registration)
         return Response(serializer.data)

Note: order_by('?') translates to ORDER BY RANDOM() which can be slow on very large tables, but it's far better than loading everything into Python. If performance is a concern later, a count + offset approach can be used.
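
If `ORDER BY RANDOM()` ever becomes a bottleneck, the count + offset variant can be sketched as follows; the `registrations` queryset name is taken from the code above, and the helper name is illustrative. Note that rows deleted between the `count()` call and the index lookup could still raise `IndexError`, so a real implementation should guard for that.

```python
import random

def random_offset(count: int):
    """Pick a uniform random index into a queryset of `count` rows, or None if empty."""
    if count <= 0:
        return None
    return random.randrange(count)

# Hypothetical usage:
# count = registrations.count()
# offset = random_offset(count)
# registration = registrations[offset] if offset is not None else None
```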

🤖 Prompt for AI Agents
In `@lists/api.py` around lines 271 - 279, The code currently materializes the
entire QuerySet via list(registrations) and uses random.choice, which loads all
rows into memory; update the logic in the view handling "registrations" to let
the DB pick a random row by replacing the list()/random.choice pattern with a
DB-level random selection like registrations.order_by('?').first(), then check
for None (if no registration found) and return the same 404 Response; ensure you
update references to the chosen object (registration) accordingly so subsequent
code uses the object returned by .first().

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 8

🤖 Fix all issues with AI agents
In `@api/sync.py`:
- Around line 216-227: The post method currently calls
ListSyncAPI().post(request, list_id) which bypasses DRF dispatch and middleware;
refactor the list-sync logic into a shared helper function (e.g.,
sync_list_by_on_chain_id(list_id, request) or a ListSyncService.sync(list_id))
and have both ListSyncAPI.post and this post method call that helper; update
ListSyncAPI.post to call the helper and keep its DRF view behavior (so
middleware/auth remains applied for external requests) while this internal
caller uses the helper directly and checks the helper's result to fetch
List.objects.get(on_chain_id=int(list_id)) or return the same error Response.
- Around line 236-254: The loop issues two Account.objects.get_or_create calls
per registration causing an N+1; instead, before iterating registrations collect
all unique account IDs from reg["registrant_id"] and reg.get("registered_by",
...), query existing Account objects for those IDs, compute the missing IDs, and
bulk_create Account instances for the missing ones; then iterate registrations
and call ListRegistration.objects.update_or_create as before (referencing the
registrations loop, Account.objects.get_or_create, and
ListRegistration.objects.update_or_create) without per-iteration account
creation to eliminate the per-row DB calls.

In `@api/urls.py`:
- Around line 215-216: The route path("v1/<str:account_id>/projects",
AccountProjectListAPI.as_view(), name="user_projects_api") is ambiguous and
should be made explicit by prefixing the segment with "accounts/" (e.g.
"v1/accounts/<str:account_id>/projects") so it only matches account IDs, and
also fix the typo in the URL name for the ProjectStatsAPI route by renaming
name="projects_stat__api" to the intended name (e.g. "projects_stat_api");
update both the AccountProjectListAPI.as_view() and ProjectStatsAPI.as_view()
entries accordingly to preserve behavior while making the URLs unambiguous.

In `@donations/sync.py`:
- Around line 155-157: The code unconditionally defaults Token.decimals to 24
when creating a Token (via Token.objects.get_or_create) which is only correct
for the NEAR native token; update the logic around token_id,
Account.objects.get_or_create and Token.objects.get_or_create to handle
non-"near" FTs: if token_id == "near" keep decimals=24, otherwise attempt to
fetch token metadata from the chain (or a registry) to determine decimals and
use that when creating the Token, and if metadata lookup fails log a warning
including token_id and fall back to a safe default (or mark the Token as having
unknown decimals) so Token.format_price() does not mis-format amounts.
- Around line 29-57: fetch_tx_result does not handle non-JSON or non-200
responses; before calling response.json() (response variable) check
response.status_code and raise a clear exception for non-2xx responses, and wrap
response.json() in a try/except catching JSONDecodeError/ValueError to raise an
informative exception containing the HTTP status and response.text; update the
function fetch_tx_result to validate response.ok, handle parsing errors, and
include tx_hash/sender_id in the raised error messages to aid debugging.
- Around line 140-146: The hardcoded chain_id=1 in Account.objects.get_or_create
is fragile; instead, resolve the NEAR Chain once (e.g., chain =
Chain.objects.get(name="NEAR") or Chain.objects.get(slug="near") with a safe
fallback like Chain.objects.get_or_create) and use chain.id in the defaults for
all Account.objects.get_or_create calls in this module (references:
Account.objects.get_or_create and Chain). Update every occurrence noted in the
review so accounts link to the resolved Chain.id rather than the literal 1.

In `@README.md`:
- Around line 210-211: Remove the stray trailing ".." artifact in the README.md
at the indicated location and either delete it entirely or replace it with
meaningful content (e.g., a completed sentence, section heading, or intended
markdown element) so the document reads correctly; search for the exact literal
".." near the shown context and update that token accordingly.
- Around line 143-150: The fenced block shows a language-tag mismatch: "enum
PotApplicationStatus { ... }" is not valid Python and will confuse readers;
update the snippet for clarity by either converting it to a correct Python Enum
(e.g., define a PotApplicationStatus Enum class) or remove the "py" language tag
and use a language-agnostic/list format (e.g., list PotApplicationStatus values)
so the example matches the fence; ensure references to PotApplicationStatus,
Pending, Approved, Rejected, and InReview are preserved and consistent.
🧹 Nitpick comments (5)
donations/sync.py (1)

26-26: Unused constant DONATION_CONTRACT.

DONATION_CONTRACT is defined but never referenced anywhere in this file.

Proposed fix
-DONATION_CONTRACT = f"donate.{settings.POTLOCK_TLA}"
-
-
lists/api.py (2)

1-13: Consolidate django.db.models imports.

Q (line 3) and Count (line 13) are imported from django.db.models in separate statements. Combine them.

Proposed fix
-from django.db.models import Q
 from django.utils import timezone
 from django.utils.decorators import method_decorator
 from django.views.decorators.cache import cache_page
...
-from django.db.models import Count
+from django.db.models import Count, Q

200-203: Remove commented-out code.

Lines 200 and 203 contain commented-out query logic that adds noise and should be removed before merging to main.

Proposed fix
         list_id = kwargs.get("list_id")
         chain = request.query_params.get("chain")
-        # list_obj = List.objects.get(on_chain_id=list_id)
         registrations = ListRegistration.objects.filter(list__on_chain_id=list_id, list__chain__name="NEAR" if not chain else chain).select_related("list__chain", "list__owner", "registrant", "registered_by").prefetch_related("list__admins", "list__upvotes")
-
-        # registrations = list_obj.registrations.select_related("list", "list__owner", "registrant", "registered_by").prefetch_related("list__admins").annotate(registrations_count=Count('list_registrations')).all()
         status_param = request.query_params.get("status")
api/sync.py (2)

36-118: fetch_from_rpc — well-structured RPC fallback with retry logic.

Good use of requests.Session with Retry and multiple endpoint fallbacks. A few minor notes:

  1. The session is never explicitly closed. Consider using it as a context manager or calling session.close().
  2. Type hints on args and contract_id should use T | None instead of implicit Optional (Ruff RUF013).
Type hint fix
-def fetch_from_rpc(method_name: str, args: dict = None, contract_id: str = None, timeout: int = 60):
+def fetch_from_rpc(method_name: str, args: dict | None = None, contract_id: str | None = None, timeout: int = 60):

336-399: AccountSyncAPI — donation stats recalculation looks correct.

The aggregation logic for total_donations_in_usd, total_donations_out_usd, and donors_count is sound. One note: account.save() on line 385 will trigger the Account.save() override, which calls fetch_near_social_profile_data only when _state.adding is set. Since the account may already exist, the social profile is fetched separately via RPC (lines 358-365), which is the correct approach.

However, logger.error on line 398 should use logger.exception to capture the full traceback (Ruff TRY400). This applies to all logger.error calls in exception handlers throughout this file (lines 197, 263, 332, 398).

Proposed fix (apply to all exception handlers)
         except Exception as e:
-            logger.error(f"Error syncing account {account_id}: {e}")
+            logger.exception(f"Error syncing account {account_id}: {e}")
             return Response({"error": str(e)}, status=502)

Comment on lines +216 to +227
    def post(self, request, list_id: int):
        try:
            # Ensure list exists
            try:
                list_obj = List.objects.get(on_chain_id=int(list_id))
            except List.DoesNotExist:
                # Sync list first
                list_sync = ListSyncAPI()
                resp = list_sync.post(request, list_id)
                if resp.status_code != 200:
                    return Response({"error": "List not found"}, status=404)
                list_obj = List.objects.get(on_chain_id=int(list_id))

🛠️ Refactor suggestion | 🟠 Major

Calling ListSyncAPI().post(request, list_id) directly bypasses DRF middleware.

Directly instantiating ListSyncAPI and calling .post() skips authentication, throttling, and other middleware that DRF applies through dispatch(). If auth/throttling is added to ListSyncAPI later, this internal call path won't enforce it. Extract the sync logic into a shared helper function instead.

Proposed approach
+def _sync_list_from_chain(list_id: int) -> tuple[List | None, bool]:
+    """Shared helper: fetch list from RPC and upsert in DB. Returns (list_obj, created)."""
+    data = fetch_from_rpc("get_list", {"list_id": int(list_id)})
+    if not data:
+        return None, False
+    # ... move create/update logic here ...
+
 class ListSyncAPI(APIView):
     def post(self, request, list_id: int):
-        ...
+        list_obj, created = _sync_list_from_chain(list_id)
+        ...

 class ListRegistrationsSyncAPI(APIView):
     def post(self, request, list_id: int):
         try:
             list_obj = List.objects.get(on_chain_id=int(list_id))
         except List.DoesNotExist:
-            list_sync = ListSyncAPI()
-            resp = list_sync.post(request, list_id)
-            if resp.status_code != 200:
+            list_obj, created = _sync_list_from_chain(list_id)
+            if not list_obj:
                 return Response({"error": "List not found"}, status=404)
-            list_obj = List.objects.get(on_chain_id=int(list_id))
🤖 Prompt for AI Agents
In `@api/sync.py` around lines 216 - 227, The post method currently calls
ListSyncAPI().post(request, list_id) which bypasses DRF dispatch and middleware;
refactor the list-sync logic into a shared helper function (e.g.,
sync_list_by_on_chain_id(list_id, request) or a ListSyncService.sync(list_id))
and have both ListSyncAPI.post and this post method call that helper; update
ListSyncAPI.post to call the helper and keep its DRF view behavior (so
middleware/auth remains applied for external requests) while this internal
caller uses the helper directly and checks the helper's result to fetch
List.objects.get(on_chain_id=int(list_id)) or return the same error Response.

Comment on lines +236 to +254
            for reg in registrations:
                # Create accounts
                Account.objects.get_or_create(id=reg["registrant_id"])
                Account.objects.get_or_create(id=reg.get("registered_by", reg["registrant_id"]))

                # Create/update registration (id is AutoField, use list+registrant as unique key)
                ListRegistration.objects.update_or_create(
                    list=list_obj,
                    registrant_id=reg["registrant_id"],
                    defaults={
                        "registered_by_id": reg.get("registered_by", reg["registrant_id"]),
                        "status": reg.get("status", "Pending"),
                        "submitted_at": datetime.fromtimestamp(reg["submitted_ms"] / 1000),
                        "updated_at": datetime.fromtimestamp(reg["updated_ms"] / 1000),
                        "admin_notes": reg.get("admin_notes"),
                        "registrant_notes": reg.get("registrant_notes"),
                    }
                )
                synced += 1

⚠️ Potential issue | 🟠 Major

N+1 Account.objects.get_or_create calls inside the registration sync loop.

For each registration, two get_or_create calls are issued (lines 238-239). For a list with hundreds of registrations, this generates hundreds of individual DB queries. Consider bulk-collecting unique account IDs first and batch-creating them.

Proposed approach
+            # Bulk-create accounts first
+            account_ids = set()
+            for reg in registrations:
+                account_ids.add(reg["registrant_id"])
+                account_ids.add(reg.get("registered_by", reg["registrant_id"]))
+            existing = set(Account.objects.filter(id__in=account_ids).values_list("id", flat=True))
+            new_accounts = [Account(id=aid) for aid in account_ids - existing]
+            Account.objects.bulk_create(new_accounts, ignore_conflicts=True)
+
             synced = 0
             for reg in registrations:
-                # Create accounts
-                Account.objects.get_or_create(id=reg["registrant_id"])
-                Account.objects.get_or_create(id=reg.get("registered_by", reg["registrant_id"]))
-
                 # Create/update registration
🤖 Prompt for AI Agents
In `@api/sync.py` around lines 236 - 254, The loop issues two
Account.objects.get_or_create calls per registration causing an N+1; instead,
before iterating registrations collect all unique account IDs from
reg["registrant_id"] and reg.get("registered_by", ...), query existing Account
objects for those IDs, compute the missing IDs, and bulk_create Account
instances for the missing ones; then iterate registrations and call
ListRegistration.objects.update_or_create as before (referencing the
registrations loop, Account.objects.get_or_create, and
ListRegistration.objects.update_or_create) without per-iteration account
creation to eliminate the per-row DB calls.

Comment on lines +215 to +216
path("v1/<str:account_id>/projects", AccountProjectListAPI.as_view(), name="user_projects_api"),
path("v1/<str:account_id>/project-stats", ProjectStatsAPI.as_view(), name="projects_stat__api"),

⚠️ Potential issue | 🟠 Major

Ambiguous catch-all route v1/<str:account_id>/projects and typo in URL name.

Line 215: The pattern v1/<str:account_id>/projects matches any string in the first segment. If a request for a non-existent route like v1/campaigns/projects comes in, Django may match it here with account_id="campaigns" depending on URL ordering. This is fragile — prefix it with accounts/ to be explicit:

-    path("v1/<str:account_id>/projects", AccountProjectListAPI.as_view(), name="user_projects_api"),
-    path("v1/<str:account_id>/project-stats", ProjectStatsAPI.as_view(), name="projects_stat__api"),
+    path("v1/accounts/<str:account_id>/projects", AccountProjectListAPI.as_view(), name="user_projects_api"),
+    path("v1/accounts/<str:account_id>/project-stats", ProjectStatsAPI.as_view(), name="projects_stats_api"),

Also, "projects_stat__api" has a double underscore typo (previously flagged).

🤖 Prompt for AI Agents
In `@api/urls.py` around lines 215 - 216, The route
path("v1/<str:account_id>/projects", AccountProjectListAPI.as_view(),
name="user_projects_api") is ambiguous and should be made explicit by prefixing
the segment with "accounts/" (e.g. "v1/accounts/<str:account_id>/projects") so
it only matches account IDs, and also fix the typo in the URL name for the
ProjectStatsAPI route by renaming name="projects_stat__api" to the intended name
(e.g. "projects_stat_api"); update both the AccountProjectListAPI.as_view() and
ProjectStatsAPI.as_view() entries accordingly to preserve behavior while making
the URLs unambiguous.

Comment on lines +29 to +57
def fetch_tx_result(tx_hash: str, sender_id: str):
    """
    Fetch transaction result from NEAR RPC.
    Returns the parsed result from the transaction execution.
    """
    rpc_url = (
        "https://test.rpc.fastnear.com"
        if settings.ENVIRONMENT == "testnet"
        else "https://free.rpc.fastnear.com"
    )

    payload = {
        "jsonrpc": "2.0",
        "id": "dontcare",
        "method": "tx",
        "params": {
            "tx_hash": tx_hash,
            "sender_account_id": sender_id,
            "wait_until": "EXECUTED_OPTIMISTIC",
        },
    }

    response = requests.post(rpc_url, json=payload, timeout=30)
    result = response.json()

    if "error" in result:
        raise Exception(f"RPC error fetching tx: {result['error']}")

    return result.get("result")

⚠️ Potential issue | 🟡 Minor

fetch_tx_result does not handle non-JSON RPC responses.

If the RPC returns a non-JSON response (e.g., HTML error page, 5xx), response.json() on line 52 will raise a JSONDecodeError that bubbles up unhandled with no useful context. Add a status code check or wrap the JSON parsing.

Proposed fix
     response = requests.post(rpc_url, json=payload, timeout=30)
+    response.raise_for_status()
     result = response.json()
🧰 Tools
🪛 Ruff (0.14.14)

[warning] 55-55: Create your own exception

(TRY002)


[warning] 55-55: Avoid specifying long messages outside the exception class

(TRY003)

🤖 Prompt for AI Agents
In `@donations/sync.py` around lines 29 - 57, fetch_tx_result does not handle
non-JSON or non-200 responses; before calling response.json() (response
variable) check response.status_code and raise a clear exception for non-2xx
responses, and wrap response.json() in a try/except catching
JSONDecodeError/ValueError to raise an informative exception containing the HTTP
status and response.text; update the function fetch_tx_result to validate
response.ok, handle parsing errors, and include tx_hash/sender_id in the raised
error messages to aid debugging.
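A minimal sketch of that validation, with the JSON and RPC-level checks split into a pure helper so they can be exercised without a live endpoint (the helper name and the use of RuntimeError are illustrative, not taken from the module):

```python
import json


def parse_rpc_response(status_code: int, body: str, tx_hash: str, sender_id: str) -> dict:
    """Validate one RPC reply: HTTP status, then JSON parsing, then RPC-level errors."""
    if not 200 <= status_code < 300:
        raise RuntimeError(
            f"RPC returned HTTP {status_code} for tx {tx_hash} "
            f"(sender {sender_id}): {body[:200]}"
        )
    try:
        result = json.loads(body)
    except ValueError as exc:  # json.JSONDecodeError subclasses ValueError
        raise RuntimeError(
            f"Non-JSON RPC response for tx {tx_hash} "
            f"(sender {sender_id}): {body[:200]}"
        ) from exc
    if "error" in result:
        raise RuntimeError(f"RPC error for tx {tx_hash}: {result['error']}")
    return result.get("result")
```

fetch_tx_result would then end with `return parse_rpc_response(response.status_code, response.text, tx_hash, sender_id)`, keeping the network call and the validation independently testable.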

Comment on lines +140 to +146
            # Upsert accounts
            donor, _ = Account.objects.get_or_create(
                defaults={"chain_id": 1}, id=donation_data["donor_id"]
            )
            recipient, _ = Account.objects.get_or_create(
                defaults={"chain_id": 1}, id=donation_data["recipient_id"]
            )

⚠️ Potential issue | 🟠 Major

Hardcoded chain_id=1 is fragile — may break if the Chain table's PK differs.

Multiple Account.objects.get_or_create(defaults={"chain_id": 1}, ...) calls assume the NEAR chain always has id=1. If the Chain table is populated differently (e.g., via migrations or fixtures), this will either raise an IntegrityError or link accounts to the wrong chain. Use a named lookup instead.

Proposed fix
+from chains.models import Chain
+
+NEAR_CHAIN_NAME = "NEAR"
+
 ...
-            donor, _ = Account.objects.get_or_create(
-                defaults={"chain_id": 1}, id=donation_data["donor_id"]
-            )
+            near_chain = Chain.objects.get(name=NEAR_CHAIN_NAME)
+            donor, _ = Account.objects.get_or_create(
+                defaults={"chain": near_chain}, id=donation_data["donor_id"]
+            )

Apply the same pattern to all Account.objects.get_or_create calls in this file (lines 144, 150–151, 156).

🤖 Prompt for AI Agents
In `@donations/sync.py` around lines 140 - 146, The hardcoded chain_id=1 in
Account.objects.get_or_create is fragile; instead, resolve the NEAR Chain once
(e.g., chain = Chain.objects.get(name="NEAR") or Chain.objects.get(slug="near")
with a safe fallback like Chain.objects.get_or_create) and use chain.id in the
defaults for all Account.objects.get_or_create calls in this module (references:
Account.objects.get_or_create and Chain). Update every occurrence noted in the
review so accounts link to the resolved Chain.id rather than the literal 1.

Comment on lines +155 to +157
            token_id = donation_data.get("ft_id") or "near"
            token_acct, _ = Account.objects.get_or_create(defaults={"chain_id": 1}, id=token_id)
            token, _ = Token.objects.get_or_create(account=token_acct, defaults={"decimals": 24})

⚠️ Potential issue | 🟠 Major

Default decimals=24 only valid for NEAR native token — other FTs will have different decimals.

When token_id is not "near", the token may have a different number of decimals (e.g., USDC has 6). Defaulting to 24 will cause incorrect amount formatting downstream in Token.format_price(). Consider fetching token metadata from the chain for non-NEAR tokens, or at least logging a warning.

🤖 Prompt for AI Agents
In `@donations/sync.py` around lines 155 - 157, The code unconditionally defaults
Token.decimals to 24 when creating a Token (via Token.objects.get_or_create)
which is only correct for the NEAR native token; update the logic around
token_id, Account.objects.get_or_create and Token.objects.get_or_create to
handle non-"near" FTs: if token_id == "near" keep decimals=24, otherwise attempt
to fetch token metadata from the chain (or a registry) to determine decimals and
use that when creating the Token, and if metadata lookup fails log a warning
including token_id and fall back to a safe default (or mark the Token as having
unknown decimals) so Token.format_price() does not mis-format amounts.
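A sketch of such a metadata lookup, assuming the token contract implements the standard NEP-148 `ft_metadata` view method, queried here via the NEAR JSON-RPC `call_function` request (the helper name and fallback policy are illustrative):

```python
import base64
import json
import logging
import urllib.request

logger = logging.getLogger(__name__)

NEAR_NATIVE_DECIMALS = 24


def resolve_token_decimals(token_id: str, rpc_url: str) -> int:
    """Return 24 for native NEAR; otherwise ask the FT contract for its metadata."""
    if token_id == "near":
        return NEAR_NATIVE_DECIMALS
    payload = json.dumps({
        "jsonrpc": "2.0",
        "id": "dontcare",
        "method": "query",
        "params": {
            "request_type": "call_function",
            "finality": "final",
            "account_id": token_id,
            "method_name": "ft_metadata",
            "args_base64": base64.b64encode(b"{}").decode(),
        },
    }).encode()
    try:
        req = urllib.request.Request(
            rpc_url, data=payload, headers={"Content-Type": "application/json"}
        )
        with urllib.request.urlopen(req, timeout=30) as resp:
            result = json.load(resp)
        # call_function returns the contract's raw response bytes as a list of ints
        metadata = json.loads(bytes(result["result"]["result"]))
        return int(metadata["decimals"])
    except Exception:
        logger.warning("Could not resolve decimals for %s; falling back to 24", token_id)
        return NEAR_NATIVE_DECIMALS
```

The resolved value would then feed the defaults in `Token.objects.get_or_create(account=token_acct, defaults={"decimals": resolve_token_decimals(token_id, rpc_url)})`.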

Comment on lines +143 to +150
```py
enum PotApplicationStatus {
  Pending,
  Approved,
  Rejected,
  InReview,
}
```

⚠️ Potential issue | 🟡 Minor

Invalid syntax in enum example — use a language-agnostic or correct format.

enum PotApplicationStatus { ... } isn't valid Python. Since the code fence is tagged as py, this will confuse readers. Consider using a Python Enum or just removing the language tag.

Proposed fix
-```py
-enum PotApplicationStatus {
-  Pending,
-  Approved,
-  Rejected,
-  InReview,
-}
-```
+```
+PotApplicationStatus:
+  - Pending
+  - Approved
+  - Rejected
+  - InReview
+```
🤖 Prompt for AI Agents
In `@README.md` around lines 143 - 150, The fenced block shows a language-tag
mismatch: "enum PotApplicationStatus { ... }" is not valid Python and will
confuse readers; update the snippet for clarity by either converting it to a
correct Python Enum (e.g., define a PotApplicationStatus Enum class) or remove
the "py" language tag and use a language-agnostic/list format (e.g., list
PotApplicationStatus values) so the example matches the fence; ensure references
to PotApplicationStatus, Pending, Approved, Rejected, and InReview are preserved
and consistent.
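If the README keeps the `py` fence, one valid rendering is a real Python Enum; the `str` mixin and member casing below are one possible choice, not taken from the codebase:

```python
from enum import Enum


class PotApplicationStatus(str, Enum):
    """Application status values as stored on chain."""
    PENDING = "Pending"
    APPROVED = "Approved"
    REJECTED = "Rejected"
    IN_REVIEW = "InReview"
```

The `str` mixin lets members compare equal to the raw strings coming off the chain, e.g. `PotApplicationStatus.PENDING == "Pending"`.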

Comment on lines +210 to +211

..

⚠️ Potential issue | 🟡 Minor

Trailing .. appears to be an incomplete artifact — remove or replace with meaningful content.

🤖 Prompt for AI Agents
In `@README.md` around lines 210 - 211, Remove the stray trailing ".." artifact in
the README.md at the indicated location and either delete it entirely or replace
it with meaningful content (e.g., a completed sentence, section heading, or
intended markdown element) so the document reads correctly; search for the exact
literal ".." near the shown context and update that token accordingly.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 4

🤖 Fix all issues with AI agents
In `@api/sync.py`:
- Around line 198-200: The current except blocks leak internal exception text to
API consumers (e.g., the block that logs logger.error(f"Error syncing list
{list_id}: {e}") and returns Response({"error": str(e)}, status=502)); change
each handler to log the full error server-side (use logger.exception(...) or
logger.error(..., exc_info=True) with contextual identifiers like list_id or the
function name) and return a generic error payload such as Response({"error":
"Internal server error"}, status=502). Apply this replacement for the shown
block and the other except handlers referenced (lines around 264-266, 333-335,
468-470, 583-585, 662-664, 751-753, 841-843, 907-909), ensuring you preserve the
contextual log message but remove str(e) from responses.
- Around line 546-550: The code creates Token objects with a hardcoded
decimals=24 and no symbol; instead, remove the hardcoded default and fetch
on-chain metadata for ft_id before creating/updating the Token: call or
implement a helper like fetch_ft_metadata(ft_id) (or use the existing FT client)
to obtain decimals and symbol, and pass those into Token.objects.get_or_create /
Token.objects.update_or_create defaults; if metadata fetch fails, set decimals
and symbol to None (or a sentinel) and set an "unknown_metadata" flag or log a
warning so downstream code can handle/resolve missing decimals rather than
assuming 24. Ensure you update references to Token.objects.get_or_create and any
code relying on token.format_price() to handle nullable decimals.
- Around line 788-803: The pagination loop in sync.py using fetch_from_rpc (with
variables from_index, limit, all_challenges, pot_id) can loop forever if the RPC
keeps returning exactly limit items; add a max-iteration safeguard: introduce a
max_iterations constant (e.g., MAX_PAGINATION_ITER = 1000 or configurable),
track an iterations counter inside the while True, increment it each loop, and
break (and optionally log a warning/error) when iterations >=
MAX_PAGINATION_ITER; keep existing break conditions (no challenges or
len(challenges) < limit) intact and ensure from_index is still incremented by
limit per iteration.
- Around line 138-200: The post method in the List sync endpoint performs
multiple DB writes without a transaction, risking partial commits; wrap the
entire create/update block in a Django transaction.atomic() context (import
transaction from django.db) so that operations in ListSyncAPI.post (the
existing_list update branch, owner Account creation, admin adds, and the new
List.objects.create branch) are executed atomically and will roll back on
failure; ensure the transaction encloses both the update path
(existing_list.save(), admins.clear()/add()) and the create path
(List.objects.create(), Account.get_or_create(), admins.add()) before returning
the Response.
🧹 Nitpick comments (2)
base/api.py (1)

21-26: FloatField misrepresents the actual DecimalField precision for USD amounts.

The underlying model fields (total_amount_usd, amount_paid_usd) are DecimalField(max_digits=20, decimal_places=2). Using serializers.FloatField in the OpenAPI schema documents these as IEEE 754 floats, which loses precision for large monetary values. Use serializers.DecimalField to match the actual data type.

Proposed fix
 class StatsResponseSerializer(serializers.Serializer):
-    total_donations_usd = serializers.FloatField()
-    total_payouts_usd = serializers.FloatField()
+    total_donations_usd = serializers.DecimalField(max_digits=20, decimal_places=2)
+    total_payouts_usd = serializers.DecimalField(max_digits=20, decimal_places=2)
     total_donations_count = serializers.IntegerField()
     total_donors_count = serializers.IntegerField()
     total_recipients_count = serializers.IntegerField()
api/sync.py (1)

38-55: requests.Session is never closed.

The session created at line 53 is used across multiple requests but never explicitly closed. While Python's GC will eventually clean it up, it's good practice to use a context manager or call session.close() to release connection pool resources promptly.

Proposed fix

Consider restructuring to use a context manager or adding a finally block, or at minimum, creating the session at module level for reuse.
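A sketch of the context-manager shape (the function name and retry settings are illustrative, mirroring typical values rather than the file's exact configuration):

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry


def build_rpc_session(total_retries: int = 3, backoff: float = 0.5) -> requests.Session:
    """Build a Session with retrying adapters mounted; the caller owns its lifetime."""
    session = requests.Session()
    retry = Retry(
        total=total_retries,
        backoff_factor=backoff,
        status_forcelist=[429, 500, 502, 503, 504],
    )
    adapter = HTTPAdapter(max_retries=retry)
    session.mount("https://", adapter)
    session.mount("http://", adapter)
    return session


# Usage: the with-block releases the connection pool even if a call raises.
# with build_rpc_session() as session:
#     response = session.post(rpc_url, json=payload, timeout=60)
```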

Comment on lines +138 to +200
    def post(self, request, list_id: int):
        try:
            # Fetch from RPC
            data = fetch_from_rpc("get_list", {"list_id": int(list_id)})

            if not data:
                return Response({"error": "List not found on chain"}, status=404)

            # Check if already exists
            existing_list = List.objects.filter(on_chain_id=int(list_id)).first()

            if existing_list:
                # Update existing list
                existing_list.name = data["name"]
                existing_list.description = data.get("description", "")
                existing_list.cover_image_url = data.get("cover_image_url")
                existing_list.admin_only_registrations = data.get("admin_only_registrations", False)
                existing_list.default_registration_status = data.get("default_registration_status", "Pending")
                existing_list.updated_at = datetime.fromtimestamp(data["updated_at"] / 1000)
                existing_list.save()

                # Update admins
                existing_list.admins.clear()
                for admin_id in data.get("admins", []):
                    admin, _ = Account.objects.get_or_create(id=admin_id)
                    existing_list.admins.add(admin)

                return Response({
                    "success": True,
                    "message": "List updated",
                    "on_chain_id": list_id
                })

            # Create list (on_chain_id is the blockchain ID, id is auto-generated)
            list_obj = List.objects.create(
                on_chain_id=data["id"],
                owner_id=data["owner"],
                name=data["name"],
                description=data.get("description", ""),
                cover_image_url=data.get("cover_image_url"),
                admin_only_registrations=data.get("admin_only_registrations", False),
                default_registration_status=data.get("default_registration_status", "Pending"),
                created_at=datetime.fromtimestamp(data["created_at"] / 1000),
                updated_at=datetime.fromtimestamp(data["updated_at"] / 1000),
            )

            # Create owner account
            Account.objects.get_or_create(id=data["owner"])

            # Add admins
            for admin_id in data.get("admins", []):
                admin, _ = Account.objects.get_or_create(id=admin_id)
                list_obj.admins.add(admin)

            return Response({
                "success": True,
                "message": "List created",
                "on_chain_id": list_obj.on_chain_id
            })

        except Exception as e:
            logger.error(f"Error syncing list {list_id}: {e}")
            return Response({"error": str(e)}, status=502)

⚠️ Potential issue | 🟠 Major

Missing transaction.atomic() — partial writes on failure.

ListSyncAPI.post creates a List, then the owner Account, then adds admins in separate DB operations. If an exception occurs mid-way (e.g., during admin creation), the DB is left in an inconsistent state with a list but no admins. The same pattern repeats in all other sync endpoints.

Wrap the create/update logic in transaction.atomic().

Proposed fix (example for ListSyncAPI; apply similarly elsewhere)
+from django.db import transaction
+
     def post(self, request, list_id: int):
         try:
             data = fetch_from_rpc("get_list", {"list_id": int(list_id)})
             if not data:
                 return Response({"error": "List not found on chain"}, status=404)

-            existing_list = List.objects.filter(on_chain_id=int(list_id)).first()
-            if existing_list:
-                # Update ...
+            with transaction.atomic():
+                existing_list = List.objects.filter(on_chain_id=int(list_id)).first()
+                if existing_list:
+                    # Update ...
🧰 Tools
🪛 Ruff (0.15.0)

[warning] 138-138: Unused method argument: request

(ARG002)


[warning] 198-198: Do not catch blind exception: Exception

(BLE001)


[warning] 199-199: Use logging.exception instead of logging.error

Replace with exception

(TRY400)

🤖 Prompt for AI Agents
In `@api/sync.py` around lines 138 - 200, The post method in the List sync
endpoint performs multiple DB writes without a transaction, risking partial
commits; wrap the entire create/update block in a Django transaction.atomic()
context (import transaction from django.db) so that operations in
ListSyncAPI.post (the existing_list update branch, owner Account creation, admin
adds, and the new List.objects.create branch) are executed atomically and will
roll back on failure; ensure the transaction encloses both the update path
(existing_list.save(), admins.clear()/add()) and the create path
(List.objects.create(), Account.get_or_create(), admins.add()) before returning
the Response.

Comment on lines +198 to +200
        except Exception as e:
            logger.error(f"Error syncing list {list_id}: {e}")
            return Response({"error": str(e)}, status=502)

⚠️ Potential issue | 🟠 Major

Internal error details leaked to API consumers via str(e).

Every except block returns Response({"error": str(e)}, status=502), exposing internal exception messages (potentially including DB errors, RPC responses, stack info) to unauthenticated callers. This is an information disclosure risk.

Return a generic message and log the details server-side.

Proposed fix (apply to all error handlers)
         except Exception as e:
-            logger.error(f"Error syncing list {list_id}: {e}")
-            return Response({"error": str(e)}, status=502)
+            logger.exception(f"Error syncing list {list_id}: {e}")
+            return Response({"error": "Sync failed due to an internal error"}, status=502)

Also applies to: 264-266, 333-335, 468-470, 583-585, 662-664, 751-753, 841-843, 907-909

🧰 Tools
🪛 Ruff (0.15.0)

[warning] 198-198: Do not catch blind exception: Exception

(BLE001)


[warning] 199-199: Use logging.exception instead of logging.error

Replace with exception

(TRY400)

🤖 Prompt for AI Agents
In `@api/sync.py` around lines 198 - 200, The current except blocks leak internal
exception text to API consumers (e.g., the block that logs logger.error(f"Error
syncing list {list_id}: {e}") and returns Response({"error": str(e)},
status=502)); change each handler to log the full error server-side (use
logger.exception(...) or logger.error(..., exc_info=True) with contextual
identifiers like list_id or the function name) and return a generic error
payload such as Response({"error": "Internal server error"}, status=502). Apply
this replacement for the shown block and the other except handlers referenced
(lines around 264-266, 333-335, 468-470, 583-585, 662-664, 751-753, 841-843,
907-909), ensuring you preserve the contextual log message but remove str(e)
from responses.

Comment on lines +546 to +550
                    ft_account, _ = Account.objects.get_or_create(id=ft_id)
                    token, _ = Token.objects.get_or_create(
                        account=ft_account,
                        defaults={"name": ft_id, "decimals": 24}
                    )

⚠️ Potential issue | 🟠 Major

Non-NEAR token created with hardcoded decimals=24 and missing symbol.

Line 549 defaults decimals to 24 for all fungible tokens, but many FTs use different decimal values (e.g., USDC uses 6). The symbol field is also omitted. This will cause incorrect USD price calculations downstream when token.format_price() divides by 10**decimals.

Suggestion

At minimum, avoid hardcoding decimals. Consider fetching token metadata from the chain or flagging tokens with unknown decimals for later resolution:

                     token, _ = Token.objects.get_or_create(
                         account=ft_account,
-                        defaults={"name": ft_id, "decimals": 24}
+                        defaults={"name": ft_id, "symbol": ft_id, "decimals": 0}  # TODO: fetch actual decimals from token contract
                     )
🤖 Prompt for AI Agents
In `@api/sync.py` around lines 546 - 550, The code creates Token objects with a
hardcoded decimals=24 and no symbol; instead, remove the hardcoded default and
fetch on-chain metadata for ft_id before creating/updating the Token: call or
implement a helper like fetch_ft_metadata(ft_id) (or use the existing FT client)
to obtain decimals and symbol, and pass those into Token.objects.get_or_create /
Token.objects.update_or_create defaults; if metadata fetch fails, set decimals
and symbol to None (or a sentinel) and set an "unknown_metadata" flag or log a
warning so downstream code can handle/resolve missing decimals rather than
assuming 24. Ensure you update references to Token.objects.get_or_create and any
code relying on token.format_price() to handle nullable decimals.
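One way the metadata fetch could look is a minimal sketch with the RPC caller injected, so the fallback logic is testable in isolation. `resolve_ft_metadata` and the `rpc_call(contract_id, method)` signature are hypothetical stand-ins, not the indexer's actual API; only `ft_metadata` itself is a standard NEAR FT (NEP-148) view method:

```python
from typing import Callable, Optional

def resolve_ft_metadata(
    ft_id: str,
    rpc_call: Callable[[str, str], Optional[dict]],
) -> dict:
    """Build Token defaults for ft_id without guessing decimals.

    rpc_call(contract_id, method) is an injected caller (hypothetical)
    expected to return the parsed ft_metadata dict, or raise on failure.
    """
    try:
        meta = rpc_call(ft_id, "ft_metadata")
    except Exception:
        meta = None
    if meta and "decimals" in meta:
        return {
            "name": meta.get("name", ft_id),
            "symbol": meta.get("symbol", ft_id),
            "decimals": int(meta["decimals"]),
        }
    # Metadata unavailable: flag with None instead of assuming 24, so
    # downstream price formatting can skip unresolved tokens.
    return {"name": ft_id, "symbol": ft_id, "decimals": None}
```

The returned dict would then be passed as `defaults` to `Token.objects.get_or_create`, with `format_price()` guarding against `decimals` being `None`.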

Comment on lines +788 to +803
while True:
    challenges = fetch_from_rpc(
        "get_payouts_challenges",
        {"from_index": from_index, "limit": limit},
        contract_id=pot_id,
        timeout=120
    )

    if not challenges:
        break

    all_challenges.extend(challenges)

    if len(challenges) < limit:
        break
    from_index += limit
⚠️ Potential issue | 🟡 Minor

Add a safety cap to the pagination loop.

The while True loop fetching payout challenges could run indefinitely if the RPC consistently returns exactly limit results (e.g., due to a bug). Add a maximum iteration cap.

Proposed fix
+            MAX_PAGES = 100  # Safety cap
             while True:
+                if from_index // limit >= MAX_PAGES:
+                    logger.warning(f"Hit pagination cap for pot {pot_id} challenges")
+                    break
                 challenges = fetch_from_rpc(...)
🤖 Prompt for AI Agents
In `@api/sync.py` around lines 788 - 803, The pagination loop in sync.py using
fetch_from_rpc (with variables from_index, limit, all_challenges, pot_id) can
loop forever if the RPC keeps returning exactly limit items; add a max-iteration
safeguard: introduce a max_iterations constant (e.g., MAX_PAGINATION_ITER = 1000
or configurable), track an iterations counter inside the while True, increment
it each loop, and break (and optionally log a warning/error) when iterations >=
MAX_PAGINATION_ITER; keep existing break conditions (no challenges or
len(challenges) < limit) intact and ensure from_index is still incremented by
limit per iteration.
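The capped loop can be sketched as a small standalone function. `fetch_all_paginated` and the injected `fetch_page` callable are illustrative stand-ins for the `fetch_from_rpc` call above, and `MAX_PAGES = 100` is an arbitrary cap to tune per endpoint:

```python
import logging
from typing import Callable, List

logger = logging.getLogger(__name__)

MAX_PAGES = 100  # arbitrary safety cap; tune per endpoint

def fetch_all_paginated(
    fetch_page: Callable[[int, int], List[dict]],
    limit: int = 50,
) -> List[dict]:
    """Drain a paginated endpoint, bounded by MAX_PAGES iterations.

    fetch_page(from_index, limit) stands in for fetch_from_rpc(...).
    """
    results: List[dict] = []
    from_index = 0
    for _ in range(MAX_PAGES):
        page = fetch_page(from_index, limit)
        if not page:
            break
        results.extend(page)
        if len(page) < limit:
            break  # short page means we reached the end
        from_index += limit
    else:
        # Cap hit without a natural break: likely an RPC bug; stop anyway.
        logger.warning("hit pagination cap after %d pages", MAX_PAGES)
    return results
```

The `for ... else` keeps the two natural exit conditions intact while guaranteeing termination even if the RPC always returns exactly `limit` items.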

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

🧹 Nitpick comments (3)
api/urls.py (2)

171-173: Verb /delete/ embedded in URL path violates REST conventions.

v1/campaigns/<int:campaign_id>/delete/sync uses an action verb as a path segment. The /delete/ segment is redundant — deletion intent should be expressed via the HTTP DELETE method, not the URL. Consider renaming to v1/campaigns/<int:campaign_id>/sync and routing DELETE to CampaignDeleteSyncAPI, or at minimum renaming the segment to something that isn't a verb (e.g., removal).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@api/urls.py` around lines 171 - 173, The route path string
"v1/campaigns/<int:campaign_id>/delete/sync" uses a verb segment; change routing
so deletion is expressed by the HTTP DELETE method rather than a "/delete/" path
segment: update the URL pattern to "v1/campaigns/<int:campaign_id>/sync" (or a
non-verb segment like "removal/sync" if you must) and ensure the view class
CampaignDeleteSyncAPI is registered to handle DELETE requests (or map the DELETE
method to the view in your URL dispatcher) so clients call DELETE
/v1/campaigns/{id}/sync instead of embedding "delete" in the path.

86-118: Mixed hyphen/underscore URL path segments within the accounts group.

Routes like active_pots, pot_applications, donations_received, donations_sent, payouts_received use underscores, while list-registrations (line 111) and upvoted-lists (line 116) use hyphens. REST convention recommends hyphens consistently throughout.

♻️ Proposed fix
-        "v1/accounts/<str:account_id>/active_pots",
+        "v1/accounts/<str:account_id>/active-pots",
...
-        "v1/accounts/<str:account_id>/pot_applications",
+        "v1/accounts/<str:account_id>/pot-applications",
...
-        "v1/accounts/<str:account_id>/donations_received",
+        "v1/accounts/<str:account_id>/donations-received",
...
-        "v1/accounts/<str:account_id>/donations_sent",
+        "v1/accounts/<str:account_id>/donations-sent",
...
-        "v1/accounts/<str:account_id>/payouts_received",
+        "v1/accounts/<str:account_id>/payouts-received",
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@api/urls.py` around lines 86 - 118, The URL patterns under the accounts group
mix underscores and hyphens (e.g., "active_pots", "donations_received" vs
"list-registrations", "upvoted-lists"); update the paths to use hyphens
consistently per REST conventions by renaming the underscore segments to
hyphenated equivalents (change "v1/accounts/<str:account_id>/active_pots" etc.
to "v1/accounts/<str:account_id>/active-pots") and ensure the corresponding
route names (e.g., those referring to AccountActivePotsAPI,
AccountPotApplicationsAPI, AccountDonationsReceivedAPI, AccountDonationsSentAPI,
AccountPayoutsReceivedAPI) remain correct and any internal references or tests
updated to the new paths.
campaigns/sync.py (1)

222-285: Implicit Optional on tx_hash and unused created variable.

  • Line 222: tx_hash: str = None is an implicit Optional (Ruff RUF013); use tx_hash: str | None = None.
  • Line 275: created is never used after the update_or_create call (Ruff RUF059); prefix it with _.
🛠️ Proposed fix
-def sync_donation_from_data(campaign: Campaign, donation_data: dict, tx_hash: str = None) -> CampaignDonation:
+def sync_donation_from_data(campaign: Campaign, donation_data: dict, tx_hash: str | None = None) -> CampaignDonation:
-    donation, created = CampaignDonation.objects.update_or_create(
+    donation, _created = CampaignDonation.objects.update_or_create(
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@campaigns/sync.py` around lines 222 - 285, The function signature
sync_donation_from_data declares tx_hash with an implicit Optional (tx_hash: str
= None) — change it to an explicit union annotation (tx_hash: str | None = None)
and update the unused variable from CampaignDonation.objects.update_or_create to
ignore the created flag by renaming created to _created (or prefixing with an
underscore) so the linter no longer flags an unused variable.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@campaigns/sync.py`:
- Around line 489-491: In CampaignDeleteSyncAPI, CampaignRefundSyncAPI and
CampaignUnescrowSyncAPI replace the current logger.error(f"... {e}") and
Response({"error": str(e)}, status=502) pattern with server-side stack capture
and a generic client message: call logger.exception("Error syncing campaign
<operation> %s", campaign_id) (or similar) to record the stack trace, and return
a non-sensitive response like Response({"error": "Internal server error"},
status=502); update the three handlers (the except blocks currently logging and
returning str(e)) accordingly so internal exception text is not exposed to API
consumers.
- Around line 628-634: The unescrow bulk update in CampaignUnescrowSyncAPI is
not scoped to the current campaign and may affect donations from other
campaigns; update the CampaignDonation queryset used in the unescrow loop
(inside the for event_data in unescrow_events block) to include the campaign
constraint (campaign__on_chain_id=int(campaign_id)) like the refund handler
does, so the .filter(on_chain_id__in=donation_ids) becomes
.filter(on_chain_id__in=donation_ids, campaign__on_chain_id=int(campaign_id))
before calling .update(escrowed=False).
- Line 427: The three APIView classes CampaignDeleteSyncAPI,
CampaignRefundSyncAPI, and CampaignUnescrowSyncAPI are missing permission checks
and thus allow unauthenticated writes; update each class to require
authentication by adding a permission_classes attribute (e.g.,
permission_classes = [IsAuthenticated]) and ensure the IsAuthenticated symbol is
imported from rest_framework.permissions; apply the same pattern used for
CampaignSyncAPI/CampaignDonationSyncAPI so all write RPC endpoints enforce
authentication.
- Around line 550-555: In CampaignRefundSyncAPI, fix three problems: (1) clarify
whether event_data["escrow_balance"] is the post-refund total or the refunded
delta and either assign it directly to campaign.escrow_balance or subtract
accordingly (confirm contract schema) instead of always subtracting; (2) stop
using int() on string CharField money values (campaign.escrow_balance,
campaign.total_raised_amount, campaign.net_raised_amount and donation amount
parsing) — parse and compute using Decimal with explicit
validation/normalization (e.g., Decimal(event_value)) to handle decimals and
invalid input and raise/log on parse errors; (3) protect concurrent updates by
acquiring a DB lock (e.g., select_for_update within a transaction) when loading
Campaign and persist only changed fields using
campaign.save(update_fields=["escrow_balance","total_raised_amount","net_raised_amount"]).
Ensure you update the code paths that compute donation adjustments to use
Decimal arithmetic and the same save/update_fields pattern.

---

Duplicate comments:
In `@api/urls.py`:
- Around line 193-198: The URL pattern for ListRandomRegistrationAPI.as_view()
reuses the existing name "lists_api_by_id_registrations", causing a duplicate
URL name; update the name argument on the path for ListRandomRegistrationAPI
(the path with "v1/lists/<int:list_id>/random_registration" and view
ListRandomRegistrationAPI.as_view()) to a unique identifier (for example
"lists_api_random_registration" or similar) so it no longer collides with the
existing lists_api_by_id_registrations name.
- Line 202: The route string for PotDetailAPI is inconsistent with other
endpoints due to a trailing slash; change the URL pattern in the path(...) call
from "v1/pots/<str:pot_id>/" to "v1/pots/<str:pot_id>" so PotDetailAPI.as_view()
is registered without a trailing slash, and then run/update any tests or client
code that reference the named route "pots_api_by_id" to match the new URL.
- Around line 225-226: The reclaim endpoint path("v1/reclaim/generate-request",
ReclaimProofRequestView.as_view(), name="stats_api") reuses the same URL name as
the stats endpoint (path("v1/stats", StatsAPI.as_view(), name="stats_api")),
which causes the latter to be shadowed; update the name argument on the
ReclaimProofRequestView route to a unique identifier (e.g.,
"reclaim_generate_request" or "reclaim_proof_request") so that
StatsAPI.as_view() and ReclaimProofRequestView.as_view() each have distinct URL
names.
- Around line 229-237: The route definitions have inconsistent resource naming
and parameter types: change the singular "v1/round/<int:round_id>/" and mixed
converters to match the plural "v1/rounds" convention and a single ID type
across all endpoints; update the path strings and converters used by
RoundDetailAPI, ProjectRoundVotesAPI, and RoundApplicationsAPI to use the same
base ("v1/rounds") and the same converter (either <int:round_id> or
<str:round_id>) as RoundsListAPI/ProjectListAPI, and adjust the route names if
needed so RoundsListAPI, RoundDetailAPI, ProjectRoundVotesAPI, and
RoundApplicationsAPI all use consistent pluralization and a single round_id
type.
- Around line 238-239: The routes define an ambiguous catch-all for projects and
contain a typo in the route name: change the URL name "projects_stat__api" to
"projects_stat_api" for ProjectStatsAPI, and make the projects route more
specific (or reorder) so it cannot shadow project-stats — e.g., ensure
AccountProjectListAPI path is defined as "v1/<str:account_id>/projects/" or
split into "v1/<str:account_id>/projects/" and
"v1/<str:account_id>/projects/<str:project_id>/" (or move the ProjectStatsAPI
path above the projects path) so ProjectStatsAPI matching is unambiguous.

In `@campaigns/sync.py`:
- Around line 422-424: In CampaignDonationSyncAPI's exception handler replace
logger.error(...) with logger.exception(...) to record the full traceback, and
stop returning internal RPC details to clients by removing str(e) from the
response body; instead return a generic error payload (e.g., {"error": "Internal
server error"}) with status 502 while keeping the detailed exception only in the
logs (reference logger, the except Exception as e block, and the Response(...)
return).
- Around line 288-321: CampaignSyncAPI.post currently allows unauthenticated
access and leaks internal errors; add appropriate DRF permission_classes (e.g.,
permission_classes = [IsAuthenticated] or another project-specific permission)
on the CampaignSyncAPI class to prevent unauthenticated RPC/DB writes, change
the exception logging call in post from logger.error(...) to
logger.exception(...) to capture the full stack trace, and replace the response
body on exception with a generic message (e.g., {"error":"Internal server
error"}) while keeping the 502 status so internal RPC details are not returned
to clients.
- Around line 134-155: The parse_donation_from_tx function both fails to catch
binascii.Error from base64.b64decode and is annotated to return dict while it
can return None; update parse_donation_from_tx to catch binascii.Error in the
except clause (along with json.JSONDecodeError and UnicodeDecodeError) so
invalid Base64 doesn't raise, and change the return type annotation from -> dict
to -> Optional[dict] (or Dict[str, Any] | None) and add the necessary typing
import(s) so callers know None may be returned.
- Around line 197-200: The defaults dict currently always sets "created_at"
(using datetime.now() when data["created_ms"] is missing), which causes
update_or_create to overwrite historical creation timestamps; change the logic
that builds the defaults dict in sync.py so that "created_at" is only included
when data.get("created_ms") is present (i.e., set "created_at" from
datetime.fromtimestamp(...) only if created_ms exists), otherwise omit
"created_at" from the defaults passed to update_or_create to preserve the stored
creation time.

---

Nitpick comments:
In `@api/urls.py`:
- Around line 171-173: The route path string
"v1/campaigns/<int:campaign_id>/delete/sync" uses a verb segment; change routing
so deletion is expressed by the HTTP DELETE method rather than a "/delete/" path
segment: update the URL pattern to "v1/campaigns/<int:campaign_id>/sync" (or a
non-verb segment like "removal/sync" if you must) and ensure the view class
CampaignDeleteSyncAPI is registered to handle DELETE requests (or map the DELETE
method to the view in your URL dispatcher) so clients call DELETE
/v1/campaigns/{id}/sync instead of embedding "delete" in the path.
- Around line 86-118: The URL patterns under the accounts group mix underscores
and hyphens (e.g., "active_pots", "donations_received" vs "list-registrations",
"upvoted-lists"); update the paths to use hyphens consistently per REST
conventions by renaming the underscore segments to hyphenated equivalents
(change "v1/accounts/<str:account_id>/active_pots" etc. to
"v1/accounts/<str:account_id>/active-pots") and ensure the corresponding route
names (e.g., those referring to AccountActivePotsAPI, AccountPotApplicationsAPI,
AccountDonationsReceivedAPI, AccountDonationsSentAPI, AccountPayoutsReceivedAPI)
remain correct and any internal references or tests updated to the new paths.

In `@campaigns/sync.py`:
- Around line 222-285: The function signature sync_donation_from_data declares
tx_hash with an implicit Optional (tx_hash: str = None) — change it to an
explicit union annotation (tx_hash: str | None = None) and update the unused
variable from CampaignDonation.objects.update_or_create to ignore the created
flag by renaming created to _created (or prefixing with an underscore) so the
linter no longer flags an unused variable.

Comment on lines +72 to +79
response = requests.post(rpc_url, json=payload, timeout=15)
result = response.json()

if "error" in result:
    raise Exception(f"RPC error: {result['error']}")

result_bytes = bytes(result["result"]["result"])
return json.loads(result_bytes.decode())
⚠️ Potential issue | 🟠 Major

fetch_from_rpc JSON-RPC fallback: missing HTTP status check and unsafe nested key access.

Two issues in the fallback path:

  1. No HTTP status check: response.json() is called unconditionally on line 73. A 4xx/5xx response with an HTML or plain-text body will raise json.JSONDecodeError, which bubbles up with an unhelpful error message.

  2. Unsafe nested key access: result["result"]["result"] on line 78 raises KeyError if the outer "result" key exists but doesn't contain an inner "result" key (e.g., a malformed or partial RPC response). A guarded access with explicit validation is safer.

🛠️ Proposed fix
-    response = requests.post(rpc_url, json=payload, timeout=15)
-    result = response.json()
-
-    if "error" in result:
-        raise Exception(f"RPC error: {result['error']}")
-
-    result_bytes = bytes(result["result"]["result"])
-    return json.loads(result_bytes.decode())
+    response = requests.post(rpc_url, json=payload, timeout=15)
+    response.raise_for_status()
+    result = response.json()
+
+    if "error" in result:
+        raise Exception(f"RPC error: {result['error']}")
+
+    result_data = result.get("result", {}).get("result")
+    if result_data is None:
+        raise Exception("Unexpected RPC response structure")
+    result_bytes = bytes(result_data)
+    return json.loads(result_bytes.decode())

The same HTTP status check gap exists in fetch_tx_result at lines 104–106.

🧰 Tools
🪛 Ruff (0.15.1)

[warning] 76-76: Create your own exception

(TRY002)


[warning] 76-76: Avoid specifying long messages outside the exception class

(TRY003)

            return Response({"error": str(e)}, status=502)


class CampaignDeleteSyncAPI(APIView):
⚠️ Potential issue | 🟠 Major

CampaignDeleteSyncAPI, CampaignRefundSyncAPI, CampaignUnescrowSyncAPI: missing authentication.

All three new API views perform write operations (DB deletes, bulk updates, cascading saves) and trigger external RPC calls, but none define permission_classes. Any unauthenticated user can call these endpoints. The same concern was raised for CampaignSyncAPI and CampaignDonationSyncAPI in a prior review.

🛠️ Proposed fix
+from rest_framework.permissions import IsAuthenticated
+
 class CampaignDeleteSyncAPI(APIView):
+    permission_classes = [IsAuthenticated]

Apply the same to CampaignRefundSyncAPI and CampaignUnescrowSyncAPI.

Also applies to: 494-494, 589-589

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@campaigns/sync.py` at line 427, The three APIView classes
CampaignDeleteSyncAPI, CampaignRefundSyncAPI, and CampaignUnescrowSyncAPI are
missing permission checks and thus allow unauthenticated writes; update each
class to require authentication by adding a permission_classes attribute (e.g.,
permission_classes = [IsAuthenticated]) and ensure the IsAuthenticated symbol is
imported from rest_framework.permissions; apply the same pattern used for
CampaignSyncAPI/CampaignDonationSyncAPI so all write RPC endpoints enforce
authentication.

Comment on lines +489 to +491
        except Exception as e:
            logger.error(f"Error syncing campaign deletion {campaign_id}: {e}")
            return Response({"error": str(e)}, status=502)
⚠️ Potential issue | 🟡 Minor

CampaignDeleteSyncAPI, CampaignRefundSyncAPI, CampaignUnescrowSyncAPI: logger.error leaks internal error details to clients.

All three new views use logger.error(f"... {e}") and return Response({"error": str(e)}, status=502), exposing raw RPC/exception messages to API consumers. Replace with logger.exception(...) (captures stack trace server-side) and return a generic client message. This pattern was flagged in a prior review for the first two views and remains unaddressed in these new classes.

🛠️ Proposed fix (apply to all three handlers)
-        except Exception as e:
-            logger.error(f"Error syncing campaign deletion {campaign_id}: {e}")
-            return Response({"error": str(e)}, status=502)
+        except Exception:
+            logger.exception(f"Error syncing campaign deletion {campaign_id}")
+            return Response({"error": "RPC failed"}, status=502)

Also applies to: 584-586, 648-649

🧰 Tools
🪛 Ruff (0.15.1)

[warning] 489-489: Do not catch blind exception: Exception

(BLE001)


[warning] 490-490: Use logging.exception instead of logging.error

Replace with exception

(TRY400)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@campaigns/sync.py` around lines 489 - 491, In CampaignDeleteSyncAPI,
CampaignRefundSyncAPI and CampaignUnescrowSyncAPI replace the current
logger.error(f"... {e}") and Response({"error": str(e)}, status=502) pattern
with server-side stack capture and a generic client message: call
logger.exception("Error syncing campaign <operation> %s", campaign_id) (or
similar) to record the stack trace, and return a non-sensitive response like
Response({"error": "Internal server error"}, status=502); update the three
handlers (the except blocks currently logging and returning str(e)) accordingly
so internal exception text is not exposed to API consumers.

Comment on lines +550 to +555
try:
    campaign = Campaign.objects.get(on_chain_id=int(campaign_id))
    escrow_balance = event_data.get("escrow_balance", "0")
    campaign.escrow_balance = str(
        int(campaign.escrow_balance) - int(escrow_balance)
    )
⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# First, check if campaigns/models.py exists and find field definitions
fd "models.py" campaigns/

Repository: PotLock/django-indexer

Length of output: 85


🏁 Script executed:

#!/bin/bash
# Look at the Campaign model field definitions
rg -n "escrow_balance|total_amount|net_amount|total_raised" campaigns/models.py -A 2

Repository: PotLock/django-indexer

Length of output: 1908


🏁 Script executed:

#!/bin/bash
# Examine the sync.py file around the mentioned lines
wc -l campaigns/sync.py

Repository: PotLock/django-indexer

Length of output: 87


🏁 Script executed:

#!/bin/bash
# Read the specific section of sync.py
cat -n campaigns/sync.py | sed -n '545,575p'

Repository: PotLock/django-indexer

Length of output: 1754


CampaignRefundSyncAPI: escrow balance arithmetic uses string fields and lacks concurrent update protection.

Three issues on lines 552–570:

  1. Semantic ambiguity: event_data.get("escrow_balance") is subtracted from campaign.escrow_balance on lines 553–555. If the event field represents the remaining escrow balance after the refund (i.e., the new total), this subtraction produces the wrong value; it should be assigned directly, not subtracted. If it represents the refunded delta, subtraction is correct. Confirm the on-chain contract event schema.

  2. int() conversion on CharField fields: escrow_balance, total_raised_amount, and net_raised_amount are stored as CharField(max_length=64) (confirmed in models.py lines 145, 117, and similar). The code calls int() on these string fields (lines 554, 565, 568, and on donation amounts at lines 561–562), which will raise ValueError if any field contains decimal strings (e.g., "1000.5") or non-numeric content. Since the model uses token.format_price() on these fields, they may contain formatted decimals. Use proper decimal arithmetic or add explicit validation.

  3. Missing update_fields on save: campaign.save() on line 570 writes all fields instead of just the changed ones, risking overwrites of fields that were concurrently updated. Use campaign.save(update_fields=["escrow_balance", "total_raised_amount", "net_raised_amount"]).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@campaigns/sync.py` around lines 550 - 555, In CampaignRefundSyncAPI, fix
three problems: (1) clarify whether event_data["escrow_balance"] is the
post-refund total or the refunded delta and either assign it directly to
campaign.escrow_balance or subtract accordingly (confirm contract schema)
instead of always subtracting; (2) stop using int() on string CharField money
values (campaign.escrow_balance, campaign.total_raised_amount,
campaign.net_raised_amount and donation amount parsing) — parse and compute
using Decimal with explicit validation/normalization (e.g.,
Decimal(event_value)) to handle decimals and invalid input and raise/log on
parse errors; (3) protect concurrent updates by acquiring a DB lock (e.g.,
select_for_update within a transaction) when loading Campaign and persist only
changed fields using
campaign.save(update_fields=["escrow_balance","total_raised_amount","net_raised_amount"]).
Ensure you update the code paths that compute donation adjustments to use
Decimal arithmetic and the same save/update_fields pattern.
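Assuming the event carries the refunded delta (the review's open question), the balance math can be isolated into a Decimal-based helper. `apply_refund` is a hypothetical name; in the view it would run inside `transaction.atomic()` with `Campaign.objects.select_for_update()` and a `save(update_fields=[...])`:

```python
from decimal import Decimal, InvalidOperation

def apply_refund(escrow_balance: str, refunded_delta: str) -> str:
    """Subtract a refunded delta from a string-typed escrow balance.

    Uses Decimal so values like "1000.5" don't raise the way int() would,
    and rejects non-numeric or over-large refunds explicitly.
    """
    try:
        new_balance = Decimal(escrow_balance) - Decimal(refunded_delta)
    except InvalidOperation as exc:
        raise ValueError(
            f"non-numeric balance: {escrow_balance!r} / {refunded_delta!r}"
        ) from exc
    if new_balance < 0:
        raise ValueError("refund exceeds escrow balance")
    return str(new_balance)
```

If the contract instead emits the post-refund total, the helper collapses to validating and assigning the event value directly.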

Comment on lines +628 to +634
for event_data in unescrow_events:
    donation_ids = event_data.get("donation_ids", [])

    # Mark donations as unescrowed (mirrors handle_campaign_donation_unescrowed)
    updated_count = CampaignDonation.objects.filter(
        on_chain_id__in=donation_ids
    ).update(escrowed=False)
⚠️ Potential issue | 🟠 Major

CampaignUnescrowSyncAPI: unescrow filter is not scoped to the current campaign.

Line 632–634:

CampaignDonation.objects.filter(
    on_chain_id__in=donation_ids
).update(escrowed=False)

There is no campaign__on_chain_id=int(campaign_id) constraint. If on_chain_id values are not globally unique across campaigns, this bulk update will flip escrowed=False on donations belonging to other campaigns. Compare with the refund handler at lines 543–545, which correctly scopes the filter to the campaign:

CampaignDonation.objects.filter(
    on_chain_id__in=donation_ids, campaign__on_chain_id=int(campaign_id)
).update(returned_at=now)
🛠️ Proposed fix
-                updated_count = CampaignDonation.objects.filter(
-                    on_chain_id__in=donation_ids
-                ).update(escrowed=False)
+                updated_count = CampaignDonation.objects.filter(
+                    on_chain_id__in=donation_ids,
+                    campaign__on_chain_id=int(campaign_id),
+                ).update(escrowed=False)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@campaigns/sync.py` around lines 628 - 634, The unescrow bulk update in
CampaignUnescrowSyncAPI is not scoped to the current campaign and may affect
donations from other campaigns; update the CampaignDonation queryset used in the
unescrow loop (inside the for event_data in unescrow_events block) to include
the campaign constraint (campaign__on_chain_id=int(campaign_id)) like the refund
handler does, so the .filter(on_chain_id__in=donation_ids) becomes
.filter(on_chain_id__in=donation_ids, campaign__on_chain_id=int(campaign_id))
before calling .update(escrowed=False).
