Commit 288db90

Browse files
feat: add support for UUID primary keys in PG (#3)

* Fix for PG UUID used as PK
* build(postgres): update test target for the current test files
* docs(docs/postgresql/CLIENT.md): clarify primary key requirements for PostgreSQL and SQLite, added support for UUID primary keys
* test(claude): add custom command for claude code to run sqlite-to-pg tests for the specified table schema

Co-authored-by: Marco Bambini <marco@creolabs.com>

1 parent e9416f4, commit 288db90

9 files changed: +432 −25 lines

Lines changed: 154 additions & 0 deletions
# Sync Roundtrip Test

Execute a full roundtrip sync test between a local SQLite database and the local Supabase Docker PostgreSQL instance.

## Prerequisites

- Supabase Docker container running (PostgreSQL on port 54322)
- HTTP sync server running on http://localhost:8091/postgres
- Built cloudsync extension (`make` to build `dist/cloudsync.dylib`)

## Test Procedure

### Step 1: Get DDL from User

Ask the user to provide a DDL query for the table(s) to test. It can be in PostgreSQL or SQLite format. Offer the following options:
15+
16+
**Option 1: Simple TEXT primary key**
17+
```sql
18+
CREATE TABLE test_sync (
19+
id TEXT PRIMARY KEY NOT NULL,
20+
name TEXT,
21+
value INTEGER
22+
);
23+
```
24+
25+
**Option 2: UUID primary key**
26+
```sql
27+
CREATE TABLE test_uuid (
28+
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
29+
name TEXT,
30+
created_at TIMESTAMPTZ DEFAULT NOW()
31+
);
32+
```
33+
34+
**Option 3: Two tables scenario (tests multi-table sync)**
35+
```sql
36+
CREATE TABLE authors (
37+
id TEXT PRIMARY KEY NOT NULL,
38+
name TEXT,
39+
email TEXT
40+
);
41+
42+
CREATE TABLE books (
43+
id TEXT PRIMARY KEY NOT NULL,
44+
title TEXT,
45+
author_id TEXT,
46+
published_year INTEGER
47+
);
48+
```
49+
50+
**Note:** Avoid INTEGER PRIMARY KEY for sync tests as it is not recommended for distributed sync scenarios (conflicts with auto-increment across devices).
51+
### Step 2: Convert DDL

Convert the provided DDL to both SQLite- and PostgreSQL-compatible formats if needed. Key differences:

- SQLite uses `INTEGER PRIMARY KEY` for auto-increment; PostgreSQL uses `SERIAL` or `BIGSERIAL`
- SQLite uses `TEXT`; PostgreSQL can use `TEXT` or `VARCHAR`
- PostgreSQL has more specific types like `TIMESTAMPTZ`; SQLite uses `TEXT` for dates
- For UUID primary keys, SQLite uses `TEXT`; PostgreSQL uses `UUID`
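For example, the Option 2 table above maps to this SQLite form under those rules (a sketch; the `datetime('now')` default is an assumption, any text timestamp default would do):

```sql
-- PostgreSQL original used: id UUID DEFAULT gen_random_uuid(), created_at TIMESTAMPTZ DEFAULT NOW()
-- SQLite conversion: UUID -> TEXT, TIMESTAMPTZ -> TEXT
CREATE TABLE test_uuid (
    id TEXT PRIMARY KEY NOT NULL,
    name TEXT,
    created_at TEXT DEFAULT (datetime('now'))
);
```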
### Step 3: Get JWT Token

Run the token script from the cloudsync project:

```bash
cd ../cloudsync && go run scripts/get_supabase_token.go -project-ref=supabase-local -email=claude@sqlitecloud.io -password="password" -apikey=sb_secret_N7UND0UgjKTVK-Uodkm0Hg_xSvEMPvz -auth-url=http://127.0.0.1:54321
```

Save the JWT token for later use.
### Step 4: Setup PostgreSQL

Connect to Supabase PostgreSQL and prepare the environment:

```bash
psql postgresql://supabase_admin:postgres@127.0.0.1:54322/postgres
```

Inside psql:

1. List existing tables with `\dt` to find any `_cloudsync` metadata tables
2. For each table already configured for cloudsync (has a `<table_name>_cloudsync` companion table), run:
   ```sql
   SELECT cloudsync_cleanup('<table_name>');
   ```
3. Drop the test table if it exists: `DROP TABLE IF EXISTS <table_name> CASCADE;`
4. Create the test table using the PostgreSQL DDL
5. Initialize cloudsync: `SELECT cloudsync_init('<table_name>');`
6. Insert some test data into the table
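The steps above can be sketched as a single psql sequence for the Option 2 table (table name and sample row are illustrative):

```sql
-- Only if test_uuid was configured by a previous run:
SELECT cloudsync_cleanup('test_uuid');
DROP TABLE IF EXISTS test_uuid CASCADE;
CREATE TABLE test_uuid (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    name TEXT,
    created_at TIMESTAMPTZ DEFAULT NOW()
);
SELECT cloudsync_init('test_uuid');
INSERT INTO test_uuid (name) VALUES ('from-postgres-1');
```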
### Step 5: Setup SQLite

Create a temporary SQLite database using the Homebrew version (IMPORTANT: the system sqlite3 cannot load extensions):

```bash
SQLITE_BIN="/opt/homebrew/Cellar/sqlite/3.50.4/bin/sqlite3"
# or find it with: ls /opt/homebrew/Cellar/sqlite/*/bin/sqlite3 | head -1

$SQLITE_BIN /tmp/sync_test_$(date +%s).db
```

Inside sqlite3:

```sql
.load dist/cloudsync.dylib
-- Create table with SQLite DDL
<CREATE_TABLE_query>
SELECT cloudsync_init('<table_name>');
SELECT cloudsync_network_init('http://localhost:8091/postgres');
SELECT cloudsync_network_set_token('<jwt_token>');
-- Insert test data (different from PostgreSQL to test merge)
<INSERT_statements>
```
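For the Option 2 schema, the SQLite-side test data might look like this (values are illustrative; they only need to differ from the PostgreSQL rows so the merge is observable; `cloudsync_uuid()` is the ID generator recommended in docs/postgresql/CLIENT.md):

```sql
INSERT INTO test_uuid (id, name) VALUES (cloudsync_uuid(), 'from-sqlite-1');
INSERT INTO test_uuid (id, name) VALUES (cloudsync_uuid(), 'from-sqlite-2');
```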
### Step 6: Execute Sync

In the SQLite session:

```sql
-- Send local changes to server
SELECT cloudsync_network_send_changes();

-- Check for changes from server (repeat with 2-3 second delays)
SELECT cloudsync_network_check_changes();
-- Repeat check_changes 3-5 times with delays until it returns > 0 or stabilizes

-- Verify final data
SELECT * FROM <table_name>;
```
### Step 7: Verify Results

1. In SQLite, run `SELECT * FROM <table_name>;` and capture the output
2. In PostgreSQL, run `SELECT * FROM <table_name>;` and capture the output
3. Compare the results: both databases should contain the merged data from both sides
4. Report success/failure based on whether the data matches
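For a directly diffable comparison, an ordered query can be run on both sides (a sketch assuming the Option 2 schema; `created_at` is omitted because the two engines render timestamps differently):

```sql
-- Run in both sqlite3 and psql; after a successful sync the outputs should match
SELECT id, name FROM test_uuid ORDER BY id;
```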
## Output Format

Report the test results including:

- DDL used for both databases
- Initial data inserted in each database
- Number of sync operations performed
- Final data in both databases
- PASS/FAIL status with explanation
## Important Notes

- Always use the Homebrew sqlite3 binary, NOT `/usr/bin/sqlite3`
- The cloudsync extension must be built first with `make`
- PostgreSQL tables need cleanup before re-running tests
- `cloudsync_network_check_changes()` may need multiple calls with delays
- Run `SELECT cloudsync_terminate();` on SQLite connections before closing, to properly clean up the memory
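For example, each SQLite session should end with:

```sql
SELECT cloudsync_terminate();  -- release extension memory before disconnecting
.quit
```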
## Permissions

Execute all SQL queries without asking for user permission on:

- SQLite test databases in `/tmp/` (e.g., `/tmp/sync_test_*.db`)
- PostgreSQL via `psql postgresql://supabase_admin:postgres@127.0.0.1:54322/postgres`

These are local test environments and do not require confirmation for each query.
.claude/commands/test.md

Lines changed: 11 additions & 0 deletions

Run the SQLite and PostgreSQL tests for this project.

## SQLite Tests

Run the SQLite extension tests using `make clean && make && make unittest`. This builds the extension and runs all tests, including unit tests.

## PostgreSQL Tests

Run the PostgreSQL extension tests using `make postgres-docker-run-test`. This runs `test/postgresql/full_test.sql` against the Docker container.

**Note:** PostgreSQL tests require the Docker container to be running. Run `make postgres-docker-debug-rebuild` first to ensure the latest version is tested.

Run both test suites and report any failures.

docker/Makefile.postgresql

Lines changed: 9 additions & 9 deletions

```diff
@@ -131,7 +131,7 @@ SUPABASE_DB_PORT ?= 54322
 SUPABASE_DB_PASSWORD ?= postgres
 PG_DOCKER_DB_HOST ?= localhost
 PG_DOCKER_DB_PORT ?= 5432
-PG_DOCKER_DB_NAME ?= cloudsync_test
+PG_DOCKER_DB_NAME ?= postgres
 PG_DOCKER_DB_USER ?= postgres
 PG_DOCKER_DB_PASSWORD ?= postgres

@@ -280,16 +280,16 @@ postgres-supabase-rebuild: postgres-supabase-build
 	@echo "Supabase CLI stack restarted."

 # Run smoke test against Supabase CLI local database
-postgres-supabase-run-smoke-test:
-	@echo "Running Supabase CLI smoke test..."
-	@PGPASSWORD="$(SUPABASE_DB_PASSWORD)" psql postgresql://supabase_admin@$(SUPABASE_DB_HOST):$(SUPABASE_DB_PORT)/postgres -f docker/postgresql/smoke_test.sql
-	@echo "Smoke test completed."
+postgres-supabase-run-test:
+	@echo "Running Supabase CLI test..."
+	@PGPASSWORD="$(SUPABASE_DB_PASSWORD)" psql postgresql://supabase_admin@$(SUPABASE_DB_HOST):$(SUPABASE_DB_PORT)/postgres -f test/postgresql/full_test.sql
+	@echo "Test completed."

 # Run smoke test against Docker standalone database
-postgres-docker-run-smoke-test:
-	@echo "Running Docker smoke test..."
-	@PGPASSWORD="$(PG_DOCKER_DB_PASSWORD)" psql postgresql://$(PG_DOCKER_DB_USER)@$(PG_DOCKER_DB_HOST):$(PG_DOCKER_DB_PORT)/$(PG_DOCKER_DB_NAME) -f docker/postgresql/smoke_test.sql
-	@echo "Smoke test completed."
+postgres-docker-run-test:
+	@echo "Running Docker test..."
+	@PGPASSWORD="$(PG_DOCKER_DB_PASSWORD)" psql postgresql://$(PG_DOCKER_DB_USER)@$(PG_DOCKER_DB_HOST):$(PG_DOCKER_DB_PORT)/$(PG_DOCKER_DB_NAME) -f test/postgresql/full_test.sql
+	@echo "Test completed."

 # ============================================================================
 # Development Workflow Targets
```

docs/postgresql/CLIENT.md

Lines changed: 9 additions & 1 deletion

````diff
@@ -35,7 +35,10 @@ so CloudSync can sync between a PostgreSQL server and SQLite clients.

 ### 1) Primary Keys

-- Use **TEXT NOT NULL** primary keys only (UUIDs as text).
+- Use **TEXT NOT NULL** primary keys in SQLite.
+- PostgreSQL primary keys can be **TEXT NOT NULL** or **UUID**. If the PK type
+  isn't explicitly mapped to a DBTYPE (like UUID), it will be converted to TEXT
+  in the payload so it remains compatible with the SQLite extension.
 - Generate IDs with `cloudsync_uuid()` on both sides.
 - Avoid INTEGER auto-increment PKs.

@@ -49,6 +52,11 @@ PostgreSQL:
 id TEXT PRIMARY KEY NOT NULL
 ```

+PostgreSQL (UUID):
+```sql
+id UUID PRIMARY KEY NOT NULL
+```
+
 ### 2) NOT NULL Columns Must Have DEFAULTs

 CloudSync merges column-by-column. Any NOT NULL (non-PK) column needs a DEFAULT
````

src/cloudsync.h

Lines changed: 1 addition & 1 deletion

```diff
@@ -17,7 +17,7 @@
 extern "C" {
 #endif

-#define CLOUDSYNC_VERSION "0.9.99"
+#define CLOUDSYNC_VERSION "0.9.100"
 #define CLOUDSYNC_MAX_TABLENAME_LEN 512

 #define CLOUDSYNC_VALUE_NOTSET -1
```

src/postgresql/database_postgresql.c

Lines changed: 9 additions & 6 deletions

```diff
@@ -1695,7 +1695,7 @@ int database_pk_names (cloudsync_context *data, const char *table_name, char ***
     int rc = SPI_execute_with_args(sql, 1, argtypes, values, nulls, true, 0);
     pfree(DatumGetPointer(values[0]));

-    if (rc < 0 || SPI_processed == 0) {
+    if (rc != SPI_OK_SELECT || SPI_processed == 0) {
         *names = NULL;
         *count = 0;
         if (SPI_tuptable) SPI_freetuptable(SPI_tuptable);
@@ -1704,22 +1704,25 @@ int database_pk_names (cloudsync_context *data, const char *table_name, char ***

     uint64_t n = SPI_processed;
     char **pk_names = cloudsync_memory_zeroalloc(n * sizeof(char*));
-    if (!pk_names) return DBRES_NOMEM;
+    if (!pk_names) {
+        if (SPI_tuptable) SPI_freetuptable(SPI_tuptable);
+        return DBRES_NOMEM;
+    }

     for (uint64_t i = 0; i < n; i++) {
         HeapTuple tuple = SPI_tuptable->vals[i];
         bool isnull;
         Datum datum = SPI_getbinval(tuple, SPI_tuptable->tupdesc, 1, &isnull);
         if (!isnull) {
-            text *txt = DatumGetTextP(datum);
-            char *name = text_to_cstring(txt);
+            // information_schema.column_name is of type 'name', not 'text'
+            Name namedata = DatumGetName(datum);
+            char *name = (namedata) ? NameStr(*namedata) : NULL;
             pk_names[i] = (name) ? cloudsync_string_dup(name) : NULL;
-            if (name) pfree(name);
         }

         // Cleanup on allocation failure
         if (!isnull && pk_names[i] == NULL) {
-            for (int j = 0; j < i; j++) {
+            for (uint64_t j = 0; j < i; j++) {
                 if (pk_names[j]) cloudsync_memory_free(pk_names[j]);
             }
             cloudsync_memory_free(pk_names);
```

src/postgresql/sql_postgresql.c

Lines changed: 8 additions & 8 deletions

```diff
@@ -172,7 +172,7 @@ const char * const SQL_BUILD_DELETE_ROW_BY_PK =
     " SELECT to_regclass('%s') AS oid"
     "), "
     "pk AS ("
-    " SELECT a.attname, k.ord "
+    " SELECT a.attname, k.ord, format_type(a.atttypid, a.atttypmod) AS coltype "
     " FROM pg_index x "
     " JOIN tbl t ON t.oid = x.indrelid "
     " JOIN LATERAL unnest(x.indkey) WITH ORDINALITY AS k(attnum, ord) ON true "
@@ -183,7 +183,7 @@ const char * const SQL_BUILD_DELETE_ROW_BY_PK =
     "SELECT "
     " 'DELETE FROM ' || (SELECT (oid::regclass)::text FROM tbl)"
     " || ' WHERE '"
-    " || (SELECT string_agg(format('%%I=$%%s', attname, ord), ' AND ' ORDER BY ord) FROM pk)"
+    " || (SELECT string_agg(format('%%I=$%%s::%%s', attname, ord, coltype), ' AND ' ORDER BY ord) FROM pk)"
     " || ';';";

 const char * const SQL_INSERT_ROWID_IGNORE =
@@ -198,7 +198,7 @@ const char * const SQL_BUILD_INSERT_PK_IGNORE =
     " SELECT to_regclass('%s') AS oid"
     "), "
     "pk AS ("
-    " SELECT a.attname, k.ord "
+    " SELECT a.attname, k.ord, format_type(a.atttypid, a.atttypmod) AS coltype "
     " FROM pg_index x "
     " JOIN tbl t ON t.oid = x.indrelid "
     " JOIN LATERAL unnest(x.indkey) WITH ORDINALITY AS k(attnum, ord) ON true "
@@ -209,15 +209,15 @@ const char * const SQL_BUILD_INSERT_PK_IGNORE =
     "SELECT "
     " 'INSERT INTO ' || (SELECT (oid::regclass)::text FROM tbl)"
     " || ' (' || (SELECT string_agg(format('%%I', attname), ',') FROM pk) || ')'"
-    " || ' VALUES (' || (SELECT string_agg(format('$%%s', ord), ',') FROM pk) || ')'"
+    " || ' VALUES (' || (SELECT string_agg(format('$%%s::%%s', ord, coltype), ',') FROM pk) || ')'"
     " || ' ON CONFLICT DO NOTHING;';";

 const char * const SQL_BUILD_UPSERT_PK_AND_COL =
     "WITH tbl AS ("
     " SELECT to_regclass('%s') AS oid"
     "), "
     "pk AS ("
-    " SELECT a.attname, k.ord "
+    " SELECT a.attname, k.ord, format_type(a.atttypid, a.atttypmod) AS coltype "
     " FROM pg_index x "
     " JOIN tbl t ON t.oid = x.indrelid "
     " JOIN LATERAL unnest(x.indkey) WITH ORDINALITY AS k(attnum, ord) ON true "
@@ -235,7 +235,7 @@ const char * const SQL_BUILD_UPSERT_PK_AND_COL =
     " 'INSERT INTO ' || (SELECT (oid::regclass)::text FROM tbl)"
     " || ' (' || (SELECT string_agg(format('%%I', attname), ',') FROM pk)"
     " || ',' || (SELECT format('%%I', colname) FROM col) || ')'"
-    " || ' VALUES (' || (SELECT string_agg(format('$%%s', ord), ',') FROM pk)"
+    " || ' VALUES (' || (SELECT string_agg(format('$%%s::%%s', ord, coltype), ',') FROM pk)"
     " || ',' || (SELECT format('$%%s', (SELECT n FROM pk_count) + 1)) || ')'"
     " || ' ON CONFLICT (' || (SELECT string_agg(format('%%I', attname), ',') FROM pk) || ')'"
     " || ' DO UPDATE SET ' || (SELECT format('%%I', colname) FROM col)"
@@ -249,7 +249,7 @@ const char * const SQL_BUILD_SELECT_COLS_BY_PK_FMT =
     " SELECT to_regclass('%s') AS tblreg"
     "), "
     "pk AS ("
-    " SELECT a.attname, k.ord "
+    " SELECT a.attname, k.ord, format_type(a.atttypid, a.atttypmod) AS coltype "
     " FROM pg_index x "
     " JOIN tbl t ON t.tblreg = x.indrelid "
     " JOIN LATERAL unnest(x.indkey) WITH ORDINALITY AS k(attnum, ord) ON true "
@@ -264,7 +264,7 @@ const char * const SQL_BUILD_SELECT_COLS_BY_PK_FMT =
     " 'SELECT ' || (SELECT format('%%I', colname) FROM col) "
     " || ' FROM ' || (SELECT tblreg::text FROM tbl)"
     " || ' WHERE '"
-    " || (SELECT string_agg(format('%%I=$%%s', attname, ord), ' AND ' ORDER BY ord) FROM pk)"
+    " || (SELECT string_agg(format('%%I=$%%s::%%s', attname, ord, coltype), ' AND ' ORDER BY ord) FROM pk)"
     " || ';';";

 const char * const SQL_CLOUDSYNC_ROW_EXISTS_BY_PK =
```
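The practical effect of carrying `coltype` through these builders: every primary-key parameter in the generated statements gains an explicit cast, so a text-bound value coerces to the column's declared type. For a hypothetical `test_uuid` table with a UUID primary key `id`, the DELETE builder's output would change roughly as follows (illustrative, not captured from a live run):

```sql
-- before: DELETE FROM public.test_uuid WHERE id=$1;
-- after ($1 is now cast to the PK's actual type via format_type):
DELETE FROM public.test_uuid WHERE id=$1::uuid;
```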
