Copyright Debezium Authors. Licensed under the Apache License, Version 2.0.
A Debezium connector for capturing changes from CockroachDB databases.
The Debezium CockroachDB connector processes row-level changes that CockroachDB's native changefeed mechanism captures and streams to Apache Kafka topics.
The connector works in a two-stage process:

1. **CockroachDB Changefeed Stage:** CockroachDB's native changefeed mechanism captures row-level changes from the database and streams them in real time to configured sinks (Kafka, webhook, cloud storage, etc.).
2. **Debezium Processing Stage:** The Debezium connector consumes these changefeed events from Kafka topics and processes them through Debezium's event processing pipeline, converting them into standardized Debezium change events with enriched metadata.
This architecture leverages CockroachDB's reliable changefeed delivery mechanism while providing the benefits of Debezium's event processing capabilities, including schema evolution, event transformation, and integration with the broader Debezium ecosystem.
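The second stage can be pictured as a small transformation step. The sketch below is a hypothetical helper for illustration only (not the connector's actual code); it shows how the fields of a raw enriched-changefeed payload map onto the Debezium envelope fields shown later in this document:

```python
import json

def to_debezium_event(raw_message: str) -> dict:
    # Hypothetical illustration of stage two: lift the fields of an
    # enriched changefeed payload into a Debezium-style change event.
    payload = json.loads(raw_message)
    return {
        "before": payload.get("before"),
        "after": payload.get("after"),
        "source": payload.get("source", {}),
        "op": payload.get("op"),
        "ts_ns": payload.get("ts_ns"),
    }

raw = '{"after": {"id": 1, "name": "widget"}, "op": "c", "ts_ns": 1751407136710963868}'
print(to_debezium_event(raw)["op"])  # → c
```

The real connector performs considerably more work per event (schema handling, metadata enrichment); this only conveys the shape of the mapping.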
Status: This connector is currently in the incubation phase and is under active development and testing.
- CockroachDB v25.2+ with rangefeed enabled (enriched envelope support introduced in v25.2)
- Kafka Connect
- JDK 21+
- Maven 3.9.8 or later
```shell
./mvnw clean package -Passembly
```
Example connector configuration:
```json
{
  "name": "cockroachdb-connector",
  "config": {
    "connector.class": "io.debezium.connector.cockroachdb.CockroachDBConnector",
    "database.hostname": "cockroachdb",
    "database.port": "26257",
    "database.user": "testuser",
    "database.password": "",
    "database.dbname": "testdb",
    "database.server.name": "cockroachdb",
    "table.include.list": "public.products",
    "cockroachdb.changefeed.envelope": "enriched",
    "cockroachdb.changefeed.enriched.properties": "source,schema",
    "cockroachdb.changefeed.sink.type": "kafka",
    "cockroachdb.changefeed.sink.uri": "kafka://kafka-test:9092",
    "cockroachdb.changefeed.sink.topic.prefix": "",
    "cockroachdb.changefeed.sink.options": "",
    "cockroachdb.changefeed.resolved.interval": "10s",
    "cockroachdb.changefeed.include.updated": true,
    "cockroachdb.changefeed.include.diff": true,
    "cockroachdb.changefeed.cursor": "now",
    "cockroachdb.changefeed.batch.size": 1000,
    "cockroachdb.changefeed.poll.interval.ms": 100,
    "connection.timeout.ms": 30000,
    "connection.retry.delay.ms": 100,
    "connection.max.retries": 3
  }
}
```
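Before registering a configuration like the one above, it can be useful to check that the fields the connector cannot start without are present. The sketch below is a hypothetical pre-flight check, not part of the connector; the `REQUIRED` set is an assumption drawn from the example configuration:

```python
import json

# Hypothetical pre-flight check (not part of the connector): verify that a
# connector registration payload carries the fields the example configuration
# treats as required. The REQUIRED set is an assumption for illustration.
REQUIRED = {
    "connector.class",
    "database.hostname",
    "database.port",
    "database.user",
    "database.dbname",
    "database.server.name",
    "table.include.list",
}

def missing_fields(payload: str) -> set:
    """Return the required keys absent from the payload's "config" object."""
    config = json.loads(payload).get("config", {})
    return REQUIRED - config.keys()

partial = '{"name": "cockroachdb-connector", "config": {"connector.class": "io.debezium.connector.cockroachdb.CockroachDBConnector"}}'
print(sorted(missing_fields(partial)))
```

A check like this can run before POSTing the payload to the Kafka Connect REST endpoint, failing fast instead of waiting for the connector task to error out.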
| Option | Default | Description |
|---|---|---|
| `database.hostname` | - | CockroachDB host |
| `database.port` | 26257 | CockroachDB port |
| `database.user` | - | Database user |
| `database.password` | - | Database password |
| `database.dbname` | - | Database name |
| `database.server.name` | - | Unique server name for topic prefix |
| Option | Default | Description |
|---|---|---|
| `table.include.list` | - | Comma-separated list of tables to monitor |
| Option | Default | Description |
|---|---|---|
| `cockroachdb.changefeed.envelope` | enriched | Envelope type: `enriched`, `wrapped`, `bare` |
| `cockroachdb.changefeed.enriched.properties` | source | Comma-separated enriched properties |
| `cockroachdb.changefeed.sink.type` | kafka | Sink type (`kafka`, `webhook`, `pubsub`, etc.) |
| `cockroachdb.changefeed.sink.uri` | kafka://localhost:9092 | Sink URI (format depends on sink type) |
| `cockroachdb.changefeed.sink.topic.prefix` | "" | Optional prefix for sink topic names |
| `cockroachdb.changefeed.sink.options` | "" | Additional sink options in `key=value` format |
| `cockroachdb.changefeed.resolved.interval` | 10s | Resolved timestamp interval |
| `cockroachdb.changefeed.include.updated` | false | Include updated column information |
| `cockroachdb.changefeed.include.diff` | false | Include before/after diff information |
| `cockroachdb.changefeed.cursor` | now | Start cursor position |
| `cockroachdb.changefeed.batch.size` | 1000 | Batch size for changefeed processing |
| `cockroachdb.changefeed.poll.interval.ms` | 100 | Poll interval in milliseconds |
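The batch-size and poll-interval settings interact as in the sketch below. This is an illustrative loop under stated assumptions, not the connector's implementation: `fetch_batch` and `max_polls` are hypothetical stand-ins, and the real connector polls continuously inside its Kafka Connect task.

```python
import time

def poll_changefeed(fetch_batch, batch_size=1000, poll_interval_ms=100, max_polls=5):
    """Illustrative loop for the batch.size / poll.interval.ms settings.

    fetch_batch and max_polls are hypothetical stand-ins for this sketch.
    """
    events = []
    for _ in range(max_polls):
        batch = fetch_batch(batch_size)
        events.extend(batch)
        if len(batch) < batch_size:
            # Fewer events than a full batch: back off for the poll interval.
            time.sleep(poll_interval_ms / 1000.0)
    return events

# Stub source yielding 7 events in batches of at most 3.
pending = list(range(7))
def fetch(n):
    taken, remainder = pending[:n], pending[n:]
    pending[:] = remainder
    return taken

print(len(poll_changefeed(fetch, batch_size=3, poll_interval_ms=1)))  # → 7
```

The key point is the trade-off: a larger batch size amortizes per-poll overhead, while a shorter poll interval reduces latency when the feed is quiet.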
| Option | Default | Description |
|---|---|---|
| `connection.timeout.ms` | 30000 | Connection timeout in milliseconds |
| `connection.retry.delay.ms` | 100 | Delay between connection retries in milliseconds |
| `connection.max.retries` | 3 | Maximum number of connection retry attempts |
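The retry settings can be read as the policy sketched below. This is a minimal sketch, not the connector's code, and it assumes `connection.max.retries` counts retries after the first attempt (so up to `max_retries + 1` attempts in total); the actual counting semantics may differ.

```python
import time

def connect_with_retries(connect, max_retries=3, retry_delay_ms=100):
    """Sketch of the connection.max.retries / connection.retry.delay.ms policy.

    Assumption: max_retries counts retries after the initial attempt.
    """
    last_error = None
    for attempt in range(max_retries + 1):
        try:
            return connect()
        except ConnectionError as err:
            last_error = err
            if attempt < max_retries:
                time.sleep(retry_delay_ms / 1000.0)
    raise last_error

# Stub connection that succeeds on the third attempt.
attempts = []
def flaky():
    attempts.append(1)
    if len(attempts) < 3:
        raise ConnectionError("not yet")
    return "connected"

print(connect_with_retries(flaky, max_retries=3, retry_delay_ms=1))  # → connected
```

A fixed delay keeps the sketch short; production retry logic often adds exponential backoff and jitter on top of settings like these.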
Events are produced in Debezium's enriched envelope format. For details on the changefeed message format, see the CockroachDB changefeed messages documentation.
```json
{
  "before": null,
  "after": {
    "id": "...",
    "name": "...",
    "...": "..."
  },
  "source": {
    "changefeed_sink": "kafka",
    "cluster_id": "...",
    "database_name": "testdb",
    "table_name": "products",
    "...": "..."
  },
  "op": "c",
  "ts_ns": 1751407136710963868
}
```
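Downstream consumers typically dispatch on the envelope's `op` field, which in Debezium's convention is `c` (create), `u` (update), `d` (delete), or `r` (snapshot read). The helper below is a hypothetical consumer-side example of such dispatch, not part of the connector:

```python
# Hypothetical consumer-side dispatch keyed on the Debezium "op" field
# ("c" create, "u" update, "d" delete, "r" snapshot read).
def describe(event: dict) -> str:
    table = event.get("source", {}).get("table_name", "?")
    op_names = {"c": "insert", "u": "update", "d": "delete", "r": "snapshot read"}
    return f"{op_names.get(event['op'], event['op'])} on {table}"

event = {
    "before": None,
    "after": {"id": "1", "name": "widget"},
    "source": {"database_name": "testdb", "table_name": "products"},
    "op": "c",
    "ts_ns": 1751407136710963868,
}
print(describe(event))  # → insert on products
```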
Run all unit and integration tests:

```shell
./mvnw clean test
```

To run only integration tests:

```shell
./mvnw clean test -Dtest="*IT"
```
- Permission Errors: Ensure the `CHANGEFEED` and `SELECT` privileges are granted on all monitored tables.
- Rangefeed Disabled: Enable rangefeeds with `SET CLUSTER SETTING kv.rangefeed.enabled = true;`.
- No Events: Check the connector logs and the changefeed job status.
- Configuration Issues: Verify that all required changefeed parameters are properly configured.