
Rolling back to genesis when restoring from snapshot #1982

Open
@i-m-alexander

Description


OS:
Ubuntu 22

Versions:
13.6.0.5

Build/Install Method
cardano-db-sync-13.6.0.5-linux.tar.gz
Snapshots

Run method:
Kubernetes, containerd

Additional context
When restoring the database from the latest 13.6 snapshots, or even when restoring a pg_dump taken from another PostgreSQL instance, we always observe a rollback to genesis, starting from EpochNo 0.
The expected behavior is to sync as quickly as possible, but with this rollback, restoring from the latest snapshot becomes very time-consuming, especially when PostgreSQL runs on low-spec hardware and is reached over the network rather than locally alongside db-sync. Full log below, thanks:

[db-sync-node:Info:6] [2025-06-13 08:36:14.76 UTC] Version number: 13.6.0.5
[db-sync-node:Info:6] [2025-06-13 08:36:14.76 UTC] Git hash: cb61094c82254464fc9de777225e04d154d9c782
[db-sync-node:Info:6] [2025-06-13 08:36:14.76 UTC] Enviroment variable DbSyncAbortOnPanic: False
[db-sync-node:Info:6] [2025-06-13 08:36:14.76 UTC] SyncNodeParams {enpConfigFile = ConfigFile {unConfigFile = "/config/mainnet-config.yaml"}, enpSocketPath = SocketPath {unSocketPath = "/config/int-ada-node/node.socket"}, enpMaybeLedgerStateDir = Just (LedgerStateDir {unLedgerStateDir = "/config/state"}), enpMigrationDir = MigrationDir "/app/schema", enpPGPassSource = PGPassDefaultEnv, enpEpochDisabled = False, enpHasCache = True, enpSkipFix = False, enpOnlyFix = False, enpForceIndexes = False, enpHasInOut = True, enpSnEveryFollowing = 500, enpSnEveryLagging = 10000, enpMaybeRollback = Nothing}
[db-sync-node:Info:6] [2025-06-13 08:36:14.76 UTC] SyncOptions {soptEpochAndCacheEnabled = True, soptAbortOnInvalid = False, soptCache = True, soptSkipFix = False, soptOnlyFix = False, soptPruneConsumeMigration = PruneConsumeMigration {pcmPruneTxOut = False, pcmConsumedTxOut = False, pcmSkipTxIn = False}, soptInsertOptions = InsertOptions {ioTxCBOR = False, ioInOut = True, ioUseLedger = True, ioShelley = True, ioRewards = True, ioMultiAssets = True, ioMetadata = True, ioKeepMetadataNames = Nothing, ioPlutusExtra = True, ioOffChainPoolData = True, ioPoolStats = False, ioGov = True, ioRemoveJsonbFromSchema = False, ioTxOutTableType = TxOutCore}, snapshotEveryFollowing = 500, snapshotEveryLagging = 10000}
[db-sync-node:Info:6] [2025-06-13 08:36:14.77 UTC] Schema migration files validated
[db-sync-node:Info:6] [2025-06-13 08:36:16.07 UTC] Running database migrations in mode Initial
[db-sync-node:Info:6] [2025-06-13 08:36:16.07 UTC] Found maintenance_work_mem=2GB, max_parallel_maintenance_workers=4
[db-sync-node:Info:6] [2025-06-13 08:37:35.62 UTC] All migrations were executed
[db-sync-node:Info:6] [2025-06-13 08:37:35.62 UTC] New user indexes were not created. They may be created later if necessary.
[db-sync-node:Info:6] [2025-06-13 08:37:35.62 UTC] Using byron genesis file from: "/config/int-ada-node/mainnet-byron-genesis.json"
[db-sync-node:Info:6] [2025-06-13 08:37:35.62 UTC] Using shelley genesis file from: "/config/int-ada-node/mainnet-shelley-genesis.json"
[db-sync-node:Info:6] [2025-06-13 08:37:35.62 UTC] Using alonzo genesis file from: "/config/int-ada-node/mainnet-alonzo-genesis.json"
[db-sync-node:Info:6] [2025-06-13 08:37:36.96 UTC] NetworkMagic: 764824073
[db-sync-node:Info:6] [2025-06-13 08:37:37.68 UTC] runExtraMigrationsMaybe: PruneConsumeMigration {pcmPruneTxOut = False, pcmConsumedTxOut = False, pcmSkipTxIn = False}
[db-sync-node:Info:6] [2025-06-13 08:37:37.88 UTC] runExtraMigrations: No extra migration specified
[db-sync-node:Info:6] [2025-06-13 08:37:38.58 UTC] Initial genesis distribution present and correct
[db-sync-node:Info:6] [2025-06-13 08:37:38.58 UTC] Total genesis supply of Ada: 31112484745.000000
[db-sync-node:Info:6] [2025-06-13 08:37:38.88 UTC] Inserting Shelley Genesis distribution
[db-sync-node:Info:154] [2025-06-13 08:37:39.08 UTC] Running Offchain Pool fetch thread
[db-sync-node:Info:150] [2025-06-13 08:37:39.08 UTC] Running DB thread
[db-sync-node:Info:156] [2025-06-13 08:37:39.08 UTC] Running Offchain Vote Anchor fetch thread
[db-sync-node:Info:152] [2025-06-13 08:37:39.08 UTC] Connecting to node via "/config/int-ada-node/node.socket"
[db-sync-node.Subscription:Notice:158] [2025-06-13 08:37:39.08 UTC] Identity Starting Subscription Worker, valency 1
[db-sync-node.Subscription:Notice:159] [2025-06-13 08:37:39.08 UTC] Identity Connection Attempt Start, destination LocalAddress "/config/int-ada-node/node.socket"
[db-sync-node.Subscription:Notice:159] [2025-06-13 08:37:39.08 UTC] Identity Connection Attempt End, destination LocalAddress "/config/int-ada-node/node.socket" outcome: ConnectSuccessLast
[db-sync-node.Handshake:Info:159] [2025-06-13 08:37:39.08 UTC] WithMuxBearer (ConnectionId {localAddress = LocalAddress "", remoteAddress = LocalAddress "/config/node.socket"}) Send (ClientAgency TokPropose,MsgProposeVersions (fromList [(NodeToClientV_9,TInt 764824073),(NodeToClientV_10,TInt 764824073),(NodeToClientV_11,TInt 764824073),(NodeToClientV_12,TInt 764824073),(NodeToClientV_13,TInt 764824073),(NodeToClientV_14,TInt 764824073),(NodeToClientV_15,TList [TInt 764824073,TBool False]),(NodeToClientV_16,TList [TInt 764824073,TBool False]),(NodeToClientV_17,TList [TInt 764824073,TBool False]),(NodeToClientV_18,TList [TInt 764824073,TBool False])]))
[db-sync-node.Handshake:Info:159] [2025-06-13 08:37:39.09 UTC] WithMuxBearer (ConnectionId {localAddress = LocalAddress "", remoteAddress = LocalAddress "/config/node.socket"}) Recv (ServerAgency TokConfirm,MsgAcceptVersion NodeToClientV_18 (TList [TInt 764824073,TBool False]))
[db-sync-node.Mux:Info:159] [2025-06-13 08:37:39.09 UTC] Bearer on ConnectionId {localAddress = LocalAddress "", remoteAddress = LocalAddress "/config/node.socket"} event: Handshake Client end, duration 0.001517599s
[db-sync-node.Mux:Info:162] [2025-06-13 08:37:39.09 UTC] Bearer on ConnectionId {localAddress = LocalAddress "", remoteAddress = LocalAddress "/config/node.socket"} event: State: Mature
[db-sync-node.Mux:Info:162] [2025-06-13 08:37:39.09 UTC] Bearer on ConnectionId {localAddress = LocalAddress "", remoteAddress = LocalAddress "/config/node.socket"} event: Eagerly started (MiniProtocolNum 5) in InitiatorDir
[db-sync-node.Mux:Info:162] [2025-06-13 08:37:39.09 UTC] Bearer on ConnectionId {localAddress = LocalAddress "", remoteAddress = LocalAddress "/config/node.socket"} event: Eagerly started (MiniProtocolNum 6) in InitiatorDir
[db-sync-node.Mux:Info:162] [2025-06-13 08:37:39.09 UTC] Bearer on ConnectionId {localAddress = LocalAddress "", remoteAddress = LocalAddress "/config/node.socket"} event: Eagerly started (MiniProtocolNum 7) in InitiatorDir
[db-sync-node.Mux:Info:162] [2025-06-13 08:37:39.09 UTC] Bearer on ConnectionId {localAddress = LocalAddress "", remoteAddress = LocalAddress "/config/node.socket"} event: Eagerly started (MiniProtocolNum 9) in InitiatorDir
[db-sync-node.Subscription:Notice:158] [2025-06-13 08:37:39.11 UTC] Identity Required subscriptions started
[db-sync-node:Info:165] [2025-06-13 08:37:39.19 UTC] Starting the fixing Plutus Script procedure. This may take a couple minutes on mainnet if there are wrong values. You can skip it using --skip-plutus-script-fix. It will fix Script with wrong bytes. See more in Issue #1214 and #1348. This procedure makes resyncing unnecessary.
[db-sync-node:Info:165] [2025-06-13 08:37:39.19 UTC] Trying to find Script with wrong bytes
[db-sync-node:Info:165] [2025-06-13 08:37:39.43 UTC] There are 131745 Script. Need to scan them all.
[db-sync-node:Info:165] [2025-06-13 08:37:56.74 UTC] Found 0 Script with mismatch between bytes and hash.
[db-sync-node.Mux:Notice:162] [2025-06-13 08:37:56.84 UTC] Bearer on ConnectionId {localAddress = LocalAddress "", remoteAddress = LocalAddress "/config/node.socket"} event: Miniprotocol (MiniProtocolNum 5) InitiatorDir terminated cleanly
[db-sync-node.Subscription:Notice:158] [2025-06-13 08:37:57.85 UTC] Identity Restarting Subscription after 18.762189297s desired valency 1 current valency 0
[db-sync-node.Subscription:Notice:158] [2025-06-13 08:37:57.85 UTC] Identity Starting Subscription Worker, valency 1
[db-sync-node.Subscription:Notice:171] [2025-06-13 08:37:57.85 UTC] Identity Connection Attempt Start, destination LocalAddress "/config/int-ada-node/node.socket"
[db-sync-node.Subscription:Notice:171] [2025-06-13 08:37:57.85 UTC] Identity Connection Attempt End, destination LocalAddress "/config/int-ada-node/node.socket" outcome: ConnectSuccessLast
[db-sync-node.Handshake:Info:171] [2025-06-13 08:37:57.85 UTC] WithMuxBearer (ConnectionId {localAddress = LocalAddress "", remoteAddress = LocalAddress "/config/node.socket"}) Send (ClientAgency TokPropose,MsgProposeVersions (fromList [(NodeToClientV_9,TInt 764824073),(NodeToClientV_10,TInt 764824073),(NodeToClientV_11,TInt 764824073),(NodeToClientV_12,TInt 764824073),(NodeToClientV_13,TInt 764824073),(NodeToClientV_14,TInt 764824073),(NodeToClientV_15,TList [TInt 764824073,TBool False]),(NodeToClientV_16,TList [TInt 764824073,TBool False]),(NodeToClientV_17,TList [TInt 764824073,TBool False]),(NodeToClientV_18,TList [TInt 764824073,TBool False])]))
[db-sync-node.Handshake:Info:171] [2025-06-13 08:37:57.85 UTC] WithMuxBearer (ConnectionId {localAddress = LocalAddress "", remoteAddress = LocalAddress "/config/node.socket"}) Recv (ServerAgency TokConfirm,MsgAcceptVersion NodeToClientV_18 (TList [TInt 764824073,TBool False]))
[db-sync-node.Mux:Info:171] [2025-06-13 08:37:57.85 UTC] Bearer on ConnectionId {localAddress = LocalAddress "", remoteAddress = LocalAddress "/config/node.socket"} event: Handshake Client end, duration 0.001510348s
[db-sync-node.Mux:Info:172] [2025-06-13 08:37:57.85 UTC] Bearer on ConnectionId {localAddress = LocalAddress "", remoteAddress = LocalAddress "/config/node.socket"} event: State: Mature
[db-sync-node.Mux:Info:172] [2025-06-13 08:37:57.85 UTC] Bearer on ConnectionId {localAddress = LocalAddress "", remoteAddress = LocalAddress "/config/node.socket"} event: Eagerly started (MiniProtocolNum 5) in InitiatorDir
[db-sync-node.Mux:Info:172] [2025-06-13 08:37:57.85 UTC] Bearer on ConnectionId {localAddress = LocalAddress "", remoteAddress = LocalAddress "/config/node.socket"} event: Eagerly started (MiniProtocolNum 6) in InitiatorDir
[db-sync-node:Info:175] [2025-06-13 08:37:57.85 UTC] Starting ChainSync client
[db-sync-node:Info:175] [2025-06-13 08:37:57.85 UTC] Setting ConsistencyLevel to Unchecked
[db-sync-node.Mux:Info:172] [2025-06-13 08:37:57.85 UTC] Bearer on ConnectionId {localAddress = LocalAddress "", remoteAddress = LocalAddress "/config/node.socket"} event: Eagerly started (MiniProtocolNum 7) in InitiatorDir
[db-sync-node.Mux:Info:172] [2025-06-13 08:37:57.85 UTC] Bearer on ConnectionId {localAddress = LocalAddress "", remoteAddress = LocalAddress "/config/node.socket"} event: Eagerly started (MiniProtocolNum 9) in InitiatorDir
[db-sync-node:Info:150] [2025-06-13 08:37:57.85 UTC] Chain Sync client thread has restarted
[db-sync-node.Subscription:Notice:158] [2025-06-13 08:37:57.87 UTC] Identity Required subscriptions started
[db-sync-node:Info:150] [2025-06-13 08:37:58.45 UTC] Database tip is at slot 158112119, block 11984612
[db-sync-node:Info:175] [2025-06-13 08:37:58.45 UTC] Suggesting intersection points from memory: [] and from disk: []
[db-sync-node:Info:150] [2025-06-13 08:38:00.19 UTC] Delaying delete of 11984612 while rolling back to genesis. Applying blocks until a new block is found. The node is currently at Tip (SlotNo 158237571) 479f7bac1513f476a3e22c26c19487bd2453a4710bb762f88920cdf9f27bbbd4 (BlockNo 11990758)
[db-sync-node:Info:150] [2025-06-13 08:38:00.42 UTC] Found snapshot file for genesis
[db-sync-node:Info:150] [2025-06-13 08:38:00.42 UTC] Setting ConsistencyLevel to DBAheadOfLedger
[db-sync-node:Info:150] [2025-06-13 08:38:01.52 UTC] Reached EpochNo 0
[db-sync-node:Info:150] [2025-06-13 09:17:57.90 UTC] Reached EpochNo 1
[db-sync-node:Info:150] [2025-06-13 09:57:53.77 UTC] Reached EpochNo 2
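
For reference, a minimal sketch of the restore flow that leads to this state (snapshot file name, database name, and paths are placeholders, not the exact commands used in our cluster):

# Restore a published db-sync snapshot using the helper script shipped with the release
# (assumes PGPASSFILE points at a valid pgpass entry for the target instance):
PGPASSFILE=/config/pgpass scripts/postgresql-setup.sh --restore-snapshot \
    db-sync-snapshot-13.6.tgz /config/state

# Alternatively, copy an existing instance with pg_dump / pg_restore:
pg_dump --format=custom --dbname=cexplorer --file=cexplorer.dump
pg_restore --clean --if-exists --dbname=cexplorer cexplorer.dump

# Confirm the database tip after the restore (standard db-sync schema):
psql cexplorer -c "SELECT block_no, slot_no, epoch_no FROM block ORDER BY id DESC LIMIT 1;"

In both cases the restored database reports the expected tip (block 11984612 above), yet db-sync still starts applying blocks from EpochNo 0.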
