Description
archiver data pruning
What? Why? Who?
After we migrate the transaction persistence layer to Elasticsearch for long-term archival, we need to regularly prune the archiver's data.
Acceptance Criteria
- we need to prune once every x
- pruning ticks (this is the easier part because we have the tick intervals)
- pruning transactions (we need to iterate over all ticks in the pruned epoch, get the transaction ids, and delete the transactions for each tick; see the sketch after this list)
- pruning the identity indexes will not be done because they will no longer exist in the new archive implementation (migrated to Elasticsearch)
- when we prune an epoch we first need to create a backup, which we will delete at the next pruning (probably needs to be done in another task/issue); see the backup sketch after this list
- import backed-up data with the same cryptographic verification as we perform on the nodes' data (probably needs to be done in another task/issue)
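
A minimal Go sketch of the per-epoch pruning loop described above. The `ArchiverStore` interface and all of its method names are hypothetical stand-ins for whatever the archiver's storage layer actually exposes; a real implementation would probably batch the deletes rather than issue them one by one.

```go
package pruning

import "fmt"

// ArchiverStore is a hypothetical stand-in for the archiver's storage layer.
// The method names are assumptions for this sketch, not the real API.
type ArchiverStore interface {
	// TicksForEpoch lists all ticks stored for an epoch, derived from the
	// stored tick intervals.
	TicksForEpoch(epoch uint32) ([]uint64, error)
	// TransactionIDsForTick lists the transaction ids recorded for a tick.
	TransactionIDsForTick(tick uint64) ([]string, error)
	// DeleteTransaction removes a single transaction by id.
	DeleteTransaction(id string) error
	// DeleteTick removes the tick data itself.
	DeleteTick(tick uint64) error
}

// PruneEpoch iterates over all ticks of the epoch, deletes the transactions
// of each tick, and then deletes the tick. Transactions are removed before
// their tick because the tick record is what holds the transaction ids.
func PruneEpoch(store ArchiverStore, epoch uint32) error {
	ticks, err := store.TicksForEpoch(epoch)
	if err != nil {
		return fmt.Errorf("listing ticks for epoch %d: %w", epoch, err)
	}
	for _, tick := range ticks {
		txIDs, err := store.TransactionIDsForTick(tick)
		if err != nil {
			return fmt.Errorf("listing transactions for tick %d: %w", tick, err)
		}
		for _, id := range txIDs {
			if err := store.DeleteTransaction(id); err != nil {
				return fmt.Errorf("deleting transaction %s: %w", id, err)
			}
		}
		if err := store.DeleteTick(tick); err != nil {
			return fmt.Errorf("deleting tick %d: %w", tick, err)
		}
	}
	return nil
}
```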
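
And a sketch of the backup-before-prune flow flagged above for a separate task, reusing `PruneEpoch` and the hypothetical `ArchiverStore` from the previous block (same package). `EpochBackupper` and its methods are likewise assumptions; the actual backup mechanism is not designed yet.

```go
// EpochBackupper is a hypothetical interface for exporting an epoch's data
// to external storage before pruning; the real mechanism (file dump, object
// storage, etc.) is to be decided in the follow-up task.
type EpochBackupper interface {
	BackupEpoch(epoch uint32) error
	DeleteBackup(epoch uint32) error
	HasBackup(epoch uint32) (bool, error)
}

// PruneEpochWithBackup backs up the epoch, prunes it, and only afterwards
// drops the backup made at the previous pruning, so one backup is always kept.
func PruneEpochWithBackup(store ArchiverStore, backups EpochBackupper, epoch, previousEpoch uint32) error {
	if err := backups.BackupEpoch(epoch); err != nil {
		return fmt.Errorf("backing up epoch %d: %w", epoch, err)
	}
	if err := PruneEpoch(store, epoch); err != nil {
		return fmt.Errorf("pruning epoch %d: %w", epoch, err)
	}
	// Remove the previous backup only after the new backup and the prune
	// have both succeeded.
	ok, err := backups.HasBackup(previousEpoch)
	if err != nil {
		return fmt.Errorf("checking backup for epoch %d: %w", previousEpoch, err)
	}
	if ok {
		if err := backups.DeleteBackup(previousEpoch); err != nil {
			return fmt.Errorf("deleting backup for epoch %d: %w", previousEpoch, err)
		}
	}
	return nil
}
```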
Metadata
Assignees: none
Labels: none
Status: Backlog