Recovery invariant related to old generation tlogs and remote tlogs #12576
base: main
Conversation
Result of foundationdb-pr-clang-ide on Linux RHEL 9
Result of foundationdb-pr-macos-m1 on macOS Ventura 13.x
Result of foundationdb-pr-clang on Linux RHEL 9
Result of foundationdb-pr on Linux RHEL 9
Result of foundationdb-pr-macos on macOS Ventura 13.x
Result of foundationdb-pr-clang-arm on Linux CentOS 7
Strange, the 200K run didn't show any failures, but CI is showing failures where the assertion fires. I'll look into it. Turning the PR to draft for now.
Result of foundationdb-pr-cluster-tests on Linux RHEL 9
jzhou77 left a comment:
LGTM. I think some lines can be removed.
```cpp
if (!(allLogs || !newState.oldTLogData.empty())) {
	TraceEvent(SevError, "FooRecoveryInvariant1")
	    .detail("AllLogs", allLogs)
	    .detail("OldTLogSize", newState.oldTLogData.size())
	    .detail("NewTLogSize", newState.tLogs.size());
}
```
You have ASSERT_WE_THINK below, so these lines are not needed.
@jzhou77 Actually, this trace was not there originally; I was experimenting by adding logging after I found failures (that's why I turned this PR to draft).
Have a look at https://github.com/apple/foundationdb/pull/12577/files — this is the minimal code needed to reproduce the issue, and tests fail pretty quickly with it.
Example failure: `fdbserver -r simulation -f /root/src/foundationdb/tests/fast/ConfigIncrement.toml --buggify on --seed 2729610066`.
I can see that at accepting_commits, `allLogs` is false and the old tlog count is 0. I think that's because this is the first recovery of the cluster (a brand-new cluster). But in that case we set `RecoveryCompleteWrittenToCoreState` to true while we are not yet at fully_recovered. Sev40 below. I spot-checked more failures, and in every case so far it is the first recovery of the cluster that breaks the invariant.
Sev40:
<Event Severity="40" ErrorKind="Unset" Time="10.031027" DateTime="2025-11-22T06:19:17Z" Type="FooRecoveryInvariant1" Machine="[abcd::2:0:1:0]:1" ID="0000000000000000" AllLogs="0" OldTLogSize="0" NewTLogSize="2" FinalUpdate="0" WillBeFullyRecovered="0" CurrRecoveryState="6" RecoveryCompleteWrittenToCoreStateWillBeSetToTrue="1" ThreadID="9766125937575351112" Backtrace="/usr/local/bin/llvm-addr2line -e /root/cnd_build_output/bin/fdbserver -p -C -f -i 0x556d7ef 0x556dae9 0x5567cd4 0x2227f94 0x2227df7 0x2226f90 0x2269ed8 0x226afc9 0x226ec83 0x248ea18 0x248e61e 0x2490313 0x248aeb8 0x248b243 0x248a631 0x248a933 0x1ed45c8 0x1ed423b 0x1ef6ac8 0x2489fc8 0x2477158 0x2476bb2 0x246f528 0x246f321 0x246bfde 0x246c858 0x246c622 0x246fe68 0x246f6d2 0x2470a78 0x2470140 0x247bd48 0x247bbba 0x53141c4 0x5313abc 0x1d84af8 0x541b8b7 0x541b3e0 0x3204d2a 0x7fad024745d0" LogGroup="default" Roles="CC,CD,CP,GP,SS,TL" />
Adding this based on the discussion here: #12558 (comment).
I think the reverse implication (below) should also hold; it can be added in a separate PR. Initially I thought there could be a window when it would not hold, i.e., between the time the remote tlogs are recruited and caught up and the time the old-generation tlog state is purged. But at the point where I am adding these invariants, it may hold, since we would have cleared the old-generation tlog state by then.
200K correctness runs (two 100K chunks):
20251121-205153-praza-recovery-invariant-it-ecbd1ddae61fddf0 compressed=True data_size=40239684 duration=5534116 ended=100000 fail_fast=10 max_runs=100000 pass=100000 priority=100 remaining=0 runtime=2:35:19 sanity=False started=100000 stopped=20251121-232712 submitted=20251121-205153 timeout=5400 username=praza-recovery-invariant-iter2-347ab56f88c70ddf5968cb1f42517b6fd03a3430
20251121-205156-praza-recovery-invariant-it-ecbd1ddae61fddf0 compressed=True data_size=40239684 duration=5448791 ended=100000 fail_fast=10 max_runs=100000 pass=100000 priority=100 remaining=0 runtime=2:36:28 sanity=False started=100000 stopped=20251121-232824 submitted=20251121-205156 timeout=5400 username=praza-recovery-invariant-iter2-347ab56f88c70ddf5968cb1f42517b6fd03a3430
Code-Reviewer Section
The general pull request guidelines can be found here.
Please check each of the following things and check all boxes before accepting a PR.
For Release-Branches
If this PR is made against a release-branch, please also check the following:
release-branch or main if this is the youngest branch)