bpf: tcp: Exactly-once socket iteration #5485
base: bpf-next_base
Conversation
Prepare for the next patch which needs to be able to choose either GFP_USER or GFP_NOWAIT for calls to bpf_iter_tcp_realloc_batch().

Signed-off-by: Jordan Rife <[email protected]>
Reviewed-by: Kuniyuki Iwashima <[email protected]>
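A minimal sketch of what threading a gfp_t argument through the helper might look like, assuming the reallocation logic of the existing bpf_iter_tcp_realloc_batch(); the body is simplified relative to the real function:

```c
/* Sketch: let callers pick GFP_USER (may sleep) or GFP_NOWAIT (safe
 * under the bucket spin lock). Simplified; not the exact upstream diff.
 */
static int bpf_iter_tcp_realloc_batch(struct bpf_tcp_iter_state *iter,
				      unsigned int new_batch_sz, gfp_t flags)
{
	struct sock **new_batch;

	new_batch = kvmalloc(sizeof(*new_batch) * new_batch_sz,
			     flags | __GFP_NOWARN);
	if (!new_batch)
		return -ENOMEM;

	bpf_iter_tcp_put_batch(iter);	/* release refs held by old batch */
	kvfree(iter->batch);
	iter->batch = new_batch;
	iter->max_sk = new_batch_sz;

	return 0;
}
```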
Require that iter->batch always contains a full bucket snapshot. This invariant is important to avoid skipping or repeating sockets during iteration when combined with the next few patches. Before, there were two cases where a call to bpf_iter_tcp_batch() may only capture part of a bucket:

1. When bpf_iter_tcp_realloc_batch() returns -ENOMEM.
2. When more sockets are added to the bucket while calling bpf_iter_tcp_realloc_batch(), making the updated batch size insufficient.

In cases where the batch size only covers part of a bucket, it is possible to forget which sockets were already visited, especially if we have to process a bucket in more than two batches. This forces us to choose between repeating or skipping sockets, so don't allow this:

1. Stop iteration and propagate -ENOMEM up to userspace if reallocation fails instead of continuing with a partial batch.
2. Try bpf_iter_tcp_realloc_batch() with GFP_USER just as before, but if we still aren't able to capture the full bucket, call bpf_iter_tcp_realloc_batch() again while holding the bucket lock to guarantee the bucket does not change. On the second attempt use GFP_NOWAIT since we hold onto the spin lock.

I did some manual testing to exercise the code paths where GFP_NOWAIT is used and where ERR_PTR(err) is returned. I used the realloc test cases included later in this series to trigger a scenario where a realloc happens inside bpf_iter_tcp_batch() and made a small code tweak to force the first realloc attempt to allocate a too-small batch, thus requiring another attempt with GFP_NOWAIT. Some printks showed both reallocs with the tests passing:

  May 09 18:18:55 crow kernel: resize batch TCP_SEQ_STATE_LISTENING
  May 09 18:18:55 crow kernel: again GFP_USER
  May 09 18:18:55 crow kernel: resize batch TCP_SEQ_STATE_LISTENING
  May 09 18:18:55 crow kernel: again GFP_NOWAIT
  May 09 18:18:57 crow kernel: resize batch TCP_SEQ_STATE_ESTABLISHED
  May 09 18:18:57 crow kernel: again GFP_USER
  May 09 18:18:57 crow kernel: resize batch TCP_SEQ_STATE_ESTABLISHED
  May 09 18:18:57 crow kernel: again GFP_NOWAIT

With this setup, I also forced each of the bpf_iter_tcp_realloc_batch() calls to return -ENOMEM to ensure that iteration ends and that the read() in userspace fails.

Signed-off-by: Jordan Rife <[email protected]>
Reviewed-by: Kuniyuki Iwashima <[email protected]>
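The retry logic described above might look roughly like the following sketch; count_bucket() is a hypothetical stand-in for walking the bucket's socket list, and locking is simplified relative to the real bpf_iter_tcp_batch():

```c
static int bpf_iter_tcp_ensure_full_batch(struct bpf_tcp_iter_state *iter,
					  struct inet_listen_hashbucket *ilb,
					  unsigned int sks_in_bucket)
{
	int err;

	if (sks_in_bucket <= iter->max_sk)
		return 0;

	/* First attempt: drop the lock so a blocking GFP_USER
	 * allocation is safe, with headroom in case the bucket grows.
	 */
	spin_unlock(&ilb->lock);
	err = bpf_iter_tcp_realloc_batch(iter, sks_in_bucket * 3 / 2,
					 GFP_USER);
	spin_lock(&ilb->lock);
	if (err)
		return err;	/* -ENOMEM propagates up to read() */

	/* The bucket may have grown while the lock was dropped. Resize
	 * again without releasing the lock so the bucket cannot change;
	 * GFP_NOWAIT because we cannot sleep under the spin lock.
	 */
	sks_in_bucket = count_bucket(ilb);	/* hypothetical helper */
	if (sks_in_bucket > iter->max_sk)
		err = bpf_iter_tcp_realloc_batch(iter, sks_in_bucket,
						 GFP_NOWAIT);

	return err;
}
```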
Get rid of the st_bucket_done field to simplify TCP iterator state and logic. Before, st_bucket_done could be false if bpf_iter_tcp_batch() returned a partial batch; however, with the last patch ("bpf: tcp: Make sure iter->batch always contains a full bucket snapshot"), st_bucket_done == true is equivalent to iter->cur_sk == iter->end_sk.

Signed-off-by: Jordan Rife <[email protected]>
Reviewed-by: Kuniyuki Iwashima <[email protected]>
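In other words, once every batch is a full bucket snapshot, "done with this bucket" collapses into a cursor comparison; a flag-free check could look like this illustrative helper (not the exact upstream diff):

```c
/* Sketch: replaces the st_bucket_done flag. The batch is exhausted
 * exactly when the cursor has reached the end of the snapshot.
 */
static bool bpf_iter_tcp_bucket_done(const struct bpf_tcp_iter_state *iter)
{
	return iter->cur_sk == iter->end_sk;
}
```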
Prepare for the next patch that tracks cookies between iterations by converting struct sock **batch to union bpf_tcp_iter_batch_item *batch inside struct bpf_tcp_iter_state.

Signed-off-by: Jordan Rife <[email protected]>
Reviewed-by: Kuniyuki Iwashima <[email protected]>
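A sketch of the converted state, assuming the field layout implied by the surrounding patches (exact member names may differ):

```c
/* Each batch slot either points at a held socket or, between reads,
 * remembers that socket's cookie (used by the next patch).
 */
union bpf_tcp_iter_batch_item {
	struct sock *sk;
	__u64 cookie;
};

struct bpf_tcp_iter_state {
	struct tcp_iter_state state;
	unsigned int cur_sk;
	unsigned int end_sk;
	unsigned int max_sk;
	union bpf_tcp_iter_batch_item *batch;
};
```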
Replace the offset-based approach for tracking progress through a bucket in the TCP table with one based on socket cookies. Remember the cookies of unprocessed sockets from the last batch and use this list to pick up where we left off or, in the case that the next socket disappears between reads, find the first socket after that point that still exists in the bucket and resume from there.

This approach guarantees that all sockets that existed when iteration began and continue to exist throughout will be visited exactly once. Sockets that are added to the table during iteration may or may not be seen, but if they are, they will be seen exactly once.

Signed-off-by: Jordan Rife <[email protected]>
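A simplified sketch of the resume logic: scan the remembered cookies in order and resume at the first one whose socket still exists in the bucket. resume_from_cookies() is illustrative, not the exact upstream helper:

```c
static struct sock *resume_from_cookies(struct hlist_nulls_head *head,
					union bpf_tcp_iter_batch_item *cookies,
					unsigned int ncookies)
{
	struct hlist_nulls_node *node;
	struct sock *sk;
	unsigned int i;

	for (i = 0; i < ncookies; i++) {
		sk_nulls_for_each(sk, node, head) {
			/* The first remembered socket that is still in
			 * the bucket is where iteration picks back up.
			 */
			if (__sock_gen_cookie(sk) == cookies[i].cookie)
				return sk;
		}
	}

	/* Every remembered socket is gone; the bucket is finished. */
	return NULL;
}
```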
Replicate the set of test cases used for UDP socket iterators to test similar scenarios for TCP listening sockets.

Signed-off-by: Jordan Rife <[email protected]>
Prepare to test TCP socket iteration over both listening and established sockets by allowing the BPF iterator programs to skip the port check.

Signed-off-by: Jordan Rife <[email protected]>
Add parentheses around the loopback address check to fix the logic, and make the socket state filter configurable for the TCP socket iterators. Iterators can skip the socket state check by setting ss to 0.

Signed-off-by: Jordan Rife <[email protected]>
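A selftest-style sketch of the configurable state filter; program and variable names are illustrative, and the loopback address check from the real test program is omitted for brevity:

```c
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char _license[] SEC("license") = "GPL";

/* Set through the skeleton's rodata before attaching; 0 disables the
 * socket state check, as described above.
 */
const volatile int ss;

SEC("iter/tcp")
int iter_tcp_states(struct bpf_iter__tcp *ctx)
{
	struct sock_common *sk = ctx->sk_common;

	if (!sk)
		return 0;

	/* Configurable filter: only applied when ss != 0. */
	if (ss && sk->skc_state != ss)
		return 0;

	BPF_SEQ_PRINTF(ctx->meta->seq, "%d\n", sk->skc_state);
	return 0;
}
```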
Prepare for bucket resume tests for established TCP sockets by making the number of ehash buckets configurable. Subsequent patches force all established sockets into the same bucket by setting ehash_buckets to one.

Signed-off-by: Jordan Rife <[email protected]>
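Assuming the test uses the tcp_child_ehash_entries sysctl (which sizes the ehash table of subsequently created child network namespaces), the setup might look like this sketch; setup_one_bucket_netns() is a hypothetical helper:

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <sched.h>
#include <string.h>
#include <unistd.h>

static int write_sysctl(const char *path, const char *val)
{
	int fd = open(path, O_WRONLY);
	ssize_t n;

	if (fd < 0)
		return -1;
	n = write(fd, val, strlen(val));
	close(fd);
	return n == (ssize_t)strlen(val) ? 0 : -1;
}

static int setup_one_bucket_netns(void)
{
	/* Written in the parent netns before unsharing so the child
	 * inherits a minimal ehash table (the kernel may round or
	 * clamp the value).
	 */
	if (write_sysctl("/proc/sys/net/ipv4/tcp_child_ehash_entries", "1"))
		return -1;

	return unshare(CLONE_NEWNET);
}
```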
Prepare for bucket resume tests for established TCP sockets by creating established sockets. Collect socket fds from both the connect() and accept() sides and pass them to test cases.

Signed-off-by: Jordan Rife <[email protected]>
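A sketch of the socket setup (error handling trimmed; the helper name is illustrative): each loop iteration produces one established pair, and both fds are handed to the test case:

```c
#include <sys/socket.h>

/* Create n established connections against listen_fd, keeping both
 * the client (connect) and server (accept) fds for the test cases.
 */
static int make_established(int listen_fd, int n, int *connect_fds,
			    int *accept_fds)
{
	struct sockaddr_storage addr;
	socklen_t len = sizeof(addr);
	int i;

	if (getsockname(listen_fd, (struct sockaddr *)&addr, &len))
		return -1;

	for (i = 0; i < n; i++) {
		connect_fds[i] = socket(addr.ss_family, SOCK_STREAM, 0);
		if (connect_fds[i] < 0 ||
		    connect(connect_fds[i], (struct sockaddr *)&addr, len))
			return -1;
		accept_fds[i] = accept(listen_fd, NULL, NULL);
		if (accept_fds[i] < 0)
			return -1;
	}

	return 0;
}
```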
Prepare for bucket resume tests for established TCP sockets by creating a program to immediately destroy and remove sockets from the TCP ehash table, since the timing of socket removal after close() is not deterministic.

Signed-off-by: Jordan Rife <[email protected]>
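A sketch of such a destroy-on-sight iterator: running a read() over it synchronously unhashes the sockets it visits, unlike close(). bpf_sock_destroy() is the existing kfunc available to tcp/udp iterators; any filtering (e.g. by port) is omitted here:

```c
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

char _license[] SEC("license") = "GPL";

extern int bpf_sock_destroy(struct sock_common *sk) __ksym;

SEC("iter/tcp")
int destroy_sockets(struct bpf_iter__tcp *ctx)
{
	struct sock_common *sk = ctx->sk_common;

	if (!sk)
		return 0;

	/* Abort the connection and remove the socket from the ehash
	 * table immediately, while the iterator holds the bucket lock.
	 */
	bpf_sock_destroy(sk);
	return 0;
}
```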
Replicate the set of test cases used for UDP socket iterators to test similar scenarios for TCP established sockets.

Signed-off-by: Jordan Rife <[email protected]>
Pull request for series with
subject: bpf: tcp: Exactly-once socket iteration
version: 2
url: https://patchwork.kernel.org/project/netdevbpf/list/?series=973515