Draft POC: Push batch with filter without copy #8103
@@ -164,6 +164,44 @@ fn multiple_arrays(data_type: &DataType) -> bool {
    }
}

/// A public, lightweight plan describing how to apply a Boolean filter.
///
/// Used for zero-copy filtering externally (e.g., in `BatchCoalescer`):
/// - `None`: no rows selected
/// - `All`: all rows selected
/// - `Slices`: list of continuous ranges `[start, end)` (can be used directly for `copy_rows`)
/// - `Indices`: list of single-row indices (can be merged into continuous ranges externally)
#[derive(Debug, Clone)]
pub enum FilterPlan {
    None,
    All,
    Slices(Vec<(usize, usize)>),
    Indices(Vec<usize>),
}

/// Compute a filtering plan based on `FilterBuilder::optimize`.
///
/// This function calls `FilterBuilder::new(filter).optimize()`, then
/// converts the optimized `IterationStrategy` into the above `FilterPlan`
/// to enable zero-copy execution externally.
pub fn compute_filter_plan(filter: &BooleanArray) -> FilterPlan {
    let fb = FilterBuilder::new(filter);
    let pred = fb.build();

    match pred.strategy {
        IterationStrategy::None => FilterPlan::None,
        IterationStrategy::All => FilterPlan::All,
        IterationStrategy::Slices(s) => FilterPlan::Slices(s), // moved directly
        IterationStrategy::Indices(i) => FilterPlan::Indices(i), // moved directly
        IterationStrategy::SlicesIterator => {
            FilterPlan::Slices(SlicesIterator::new(&pred.filter).collect())
        }
        IterationStrategy::IndexIterator => {
            FilterPlan::Indices(IndexIterator::new(&pred.filter, pred.count).collect())
        }
    }
}

Inline review comment on the `SlicesIterator` arm, from alamb: avoiding this allocation will likely help.

Reply: Thank you @alamb, I tried that, but it did not improve the regression. `compute_filter_plan` costs almost nothing in the benchmark profile.
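To make the shape of the new API concrete, here is a self-contained sketch of the `FilterPlan` idea over a plain `&[bool]` mask. This is not the arrow-rs implementation (which works on `BooleanArray` and reuses `FilterBuilder` / `SlicesIterator`); it only illustrates how a Boolean filter collapses into `None`, `All`, or a list of contiguous `[start, end)` ranges.

```rust
// Hypothetical, self-contained sketch of the FilterPlan idea over a plain
// `&[bool]` mask. The real PR operates on arrow's `BooleanArray` and derives
// the plan from the optimized `IterationStrategy`.

#[derive(Debug, Clone, PartialEq)]
pub enum FilterPlan {
    None,
    All,
    Slices(Vec<(usize, usize)>), // half-open ranges [start, end)
}

pub fn compute_filter_plan(mask: &[bool]) -> FilterPlan {
    let count = mask.iter().filter(|&&b| b).count();
    if count == 0 {
        return FilterPlan::None;
    }
    if count == mask.len() {
        return FilterPlan::All;
    }
    // Collect maximal runs of consecutive `true` values as [start, end) ranges.
    let mut slices = Vec::new();
    let mut start = None;
    for (i, &b) in mask.iter().enumerate() {
        match (b, start) {
            (true, None) => start = Some(i),
            (false, Some(s)) => {
                slices.push((s, i));
                start = None;
            }
            _ => {}
        }
    }
    if let Some(s) = start {
        slices.push((s, mask.len()));
    }
    FilterPlan::Slices(slices)
}

fn main() {
    let mask = [true, true, false, false, true, false, true, true, true];
    // Three runs of selected rows become three copyable ranges.
    println!("{:?}", compute_filter_plan(&mask));
}
```

Each `(start, end)` range can then drive one bulk copy, which is the zero-copy-friendly shape `copy_rows` wants.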
/// Returns a filtered [RecordBatch] where the corresponding elements of
/// `predicate` are true.
///
alamb: I suspect that to really make this fast it will need specialized implementations for the different array types (not using mutable array data). I think we could yoink / reuse some of the existing code from the filter kernel: arrow-rs/arrow-select/src/filter.rs, line 322 in 04f217b.
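The point about specialized per-type implementations can be illustrated with a toy example (names and the `&[T]` buffer stand-in are mine, not the filter kernel's): for a primitive array, each selected range becomes one bulk `extend_from_slice` rather than a per-row copy through generic mutable array data.

```rust
// Sketch of why contiguous ranges matter for primitive arrays: each selected
// range turns into one memcpy-like bulk copy. `values` stands in for a
// primitive array's data buffer; this is not the arrow-rs filter kernel.

fn copy_ranges<T: Copy>(values: &[T], ranges: &[(usize, usize)]) -> Vec<T> {
    let total: usize = ranges.iter().map(|(s, e)| e - s).sum();
    let mut out = Vec::with_capacity(total);
    for &(start, end) in ranges {
        // One bulk copy per range instead of one copy per selected row.
        out.extend_from_slice(&values[start..end]);
    }
    out
}

fn main() {
    // Keep rows 0-1 and 3-4 of a five-row column.
    let filtered = copy_ranges(&[10, 20, 30, 40, 50], &[(0, 2), (3, 5)]);
    println!("{:?}", filtered);
}
```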
Thank you @alamb for the review and the good suggestion.

Profiling shows the hot path is `copy_rows`, especially the null-handling code.

This is likely because we call `copy_rows` more times here, whereas the original logic just filters and concatenates into a batch before `push_batch`, so `copy_rows` works on bigger batches and stays SIMD-friendly. That is why the original path is faster in most cases.

Maybe we should only switch to this logic when selectivity is very low (e.g. < 0.005), but it is hard for me to decide.
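Two follow-ups from this discussion can be sketched together (function names are mine, not from the PR): merging sorted single-row `Indices` into contiguous ranges, as the `FilterPlan` docs suggest, and gating the per-range copy path on selectivity. The 0.005 threshold below is just the value floated above, not a tuned constant.

```rust
// Hypothetical sketch: merge sorted row indices into maximal half-open
// ranges [start, end), and only take the zero-copy path for highly
// selective filters. Names and the 0.005 cutoff are illustrative.

/// Merge sorted, deduplicated row indices into maximal [start, end) ranges.
fn merge_indices(indices: &[usize]) -> Vec<(usize, usize)> {
    let mut ranges: Vec<(usize, usize)> = Vec::new();
    for &i in indices {
        match ranges.last_mut() {
            // Extend the current range when the index is adjacent to its end.
            Some((_, end)) if *end == i => *end = i + 1,
            _ => ranges.push((i, i + 1)),
        }
    }
    ranges
}

/// Use the per-range copy path only when few rows are selected.
fn use_zero_copy_path(selected: usize, total: usize) -> bool {
    total > 0 && (selected as f64) < 0.005 * (total as f64)
}

fn main() {
    println!("{:?}", merge_indices(&[1, 2, 3, 7, 8, 10]));
    println!("{}", use_zero_copy_path(4, 1000));
}
```

With a dispatch like this, highly selective filters would take the new range-copy path while dense filters keep the existing filter-then-concat path and its larger, SIMD-friendly batches.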