Commit 96a5f21
optimizer: Support dynamic filter in MIN/MAX aggregates (#18644)
## Which issue does this PR close?
<!--
We generally require a GitHub issue to be filed for all bug fixes and
enhancements and this helps us generate change logs for our releases.
You can link an issue to this PR using the GitHub syntax. For example
`Closes #123` indicates that this PR will close issue #123.
-->
- Closes #.
## Rationale for this change
<!--
Why are you proposing this change? If this is already explained clearly
in the issue then this section is not needed.
Explaining clearly why changes are proposed helps reviewers understand
your changes and offer better suggestions for fixes.
-->
Background on dynamic filters:
https://datafusion.apache.org/blog/2025/09/10/dynamic-filters/
The following queries can be used for quick global insights:
```
-- Q1
select min(l_shipdate) from lineitem;
-- Q2
select min(l_shipdate) from lineitem where l_returnflag = 'R';
```
Q1 can already be executed very efficiently by directly checking the file
metadata when possible:
```
> explain select min(l_shipdate) from lineitem;
+---------------+-------------------------------+
| plan_type | plan |
+---------------+-------------------------------+
| physical_plan | ┌───────────────────────────┐ |
| | │ ProjectionExec │ |
| | │ -------------------- │ |
| | │ min(lineitem.l_shipdate): │ |
| | │ 1992-01-02 │ |
| | └─────────────┬─────────────┘ |
| | ┌─────────────┴─────────────┐ |
| | │ PlaceholderRowExec │ |
| | └───────────────────────────┘ |
| | |
+---------------+-------------------------------+
1 row(s) fetched.
Elapsed 0.007 seconds.
```
However, Q2 currently still performs a full scan, and it's possible to
use dynamic filters to speed it up.
### Benchmarking Q2
#### Setup
1. Generate the TPC-H SF100 parquet data with `tpchgen-cli -s 100
--format=parquet`
(https://github.com/clflushopt/tpchgen-rs/tree/main/tpchgen-cli)
2. In datafusion-cli, run
```
CREATE EXTERNAL TABLE lineitem
STORED AS PARQUET
LOCATION '/Users/yongting/data/tpch_sf100/lineitem.parquet';
select min(l_shipdate) from lineitem where l_returnflag = 'R';
```
**Result**
- Main: 0.55s
- PR: 0.09s
### Aggregate Dynamic Filter Pushdown Overview
For queries like
```sql
-- `example_table(type TEXT, val INT)`
SELECT min(val)
FROM example_table
WHERE type='A';
```
Suppose `example_table` is physically stored as partitioned parquet files
with the following column statistics:
- part-0.parquet: val {min=0, max=100}
- part-1.parquet: val {min=100, max=200}
- ...
- part-100.parquet: val {min=10000, max=10100}
After scanning the first file, we know we only need to read the remaining
files whose minimum `val` is less than 0, the minimum `val` value found in
the first file.
We can skip scanning those files with a dynamic filter. The intuition is to
keep a shared data structure holding the current minimum in both
`AggregateExec` and `DataSourceExec`, and to update it during execution, so
the scanner can learn at run time whether certain files can be skipped. See
the physical optimizer rule `FilterPushdown` for details.
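A minimal, self-contained sketch of this idea (not the actual DataFusion
types; `SharedMin`, `FileStats`, and `can_skip` are invented for
illustration):

```rust
use std::sync::{Arc, RwLock};

/// Shared "current best" minimum, written by the aggregate and read by the
/// scan. Conceptual stand-in for the shared state this PR introduces.
type SharedMin = Arc<RwLock<Option<i64>>>;

/// Per-file statistics as found in parquet footers (illustrative).
struct FileStats {
    min_val: i64,
}

/// The scan can skip a file whose minimum is not smaller than the best
/// minimum seen so far: no row in it can improve `min(val)`.
fn can_skip(file: &FileStats, shared_min: &SharedMin) -> bool {
    match *shared_min.read().unwrap() {
        Some(cur_min) => file.min_val >= cur_min,
        None => false, // no bound published yet, must read the file
    }
}

fn main() {
    let shared_min: SharedMin = Arc::new(RwLock::new(None));

    // After the aggregate has processed part-0.parquet (min = 0) it
    // publishes its current bound ...
    *shared_min.write().unwrap() = Some(0);

    // ... and the scanner can now prune part-1.parquet {min=100, max=200}.
    let part_1 = FileStats { min_val: 100 };
    assert!(can_skip(&part_1, &shared_min));
}
```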
### Implementation
#### Enable Condition
- No grouping (no `GROUP BY` clause in the SQL; only a single global group
to aggregate)
- The aggregate expression must be `min`/`max` and evaluate directly on
columns.
Note that multiple aggregate expressions satisfying this requirement are
allowed; a single dynamic filter is constructed by combining all applicable
expressions' states. See the example below with a dynamic filter on
multiple columns.
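For illustration, under the conditions above these hypothetical queries
(reusing `example_table` from earlier) would or would not qualify:

```sql
-- Qualifies: no GROUP BY, min/max applied directly to a column; one dynamic
-- filter combining the states of both aggregates is built.
SELECT min(val), max(val) FROM example_table WHERE type = 'A';

-- Does not qualify: grouped aggregation.
SELECT type, min(val) FROM example_table GROUP BY type;

-- Does not qualify today: the argument is an expression, not a bare column
-- (see the Questions section below).
SELECT min(val + 1) FROM example_table;
```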
#### Filter Construction
The filter is kept in the `DataSourceExec` and is updated during execution.
The reader interprets it as "the upstream only needs rows for which this
filter predicate evaluates to true", and certain scanner implementations
such as `parquet` can evaluate column statistics against those dynamic
filters to decide whether a whole range can be pruned.
**Examples**
- Expr: `min(a)`, Dynamic Filter: `a < a_cur_min`
- Expr: `min(a), max(a), min(b)`, Dynamic Filter: `(a < a_cur_min) OR (a
> a_cur_max) OR (b < b_cur_min)`
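As a rough sketch of the second example's shape, built with DataFusion's
logical `Expr` API purely for readability (the PR itself works with
physical expressions, and `a_cur_min`/`a_cur_max`/`b_cur_min` stand in for
the bounds tracked at run time):

```rust
use datafusion::prelude::{col, lit};

fn main() {
    // Hypothetical bounds the aggregate has observed so far.
    let (a_cur_min, a_cur_max, b_cur_min) = (10_i64, 90_i64, 5_i64);

    // For `min(a), max(a), min(b)`: keep only rows that could still tighten
    // at least one of the bounds.
    let dynamic_filter = col("a")
        .lt(lit(a_cur_min))
        .or(col("a").gt(lit(a_cur_max)))
        .or(col("b").lt(lit(b_cur_min)));

    // Prints the combined predicate, roughly:
    // a < Int64(10) OR a > Int64(90) OR b < Int64(5)
    println!("{dynamic_filter}");
}
```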
## What changes are included in this PR?
<!--
There is no need to duplicate the description in the issue here but it
is sometimes worth providing a summary of the individual changes in this
PR.
-->
The goal is to let `MIN`/`MAX` aggregate expressions whose argument is a
plain column reference (e.g. `min(col1)`) support dynamic filters; the
rationale above explains this further.
The implementation includes:
1. Added an `AggrDynFilter` struct, shared across the different partition
streams to store the current bounds used for dynamic filter updates.
2. `init_dynamic_filter` is responsible for checking whether the conditions
to enable the dynamic filter hold for the current aggregate execution plan,
and for building the `AggrDynFilter` inside the operator.
3. During aggregate execution, after evaluating each batch, the current
bound is refreshed in the dynamic filter, enabling the scanner to skip
prunable units using the latest runtime bounds; see the sketch after this
list. (It currently updates on every batch; perhaps it could update every k
batches to reduce overhead?)
4. Updated the `gather_filters_for_pushdown` and
`handle_child_pushdown_result` APIs in `AggregateExec` so the operator can
generate its own dynamic filter and push it down.
5. Added a configuration option to turn it on/off.
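Below is a minimal sketch of the per-batch bound refresh from point 3;
`AggrDynFilter` is the name used in this PR, but the fields and methods
shown here are assumptions for illustration only:

```rust
use std::sync::{Arc, Mutex};

/// Shared bound state; the real struct lives in the aggregate operator and
/// backs the dynamic filter pushed into the scan.
#[derive(Default)]
struct AggrDynFilter {
    cur_min: Option<i64>,
}

impl AggrDynFilter {
    /// Fold the minimum of a newly aggregated batch into the shared bound.
    fn update_min(&mut self, batch_min: i64) {
        self.cur_min = Some(match self.cur_min {
            Some(cur) => cur.min(batch_min),
            None => batch_min,
        });
    }
}

fn main() {
    let filter = Arc::new(Mutex::new(AggrDynFilter::default()));

    // Each partition stream refreshes the bound after evaluating a batch
    // (currently on every batch, per the description above).
    for batch_min in [42, 17, 99] {
        filter.lock().unwrap().update_min(batch_min);
    }

    assert_eq!(filter.lock().unwrap().cur_min, Some(17));
}
```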
### TODO
- [x] Add tests for grouping sets
- [ ] Only update bounds if they're tightened, to reduce lock contention
(perhaps as a follow-up)
### Questions
The implementation currently only pushes down aggregates like `min(col1)`,
where the inner physical expression is exactly a column reference; this
might be too conservative. Should we always push down the dynamic filter
and let the
[PruningPredicate](https://docs.rs/datafusion/latest/datafusion/physical_optimizer/pruning/struct.PruningPredicate.html)
decide whether the expression can be used to skip partitions?
Examples:
1. `min(col1 + 1)`: we push down `col1 + 1 < col1_plus_1_cur_min`, and the
`PruningPredicate` can use such an expression to prune.
2. `min(pow(col1, 2))`: we push down `pow(col1, 2) < col1_pow_cur_min`; the
`PruningPredicate` cannot interpret the inner physical expression, so it
always decides not to prune.
## Are these changes tested?
<!--
We typically require tests for all PRs in order to:
1. Prevent the code from being accidentally broken by subsequent changes
2. Serve as another way to document the expected behavior of the code
If tests are not included in your PR, please explain why (for example,
are they covered by existing tests)?
-->
Yes, optimizer unit tests and end-to-end tests.
## Are there any user-facing changes?
No
<!--
If there are user-facing changes then we may require documentation to be
updated before approving the PR.
-->
<!--
If there are any breaking changes to public APIs, please add the `api
change` label.
-->