I was struggling to understand why creating a Dask DataFrame from a large list of Parquet files was taking ages. Eventually I tried disabling query planning and saw normal timing again. These are all relatively small S3 files (~1MB each), and there is no `_metadata` file or similar.

Environment:
- dask==2024.5.0
- dask-expr==1.1.0
- python==3.10