Epic: Statistics improvements #8227
Comments
Recently I began looking into implementing #10316, and the proposed approach was to add per-partition statistics to … According to @alamb's comment on #8078, the (then) current state of the … I see that work on this epic has stalled since February; is there interest in continuing it? If so, I'm a willing contributor, but it'd help to know what needs to be done first, in particular if the … |
If we have per-partition statistics, merging them will be problematic for NDV (the number of distinct values); extrapolation techniques are not likely to work. |
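To make the merging problem concrete: given only each partition's NDV, the combined NDV is bounded but not determined, because it depends on how much the partitions' value sets overlap. A small illustration in plain Rust (not DataFusion code):

```rust
/// Bounds on the distinct-value count of the union of two partitions,
/// given only each partition's NDV. The true combined NDV can fall
/// anywhere in this range, which is why naive merging is lossy.
fn merged_ndv_bounds(ndv_a: u64, ndv_b: u64) -> (u64, u64) {
    // Lower bound: one partition's values may be a subset of the other's.
    let lower = ndv_a.max(ndv_b);
    // Upper bound: the partitions may share no values at all.
    let upper = ndv_a + ndv_b;
    (lower, upper)
}

fn main() {
    // Two partitions with 3 distinct values each: the merged NDV is
    // only known to lie somewhere in [3, 6].
    assert_eq!(merged_ndv_bounds(3, 3), (3, 6));
    println!("merged NDV is somewhere in {:?}", merged_ndv_bounds(3, 3));
}
```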
Ok, well I suppose we can keep the existing global statistics and add a new per-partition statistics method (that defaults to returning the global statistics for each partition). That would probably be a less invasive change too. Would be happy to discuss the details more over on #10316 |
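A minimal sketch of what such a method could look like, using simplified stand-in types (the trait, method names, and `Statistics` struct below are hypothetical, not DataFusion's actual `ExecutionPlan` API):

```rust
/// Simplified stand-in for DataFusion's statistics type, for illustration.
#[derive(Clone, Debug, Default)]
struct Statistics {
    num_rows: Option<usize>,
}

trait PlanStatistics {
    /// Existing global statistics for the whole plan output.
    fn statistics(&self) -> Statistics;

    /// Number of output partitions.
    fn partition_count(&self) -> usize;

    /// Hypothetical new method: per-partition statistics. The default
    /// implementation preserves today's behavior by returning the
    /// global statistics for every partition, so existing operators
    /// would need no changes.
    fn statistics_by_partition(&self) -> Vec<Statistics> {
        vec![self.statistics(); self.partition_count()]
    }
}

struct TwoPartitionScan;

impl PlanStatistics for TwoPartitionScan {
    fn statistics(&self) -> Statistics {
        Statistics { num_rows: Some(100) }
    }
    fn partition_count(&self) -> usize {
        2
    }
}

fn main() {
    let scan = TwoPartitionScan;
    // Default behavior: every partition reports the global statistics.
    assert_eq!(scan.statistics_by_partition().len(), 2);
}
```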
Well, that may have also been my attempt / excuse :) -- especially if I didn't have enough time to work on it
I personally think we should go about this from the other end: try to implement the analysis for #10316, and use that as a vehicle to make any additional … |
Here is one idea on how to improve Statistics / Precision. Let me know what you think: |
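For context, the general shape of the idea, wrapping each statistic in a marker of its exactness, can be sketched like this (a simplified model; DataFusion's actual `Precision` type in `datafusion-common` is richer):

```rust
/// Simplified sketch of a `Precision` wrapper: every statistic
/// carries its own exactness.
#[derive(Clone, Debug, PartialEq)]
enum Precision<T> {
    /// The value is known to be correct.
    Exact(T),
    /// The value is an estimate and must not be relied on for
    /// correctness-sensitive transformations.
    Inexact(T),
    /// Nothing is known.
    Absent,
}

impl<T> Precision<T> {
    /// Downgrade an exact value to an estimate, e.g. after an
    /// operator (like a filter) that makes the value uncertain.
    fn to_inexact(self) -> Self {
        match self {
            Precision::Exact(v) => Precision::Inexact(v),
            other => other,
        }
    }
}

fn main() {
    // A scan may know the row count exactly; after a filter the same
    // number is at best an upper-bound estimate.
    let scan_rows = Precision::Exact(1000_u64);
    let filtered_rows = scan_rows.to_inexact();
    assert_eq!(filtered_rows, Precision::Inexact(1000));
}
```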
The process has started in … |
Thanks for this issue @alamb. I was actually looking for an EPIC that covers the work planned for statistics; since there's already work done as a follow-up for #14699 in #14896, I am wondering what else is left that I can take on with regards to this? There's still the work necessary to finish #3929, i.e. #4158 and #4159; both seem to be relatively related and can benefit from the redesign in #14699. |
@clflushopt I don't really know, as I am not driving the statistics rework myself and thus don't have much visibility into what is planned. The only remaining thing I know of that comes to mind is … but that is more of an API exercise rather than algorithms. Another thing that comes to mind is to look at / extend the existing https://github.com/apache/datafusion/blob/main/datafusion/physical-expr/src/intervals/cp_solver.rs code / show how it is useful. I was playing around with duckdb this morning. Somehow it uses its constraint solver to simplify away redundant filters. For example, it correctly deduces the combined filter x>3 AND x<6:

```sql
D create table foo(x int);
D insert into foo values (1);
D insert into foo values (5);
D explain SELECT * from foo where x > 1 AND x > 2 AND x > 3 AND x < 6 AND x < 7 AND x < 8;
```

```
┌─────────────────────────────┐
│┌───────────────────────────┐│
││       Physical Plan       ││
│└───────────────────────────┘│
└─────────────────────────────┘
┌───────────────────────────┐
│         SEQ_SCAN          │
│   ────────────────────    │
│        Table: foo         │
│   Type: Sequential Scan   │
│      Projections: x       │
│   Filters: x>3 AND x<6    │
│                           │
│          ~1 Rows          │
└───────────────────────────┘
```
|
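The simplification duckdb performs above can be modeled with simple interval intersection, which is also the core idea behind the `cp_solver` interval analysis linked earlier. A toy sketch (not the actual solver API) that collapses the conjunction from the example into a single range:

```rust
/// Toy interval for a single integer column; both bounds are
/// exclusive, mirroring the `x > a AND x < b` predicates above.
#[derive(Debug, Clone, Copy)]
struct Interval {
    gt: i64, // x > gt
    lt: i64, // x < lt
}

impl Interval {
    fn unbounded() -> Self {
        Interval { gt: i64::MIN, lt: i64::MAX }
    }

    /// Intersect with another predicate on the same column: the
    /// tightest lower and upper bounds win.
    fn and(self, other: Interval) -> Interval {
        Interval {
            gt: self.gt.max(other.gt),
            lt: self.lt.min(other.lt),
        }
    }
}

fn main() {
    // x > 1 AND x > 2 AND x > 3 AND x < 6 AND x < 7 AND x < 8
    let preds = [
        Interval { gt: 1, lt: i64::MAX },
        Interval { gt: 2, lt: i64::MAX },
        Interval { gt: 3, lt: i64::MAX },
        Interval { gt: i64::MIN, lt: 6 },
        Interval { gt: i64::MIN, lt: 7 },
        Interval { gt: i64::MIN, lt: 8 },
    ];
    let combined = preds
        .into_iter()
        .fold(Interval::unbounded(), Interval::and);
    // Collapses to the single filter duckdb printed: x > 3 AND x < 6.
    println!("x > {} AND x < {}", combined.gt, combined.lt);
}
```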
Is your feature request related to a problem or challenge?
We would like to use "statistics" in our project for transformations that rely on the statistics being "correct" (e.g. that there are no values outside the `min` and `max` range). DataFusion has several optimizations like this too that rely on statistics being correct, such as skipping file scans with limits, as in https://github.com/apache/arrow-datafusion/blob/e54894c39202815b14d9e7eae58f64d3a269c165/datafusion/core/src/datasource/statistics.rs#L34-L33. There are also suggestions of additional such optimizations, like #6672.
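As a sketch of why exactness matters for this class of optimization (hypothetical code, not the linked DataFusion function): with exactly-known per-file row counts, a scan with a limit can stop listing files once the limit is covered; doing the same with estimated counts could silently drop rows.

```rust
/// Hypothetical per-file metadata with an exactly-known row count.
struct FileMeta {
    path: String,
    exact_num_rows: u64,
}

/// Keep only as many files as are needed to satisfy `limit` rows.
/// This is only correct because `exact_num_rows` is guaranteed exact;
/// with estimated counts, rows could be lost.
fn files_for_limit(files: Vec<FileMeta>, limit: u64) -> Vec<FileMeta> {
    let mut remaining = limit;
    let mut kept = Vec::new();
    for file in files {
        if remaining == 0 {
            break;
        }
        remaining = remaining.saturating_sub(file.exact_num_rows);
        kept.push(file);
    }
    kept
}

fn main() {
    let files = vec![
        FileMeta { path: "a.parquet".into(), exact_num_rows: 40 },
        FileMeta { path: "b.parquet".into(), exact_num_rows: 40 },
        FileMeta { path: "c.parquet".into(), exact_num_rows: 40 },
    ];
    // LIMIT 50 is covered by the first two files; the third is skipped.
    let kept = files_for_limit(files, 50);
    assert_eq!(kept.len(), 2);
    println!("scanning: {:?}", kept.iter().map(|f| &f.path).collect::<Vec<_>>());
}
```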
However, the current Statistics code seems to make it hard to manage the question "are the statistics exact, and can they be guaranteed for transformations?" (@crepererum noted this quite some time ago on #5613). This has recently led to several bugs, such as …
We would like to make it clearer what is known exactly and what is only an estimate (e.g. the min/max of row counts may be known, but the actual value may be an estimate after a filter). This is described in more detail on #8078.
As we began exploring this concept, we ran into several issues with Statistics, and I think it is getting big enough to warrant its own tracking epic.
Related items
- `show statistics` #8111
- `ParquetExec::statistics()` does not read statistics for many column types (like timestamps, strings, etc) #8295
- `Statistics::total_byte_size` does not account for projection in `FileScanConfig::with_projection` #14936
- `ParquetExec::statistics::is_exact` likely wrong/misunderstood #5614
- `Statistics::is_exact` semantics #5613
- `num_rows` and `total_byte_size` are not defined (stat should be None instead of Some(0)) #2976

Pruning Improvements (maybe should be its own epic)
- `<col> = 'const'` in `PruningPredicate` #8376

Describe the solution you'd like
No response
Describe alternatives you've considered
No response
Additional context
This is somewhat related: …