Reduce page metadata loading to only what is necessary for query execution in ParquetOpener #16200
FYI @adriangb -- I think this is something you were interested in too
Thank you @adrian-thurston -- this is a very neat idea. For others not following along, most of the benchmarks don't have page metadata at the moment (e.g. the ClickBench partitioned set doesn't have any page index metadata), so this wouldn't show up in our existing benchmarks.
Yes, very neat. I was actually thinking this would be along the other axis: loading metadata only for the columns that are needed. My gut feeling is that a lot of compute is spent loading metadata for columns that aren't being filtered on. But I don't know if that's possible given the structure of the row group / page metadata.
I think we could certainly avoid loading page metadata for columns that aren't needed. We would probably have to add some sort of new API to the parquet crate for this. One challenge / tradeoff that would be interesting / required to work out is that doing another async load to read more of the metadata will be very bad if it actually has to go to object store again. Influx has it all cached in memory so it doesn't matter, but in general we need to be careful about adding additional requests.
Though since DataFusion knows which columns are needed for predicates (and thus would be used in pruning), we could easily disable loading the page index for the other columns 🤔
That's exactly my thought. You know which columns you need to filter on, and after the ParquetOpener the metadata is discarded anyway, so there's no reason to read any (row group or page) metadata / stats for columns you're not going to filter on.
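To make the idea concrete: arrow-rs currently exposes the page index as an all-or-nothing toggle. A sketch of the per-column variant being discussed might look like this; `with_page_index` is the existing `ArrowReaderOptions` API, while `with_page_index_for` is purely hypothetical:

```rust
use parquet::arrow::arrow_reader::ArrowReaderOptions;

// Existing API: decode the column/offset index for every column.
let opts = ArrowReaderOptions::new().with_page_index(true);

// Hypothetical per-column variant: decode the page index only for the
// columns the predicate actually prunes on (column names illustrative).
let opts = ArrowReaderOptions::new()
    .with_page_index_for(&["timestamp", "host"]); // hypothetical API
```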
I'd like to take this issue and try. Feel free to reassign if I don't submit a PR for a long time.
take
Thank you @zhuqi-lucas
Created an arrow-rs issue; we can implement the interface first.
Yes, this has me very worried. The layout of the column index is by row group, then column, so reading just a single column requires jumping around quite a bit if there are many row groups. Also, if there is no projection involved, the entire offset index will be read as well. This will need some careful testing to see whether multiple fetches are worthwhile, or whether doing a single fetch with a range large enough to include all needed column and offset indexes (and then only parsing the needed indexes) would be better.
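For illustration, the two fetch strategies being weighed look roughly like this. `get_range` is the real `object_store` call; the range-computing helpers are hypothetical stand-ins:

```rust
// (a) One fetch spanning the entire column/offset index region, then
//     parse only the entries we need: one round trip, more bytes read.
let range = full_index_region(&metadata); // hypothetical helper
let bytes = store.get_range(&path, range).await?;

// (b) One fetch per needed (row group, column) index: fewer bytes read,
//     but many round trips -- painful against object storage.
for range in needed_index_ranges(&metadata, &row_groups, &columns) { // hypothetical
    let bytes = store.get_range(&path, range).await?;
}
```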
Sadly, I doubt there's a single correct answer; it might be the opposite for a local SSD vs. object storage.
I feel like we may need to add some sort of policy, as this same tradeoff is coming up when implementing filter_pushdown optimizations. Namely: is it important to minimize IO operations, or are more IO operations OK if they reduce CPU/memory requirements? As @adriangb and @etseidl say, this tradeoff is quite different for local vs object store. Maybe we could make some sort of ObjectStore based interface that allows the parquet reader to hint at what data might be necessary (e.g. the entire range of metadata / pages before pruning) and then allow the lower level system to decide if it wants to prefetch, buffer, or just pass through the request 🤔
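One possible shape for such a hint interface, purely as a sketch (nothing like this exists in `object_store` today):

```rust
use std::ops::Range;

/// Hypothetical trait letting the reader hint at byte ranges it *might*
/// need soon (e.g. the whole metadata / page index region before pruning).
/// The storage layer is free to prefetch, buffer, or ignore the hint.
trait IoHint {
    fn will_need(&self, ranges: &[Range<u64>]);
}
```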
FWIW we should still be able to get a significant win by not copying the page index values into the Rust structures for columns we don't need in the query (even if we had to fetch the bytes from object store and decode them from Thrift).
Yeah I see two ways to go about that:
|
Is your feature request related to a problem or challenge?
The ParquetOpener will load all page metadata for a file, on all tasks concurrently accessing that file. This can be costly for Parquet files with a large number of rows, a large number of columns, or both.
In testing at Influx we have noticed page metadata load time taking on the order of tens of milliseconds for some customer scenarios. We have directly timed this on customer Parquet files, and we estimate that about 83% of that load time contributes directly to overall query time.
Some individual page metadata load times:
Note: for the Telegraf and Random Datagen datasets we were able to measure query time savings with our prototype. For customer scenarios we can only estimate.
Describe the solution you'd like
Rather than always loading all page metadata, load just the file metadata, prune as much as we can, and then load only the page metadata needed to execute the query.
Pseudo-code looks something like this:
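A minimal sketch of that flow, assuming hypothetical pruning helpers (`prune_row_groups`, `load_page_index_for`, `prune_pages`); `ParquetMetaDataReader` and `with_page_indexes` are real parquet-crate APIs, the rest is illustrative:

```rust
use parquet::file::metadata::ParquetMetaDataReader;

// 1. Load footer + row group metadata only; skip the page index.
let metadata = ParquetMetaDataReader::new()
    .with_page_indexes(false)
    .parse_and_finish(&file)?;

// 2. Prune row groups using row group statistics alone.
let kept_row_groups = prune_row_groups(&metadata, &predicate);

// 3. Fetch and decode column/offset indexes only for the surviving row
//    groups and the columns the predicate references.
let page_index = load_page_index_for(&metadata, &kept_row_groups, &predicate_columns)?;

// 4. Prune individual pages, then scan only the selected row ranges.
let selection = prune_pages(&page_index, &predicate);
```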
In our prototype we created a sparse page-metadata array. Row-group/column indexes that we don't need were left as `Index::None`. Pseudo-code:
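A hedged sketch of that sparse layout; the `Index` enum and its `NONE` variant are from the parquet crate, while `decode_column_index` is a hypothetical stand-in for the Thrift decoding:

```rust
use parquet::errors::Result;
use parquet::file::page_index::index::Index;

// Build the per-row-group column index, decoding only predicate columns;
// everything else stays Index::NONE, as in the prototype.
fn sparse_column_index(
    num_row_groups: usize,
    num_columns: usize,
    predicate_columns: &[usize],
) -> Result<Vec<Vec<Index>>> {
    let mut index = Vec::with_capacity(num_row_groups);
    for rg in 0..num_row_groups {
        let mut per_column: Vec<Index> =
            (0..num_columns).map(|_| Index::NONE).collect();
        for &col in predicate_columns {
            per_column[col] = decode_column_index(rg, col)?; // hypothetical
        }
        index.push(per_column);
    }
    Ok(index)
}
```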
Describe alternatives you've considered
No response
Additional context
No response