ARROW-16085: [C++][R] InMemoryDataset::ReplaceSchema does not alter scan output #13088
Conversation
    ASSERT_OK_AND_ASSIGN(scanner_builder, dataset->NewScan());
    ASSERT_OK_AND_ASSIGN(scanner, scanner_builder->Finish());
    ASSERT_NOT_OK(scanner->ToTable());
Can we have a more explicit check, e.g. with EXPECT_RAISES_WITH_MESSAGE_THAT?
Yeah, it was raising NotImplemented, but I realized we would probably rather raise TypeError, so I added a projectability check where the schema consistency check used to be.
    auto batch2 = ConstantArrayGenerator::Zeroes(kBatchSize, schema_);
    RecordBatchVector batches{batch1, batch2};

    auto dataset = std::make_shared<InMemoryDataset>(schema_, batches);
Do we actually want this to be valid though? I would expect the batches of a dataset to have a consistent schema
In file fragments, it's totally normal to have a physical schema that is different from the dataset schema.
This came up when I realized we could create a union dataset out of filesystem ones but not in-memory ones if the schemas differed.
The other way (arguably) would be to have ReplaceSchema project the batches (though that is a lot more work).
I thought about that, but then are we materializing the projected batches before any scan is started? It seems more efficient for the projection to happen as part of the scan.
Hmm, good point about the fragments.
I was thinking InMemoryDataset already has all the data in memory, so it's not a big deal anyway. But yes, that's unnecessary work compared to this.
    " which did not match InMemorySource's: ", *schema);
    }

    RETURN_NOT_OK(CheckProjectable(*schema, *batch->schema()));
It feels like this could be a construction-time check to avoid repeated checking except there is no way to return a Status there, unfortunately. (Not a big deal, though.)
I thought about that, but it would mean changing this to a ::Make() method, and I didn't want to go that far here.
Benchmark runs are scheduled for baseline = 35119f2 and contender = 5b653ee. 5b653ee is a master commit associated with this PR. Results will be available as each benchmark for each run completes.
…Hub issue numbers (#34260)

Rewrite the Jira issue numbers to the GitHub issue numbers, so that the GitHub issue numbers are automatically linked to the issues by pkgdown's auto-linking feature. Issue numbers have been rewritten based on the following correspondence. Also, the pkgdown settings have been changed and updated to link to GitHub. I generated the Changelog page using the `pkgdown::build_news()` function and verified that the links work correctly.

ARROW-6338 → #5198
ARROW-6364 → #5201
ARROW-6323 → #5169
ARROW-6278 → #5141
ARROW-6360 → #5329
ARROW-6533 → #5450
ARROW-6348 → #5223
ARROW-6337 → #5399
ARROW-10850 → #9128
ARROW-10624 → #9092
ARROW-10386 → #8549
ARROW-6994 → #23308
ARROW-12774 → #10320
ARROW-12670 → #10287
ARROW-16828 → #13484
ARROW-14989 → #13482
ARROW-16977 → #13514
ARROW-13404 → #10999
ARROW-16887 → #13601
ARROW-15906 → #13206
ARROW-15280 → #13171
ARROW-16144 → #13183
ARROW-16511 → #13105
ARROW-16085 → #13088
ARROW-16715 → #13555
ARROW-16268 → #13550
ARROW-16700 → #13518
ARROW-16807 → #13583
ARROW-16871 → #13517
ARROW-16415 → #13190
ARROW-14821 → #12154
ARROW-16439 → #13174
ARROW-16394 → #13118
ARROW-16516 → #13163
ARROW-16395 → #13627
ARROW-14848 → #12589
ARROW-16407 → #13196
ARROW-16653 → #13506
ARROW-14575 → #13160
ARROW-15271 → #13170
ARROW-16703 → #13650
ARROW-16444 → #13397
ARROW-15016 → #13541
ARROW-16776 → #13563
ARROW-15622 → #13090
ARROW-18131 → #14484
ARROW-18305 → #14581
ARROW-18285 → #14615

* Closes: #33631

Authored-by: SHIMA Tatsuya <[email protected]>
Signed-off-by: Sutou Kouhei <[email protected]>
Feels a little funny that deleting this code just makes it work, so I added a decent number of tests to make sure differing schemas are handled. LMK if you think I missed something.