
Conversation

wjones127 (Member) commented May 6, 2022

Feels a little funny that deleting this code just makes it work, so I added a decent number of tests to make sure differing schemas are handled. LMK if you think I missed something.

github-actions bot commented May 6, 2022

⚠️ Ticket has not been started in JIRA, please click 'Start Progress'.

wjones127 marked this pull request as ready for review May 6, 2022 21:52
lidavidm (Member) commented May 9, 2022

The other way (arguably) would be to have ReplaceSchema project the batches (though that is a lot more work).


ASSERT_OK_AND_ASSIGN(scanner_builder, dataset->NewScan());
ASSERT_OK_AND_ASSIGN(scanner, scanner_builder->Finish());
ASSERT_NOT_OK(scanner->ToTable());
Member

Can we have a more explicit check, e.g. with EXPECT_RAISES_WITH_MESSAGE_THAT?

Member Author

Yeah, it was raising NotImplemented, but I realized we would probably rather raise TypeError, so I added a check for projectability where the schema-consistency check was before.
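
For reference, the more explicit assertion discussed here could look roughly like the sketch below. It assumes Arrow's EXPECT_RAISES_WITH_MESSAGE_THAT helper from arrow/testing/gtest_util.h together with gmock's HasSubstr matcher, and the expected message fragment is an illustrative assumption rather than text from the patch.

#include <gmock/gmock-matchers.h>
#include "arrow/testing/gtest_util.h"

// Assumes `dataset` from the surrounding test, whose batches cannot be
// projected to the dataset schema.
ASSERT_OK_AND_ASSIGN(auto scanner_builder, dataset->NewScan());
ASSERT_OK_AND_ASSIGN(auto scanner, scanner_builder->Finish());
// Expect a TypeError whose message points at the offending field, rather
// than only checking that the status is not OK.
EXPECT_RAISES_WITH_MESSAGE_THAT(TypeError, ::testing::HasSubstr("field"),
                                scanner->ToTable());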

auto batch2 = ConstantArrayGenerator::Zeroes(kBatchSize, schema_);
RecordBatchVector batches{batch1, batch2};

auto dataset = std::make_shared<InMemoryDataset>(schema_, batches);
Member

Do we actually want this to be valid, though? I would expect the batches of a dataset to have a consistent schema.

Member Author

In file fragments, it's totally normal to have a physical schema that is different from the dataset schema.

This came up when I realized we could create a union dataset out of filesystem ones but not in-memory ones if the schemas differed.

> The other way (arguably) would be to have ReplaceSchema project the batches (though that is a lot more work).

I thought about that, but then are we materializing the projected batches before any scan is started? It seems more efficient for the projection to happen as part of the scan.

Member

Hmm, good point about the fragments.

I was thinking InMemoryDataset already has all the data in memory, so it's not a big deal anyways. But yes, that's unnecessary work compared to this.
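
To make the trade-off concrete, here is a minimal sketch of the scenario, assuming the Arrow C++ dataset and testing APIs already used in the snippets above. The field names, batch length, and the comments about null-filling are illustrative assumptions, not code from the patch.

#include "arrow/api.h"
#include "arrow/dataset/api.h"
#include "arrow/testing/gtest_util.h"

// The dataset schema has two columns, but the batch only carries "a"; the
// batch schema differs from the dataset schema yet is still projectable to it.
auto dataset_schema = arrow::schema(
    {arrow::field("a", arrow::int32()), arrow::field("b", arrow::utf8())});
auto batch_schema = arrow::schema({arrow::field("a", arrow::int32())});
ASSERT_OK_AND_ASSIGN(auto a_values, arrow::MakeArrayOfNull(arrow::int32(), 3));
auto batch = arrow::RecordBatch::Make(batch_schema, 3, {a_values});

// Constructing the dataset does not materialize projected batches.
auto dataset = std::make_shared<arrow::dataset::InMemoryDataset>(
    dataset_schema, arrow::RecordBatchVector{batch});

// The projection to the dataset schema (filling the missing "b" with nulls)
// is deferred until the scan actually runs.
ASSERT_OK_AND_ASSIGN(auto scanner_builder, dataset->NewScan());
ASSERT_OK_AND_ASSIGN(auto scanner, scanner_builder->Finish());
ASSERT_OK_AND_ASSIGN(auto table, scanner->ToTable());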

" which did not match InMemorySource's: ", *schema);
}

RETURN_NOT_OK(CheckProjectable(*schema, *batch->schema()));
Member

It feels like this could be a construction-time check to avoid repeated checking, except there is no way to return a Status there, unfortunately. (Not a big deal, though.)

Member Author

I thought about that, but I would have to change this to a ::Make() method, and I didn't want to go that far here.
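
For what it's worth, the ::Make() variant mentioned here might look something like the sketch below. The class name and signature are hypothetical stand-ins (this is not how the patch is structured); it only illustrates moving the CheckProjectable call from scan time to a factory that can return a Status.

#include <memory>
#include <utility>
#include "arrow/api.h"

// Hypothetical stand-in for an in-memory fragment/dataset type; assumes
// CheckProjectable is the same helper used in the diff above and is in scope.
class InMemoryFragmentSketch {
 public:
  static arrow::Result<std::shared_ptr<InMemoryFragmentSketch>> Make(
      std::shared_ptr<arrow::Schema> schema, arrow::RecordBatchVector batches) {
    for (const auto& batch : batches) {
      // Validate once at construction instead of on every scanned batch.
      RETURN_NOT_OK(CheckProjectable(*schema, *batch->schema()));
    }
    return std::shared_ptr<InMemoryFragmentSketch>(
        new InMemoryFragmentSketch(std::move(schema), std::move(batches)));
  }

 private:
  InMemoryFragmentSketch(std::shared_ptr<arrow::Schema> schema,
                         arrow::RecordBatchVector batches)
      : schema_(std::move(schema)), batches_(std::move(batches)) {}

  std::shared_ptr<arrow::Schema> schema_;
  arrow::RecordBatchVector batches_;
};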

lidavidm closed this in 5b653ee May 9, 2022
ursabot commented May 11, 2022

Benchmark runs are scheduled for baseline = 35119f2 and contender = 5b653ee. 5b653ee is a master commit associated with this PR. Results will be available as each benchmark for each run completes.
Conbench compare runs links:
[Finished ⬇️0.0% ⬆️0.0%] ec2-t3-xlarge-us-east-2
[Failed ⬇️0.31% ⬆️0.0%] test-mac-arm
[Finished ⬇️0.71% ⬆️0.0%] ursa-i9-9960x
[Finished ⬇️0.12% ⬆️0.0%] ursa-thinkcentre-m75q
Buildkite builds:
[Finished] 5b653ee2 ec2-t3-xlarge-us-east-2
[Failed] 5b653ee2 test-mac-arm
[Finished] 5b653ee2 ursa-i9-9960x
[Finished] 5b653ee2 ursa-thinkcentre-m75q
[Finished] 35119f29 ec2-t3-xlarge-us-east-2
[Finished] 35119f29 test-mac-arm
[Finished] 35119f29 ursa-i9-9960x
[Finished] 35119f29 ursa-thinkcentre-m75q
Supported benchmarks:
ec2-t3-xlarge-us-east-2: Supported benchmark langs: Python, R. Runs only benchmarks with cloud = True
test-mac-arm: Supported benchmark langs: C++, Python, R
ursa-i9-9960x: Supported benchmark langs: Python, R, JavaScript
ursa-thinkcentre-m75q: Supported benchmark langs: C++, Java

wjones127 deleted the ARROW-16085-unify-inmemory-datasets branch May 11, 2022 22:43
kou pushed a commit that referenced this pull request Feb 20, 2023
…Hub issue numbers (#34260)

Rewrite the Jira issue numbers to the GitHub issue numbers, so that the GitHub issue numbers are automatically linked to the issues by pkgdown's auto-linking feature.

Issue numbers have been rewritten based on the following correspondence.
Also, the pkgdown settings have been changed and updated to link to GitHub.

I generated the Changelog page using the `pkgdown::build_news()` function and verified that the links work correctly.

---
ARROW-6338	#5198
ARROW-6364	#5201
ARROW-6323	#5169
ARROW-6278	#5141
ARROW-6360	#5329
ARROW-6533	#5450
ARROW-6348	#5223
ARROW-6337	#5399
ARROW-10850	#9128
ARROW-10624	#9092
ARROW-10386	#8549
ARROW-6994	#23308
ARROW-12774	#10320
ARROW-12670	#10287
ARROW-16828	#13484
ARROW-14989	#13482
ARROW-16977	#13514
ARROW-13404	#10999
ARROW-16887	#13601
ARROW-15906	#13206
ARROW-15280	#13171
ARROW-16144	#13183
ARROW-16511	#13105
ARROW-16085	#13088
ARROW-16715	#13555
ARROW-16268	#13550
ARROW-16700	#13518
ARROW-16807	#13583
ARROW-16871	#13517
ARROW-16415	#13190
ARROW-14821	#12154
ARROW-16439	#13174
ARROW-16394	#13118
ARROW-16516	#13163
ARROW-16395	#13627
ARROW-14848	#12589
ARROW-16407	#13196
ARROW-16653	#13506
ARROW-14575	#13160
ARROW-15271	#13170
ARROW-16703	#13650
ARROW-16444	#13397
ARROW-15016	#13541
ARROW-16776	#13563
ARROW-15622	#13090
ARROW-18131	#14484
ARROW-18305	#14581
ARROW-18285	#14615
* Closes: #33631

Authored-by: SHIMA Tatsuya <[email protected]>
Signed-off-by: Sutou Kouhei <[email protected]>