Conversation

parthchandra (Author):

This PR requires Comet 0.10.0 and also #13786.
The PR removes the restriction on handling deleted rows in Comet. The core change replaces ColumnVectorWithFilter with CometSelectionVector, which provides the same functionality. CometSelectionVector represents both the values vector and the row-id mapping as Arrow vectors, allowing fast zero-copy transfer via Arrow FFI. All processing is done in Comet native code.
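The selection-vector idea described above can be sketched as follows. This is a hypothetical, simplified illustration of the concept, not the actual CometSelectionVector API: the values vector holds every row decoded from Parquet, and a second vector of row indices identifies the rows that survive deletes, so live rows can be read through the indices without copying or compacting the values buffer.

```java
import java.util.Arrays;

// Conceptual sketch of a selection vector (names are illustrative,
// not the real Comet classes). Real Comet stores both buffers as
// Arrow vectors so they can cross the FFI boundary without copying.
public class SelectionVectorSketch {
  public static void main(String[] args) {
    int[] values = {10, 20, 30, 40, 50}; // all decoded column values
    int[] rowIds = {0, 2, 4};            // row positions surviving deletes

    // Reading through the selection vector yields only the live rows;
    // the values buffer itself is never rewritten.
    int[] live = Arrays.stream(rowIds).map(i -> values[i]).toArray();
    System.out.println(Arrays.toString(live)); // [10, 30, 50]
  }
}
```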

@github-actions github-actions bot added the spark label Sep 12, 2025
@parthchandra parthchandra marked this pull request as draft September 12, 2025 16:25

anuragmantri left a comment:


LGTM overall. I will do another pass after Comet 0.10.0 is released. Thanks @parthchandra

&& field.fieldId() != MetadataColumns.LAST_UPDATED_SEQUENCE_NUMBER.fieldId();
}

private boolean supportsCometBatchReads(ScanTask task) {
hsiang-c (Contributor):
nit: I created this method to filter out scans with deletes early on. With your deletion support, we can remove this method and use the existing supportsParquetBatchReads, because supportsCometBatchReads and supportsParquetBatchReads are now identical.
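The consolidation suggested here would look roughly like the sketch below. Only the method names supportsCometBatchReads and supportsParquetBatchReads come from this thread; the ScanTask interface and method bodies are illustrative stand-ins, not Iceberg's actual code:

```java
// Sketch of the suggested cleanup: now that Comet handles delete
// files, the Comet-specific check no longer needs its own logic and
// can delegate to (or be replaced outright by) the Parquet check.
public class BatchReadSupportSketch {
  // Stand-in for Iceberg's ScanTask; real interface is much richer.
  interface ScanTask {
    boolean isFileScanTask();
  }

  static boolean supportsParquetBatchReads(ScanTask task) {
    return task.isFileScanTask(); // illustrative condition only
  }

  // Formerly a distinct method that also rejected tasks with delete
  // files; after this PR it is identical and can be removed, with
  // callers using supportsParquetBatchReads directly.
  static boolean supportsCometBatchReads(ScanTask task) {
    return supportsParquetBatchReads(task);
  }

  public static void main(String[] args) {
    ScanTask fileTask = () -> true;
    System.out.println(supportsCometBatchReads(fileTask)); // true
  }
}
```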

huaxingao (Contributor):
I am a bit confused by this method too. I think @hsiang-c is right: we should use supportsParquetBatchReads.

hsiang-c (Contributor):
@huaxingao Sorry about that. I added this in a different draft PR to fall back to Spark when delete files are present. It is no longer needed with Parth's work.

Contributor:

Ditto, can reuse supportsParquetBatchReads now.

parthchandra (Author):

@hsiang-c I will revise this once Comet 0.10.0 is released and your changes are merged.

huaxingao (Contributor):

Looks good overall. I will also do another pass after Comet 0.10.0 is released. Thanks @parthchandra for working on this!

parthchandra (Author):

Thanks @huaxingao. For reference, this is consumed on the Comet side here: apache/datafusion-comet@bb318a5#diff-00a84cd42c8ea9cf336eedc24279f079148064522404461b3d4c88d9b0e36259
