ARROW-380: [Java] optimize null count when serializing vectors #207
Conversation
julienledem left a comment
Thank you for your contribution! See my comment.
isn't this the NonNullCount?
Let's do this at the long level. We can add two methods: this one, which is the valid count, and another, which is count minus this (the null count).
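A minimal sketch of the idea being suggested here, counting validity bits a long (64 bits) at a time; the helper names and the plain long[] input are illustrative, not the actual BitVector internals:

```java
// Illustrative only: the real BitVector reads these words out of its data buffer.
final class BitCountSketch {

  /** Number of bits set to 1 among the first valueCount bits (the "set"/valid count). */
  static int getSetCount(long[] validityWords, int valueCount) {
    int fullWords = valueCount / 64;
    int remainder = valueCount % 64;
    int count = 0;
    for (int i = 0; i < fullWords; i++) {
      count += Long.bitCount(validityWords[i]);
    }
    if (remainder != 0) {
      long mask = (1L << remainder) - 1;          // ignore bits beyond valueCount
      count += Long.bitCount(validityWords[fullWords] & mask);
    }
    return count;
  }

  /** Null count = total values minus the number of valid (set) bits. */
  static int getNullCount(long[] validityWords, int valueCount) {
    return valueCount - getSetCount(validityWords, valueCount);
  }
}
```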
My bad, I refactored the name wrongly; it was "getSetCount". Will fix it promptly.
@jacques-n Sorry, didn't understand your comment :/ Do you mean having getNullCount and getCount (or getNonNullCount)?
When you have a chance, please change the PR title to start with ARROW-380.
This comment is incorrect. The null count is the number of bits set to 0 (up to the length of the vector)
indeed :)
This test could be more rigorous in case there are issues with the bitmasking logic -- perhaps check the null count on each iteration of the for loop, or periodically if that is too slow. Right now I only see 2 bits set to 0, or all of them.
Also, can you test with a vector length that is not a multiple of 8?
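A rough sketch of the kind of test being asked for, written against the illustrative BitCountSketch helper above rather than the real BitVector/Mutator API; the 1015 length and the every-third-bit pattern are arbitrary choices for the example:

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class NullCountSketchTest {

  @Test
  public void nullCountIsCorrectAfterEveryWrite() {
    int valueCount = 1015;                              // deliberately not a multiple of 8
    long[] validity = new long[(valueCount + 63) / 64];
    for (int i = 0; i < valueCount; i++) {
      if (i % 3 == 0) {
        validity[i / 64] |= 1L << (i % 64);             // mark every third value as valid
      }
      // check the null count on every iteration, not just once at the end
      int expectedSet = i / 3 + 1;
      assertEquals(i + 1 - expectedSet, BitCountSketch.getNullCount(validity, i + 1));
    }
  }
}
```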
@zeapo ping
Sorry, busy with work; I'll get onto it this weekend.
This seems wrong to me. Isn't the length of the vector 1015?
That was the question I asked on Slack the other day. I didn't get a clear answer (or didn't understand the answer), so I assumed it should return the allocated size as defined in https://github.com/apache/arrow/blob/master/java/vector/src/main/java/org/apache/arrow/vector/BitVector.java#L113
There is an issue, however, if we want to get the actual length of the array. The length (valueCount) we pass to allocateNew is never set on the instance's valueCount; we only set the allocated size, and valueCount stays at zero. Is that the correct behaviour? If not, I should open another issue to fix that and change the implementation of getNullCount so that it uses valueCount rather than allocationSizeInBytes (the attribute that is set when we allocate the vector).
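A small sketch of the behaviour being described, assuming the vector API discussed in this thread (allocateNew, getAccessor().getValueCount(), getMutator().setValueCount()); illustrative only, not taken from the patch:

```java
import org.apache.arrow.memory.BufferAllocator;
import org.apache.arrow.memory.RootAllocator;
import org.apache.arrow.vector.BitVector;

public class ValueCountSketch {
  public static void main(String[] args) {
    try (BufferAllocator allocator = new RootAllocator(Long.MAX_VALUE)) {
      BitVector bits = new BitVector("bits", allocator);
      bits.allocateNew(1015);
      // allocateNew sizes the buffers, but the accessor's valueCount is still 0 here;
      // it only becomes 1015 once the mutator sets it explicitly.
      System.out.println(bits.getAccessor().getValueCount());   // 0
      bits.getMutator().setValueCount(1015);
      System.out.println(bits.getAccessor().getValueCount());   // 1015
      bits.close();
    }
  }
}
```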
This is a question for @julienledem or @jacques-n. Also, I would recommend asking such questions on JIRA or the mailing list instead of Slack (which is not an official channel for development of the project).
See my comment above. Yes, we should set valueCount.
From the other tests I looked at, it seems like setting valueCount should be done using the mutator after the allocation rather than inside the load method.
One calls Mutator.setValueCount() once they are done writing to the vector through the mutator. When we load() the vector, we should restore the field valueCount.
allocationSizeInBytes * 8 is rounded up to the next byte and is not the actual valueCount.
If you rebase you will see that BitVector now overrides the load method. You can set valueCount there from fieldNode.getLength().
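A rough fragment of what such an override might look like inside BitVector, assuming the load(ArrowFieldNode, ArrowBuf) signature mentioned here and a valueCount field on the vector; it is not taken from the merged patch:

```java
// Fragment only, inside BitVector; ArrowFieldNode/ArrowBuf come from the surrounding class.
@Override
public void load(ArrowFieldNode fieldNode, ArrowBuf data) {
  super.load(fieldNode, data);
  // restore the logical length from the serialized field node instead of leaving
  // valueCount at 0 or deriving it from the allocation size
  this.valueCount = fieldNode.getLength();
}
```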
See my comment above. Yes, we should set valueCount.
@zeapo @julienledem this patch seems to break the integration tests. Let me know if you need assistance debugging.
Found the issue. Some implementations of getNullCount() were missing.
I think I implemented all of them; could you check if I missed any?
wesm left a comment
+1, thank you
azurefs doc * doc: complete core doc * doc: complete interchange doc * doc: complete array doc * doc: complete builder doc * doc: complete device doc * doc: complete io doc * doc: complete ipc doc * doc: complete types doc * mark deprecated apis * doc: complete _compute doc * doc: complete compute doc * doc: update compute doc * lint code * release 20.0.0.20250618 (apache#243) * fix: make ParquetFileFormat constructor args optional (apache#244) * fix: Field.remove_metadata should return Self (apache#246) * [pre-commit.ci] pre-commit autoupdate (apache#245) updates: - [github.com/astral-sh/ruff-pre-commit: v0.11.13 → v0.12.0](astral-sh/ruff-pre-commit@v0.11.13...v0.12.0) Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> * release 20.0.0.20250627 (apache#247) * fix: chunked_array with type should be specified (apache#250) * [pre-commit.ci] pre-commit autoupdate (apache#248) updates: - [github.com/astral-sh/ruff-pre-commit: v0.12.0 → v0.12.3](astral-sh/ruff-pre-commit@v0.12.0...v0.12.3) - [github.com/RobertCraigie/pyright-python: v1.1.402 → v1.1.403](RobertCraigie/pyright-python@v1.1.402...v1.1.403) Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> * release 20.0.0.20250715 (apache#251) * fix: The type parameter of array should be covariant (apache#253) * release 20.0.0.20250716 (apache#254) * Add py.typed file to signify that the library is typed See the relevant PEP https://peps.python.org/pep-0561 * Prepare `pyarrow-stubs` for history merging MINOR: [Python] Prepare `pyarrow-stubs` for history merging Co-authored-by: ZhengYu, Xu <[email protected]> * Add `ty` configuration and suppress error codes * One line per rule * Add licence header from original repo for all `.pyi` files * Revert "Add licence header from original repo for all `.pyi` files" * Prepare for licence merging * Exclude `stubs` from `rat` test * Add Apache licence clause to `py.typed` * Reduce list * Resolve merge conflict --------- Signed-off-by: Jonas Dedden <[email protected]> Co-authored-by: ZhengYu, Xu <[email protected]> Co-authored-by: Jim Bosch <[email protected]> Co-authored-by: Oliver Mannion <[email protected]> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> Co-authored-by: Eugene Toder <[email protected]> Co-authored-by: fvankrieken <[email protected]> Co-authored-by: Ilia Ablamonov <[email protected]> Co-authored-by: Mathias Beguin <[email protected]> Co-authored-by: Dylan Scott <[email protected]> Co-authored-by: deanm0000 <[email protected]> Co-authored-by: Jan Moravec <[email protected]> Co-authored-by: Marius van Niekerk <[email protected]> Co-authored-by: Jonas Dedden <[email protected]> Co-authored-by: Fábio D. Batista <[email protected]> Co-authored-by: ben-freist <[email protected]> Co-authored-by: Jiahao Yuan <[email protected]> Co-authored-by: Pim de Haan <[email protected]> Co-authored-by: Dan Redding <[email protected]> Co-authored-by: Tom Crasset <[email protected]> Co-authored-by: Tom McTiernan <[email protected]> Co-authored-by: Rok Mihevc <[email protected]>
I added `getNullCount()` to the `Accessor` interface. I don't know if this is the best way to achieve it, but it makes both the value count and the null count immediately accessible from the accessor.

Author: Mohamed Zenadi <[email protected]>

Closes apache#207 from zeapo/ARROW-380 and squashes the following commits:

27c0342 [Mohamed Zenadi] implement missing getNullCount implementation for NullableMapVector
9ff3355 [Mohamed Zenadi] implement the base case of getNullCount()
ad3f24a [Mohamed Zenadi] the used size is not the same as the allocated size
e858432 [Mohamed Zenadi] use the valueCount as basis for counting nulls rather than allocated bytes
0530c85 [Mohamed Zenadi] test the null count byte by byte and the odd-length case
95667d3 [Mohamed Zenadi] fix the comment
b12a2a5 [Mohamed Zenadi] fix wrong value returned by the method
f264250 [Mohamed Zenadi] use getNullCount() rather than isNull
baca69c [Mohamed Zenadi] add methods to count the number of null values in the vector
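For context, a null count over a validity bitmap amounts to counting the zero bits among the first `valueCount` slots. The sketch below is illustrative only: the `ValidityBitmap.getNullCount` helper, its plain `byte[]` argument, and the bit layout (bit `i` set means slot `i` is non-null) are assumptions made for the example, not the Arrow Java buffer API or the code in this PR.

```java
// Illustrative sketch: derive a null count from a validity bitmap where
// bit i == 1 means "slot i holds a value" and bit i == 0 means "slot i is null".
final class ValidityBitmap {
    private ValidityBitmap() {}

    /** Number of null slots among the first {@code valueCount} entries. */
    static int getNullCount(byte[] validity, int valueCount) {
        int setBits = 0;
        int fullBytes = valueCount / 8;
        // Count set bits in every fully used byte.
        for (int i = 0; i < fullBytes; i++) {
            setBits += Integer.bitCount(validity[i] & 0xFF);
        }
        // Handle the trailing partial byte when valueCount is not a multiple of 8.
        int remainder = valueCount % 8;
        if (remainder != 0) {
            int mask = (1 << remainder) - 1; // keep only the low `remainder` bits
            setBits += Integer.bitCount(validity[fullBytes] & mask);
        }
        // Nulls are exactly the slots whose validity bit is 0.
        return valueCount - setBits;
    }
}
```

Counting whole bytes and then masking the tail keeps the work proportional to `valueCount` rather than to the allocated buffer size, which matches the intent of the "use the valueCount as basis for counting nulls" commit above.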
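The commit list also mentions testing the odd-length case. A hypothetical usage of the sketch above, with a 10-slot vector (not a multiple of 8) in which two slots are left null, could look like this; the class and values are invented purely for illustration.

```java
// Hypothetical check of the non-multiple-of-8 case, reusing the ValidityBitmap
// sketch above; the layout and names remain assumptions for this example.
public final class OddLengthExample {
    public static void main(String[] args) {
        byte[] validity = new byte[2];      // enough bits for 10 slots
        for (int i = 0; i < 10; i++) {
            if (i != 3 && i != 7) {         // leave slots 3 and 7 null
                validity[i / 8] |= (byte) (1 << (i % 8));
            }
        }
        int nulls = ValidityBitmap.getNullCount(validity, 10);
        System.out.println(nulls);          // prints 2
    }
}
```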