
PR2 (nullability bug): adding new OH SparkCatalog which enables preserving non-nullable schemas #288


Open
cbb330 wants to merge 1 commit into main from nonnullability

Conversation

@cbb330 cbb330 (Collaborator) commented Feb 19, 2025

Summary

Problem: the OpenHouse Spark catalog does not preserve non-nullable fields requested by user DataFrames, so tables are saved with the wrong schema. This problem only affects CTAS (CREATE TABLE AS SELECT).

Solution: in this PR we provide a new SparkCatalog with configuration that enables preserving non-nullable schemas ✅; a follow-up will then point all Spark clients at this SparkCatalog 🕐. A hedged sketch of the new catalog class follows the config comparison below.

//old spark client config
spark.sql.defaultExtensions=liopenhouse.relocated.org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions,com.linkedin.openhouse.spark.extensions.OpenhouseSparkSessionExtensions
spark.sql.catalog.openhouse=liopenhouse.relocated.org.apache.iceberg.spark.SparkCatalog // this line
spark.sql.catalog.openhouse.catalog-impl=com.linkedin.openhouse.spark.LiOpenHouseCatalog
//new spark client config
spark.sql.defaultExtensions=liopenhouse.relocated.org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions,com.linkedin.openhouse.spark.extensions.OpenhouseSparkSessionExtensions
spark.sql.catalog.openhouse=com.linkedin.openhouse.spark.SparkCatalog // this line
spark.sql.catalog.openhouse.catalog-impl=com.linkedin.openhouse.spark.LiOpenHouseCatalog
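
For illustration only, here is a minimal sketch of what the new catalog class might look like. This is an assumption rather than the PR's actual implementation: it presumes the underlying (relocated) Iceberg SparkCatalog exposes a protected useNullableQuerySchema() hook, as newer Iceberg Spark modules do; the real class may achieve the same effect through a different mechanism.

// Hypothetical sketch only -- assumes the relocated Iceberg SparkCatalog exposes a
// protected useNullableQuerySchema() hook (as newer Iceberg versions do); the real
// class in this PR may differ.
package com.linkedin.openhouse.spark;

public class SparkCatalog extends org.apache.iceberg.spark.SparkCatalog {

  // Returning false asks the catalog to keep the query schema's nullability during
  // CTAS instead of relaxing every column to nullable.
  @Override
  protected boolean useNullableQuerySchema() {
    return false;
  }
}

With such a subclass in place, the only client-side change is the spark.sql.catalog.openhouse line highlighted in the config comparison above.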

Changes

  • Client-facing API Changes
  • Internal API Changes
  • Bug Fixes
  • New Features
  • Performance Improvements
  • Code Style
  • Refactoring
  • Documentation
  • Tests

Testing Done

  • Manually tested on local Docker setup. Please include commands run, and their output.
  • Added new tests for the changes made.
  • Updated existing tests to reflect the changes made.
  • No tests added or updated. Please explain why. If unsure, please feel free to ask for help.
  • Some other form of testing like staging or soak time in production. Please explain.

Additional Information

  • Breaking Changes
  • Deprecations
  • Large PR broken into smaller PRs, and PR plan linked in the description.

For all the boxes checked, include additional details of the changes made in this pull request.

@cbb330 cbb330 changed the title from "adding new OH SparkCatalog which enables preserving non-nullable schemas" to "PR2 (nullability bug): adding new OH SparkCatalog which enables preserving non-nullable schemas" on Feb 25, 2025
@cbb330 cbb330 force-pushed the nonnullability branch 7 times, most recently from 494ee8d to 3f87c2e on February 27, 2025 at 21:40
@HotSushi HotSushi (Collaborator) left a comment

Approach is good, but I'd prefer not introducing this in OSS; let's chat more in DM.

@@ -0,0 +1,8 @@
package com.linkedin.openhouse.spark;

public class SparkCatalog extends org.apache.iceberg.spark.SparkCatalog {
Collaborator:

I'd strongly prefer we not introduce this layer (i.e. SparkCatalog) in the OSS codebase (if it's a must, a better place would be li-wrapper).
Reasons:

  • Iceberg solves this in 1.7.x (https://iceberg.apache.org/docs/1.7.1/spark-configuration/#catalog-configuration) with a configuration (a hedged config sketch follows this list). Ideally we upgrade to 1.7.x and this problem is solved for us automatically.
  • If we override this conf, Iceberg's behavior will be ignored.
    • It's ideal that we do not touch Iceberg connector code (i.e. SparkCatalog, FlinkCatalog, etc.); it's crucial that OHCatalog stays minimal in its interfacing (easy for us to upgrade Iceberg/Spark/other dependencies).
  • Adding this layer would allow other Iceberg confs to be overridden, which we shouldn't allow.
    • I'm concerned about it getting misused and thereby our fork's behavior deviating from OSS.
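
For reference, a hedged sketch of what the 1.7.x-style client config might look like; the property name use-nullable-query-schema is my reading of the linked catalog-configuration docs and should be verified there before relying on it.

// hypothetical config, assuming an upgrade to Iceberg 1.7.x and that the catalog
// property is named use-nullable-query-schema as described in the linked docs
spark.sql.catalog.openhouse=liopenhouse.relocated.org.apache.iceberg.spark.SparkCatalog
spark.sql.catalog.openhouse.use-nullable-query-schema=false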

Collaborator Author (cbb330):

I also don't see this as a perfect option and appreciate the points here.

  1. We can't upgrade to 1.7.x because it depends on higher Spark versions than our Spark clients use.
  2. What exactly will be ignored / changed? Because we are simply extending and adding, everything should be preserved.
  3. This can be codified among the contributors as code not to edit.

@@ -37,7 +37,7 @@ public static void configureCatalogs(
     builder
         .config(
             String.format("spark.sql.catalog.%s", catalogName),
-            "org.apache.iceberg.spark.SparkCatalog")
+            "com.linkedin.openhouse.spark.SparkCatalog")
Collaborator:

We'd need to change Docker code too, and all other references to this connector.


// Verify id column is preserved in good catalog, not preserved in bad catalog
assertFalse(sourceSchema.apply("id").nullable(), "Source table id column should be required");
assertTrue(
-    targetSchema.apply("id").nullable(),
+    targetSchemaBroken.apply("id").nullable(),
     "Target table id column required should not be preserved -- due to 1) the CTAS non-nullable preservation is off by default and 2) OS spark3.1 catalyst connector lack of support for non-null CTAS");
Collaborator:

If both are nullable, how is the SparkCatalog helping?
