[IcebergIO] Support hash distribution mode when writing rows#38061

Open
ahmedabu98 wants to merge 8 commits into apache:master from ahmedabu98:group-partitions

Conversation

@ahmedabu98
Contributor

Adding a new sink code path that groups rows by partition before writing, making partitioned writes a lot more efficient and scalable.

@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the IcebergIO sink by adding an optional feature to group rows by partition before writing them to the destination. This change is designed to optimize performance and reduce the creation of small files in partitioned Iceberg tables. The implementation introduces new transforms and utility classes to handle the grouping and writing logic, while also updating the existing API and test suites to support this new configuration.

Highlights

  • Partitioned Write Optimization: Introduced a new sink code path that groups rows by partition before writing, which significantly improves efficiency and scalability for partitioned tables by reducing the number of small files.
  • New Components: Added 'AssignDestinationsAndPartitions', 'WritePartitionedRowsToFiles', 'WriteToPartitions', and 'BeamRowWrapper' to support the new grouping logic.
  • API Updates: Updated 'IcebergIO.WriteRows' and 'IcebergWriteSchemaTransformProvider' to include a 'groupByPartitions' configuration option.
  • Test Coverage: Updated 'IcebergIOWriteTest' and 'IcebergWriteSchemaTransformProviderTest' to use parameterized tests, ensuring both grouped and non-grouped write paths are verified.
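As a toy illustration of the grouped write path described in the highlights above (Row, groupByPartition, and the column names here are illustrative stand-ins, not Beam or Iceberg APIs): grouping rows by their partition key before writing means each partition is flushed through a single writer, producing one file per partition instead of many small ones.

```java
import java.util.*;
import java.util.stream.*;

public class GroupedWriteSketch {
    // Stand-in for a Beam Row: a partition column plus a payload.
    static final class Row {
        final String country; final long userId;
        Row(String country, long userId) { this.country = country; this.userId = userId; }
    }

    // With hash distribution, rows are shuffled so that all rows sharing a
    // partition key land in the same group; each group then becomes one file,
    // however the rows were interleaved across input bundles.
    static Map<String, List<Row>> groupByPartition(List<Row> rows) {
        return rows.stream().collect(Collectors.groupingBy(r -> r.country));
    }

    public static void main(String[] args) {
        List<Row> rows = List.of(
            new Row("US", 1), new Row("DE", 2), new Row("US", 3), new Row("DE", 4));
        Map<String, List<Row>> grouped = groupByPartition(rows);
        System.out.println(grouped.size());           // 2 partitions -> 2 files
        System.out.println(grouped.get("US").size()); // 2 rows in the US file
    }
}
```

Without the grouping step, each worker bundle may open a writer for every partition it happens to see, which is what produces the flood of small files on highly partitioned tables.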



@ahmedabu98 ahmedabu98 marked this pull request as draft April 3, 2026 05:32
@ahmedabu98 ahmedabu98 marked this pull request as ready for review April 13, 2026 21:28
@ahmedabu98 ahmedabu98 changed the title from "[IcebergIO] Groups rows by partition before writing" to "[IcebergIO] Support hash distribution mode when writing rows" on Apr 13, 2026
@github-actions
Contributor

Checks are failing. Will not request review until checks are succeeding. If you'd like to override that behavior, comment "assign set of reviewers".

@github-actions
Contributor

Assigning reviewers:

R: @claudevdm for label python.
R: @chamikaramj for label java.

Note: If you would like to opt out of this review, comment "assign to next reviewer".

Available commands:

  • stop reviewer notifications - opt out of the automated review tooling
  • remind me after tests pass - tag the comment author after tests pass
  • waiting on author - shift the attention set back to the author (any comment or push by the author will return the attention set to the reviewers)

The PR bot will only process comments in the main thread (not review comments).

@codecov

codecov Bot commented Apr 15, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 58.51%. Comparing base (48a6ceb) to head (fa5ccda).
⚠️ Report is 46 commits behind head on master.

Additional details and impacted files
@@              Coverage Diff              @@
##             master   #38061       +/-   ##
=============================================
+ Coverage     54.61%   58.51%    +3.89%     
- Complexity     1689    15428    +13739     
=============================================
  Files          1067     2851     +1784     
  Lines        168152   280076   +111924     
  Branches       1226    12332    +11106     
=============================================
+ Hits          91835   163873    +72038     
- Misses        74118   109777    +35659     
- Partials       2199     6426     +4227     
Flag   Coverage     Δ
java   64.58% <ø>   (-2.76%) ⬇️

Flags with carried forward coverage won't be shown.

☔ View full report in Codecov by Sentry.


@github-actions
Contributor

Reminder, please take a look at this PR: @claudevdm @chamikaramj

@claudevdm
Collaborator

/gemini review


@gemini-code-assist (bot) left a comment


Code Review

This pull request introduces a new distribution mode for Iceberg writes, allowing rows to be shuffled by partition key before writing to reduce the number of small files. It includes the implementation of AssignDestinationsAndPartitions and WriteToPartitions transforms, along with a BeamRowWrapper for Iceberg's StructLike interface. Feedback focuses on critical serialization issues in AssignDoFn, where non-serializable maps must be marked transient and initialized in @Setup. Additionally, improvements are suggested for resource management in WritePartitionedRowsToFiles using try-finally blocks and optimizing the table cache with double-checked locking.
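To make the serialization concern concrete, here is a self-contained sketch using plain JDK serialization (PartitionKey here is a stand-in class, not Iceberg's, and BrokenFn/FixedFn are hypothetical names): a populated non-transient map of non-serializable values fails to serialize, while a transient field that is re-initialized after deserialization, as in a DoFn @Setup method, does not.

```java
import java.io.*;
import java.util.*;

public class TransientFieldSketch {
    // Stand-in for Iceberg's PartitionKey: deliberately not Serializable.
    static class PartitionKey {}

    static class BrokenFn implements Serializable {
        // Serialized along with the Fn -> NotSerializableException once populated.
        final Map<String, PartitionKey> partitionKeys = new HashMap<>();
    }

    static class FixedFn implements Serializable {
        // Skipped during serialization; rebuilt on the worker (as in @Setup).
        transient Map<String, PartitionKey> partitionKeys;
        void setup() { partitionKeys = new HashMap<>(); }
    }

    // Returns whether Java serialization of o succeeds.
    static boolean serializes(Object o) {
        try (ObjectOutputStream out = new ObjectOutputStream(OutputStream.nullOutputStream())) {
            out.writeObject(o);
            return true;
        } catch (IOException e) {
            return false; // NotSerializableException is an IOException
        }
    }

    public static void main(String[] args) {
        BrokenFn broken = new BrokenFn();
        broken.partitionKeys.put("db.table", new PartitionKey());
        System.out.println(serializes(broken));      // false: map holds a non-serializable value
        System.out.println(serializes(new FixedFn())); // true: transient field is skipped
    }
}
```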

Comment on lines +78 to +79
private final Map<String, PartitionKey> partitionKeys = new HashMap<>();
private transient @MonotonicNonNull Map<String, BeamRowWrapper> wrappers;

Severity: high

The partitionKeys map contains PartitionKey objects which are not Serializable. Since DoFn instances are serialized to be distributed to workers, this will cause a NotSerializableException at pipeline submission or execution time. This map should be marked transient and initialized in the @Setup method, similar to the wrappers map.

Suggested change:
-   private final Map<String, PartitionKey> partitionKeys = new HashMap<>();
+   private transient @MonotonicNonNull Map<String, PartitionKey> partitionKeys;
    private transient @MonotonicNonNull Map<String, BeamRowWrapper> wrappers;


@Setup
public void setup() {
this.wrappers = new HashMap<>();

Severity: high

Initialize partitionKeys here to ensure it is available on workers after deserialization.

      this.partitionKeys = new HashMap<>();
      this.wrappers = new HashMap<>();

ValueInSingleWindow.of(element, timestamp, window, paneInfo));
Row data = dynamicDestinations.getData(element);

@Nullable PartitionKey partitionKey = partitionKeys.get(tableIdentifier);

Severity: high

Use checkStateNotNull to safely access the transient partitionKeys map.

Suggested change:
-   @Nullable PartitionKey partitionKey = partitionKeys.get(tableIdentifier);
+   @Nullable PartitionKey partitionKey = checkStateNotNull(partitionKeys).get(tableIdentifier);

}
partitionKey = new PartitionKey(spec, schema);
wrapper = new BeamRowWrapper(data.getSchema(), schema.asStruct());
partitionKeys.put(tableIdentifier, partitionKey);

Severity: high

Use checkStateNotNull to safely access the transient partitionKeys map.

Suggested change:
-   partitionKeys.put(tableIdentifier, partitionKey);
+   checkStateNotNull(partitionKeys).put(tableIdentifier, partitionKey);

Comment on lines +139 to +145
RecordWriter writer =
new RecordWriter(table, destination.getFileFormat(), fileName, partitionData);
for (Row row : element.getValue()) {
Record record = IcebergUtils.beamRowToIcebergRecord(table.schema(), row);
writer.write(record);
}
writer.close();

Severity: medium

The RecordWriter should be closed within a try-finally block. If an exception occurs during the write loop (e.g., due to data corruption or IO issues), the writer will not be closed, potentially leading to resource leaks such as open file handles. Note that writer is still needed after the block for writer.getDataFile(), so a try-finally is more appropriate than try-with-resources here.

      RecordWriter writer =
          new RecordWriter(table, destination.getFileFormat(), fileName, partitionData);
      try {
        for (Row row : element.getValue()) {
          Record record = IcebergUtils.beamRowToIcebergRecord(table.schema(), row);
          writer.write(record);
        }
      } finally {
        writer.close();
      }

: Maps.newHashMap();

@Nullable Table table = null;
synchronized (LAST_REFRESHED_TABLE_CACHE) {

Severity: medium

This synchronized block is missing a second check of the cache (double-checked locking pattern). Multiple threads that miss the cache for the same identifier will wait at the synchronized block and then all proceed to load or create the table sequentially. Adding a second getIfPresent check inside the block avoids redundant catalog operations.

      synchronized (LAST_REFRESHED_TABLE_CACHE) {
        lastRefreshedTable = LAST_REFRESHED_TABLE_CACHE.getIfPresent(identifier);
        if (lastRefreshedTable != null && lastRefreshedTable.table != null) {
          lastRefreshedTable.refreshIfStale();
          return lastRefreshedTable.table;
        }
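For reference, the double-checked pattern this comment suggests can be sketched in isolation (names are illustrative, not the actual IcebergIO cache code): check the cache without the lock, and check it again inside the synchronized block before doing the expensive load, so threads that raced to the lock don't each repeat the catalog call.

```java
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class DoubleCheckedCacheSketch {
    static final ConcurrentHashMap<String, String> CACHE = new ConcurrentHashMap<>();
    static final AtomicInteger LOADS = new AtomicInteger();

    // Stands in for an expensive catalog loadTable/createTable call.
    static String loadTable(String id) {
        LOADS.incrementAndGet();
        return "table:" + id;
    }

    static String getTable(String id) {
        String t = CACHE.get(id);       // first check, no lock
        if (t != null) return t;
        synchronized (CACHE) {
            t = CACHE.get(id);          // second check, under the lock
            if (t == null) {
                t = loadTable(id);      // only the first thread in loads
                CACHE.put(id, t);
            }
            return t;
        }
    }

    public static void main(String[] args) {
        for (String id : List.of("db.t", "db.t", "db.t")) {
            getTable(id);
        }
        System.out.println(LOADS.get()); // 1: later callers hit the cache
    }
}
```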
