
Conversation

@AHeise AHeise (Contributor) commented Sep 30, 2025

KafkaEnumerator's state contains only the TopicPartitions but not the offsets, so it does not capture the full split state, contrary to the design intent.

There are a couple of issues with that approach. It implicitly assumes that splits are fully assigned to readers before the first checkpoint. Otherwise, the enumerator will invoke the offset initializer again on recovery from such a checkpoint, leading to inconsistencies (LATEST may be resolved during the first attempt for some partitions and during the second attempt for others).

Through the addSplitsBack callback, these scenarios can also occur later in BATCH mode, which actually leads to duplicate rows (in the case of EARLIEST or SPECIFIC-OFFSETS) or data loss (in the case of LATEST). Finally, it is not possible to safely use KafkaSource as part of a HybridSource because the offset initializer cannot even be recreated on recovery.

All cases are solved by also retaining the offset in the enumerator state. To that end, this commit merges the async discovery phases so that splits are initialized from the partitions immediately. Any subsequent checkpoint will then contain the proper start offset.
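The before/after difference can be sketched with a minimal, self-contained model. All class and method names here (`Partition`, `Split`, `EnumeratorSketch`, and so on) are illustrative stand-ins, not the actual connector API:

```java
import java.util.List;
import java.util.function.Function;
import java.util.stream.Collectors;

// Hypothetical stand-ins for the connector types; names are illustrative only.
record Partition(String topic, int id) {}
record Split(Partition partition, long startingOffset) {}

class EnumeratorSketch {
    // Old behavior: checkpoint only the partitions; offsets are re-derived on
    // recovery, so an initializer like LATEST may resolve differently per attempt.
    static List<Partition> snapshotPartitionsOnly(List<Split> splits) {
        return splits.stream().map(Split::partition).collect(Collectors.toList());
    }

    // New behavior: initialize splits right after discovery and checkpoint the
    // full split state, including the start offset resolved exactly once.
    static List<Split> discoverAndInitialize(
            List<Partition> discovered, Function<Partition, Long> offsetInitializer) {
        return discovered.stream()
                .map(p -> new Split(p, offsetInitializer.apply(p)))
                .collect(Collectors.toList());
    }
}
```

With the new behavior, a checkpoint of `Split`s already carries the resolved start offsets, so recovery never needs to re-run the offset initializer.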

```java
topicPartitions.add(
        new KafkaPartitionSplit(
                new TopicPartition(TOPIC_PREFIX + readerId, partition),
                STARTING_OFFSET));
```


Thanks for the PR. This is a very good improvement for the connector.
I noticed that the current test creates splits using the constant KafkaPartitionSplit.EARLIEST_OFFSET; would it make sense to add a test case that uses a real-world offset (e.g., 123)?

Contributor Author


I had to change the logic a bit and introduced a new special value MIGRATED, against which all unit tests now run. However, I also added a test with specific offsets to KafkaSourceEnumeratorTest.

```diff
-public void testAddSplitsBack() throws Throwable {
+@ParameterizedTest
+@EnumSource(StandardOffsetsInitializer.class)
+public void testAddSplitsBack(StandardOffsetsInitializer offsetsInitializer) throws Throwable {
```


Is my understanding correct that the test verifies that the offset is correctly recalculated on recovery, but doesn't verify that the original offset (before the failure) was preserved and restored?

Contributor Author


Good catch. I expanded the test to cover snapshotting.

@fapaul fapaul self-requested a review October 2, 2025 06:41
Contributor

@fapaul fapaul left a comment


Looks mostly good; left some inline comments.

```java
new SplitAndAssignmentStatus(
        new KafkaPartitionSplit(
                new TopicPartition(topic, partition),
                DEFAULT_STARTING_OFFSET),
```
Contributor


Isn't this a behavioral change? Previously the unassigned split would get the starting offset configured by the user on reassignment.

Contributor Author


Yes, I added a new MIGRATED offset to indicate that the split needs to be initialized on recovery.
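A minimal sketch of the sentinel approach discussed here (the concrete value of MIGRATED and the method names are assumptions for illustration; the real connector code differs):

```java
class MigratedOffsetSketch {
    // Sentinel marking a split restored from an old-format checkpoint that
    // carried no offset; the concrete value here is illustrative only.
    static final long MIGRATED = -4L;

    // On recovery, only MIGRATED splits consult the (re-run) offset
    // initializer; all other splits keep the offset from the checkpoint.
    static long resolveStartOffset(long checkpointedOffset, long initializedOffset) {
        return checkpointedOffset == MIGRATED ? initializedOffset : checkpointedOffset;
    }
}
```

This keeps the behavioral change confined to migrated state: splits checkpointed with a real offset are never re-initialized.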

@AHeise AHeise force-pushed the FLINK-38453-enum-state branch from 52382f2 to e2ede23 Compare October 7, 2025 07:02
Contributor

@fapaul fapaul left a comment


Thanks for addressing the comments 👍 I am only missing a higher-level test for the newly added offset migration in the enumerator.

```java
                migratedPartitions, getOffsetsRetriever());
        return splitByAssignmentStatus(
                splits.stream()
                        .map(splitStatus -> resolveMigratedSplit(splitStatus, startOffsets)));
```
Contributor


Nit: the flow of extracting the migratedPartitions is overly complex because we extract the migrated partitions twice, in line 179 and line 161.

Contributor Author


It's unfortunately necessary by design:

  • Line 161 extracts the partitions, which are then used to jointly look up the partition offsets.
  • This is expensive, as it uses the admin client to contact the Kafka cluster.
  • The offset initializer is designed to look up all partitions jointly so that only one request goes to the Kafka brokers.
  • Now that we have received all offsets, line 179 applies them to the splits. It could be a simple map lookup, but I decided to add some assertions, so it went into a different method.
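The two-pass flow described in these bullets can be sketched as follows. The types and method names (`TP`, `SplitSketch`, `resolveMigrated`, `batchedOffsetLookup`) are hypothetical stand-ins for the connector's actual classes:

```java
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

// Hypothetical types; the real connector classes differ.
record TP(String topic, int id) {}
record SplitSketch(TP partition, long startingOffset) {}

class JointLookupSketch {
    static final long MIGRATED = -4L; // illustrative sentinel value

    static List<SplitSketch> resolveMigrated(
            List<SplitSketch> splits,
            Function<List<TP>, Map<TP, Long>> batchedOffsetLookup) {
        // Pass 1: collect the partitions that still need an offset, so the
        // expensive admin-client lookup contacts the brokers exactly once.
        List<TP> migrated = splits.stream()
                .filter(s -> s.startingOffset() == MIGRATED)
                .map(SplitSketch::partition)
                .collect(Collectors.toList());
        Map<TP, Long> offsets = batchedOffsetLookup.apply(migrated);
        // Pass 2: apply the resolved offsets back onto the splits, asserting
        // that every migrated partition actually received an offset.
        return splits.stream()
                .map(s -> {
                    if (s.startingOffset() != MIGRATED) {
                        return s;
                    }
                    Long offset = offsets.get(s.partition());
                    if (offset == null) {
                        throw new IllegalStateException("No offset for " + s.partition());
                    }
                    return new SplitSketch(s.partition(), offset);
                })
                .collect(Collectors.toList());
    }
}
```

Batching the lookup is the point of the two passes: resolving each migrated partition individually would issue one broker request per partition instead of one overall.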

```diff
-public void testAddSplitsBack() throws Throwable {
+@ParameterizedTest
+@EnumSource(StandardOffsetsInitializer.class)
+public void testAddSplitsBack(StandardOffsetsInitializer offsetsInitializer) throws Throwable {
```
Contributor


Can you also add a test to cover the newly added migration story?

Contributor

@fapaul fapaul left a comment


LGTM

@AHeise AHeise force-pushed the FLINK-38453-enum-state branch from 52cccfe to 3b1dcee Compare October 7, 2025 14:15
@Savonitar

LGTM

@AHeise AHeise merged commit cb5c5c0 into apache:main Oct 10, 2025
7 checks passed