[FLINK-38453] Add full splits to KafkaSourceEnumState #192
Conversation
topicPartitions.add(
        new KafkaPartitionSplit(
                new TopicPartition(TOPIC_PREFIX + readerId, partition),
                STARTING_OFFSET));
Thanks for the PR. This is a very good improvement for the connector.
I noticed that the current test creates splits using the constant KafkaPartitionSplit.EARLIEST_OFFSET; would it make sense to add a test case that uses a real-world offset (e.g., 123)?
I had to change the logic a bit and introduced a new special value MIGRATED against which all unit tests now go. However, I also added a test with specific offsets to KafkaSourceEnumeratorTest.
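To make the idea concrete, here is a hypothetical sketch of the special-value approach mentioned above. The existing KafkaPartitionSplit sentinels are negative longs; the exact name and value of the new MIGRATED sentinel in the PR may differ.

```java
// Sketch only: sentinel names/values mirror KafkaPartitionSplit's convention,
// but MIGRATED_OFFSET is an assumption based on this review thread.
public class MigratedSentinelSketch {
    public static final long LATEST_OFFSET = -1L;    // existing sentinel
    public static final long EARLIEST_OFFSET = -2L;  // existing sentinel
    public static final long COMMITTED_OFFSET = -3L; // existing sentinel
    // Assumed new sentinel: the split came from legacy state (partition only),
    // so its start offset still has to be initialized on recovery.
    public static final long MIGRATED_OFFSET = -4L;

    // A split with the MIGRATED sentinel must be resolved to a real offset
    // before it can be handed to a reader.
    public static boolean needsInitialization(long startingOffset) {
        return startingOffset == MIGRATED_OFFSET;
    }
}
```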
-    public void testAddSplitsBack() throws Throwable {
+    @ParameterizedTest
+    @EnumSource(StandardOffsetsInitializer.class)
+    public void testAddSplitsBack(StandardOffsetsInitializer offsetsInitializer) throws Throwable {
Is my understanding correct that the test verifies that the offset is correctly recalculated on recovery, but doesn't verify that the original offset (before the failure) was preserved and restored?
Good catch. I expanded the test to cover snapshotting.
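As an aside, the @ParameterizedTest/@EnumSource pattern in the diff above expands one test method into one run per enum constant. A framework-free analogue of that expansion, using an illustrative enum rather than the PR's actual StandardOffsetsInitializer, could look like:

```java
// Hand-rolled analogue of JUnit 5's @EnumSource: run the same check once per
// enum constant. The enum and its sentinel values are illustrative only.
public class ParameterizedByEnumSketch {
    enum StandardOffsetsInitializer {
        EARLIEST(-2L), LATEST(-1L), COMMITTED(-3L);

        final long sentinel;

        StandardOffsetsInitializer(long sentinel) {
            this.sentinel = sentinel;
        }
    }

    // Returns true if the shared check passes for every "parameter",
    // mimicking one parameterized test run per enum constant.
    static boolean runForAll() {
        for (StandardOffsetsInitializer init : StandardOffsetsInitializer.values()) {
            if (init.sentinel >= 0) {
                return false; // sentinel offsets are all negative by convention
            }
        }
        return true;
    }
}
```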
Looks mostly good; I left some inline comments.
new SplitAndAssignmentStatus(
        new KafkaPartitionSplit(
                new TopicPartition(topic, partition),
                DEFAULT_STARTING_OFFSET),
Isn't this a behavioral change? Previously the unassigned split would get the starting offset configured by the user on reassignment.
Yes, added a new MIGRATED offset to indicate that this needs to be initialized on recovery.
Resolved review threads:
- .../src/main/java/org/apache/flink/connector/kafka/source/enumerator/KafkaSourceEnumerator.java (two threads)
- .../test/java/org/apache/flink/connector/kafka/source/enumerator/KafkaSourceEnumeratorTest.java
Force-pushed from 52382f2 to e2ede23.
Thanks for addressing the comments 👍 I am only missing a higher-level test for the newly added offset migration in the enumerator.
                migratedPartitions, getOffsetsRetriever());
return splitByAssignmentStatus(
        splits.stream()
                .map(splitStatus -> resolveMigratedSplit(splitStatus, startOffsets)));
Nit: The flow of extracting the migratedPartitions is overly complex, because we extract the migrated partitions twice, in lines 161 and 179.
It's unfortunately necessary by design:
- Line 161 extracts the partitions, which are used to jointly look up the partition offsets.
- This is expensive, as it uses the admin client to contact the Kafka cluster.
- The offset initializer is designed to look up all partitions jointly, so that only one request is sent to the Kafka brokers.
- Now that we have received all offsets, line 179 applies them to the splits. It could be a simple map lookup, but I decided to add some assertions, so it went into a separate method.
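The two-phase flow described in this thread can be sketched as follows. The shape of resolveMigratedSplit, the MIGRATED sentinel, and the offsets-retriever signature are assumptions based on this discussion, not the PR's exact code.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.function.Function;

// Sketch: collect migrated partitions once, issue a single batched offsets
// lookup (the expensive admin-client call), then apply the results per split.
public class MigratedOffsetResolution {
    static final long MIGRATED = -4L; // assumed sentinel for legacy-state splits

    record Split(String topicPartition, long startingOffset) {}

    static List<Split> resolveAll(
            List<Split> splits,
            Function<Set<String>, Map<String, Long>> offsetsRetriever) {
        // Phase 1: gather all migrated partitions so the brokers see one request.
        Set<String> migrated = new HashSet<>();
        for (Split s : splits) {
            if (s.startingOffset() == MIGRATED) {
                migrated.add(s.topicPartition());
            }
        }
        Map<String, Long> startOffsets =
                migrated.isEmpty() ? Map.of() : offsetsRetriever.apply(migrated);

        // Phase 2: apply the looked-up offsets to each migrated split.
        List<Split> resolved = new ArrayList<>();
        for (Split s : splits) {
            if (s.startingOffset() != MIGRATED) {
                resolved.add(s);
            } else {
                Long offset = startOffsets.get(s.topicPartition());
                // Assertion mirrors the comment above: every migrated partition
                // must have received an offset from the batched lookup.
                assert offset != null : "offset missing for " + s.topicPartition();
                resolved.add(new Split(s.topicPartition(), offset));
            }
        }
        return resolved;
    }
}
```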
-    public void testAddSplitsBack() throws Throwable {
+    @ParameterizedTest
+    @EnumSource(StandardOffsetsInitializer.class)
+    public void testAddSplitsBack(StandardOffsetsInitializer offsetsInitializer) throws Throwable {
Can you also add a test to cover the newly added migration story?
LGTM
Force-pushed from 52cccfe to 3b1dcee.
LGTM
KafkaEnumerator's state contains only the TopicPartitions, not the offsets, so it doesn't contain the full split state, contrary to the design intent.
There are a couple of issues with that approach. It implicitly assumes that splits are fully assigned to readers before the first checkpoint. Otherwise, the enumerator will invoke the offset initializer again on recovery from such a checkpoint, leading to inconsistencies (LATEST may be initialized during the first attempt for some partitions and during the second attempt for others).
Through the addSplitBack callback, you may also hit these scenarios later in BATCH mode, which actually leads to duplicate rows (in the case of EARLIEST or SPECIFIC-OFFSETS) or data loss (in the case of LATEST). Finally, it's not possible to safely use KafkaSource as part of a HybridSource, because the offset initializer cannot even be recreated on recovery.
All cases are solved by also retaining the offset in the enumerator state. To that end, this commit merges the async discovery phases to immediately initialize the splits from the partitions. Any subsequent checkpoint will contain the proper start offset.
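The state change the commit message describes, checkpointing full splits (partition plus start offset) instead of bare TopicPartitions, can be sketched as below. All type and method names here are illustrative assumptions, not the connector's actual API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// Sketch: old state held only partitions, so offsets were recomputed on
// recovery; new state holds full splits, so start offsets survive checkpoints.
public class EnumStateSketch {
    static final long MIGRATED = -4L; // assumed sentinel for legacy-state splits

    record Split(String topicPartition, long startOffset) {}

    // Old-style state: partitions only.
    record LegacyState(Set<String> partitions) {}

    // New-style state: full splits with their start offsets.
    record FullSplitState(List<Split> splits) {}

    // On restore from legacy state, each partition is tagged MIGRATED and its
    // real offset is resolved lazily; newly discovered partitions would be
    // initialized immediately instead, so later checkpoints carry real offsets.
    static FullSplitState upgrade(LegacyState legacy) {
        List<Split> splits = new ArrayList<>();
        for (String tp : legacy.partitions()) {
            splits.add(new Split(tp, MIGRATED));
        }
        return new FullSplitState(splits);
    }
}
```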