Comparing changes

base repository: influxdata/rskafka
base: main
head repository: influxdata/rskafka
compare: dom/message-too-large
  • 4 commits
  • 13 files changed
  • 1 contributor

Commits on Mar 30, 2022

  1. refactor: wrap record key/value in Arc

    Allows "copies" of the key/value bytes to be held for produce request
    retries without having to actually copy the underlying buffer, at the
    minor cost of extra pointer indirection.
    domodwyer committed Mar 30, 2022 · 43b776e
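
A minimal sketch of the idea in the commit above, using a hypothetical Record shape rather than the actual rskafka type: wrapping the key/value bytes in an Arc lets a retry path hold a cheap clone of the record while the underlying buffer is never duplicated.

```rust
use std::sync::Arc;

// Hypothetical record shape; the real rskafka `Record` carries more fields.
#[derive(Clone)]
struct Record {
    key: Option<Arc<Vec<u8>>>,
    value: Option<Arc<Vec<u8>>>,
}

fn main() {
    let record = Record {
        key: Some(Arc::new(b"device-42".to_vec())),
        value: Some(Arc::new(vec![0u8; 1024 * 1024])), // 1 MiB payload
    };

    // Cloning for a produce-request retry only bumps the reference counts;
    // the 1 MiB buffer itself is not copied.
    let retry_copy = record.clone();
    assert!(Arc::ptr_eq(
        record.value.as_ref().unwrap(),
        retry_copy.value.as_ref().unwrap()
    ));
}
```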
  2. refactor: lend Records to produce() calls

    Changes the ProducerClient::produce() call to take a slice of Records
    rather than a Vec, allowing the caller to retain ownership of the
    original buffer and easily submit a sub-slice of it as needed.
    domodwyer committed Mar 30, 2022 · 39bac2d
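
A hedged sketch of the signature change described above; the trait and method shown here are illustrative stand-ins, not the actual rskafka API (the real produce() is presumably async and returns a Result):

```rust
// Placeholder stand-in for the real rskafka `Record` type.
struct Record;

// Illustrative trait: before this change the method took ownership
// (`records: Vec<Record>`); borrowing a slice lets the caller keep the
// buffer and retry with any subset of it.
trait ProducerClient {
    fn produce(&self, records: &[Record]);
}

fn retry_second_half(client: &dyn ProducerClient, records: &[Record]) {
    // Re-submitting a subset is just a sub-slice; no records are cloned
    // or moved out of the caller's buffer.
    let (_, second_half) = records.split_at(records.len() / 2);
    client.produce(second_half);
}
```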
  3. refactor: let MockClient return response sequences

    This change affects test code only, changing the MockClient to return a
    configured sequence of responses, rather than the same response every
    time.
    
    As an example, this can be used to construct a test that observes an
    error, followed by a different error, followed by a success - the
    previous mock was restricted to always returning the same response.
    domodwyer committed Mar 30, 2022 · 9b7ecf1
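
A rough sketch of the test-only pattern described above, using simplified stand-in types rather than the actual MockClient in rskafka: the mock pops pre-configured responses off a queue, so a test can script "error, different error, success".

```rust
use std::collections::VecDeque;
use std::sync::Mutex;

// Simplified stand-ins for the real response/error types.
enum ProduceError {
    MessageTooLarge,
    Other,
}
type ProduceResult = Result<(), ProduceError>;

// Illustrative mock: each produce() call consumes the next scripted response.
struct MockClient {
    responses: Mutex<VecDeque<ProduceResult>>,
}

impl MockClient {
    fn with_responses(responses: Vec<ProduceResult>) -> Self {
        Self {
            responses: Mutex::new(responses.into()),
        }
    }

    fn produce(&self) -> ProduceResult {
        self.responses
            .lock()
            .unwrap()
            .pop_front()
            .expect("mock called more times than responses were configured")
    }
}

fn main() {
    // An error, a different error, then a success - the scenario mentioned
    // in the commit message.
    let mock = MockClient::with_responses(vec![
        Err(ProduceError::MessageTooLarge),
        Err(ProduceError::Other),
        Ok(()),
    ]);
    assert!(mock.produce().is_err());
    assert!(mock.produce().is_err());
    assert!(mock.produce().is_ok());
}
```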
  4. feat: split produce batch for MESSAGE_TOO_LARGE

    Aggregating records into produce request batches is approximate, and may
    occasionally yield a batch larger than the configured maximum - if the
    size of the batch exceeds the broker's maximum allowed message size
    (message.max.bytes), a MESSAGE_TOO_LARGE error is returned.
    
    Prior to this commit, the whole batch was dropped and the caller had to
    retry, hoping the retried records did not aggregate into a too-large
    batch once again. After this commit, exactly one attempt is made to
    automatically/internally split the batch in half and produce each half
    concurrently, transparently to the caller. A message is logged at WARN to
    help the user identify the problem and tune the maximum batch size
    configuration to avoid it.
    
    The two downsides of this are:
    
        * Unpredictable latency if a batch needs splitting and re-submitting
        * A partial write may occur (writing one half succeeds, other fails)
    
    These downsides are offset by the fact that MESSAGE_TOO_LARGE should be a
    rare event, making the latency hit equally rare. Succeeding in producing
    one half but not the other is rarer still, but does result in an error
    being returned to the caller even though half of the records were
    successfully produced. The caller had no control over which writes were
    aggregated into which batch anyway, so has no expectation of two writes
    succeeding or failing together (atomicity).
    domodwyer committed Mar 30, 2022 · 25b6b5e
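
A rough sketch of the recovery flow described above, under the behaviour stated in the commit message (split in half exactly once, produce both halves concurrently). All names here are illustrative stand-ins rather than the actual rskafka internals, and the snippet assumes the futures crate for join!:

```rust
// Illustrative stand-ins; the real rskafka types and error variants differ.
struct Record;

enum ProduceError {
    MessageTooLarge,
    Other,
}

// Stand-in for the real wire call to the broker.
async fn produce(_records: &[Record]) -> Result<(), ProduceError> {
    Ok(())
}

// Produce a batch, splitting it in half exactly once if the broker rejects
// it with MESSAGE_TOO_LARGE.
async fn produce_with_split(records: &[Record]) -> Result<(), ProduceError> {
    match produce(records).await {
        Err(ProduceError::MessageTooLarge) if records.len() > 1 => {
            // WARN-level hint so the operator can spot the problem and tune
            // the maximum batch size configuration.
            eprintln!("WARN: batch exceeded message.max.bytes, splitting in half");
            let (left, right) = records.split_at(records.len() / 2);
            // Both halves are produced concurrently; if only one half fails
            // the caller still sees an error even though the other half was
            // written (the partial-write caveat above).
            let (left_res, right_res) = futures::join!(produce(left), produce(right));
            left_res.and(right_res)
        }
        other => other,
    }
}
```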