
Conversation

nightmared

Hello,
this PR aims to tackle one problem I had: streaming the output of a program over HTTP while it is running, similar to the GitHub/GitLab CI "live" log screens.
Concretely, the goal is to perform a request to the server with cURL inside a gitlab-ci job and see the output unfold as the program executes:


              web browser
----------    ------------> -------------------
| CLIENT |                  |  GitLab CI job  |
----------   <------------  -------------------
               streams the     |              ∧
                  output       |              |
                               | executes     | prints the
                               |              | output
                               |              | "live"
                               ∨              |
                             -------------------
                             |       cURL      |
                             -------------------
                              |               ∧
                              |               | streaming
                              | HTTP          | (chunked
                              | query         | transfer)
                              |               | of stdin
                              |               | and stdout
                  spawns      ∨               |
-----------  <------------   --------------------
| program |                  | tiny-http server |
-----------   ------------>  --------------------
               sends stdout
               and stdin in
                 a pipe

Without this patch, the program runs to completion and its output is only sent, all at once,
when the pipe the program writes to is closed.

To achieve that goal, I submit two "sub-features":

  • The ability to disable buffering inside Response objects with the Response::with_buffered method.
    Enabling this will force the transfer encoding to be TransferEncoding::Chunked and will ask the
    chunks encoder to flush to its underlying writer on every write (see the sketch right after this list).
  • To get "instantaneous" writes, disabling buffering in the chunks encoder is not enough, as the underlying
    writer returned when calling Server::recv() (ClientConnection.sink) is in fact a BufWriter wrapping
    the "real" output. The buffered_writer option in ServerConfig, when set to false while instantiating
    a new server, omits the BufWriter and writes to the TcpStream without any buffering. The cost of that
    abstraction is that ClientConnection.sink now boxes the writer to be able to choose between
    BufWriter<RefinedTcpStream> and RefinedTcpStream dynamically, which means there is now one additional
    pointer dereference. I do however expect the performance impact to be small, as this pointer is then stored
    in an Arc<Mutex<>>, and I think locking/unlocking the mutex should be more costly than dereferencing the
    pointer.
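
For illustration, the "flush on every write" behaviour of the first sub-feature boils down to something like the following stand-in writer (a simplified sketch, not the actual chunks encoder of this PR):

```rust
use std::io::{self, Write};

/// Simplified stand-in for the behaviour described above: every write is
/// forwarded and the underlying writer is flushed immediately, so each chunk
/// leaves the process without waiting for a buffer to fill up.
struct FlushOnWrite<W: Write>(W);

impl<W: Write> Write for FlushOnWrite<W> {
    fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
        let written = self.0.write(buf)?;
        self.0.flush()?;
        Ok(written)
    }

    fn flush(&mut self) -> io::Result<()> {
        self.0.flush()
    }
}
```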

I expect this to decrease performance when sending big files, which is why these two sub-features are disabled
by default and must be explicitly opted into (by calling the with_buffered method for the first, and by
instantiating the server with the buffered_writer setting for the second).

Also note that the current iteration of this work breaks the current API, as it updates ServerConfig to
add the buffered_writer option, and ServerConfig was already exposed through the Server::new function.
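
To make the intended usage concrete, here is a minimal sketch of streaming a child process's output through a response. The exact signature of with_buffered, the way the buffered_writer option is set, and the program name are assumptions for illustration, not the final API:

```rust
use std::process::{Command, Stdio};

use tiny_http::{Response, Server, StatusCode};

fn main() -> std::io::Result<()> {
    // With the second sub-feature, the server would instead be created through
    // `Server::new` with the proposed `buffered_writer` option set to false,
    // so that every write reaches the TcpStream immediately (assumed API).
    let server = Server::http("0.0.0.0:8000").expect("failed to bind");

    for request in server.incoming_requests() {
        // Spawn the program whose output should be streamed (placeholder name).
        let mut child = Command::new("long-running-program")
            .stdout(Stdio::piped())
            .spawn()?;
        let stdout = child.stdout.take().expect("stdout was piped");

        // No Content-Length is given, so the body is sent with chunked transfer
        // encoding. `with_buffered(false)` is the method proposed in this PR
        // (its exact signature is assumed here) to flush each chunk as soon as
        // it is written instead of accumulating it.
        let response = Response::new(StatusCode(200), Vec::new(), stdout, None, None)
            .with_buffered(false);

        request.respond(response)?;
        child.wait()?;
    }
    Ok(())
}
```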

Member

@bradfier bradfier left a comment


Hey @nightmared thanks so much for planning and building this draft.

First of all, to give you some confidence to continue: I'd be happy to merge this feature, I think it's a useful extension of the current library behaviour.

If you could include some of your commentary on the change in the commit message when you finalise the PR, that would be great too.

@nightmared nightmared force-pushed the unbuffering_support_pr branch from 2355e58 to 3a4ed4f on August 26, 2022 20:46
@nightmared nightmared force-pushed the unbuffering_support_pr branch from 3a4ed4f to 4968af3 on April 28, 2023 18:18
@nightmared
Author

Alright, I have reworked the PR to rebase it on master, and also:

  • Removed all the boolean parameters.
  • Added a new MaybeBufferedWriter enum to remove the Box<dyn Write> that I had introduced in the previous version. This should lead to slightly better performance (and, above all, fewer allocations); a rough sketch of the idea is shown below.
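
For context, a minimal sketch of what such an enum could look like follows; the actual type in the PR wraps tiny-http's internal RefinedTcpStream, which a generic writer stands in for here:

```rust
use std::io::{self, BufWriter, Write};

/// Sketch of the idea described above: choose statically between a buffered
/// and an unbuffered writer without paying for a `Box<dyn Write>`.
/// (In the PR the wrapped writer is tiny-http's internal `RefinedTcpStream`.)
enum MaybeBufferedWriter<W: Write> {
    Buffered(BufWriter<W>),
    Unbuffered(W),
}

impl<W: Write> Write for MaybeBufferedWriter<W> {
    fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
        match self {
            MaybeBufferedWriter::Buffered(w) => w.write(buf),
            MaybeBufferedWriter::Unbuffered(w) => w.write(buf),
        }
    }

    fn flush(&mut self) -> io::Result<()> {
        match self {
            MaybeBufferedWriter::Buffered(w) => w.flush(),
            MaybeBufferedWriter::Unbuffered(w) => w.flush(),
        }
    }
}
```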

Note that the rustc-1.56 failure is not due to the PR itself, but to the need to bump the minimum version in the CI to 1.57 for rustls.

@nightmared nightmared requested a review from bradfier April 28, 2023 18:23
@nightmared nightmared force-pushed the unbuffering_support_pr branch 2 times, most recently from 0197e28 to 22db82a on August 21, 2024 18:33
This is achieved through two different means:
- The ability to disable buffering inside `Response` objects with the
  `Response::with_buffering` method. Disabling buffering will force the transfer
  encoding to be `TransferEncoding::Chunked` and configure the chunks encoder
  to flush to its underlying writer on every write.
- To get "instantaneous" write, disabling buffering in the chunks encoder is
  not enough, as the underlying writer returned when calling `Server::recv()`
  (`ClientConnection.sink`) is in fact a `BufWriter` wrapping the "real" output.
  The `writer_buffering` parameter in `ServerConfig.advanced` can alter the
  server behavior to omit the BufWriter when writing to the TcpStream.

This will probably decrease performance significantly when sending big files,
which is why these two subfeatures are disabled by default, and must be opted
into (by calling the `with_buffering` method for the first, and by instantiating
the server with a call to `with_writer_buffering_mode` for the second).
@nightmared nightmared force-pushed the unbuffering_support_pr branch from 22db82a to 7ca33d0 on August 21, 2024 19:49
@nightmared
Author

@bradfier Sorry to dust off this old PR, but there were conflicts preventing a merge, so I updated the PR and reworked the commit message, which had become outdated.
