From a17c2f87ff1026cde768fe6a5d1e30ed8b63894a Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Sa=C3=BAl=20Ibarra=20Corretg=C3=A9?= Date: Thu, 27 Nov 2014 08:50:51 +0100 Subject: [PATCH 1/4] lep: request all the things --- XXX-request-all-the-things.md | 184 ++++++++++++++++++++++++++++++++++ 1 file changed, 184 insertions(+) create mode 100644 XXX-request-all-the-things.md diff --git a/XXX-request-all-the-things.md b/XXX-request-all-the-things.md new file mode 100644 index 0000000..f0506d4 --- /dev/null +++ b/XXX-request-all-the-things.md @@ -0,0 +1,184 @@ +| Title | Request all the things | +|--------|-------------------------| +| Author | @saghul | +| Status | DRAFT | +| Date | 2014-11-27 07:43:59 | + + +## Overview + +This proposal describes a new approach for dealing with operations in libuv. As of +right now, handles define an entity which is capable of performing certain operations. +These operations are sometimes a result of a request being sent and some other times a +result of a callback (which was passed by the user) being called. This proposal aims +to make this behavior more consistent, by turning several operations that currently +just take a callback into a request form. + + +### uv_read + +(This was previously discussed, but it’s added here for completeness). + +Instead of using a callback passed to `uv_read_start`, the plan is to use a `uv_read` +function which performs a single read operation. The initial prototype was defined +as follows: + +~~~~ +int uv_read(uv_read_t* req, uv_stream_t* handle, uv_buf_t[] bufs, unsigned int nbufs, uv_read_cb, cb) +~~~~ + +The read callback is defined as: + +~~~~ +typedef void (*uv_read_cb)(uv_read_t* req, int status) +~~~~ + +This approach has one problem, though: memory for reading needs to be allocated upfront, +which might not be desirable in all cases. For this reason, a secondary version which takes +an allocation callback is also proposed: + +~~~~ +int uv_read2(uv_read_t* req, uv_stream_t* handle, uv_alloc_cb alloc_cb, uv_read_cb cb) +~~~~ + +Applications can use one or the other or mixed without problems. + +Implementation details: we probably will want to have some `bufsml` of size 4 where we +copy the structures when the request is created, like `uv_write` does. Thus, the user can +pass a `uv_buf_t` array which is allocated on the stack, as long as the memory in each `buf->base` +is valid until the request callback is called. + +Inline reading: if there are no conditions which would prevent otherwise, we could try to do +a read on the spot. This should work ok if the user provided preallocated buffers, because +we can hold on to them if we get EAGAIN. If `uv_read2` is used, instead, we won’t attempt +to read on the spot because the allocation callback would have to be called and we’d end +up holding on to the buffer for too long, thus defeating the purpose of deferred allocation. +A best effort inline reading function is also proposed: + +~~~~ +int uv_try_read(uv_stream_t* handle, uv_buf_t[] bufs, int nbufs) +~~~~ + +It does basically the analogous to `uv_try_write`, that is, attempt to read inline and +doesn’t queue a request if it doesn’t succeed. + +### uv_stream_poll + +In case `uv_read` and `uv_read2` are not enough, another way to read or write on streams +would be to get a callback when the stream is readable / writable, and use the `uv_try_*` +family of functions to perform the reads and writes inline. 
The proposed API for this: + +~~~~ +int uv_stream_poll(uv_stream_poll_t* req, uv_stream_t* handle, int events, uv_stream_poll cb) +~~~~ + +`events` would be a mask composed of `UV_READABLE` and / or `UV_WRITABLE`. + +The callback is defined as: + +~~~~ +typedef void (*uv_stream_poll_cb)(uv_stream_poll_t* req, int status) +~~~~ + + +### uv_timeout + +Currently libuv implements repeating timers in the form of a handle. The current implementation +does not account for the time taken during the callback, and this has caused some trouble +every now and then, since people have different expectations when it comes to repeating timers. + +This proposal removes the timer handle and makes timers a request, which gets its callback +called when the timeout is hit: + +~~~~ +int uv_timeout(uv_timeout_t* req, uv_loop_t* loop, int timeout, uv_timeout_cb cb) +~~~~ + +Timers are one shot, so no assumptions are made and repeating timers can be easily +implemented on top (by users). + +The callback takes the following form: + +~~~~ +typedef void (*uv_timeout_cb)(uv_timeout_t* req, int status) +~~~~ + +The status argument would indicate success or failure. One possible failure is cancellation, +which would make status == `UV_ECANCELED`. + +Implementation detail: Timers will be the first thing to be processed after polling for i/o. + + +### uv_callback + +In certain environments users would like to get a callback called by the event loop, but +scheduling this callback would happen from a different thread. This can be implemented using +`uv_async_t` handles in combination with some sort of thread safe queue, but it’s not +straightforward. Also, many have fallen in the trap of `uv_async_send` coalescing calls, +that is, calling the function X times does yield the callback being called X times; it’s +called at least once. + +`uv_callback` requests will queue the given callback, so that it’s called “as soon as +possible” by the event loop. 2 API calls are provided, in order to make the thread-safe +version explicit: + +~~~~ +int uv_callback(uv_callback_t* req, uv_loop_t* loop, uv_callback_cb cb) +int uv_callback_threadsafe(uv_callback_t* req, uv_loop_t* loop, uv_callback_cb cb) +~~~~ + +The callback definition: + +~~~~ +typedef void (*uv_callback_cb)(uv_callback_t* req, int status) +~~~~ + +The introduction of `uv_callback` would deprecate and remove `uv_async_t` handles. +Now, in some cases it might be desired to just wakeup the event loop, and having to +create a request might be too much, thus, the following API call is also proposed: + +~~~~ +void uv_loop_wakeup(const uv_loop_t* loop) +~~~~ + +Which would just wakeup the event loop in case it was blocked waiting for i/o. + +Implementation detail: the underlying mechanism for `uv_async_t` would remain (at least on Unix). + +Note: As a result of this addition, `uv_idle_t` handles will be deprecated an removed. +It may not seem obvious at first, but `uv_callback` achieves the same: the loop won’t block +for i/o if any `uv_callback` request is pending. This becomes even more obvious with the +“‘pull based’ event loop” proposal. + + +### uv_accept / uv_listen + +Currently there is no way to stop listening for incoming connections. Making the concept +of accepting connections also request based makes the API more consistent and easier +to use: if the user decides so (maybe because she is getting EMFILE because she ran +out of file descriptors, for example) she can stop accepting new connections. 
+ +New API: + +~~~~ +int uv_listen(uv_stream_t* stream, int backlog) +~~~~ + +The uv_listen function loses its callback, becoming the equivalent of `listen(2)`. + +~~~~ +int uv_accept(uv_accept_t* req, uv_stream_t* stream, uv_accept_cb cb) +typedef void (*uv_accept_cb)(uv_accept_t* req, int status) +~~~~ + +Once a connection is accepted the request callback will be called with status == 0. +The `req->fd` field will contain a `uv_os_fd_t` value, which the user can use together +with `uv_tcp_open` for example. (This needs further investigation to verify it would +work in all cases). + + +### A note on uv_cancel + +Gradually, `uv_cancel` needs to be improved to allow for cancelling any kind of requests. +Some of them might be a bit harder, but `uv_timeout` and `uv_callback` should be easy +enough to do. From 21488aa6bbef8df9e6217ff0b8c75edee6572262 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Sa=C3=BAl=20Ibarra=20Corretg=C3=A9?= Date: Fri, 28 Nov 2014 09:18:43 +0100 Subject: [PATCH 2/4] Addressed first round of feedback by @bnoordhuis --- XXX-request-all-the-things.md | 50 ++++++++++++++++++++++++++--------- 1 file changed, 38 insertions(+), 12 deletions(-) diff --git a/XXX-request-all-the-things.md b/XXX-request-all-the-things.md index f0506d4..3f92810 100644 --- a/XXX-request-all-the-things.md +++ b/XXX-request-all-the-things.md @@ -24,7 +24,11 @@ function which performs a single read operation. The initial prototype was defin as follows: ~~~~ -int uv_read(uv_read_t* req, uv_stream_t* handle, uv_buf_t[] bufs, unsigned int nbufs, uv_read_cb, cb) +int uv_read(uv_read_t* req, + uv_stream_t* handle, + const uv_buf_t[] bufs, + unsigned int nbufs, + uv_read_cb, cb) ~~~~ The read callback is defined as: @@ -38,7 +42,10 @@ which might not be desirable in all cases. For this reason, a secondary version an allocation callback is also proposed: ~~~~ -int uv_read2(uv_read_t* req, uv_stream_t* handle, uv_alloc_cb alloc_cb, uv_read_cb cb) +int uv_read_alloc(uv_read_t* req, + uv_stream_t* handle, + uv_alloc_cb alloc_cb, + uv_read_cb cb) ~~~~ Applications can use one or the other or mixed without problems. @@ -50,13 +57,15 @@ is valid until the request callback is called. Inline reading: if there are no conditions which would prevent otherwise, we could try to do a read on the spot. This should work ok if the user provided preallocated buffers, because -we can hold on to them if we get EAGAIN. If `uv_read2` is used, instead, we won’t attempt +we can hold on to them if we get EAGAIN. If `uv_read_alloc` is used, instead, we won’t attempt to read on the spot because the allocation callback would have to be called and we’d end up holding on to the buffer for too long, thus defeating the purpose of deferred allocation. A best effort inline reading function is also proposed: ~~~~ -int uv_try_read(uv_stream_t* handle, uv_buf_t[] bufs, int nbufs) +int uv_try_read(uv_stream_t* handle, + const uv_buf_t[] bufs, + unsigned int nbufs) ~~~~ It does basically the analogous to `uv_try_write`, that is, attempt to read inline and @@ -64,12 +73,15 @@ doesn’t queue a request if it doesn’t succeed. ### uv_stream_poll -In case `uv_read` and `uv_read2` are not enough, another way to read or write on streams +In case `uv_read` and `uv_read_alloc` are not enough, another way to read or write on streams would be to get a callback when the stream is readable / writable, and use the `uv_try_*` family of functions to perform the reads and writes inline. 
The proposed API for this:

~~~~
-int uv_stream_poll(uv_stream_poll_t* req, uv_stream_t* handle, int events, uv_stream_poll cb)
+int uv_stream_poll(uv_stream_poll_t* req,
+                   uv_stream_t* handle,
+                   int events,
+                   uv_stream_poll_cb cb)
~~~~

`events` would be a mask composed of `UV_READABLE` and / or `UV_WRITABLE`.
@@ -91,9 +103,16 @@ This proposal removes the timer handle and makes timers a request, which gets it
called when the timeout is hit:

~~~~
-int uv_timeout(uv_timeout_t* req, uv_loop_t* loop, int timeout, uv_timeout_cb cb)
+int uv_timeout(uv_timeout_t* req,
+               uv_loop_t* loop,
+               double timeout,
+               uv_timeout_cb cb)
~~~~

+The `timeout` is now expressed as a double. The fractional part will get rounded up
+to platform granularity. For example: 1.2345 becomes 1230 ms or 1,234,500 us,
+depending on whether the platform supports sub-millisecond precision.
+
Timers are one shot, so no assumptions are made and repeating timers can be easily
implemented on top (by users).

@@ -115,7 +134,7 @@ In certain environments users would like to get a callback called by the event l
scheduling this callback would happen from a different thread. This can be implemented using
`uv_async_t` handles in combination with some sort of thread safe queue, but it’s not
straightforward. Also, many have fallen in the trap of `uv_async_send` coalescing calls,
-that is, calling the function X times does yield the callback being called X times; it’s
+that is, calling the function X times does not yield the callback being called X times; it’s
called at least once.

`uv_callback` requests will queue the given callback, so that it’s called “as soon as
@@ -133,17 +152,24 @@ The callback definition:
typedef void (*uv_callback_cb)(uv_callback_t* req, int status)
~~~~

+Implementation detail: since the callback request cannot be safely initialized outside
+of the loop thread, when `uv_callback_threadsafe` is used, the request will be put
+in a queue which will be processed by the loop at some point, fully initializing the
+request.
+
The introduction of `uv_callback` would deprecate and remove `uv_async_t` handles.
-Now, in some cases it might be desired to just wakeup the event loop, and having to
+Now, in some cases it might be desired to just wake up the event loop, and having to
create a request might be too much, thus, the following API call is also proposed:

~~~~
-void uv_loop_wakeup(const uv_loop_t* loop)
+void uv_loop_wakeup(uv_loop_t* loop)
~~~~

-Which would just wakeup the event loop in case it was blocked waiting for i/o.
+Which would just wake up the event loop in case it was blocked waiting for i/o.

-Implementation detail: the underlying mechanism for `uv_async_t` would remain (at least on Unix).
+Implementation detail: the underlying mechanism for waking up the loop will be decided
+later on. The current `uv_async_t` mechanism could remain (on Unix) or atomic ops
+could be used instead.

Note: As a result of this addition, `uv_idle_t` handles will be deprecated an removed.
It may not seem obvious at first, but `uv_callback` achieves the same: the loop won’t block From ac4ab1e8828c3e8f6964c7ece516bed3ee314578 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Sa=C3=BAl=20Ibarra=20Corretg=C3=A9?= Date: Mon, 1 Dec 2014 11:18:09 +0100 Subject: [PATCH 3/4] Moar fixes: - Remove note on uv_cancel - Remove uv_try_read - Remove uv_read_alloc - uv_read with NULL callback == former uv_try_read - Clarified that uv_callback requests cannot be cancelled - Removed status from uv_callback's callback - Clarify when timers can fail --- XXX-request-all-the-things.md | 51 ++++++++++------------------------- 1 file changed, 14 insertions(+), 37 deletions(-) diff --git a/XXX-request-all-the-things.md b/XXX-request-all-the-things.md index 3f92810..f37fa25 100644 --- a/XXX-request-all-the-things.md +++ b/XXX-request-all-the-things.md @@ -37,45 +37,26 @@ The read callback is defined as: typedef void (*uv_read_cb)(uv_read_t* req, int status) ~~~~ -This approach has one problem, though: memory for reading needs to be allocated upfront, -which might not be desirable in all cases. For this reason, a secondary version which takes -an allocation callback is also proposed: - -~~~~ -int uv_read_alloc(uv_read_t* req, - uv_stream_t* handle, - uv_alloc_cb alloc_cb, - uv_read_cb cb) -~~~~ - -Applications can use one or the other or mixed without problems. - Implementation details: we probably will want to have some `bufsml` of size 4 where we copy the structures when the request is created, like `uv_write` does. Thus, the user can pass a `uv_buf_t` array which is allocated on the stack, as long as the memory in each `buf->base` is valid until the request callback is called. -Inline reading: if there are no conditions which would prevent otherwise, we could try to do -a read on the spot. This should work ok if the user provided preallocated buffers, because -we can hold on to them if we get EAGAIN. If `uv_read_alloc` is used, instead, we won’t attempt -to read on the spot because the allocation callback would have to be called and we’d end -up holding on to the buffer for too long, thus defeating the purpose of deferred allocation. -A best effort inline reading function is also proposed: +Inline reading: if the passed callback `cb` is NULL and there are no more queued read requests +an attempt to read inline will be made. -~~~~ -int uv_try_read(uv_stream_t* handle, - const uv_buf_t[] bufs, - unsigned int nbufs) -~~~~ -It does basically the analogous to `uv_try_write`, that is, attempt to read inline and -doesn’t queue a request if it doesn’t succeed. +### uv_write and uv_try_write + +`uv_write` will be modified to behave just like `uv_read`, that is, try to do the operation +inline if `cb` is NULL, and thus `uv_try_write` will be removed. + ### uv_stream_poll -In case `uv_read` and `uv_read_alloc` are not enough, another way to read or write on streams -would be to get a callback when the stream is readable / writable, and use the `uv_try_*` -family of functions to perform the reads and writes inline. The proposed API for this: +In case `uv_read` and `uv_write` are not enough, another way to read or write on streams +is to get a callback when the stream is readable / writable, and use `uv_read` and `uv_write` +to perform the reads and writes inline (passing a NULL callback). 
The proposed API for this:

~~~~
int uv_stream_poll(uv_stream_poll_t* req,
@@ -122,7 +103,7 @@ The callback takes the following form:
typedef void (*uv_timeout_cb)(uv_timeout_t* req, int status)
~~~~

-The status argument would indicate success or failure. One possible failure is cancellation,
+The status argument would indicate success or failure. The only possible failure is cancellation,
which would make status == `UV_ECANCELED`.

Implementation detail: Timers will be the first thing to be processed after polling for i/o.
@@ -149,7 +130,7 @@ int uv_callback_threadsafe(uv_callback_t* req, uv_loop_t* loop, uv_callback_cb c
The callback definition:

~~~~
-typedef void (*uv_callback_cb)(uv_callback_t* req, int status)
+typedef void (*uv_callback_cb)(uv_callback_t* req)
~~~~

Implementation detail: since the callback request cannot be safely initialized outside
@@ -161,6 +142,8 @@ The introduction of `uv_callback` would deprecate and remove `uv_async_t` handle
Now, in some cases it might be desired to just wake up the event loop, and having to
create a request might be too much, thus, the following API call is also proposed:

+`uv_callback` requests cannot be cancelled.
+
~~~~
void uv_loop_wakeup(uv_loop_t* loop)
~~~~
@@ -202,9 +185,3 @@ The `req->fd` field will contain a `uv_os_fd_t` value, which the user can use to
with `uv_tcp_open` for example. (This needs further investigation to verify it would
work in all cases).

-
-### A note on uv_cancel
-
-Gradually, `uv_cancel` needs to be improved to allow for cancelling any kind of requests.
-Some of them might be a bit harder, but `uv_timeout` and `uv_callback` should be easy
-enough to do.

From ca4479113e0c75351cbc5065d23a30a5600a6a69 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Sa=C3=BAl=20Ibarra=20Corretg=C3=A9?=
Date: Thu, 4 Dec 2014 09:37:53 +0100
Subject: [PATCH 4/4] Addressed feedback by @trevnorris, thanks!

---
XXX-request-all-the-things.md | 11 ++++++-----
1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/XXX-request-all-the-things.md b/XXX-request-all-the-things.md
index f37fa25..2923ff8 100644
--- a/XXX-request-all-the-things.md
+++ b/XXX-request-all-the-things.md
@@ -28,7 +28,7 @@ int uv_read(uv_read_t* req,
            uv_stream_t* handle,
            const uv_buf_t[] bufs,
            unsigned int nbufs,
-            uv_read_cb, cb)
+            uv_read_cb cb)
~~~~

The read callback is defined as:
@@ -91,7 +91,7 @@ int uv_timeout(uv_timeout_t* req,
~~~~

The `timeout` is now expressed as a double. The fractional part will get rounded up
-to platform granularity. For example: 1.2345 becomes 1230 ms or 1,234,500 us,
+to platform granularity. For example: 1.2345 becomes 1235 ms or 1,234,500 us,
depending on whether the platform supports sub-millisecond precision.

Timers are one shot, so no assumptions are made and repeating timers can be easily
@@ -157,15 +157,16 @@ could be used instead.
Note: As a result of this addition, `uv_idle_t` handles will be deprecated an removed.
It may not seem obvious at first, but `uv_callback` achieves the same: the loop won’t block
for i/o if any `uv_callback` request is pending. This becomes even more obvious with the
-“‘pull based’ event loop” proposal.
+“pull based” event loop proposal.


### uv_accept / uv_listen

Currently there is no way to stop listening for incoming connections. Making the concept
of accepting connections also request based makes the API more consistent and easier
-to use: if the user decides so (maybe because she is getting EMFILE because she ran
-out of file descriptors, for example) she can stop accepting new connections.
+
+to use: if the user decides so (maybe because the system ran out of file descriptors
+and EMFILE errors are returned, for example) it's possible to stop accepting new
+connections by simply not creating new accept requests.

New API:
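
Illustration (not part of the patch series above): the proposal says repeating timers
can be built by users on top of the one-shot `uv_timeout` request. The sketch below
shows one way that could look, written against the signatures proposed here
(`uv_timeout` taking a `double` timeout in seconds and a `uv_timeout_cb` with a status
argument), none of which exist in libuv today. It also assumes that `uv_timeout_t`
keeps the usual libuv `data` pointer and that a request may be resubmitted from its
own callback; both are assumptions, not something the proposal spells out.

~~~~
#include <stdio.h>
#include <uv.h>

typedef struct {
    uv_timeout_t req;   /* proposed request type (hypothetical) */
    uv_loop_t* loop;
    double interval;    /* seconds, as proposed in PATCH 2/4 */
    int remaining;      /* how many more times to fire */
} repeat_timer_t;

static void on_timeout(uv_timeout_t* req, int status) {
    repeat_timer_t* t = req->data;

    if (status == UV_ECANCELED)
        return;  /* the only failure the proposal allows for timers */

    printf("tick, %d shots left\n", t->remaining - 1);

    /* Re-arm by submitting a fresh one-shot request; the proposal leaves
     * repetition (and any drift handling) entirely to the user. */
    if (--t->remaining > 0)
        uv_timeout(&t->req, t->loop, t->interval, on_timeout);
}

int main(void) {
    uv_loop_t* loop = uv_default_loop();
    repeat_timer_t t = { .loop = loop, .interval = 0.5, .remaining = 5 };

    t.req.data = &t;  /* assumed: requests keep a user data pointer */
    uv_timeout(&t.req, loop, t.interval, on_timeout);

    return uv_run(loop, UV_RUN_DEFAULT);
}
~~~~

Because every shot is a separate request, the callback owns the rescheduling policy
(fixed delay here; a fixed-rate variant would track the intended deadline instead),
which is exactly the flexibility the proposal is aiming for.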