Currently, PEP 554 states that the idea of channels acting like queues was rejected.
I'm a bit surprised by that, since single-producer, single-consumer queues (e.g. a ring buffer) are basically the most efficient known structures for sharing data in small batches (potentially of varying sizes at runtime) between operating-system threads running on different physical CPU cores.
My question thus has two parts:
a) Is it expected that Python will at some point "promote" the currently single-item send() and recv() to batch variants supporting buffering? Or will that be rejected as well, leaving Python with the same issue as in Go (see some details):
"it turns out that nearly every single team that has tried to scale golang servers has come to the same position. Give up channels as your concurrency primitives and revert to the sync package if you need to really scale golang programs"
b) Will Python add support for a polling interface (poll_send(), poll_recv()) on channels, in addition to the current send() and recv() interface? Note that this does not imply support for timeouts, though admittedly it could also be achieved with them.
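To make both requests concrete, here is a rough sketch of what a batching, pollable wrapper could look like on top of the per-item send()/recv() interface described in the PEP. This is purely illustrative, not an existing or proposed API: the method names send()/flush()/poll_recv() on the wrapper and the non-blocking recv_nowait() on the underlying channel are assumptions, not anything PEP 554 specifies.

```python
# Hypothetical sketch only -- not part of PEP 554 or its implementation.
# It assumes a channel end exposing blocking send()/recv() and a
# non-blocking recv_nowait() that raises when the channel is empty.

class BufferedChannelEnd:
    """Adds batching and polling on top of an assumed per-item channel."""

    def __init__(self, channel, batch_size=64):
        self._channel = channel      # assumed: send(), recv(), recv_nowait()
        self._batch_size = batch_size
        self._outgoing = []          # sender-side buffer

    def send(self, obj):
        # Buffer locally; flush once the batch is full to amortize the
        # per-item cost of crossing the interpreter boundary.
        self._outgoing.append(obj)
        if len(self._outgoing) >= self._batch_size:
            self.flush()

    def flush(self):
        # With a real batch primitive this would be a single call; here it
        # degrades to per-item send() on the assumed underlying channel.
        for obj in self._outgoing:
            self._channel.send(obj)
        self._outgoing.clear()

    def poll_recv(self):
        # Non-blocking receive: (True, obj) if an item was available,
        # (False, None) otherwise.
        try:
            return True, self._channel.recv_nowait()
        except Exception:            # the exact "empty" exception is assumed
            return False, None
```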
The main goal of the PEP is to get to a minimum level of functionality. We can build from there. So I guess the answer to all your questions is "we can work that out later". :) That said, I don't see a problem with expanding the capability of channels in the ways that you've described (though you'd need to do the work or find someone to do it). It's too early to promise anything, so let's revisit this after PEP 554 is accepted and the implementation merged.
Also, keep in mind that the PEP 554 implementation is effectively just an extension module. Alternate sharing mechanisms, e.g. something more similar to queue.Queue, shouldn't be that hard to implement and publish to PyPI.
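For illustration, a minimal sketch of that idea, assuming a channel pair with blocking send()/recv() as described in the PEP (the class name and semantics here are illustrative only, not an existing package):

```python
# Minimal sketch, not an existing package: a queue.Queue-style facade over
# an assumed PEP 554 channel pair (one end for sending, one for receiving).

class InterpreterQueue:
    """queue.Queue-like interface on top of assumed channel objects."""

    def __init__(self, send_end, recv_end):
        self._send_end = send_end    # assumed to expose a blocking send()
        self._recv_end = recv_end    # assumed to expose a blocking recv()

    def put(self, obj):
        self._send_end.send(obj)

    def get(self):
        return self._recv_end.recv()

    def get_nowait(self):
        # Would need a non-blocking primitive on the channel; left as a
        # placeholder since the exact spelling isn't settled in the PEP text.
        raise NotImplementedError
```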