fix: actually handle sessions in parallel #308


Merged
merged 6 commits into GoogleCloudPlatform:main from fix-actually-run-parallel-threads
Aug 12, 2021

Conversation

coryan
Contributor

@coryan coryan commented Aug 12, 2021

We were creating a thread to handle each request, and then promptly
blocking until the thread completed. With this change we leave the
thread running. Before accepting a new connection we clean up the memory
resources of old sessions (not that they are that big). If 64 sessions
are already running we block and stop accepting requests; at that point
it is probably better to let the hosting environment (Cloud Run, Cloud
Functions, whatever) create new instances to handle the additional load.

Fixes #309
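The accept loop described above can be sketched roughly like this. This is a simplified model, not the framework's actual code (which uses Boost.Asio sessions): `std::async` stands in for the per-session threads, `run_server` and the fake workload are illustrative names, and the cap is passed in rather than hard-coded.

```cpp
#include <algorithm>
#include <chrono>
#include <cstddef>
#include <future>
#include <list>
#include <thread>

// Model of the accept loop: each "connection" is handled on its own
// thread (std::launch::async), tracked by a future.  Before accepting,
// finished sessions are reaped to release their resources; once
// max_sessions are in flight we stop accepting and wait for one to end.
// Returns the peak number of concurrent sessions observed.
int run_server(int total_requests, std::size_t max_sessions) {
  std::list<std::future<void>> sessions;
  int peak = 0;

  auto reap = [&sessions] {
    for (auto it = sessions.begin(); it != sessions.end();) {
      if (it->wait_for(std::chrono::seconds(0)) == std::future_status::ready) {
        it = sessions.erase(it);  // release resources of a finished session
      } else {
        ++it;
      }
    }
  };

  for (int i = 0; i != total_requests; ++i) {
    reap();  // clean up old sessions before accepting a new connection
    while (sessions.size() >= max_sessions) {
      // Too many sessions in flight: block, stop accepting, wait for one.
      sessions.front().wait();
      reap();
    }
    peak = std::max(peak, static_cast<int>(sessions.size()) + 1);
    // "Accept" a connection and handle it on a new thread; keeping the
    // future (instead of joining immediately) is the fix in this PR.
    sessions.push_back(std::async(std::launch::async, [] {
      std::this_thread::sleep_for(std::chrono::milliseconds(1));  // fake work
    }));
  }
  for (auto& s : sessions) s.wait();  // drain remaining sessions at shutdown
  return peak;
}
```

The cap never blocks a request forever; it only delays the accept until some running session finishes, which is what lets the hosting environment's autoscaler take over under sustained load.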

@google-cla google-cla bot added the cla: yes This human has signed the Contributor License Agreement. label Aug 12, 2021
Contributor

@devjgm devjgm left a comment

LGTM, but some questions so that I understand.

According to https://cloud.google.com/functions/docs/concepts/exec#auto-scaling_and_concurrency

> Each instance of a function handles only one concurrent request at a time. This means that while your code is processing one request, there is no possibility of a second request being routed to the same instance. Thus the original request can use the full amount of resources (CPU and memory) that you requested.

So in the context of Cloud Functions, this parallelism will never be used, is that right? If CF is all we cared about, we wouldn't need this change.

However, this parallelism may still be useful in cases where this framework is deployed directly to Cloud Run, which by default has a concurrency value of 80. Is that the thinking?

Since I'm guessing that 32 and 64 were chosen as nice round numbers, we might want to consider using 80 somewhere to match Cloud Run's default concurrency.
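One hypothetical way to act on this suggestion is to default the session cap to Cloud Run's default concurrency (80) instead of a round number, with an override hook. This is a sketch only: the function name and the environment variable are invented for illustration and are not part of the framework.

```cpp
#include <cstddef>
#include <cstdlib>
#include <string>

// Default the cap to Cloud Run's default concurrency value (80), with an
// optional override via an environment variable.  The variable name
// MAX_CONCURRENT_SESSIONS is illustrative, not a real framework setting.
inline std::size_t MaxConcurrentSessions() {
  constexpr std::size_t kCloudRunDefaultConcurrency = 80;
  if (auto const* env = std::getenv("MAX_CONCURRENT_SESSIONS")) {
    return static_cast<std::size_t>(std::stoul(env));
  }
  return kCloudRunDefaultConcurrency;
}
```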

@codecov

codecov bot commented Aug 12, 2021

Codecov Report

Merging #308 (946adc2) into main (b5a5878) will increase coverage by 0.01%.
The diff coverage is 83.58%.


@@            Coverage Diff             @@
##             main     #308      +/-   ##
==========================================
+ Coverage   55.47%   55.48%   +0.01%     
==========================================
  Files         562      562              
  Lines       15105    15205     +100     
==========================================
+ Hits         8380     8437      +57     
- Misses       6725     6768      +43     
Impacted Files Coverage Δ
google/cloud/functions/internal/framework_impl.cc 74.24% <44.44%> (-13.76%) ⬇️
...ctions/integration_tests/basic_integration_test.cc 88.65% <95.23%> (-3.01%) ⬇️
...tions/integration_tests/cloud_event_conformance.cc 95.83% <100.00%> (+0.59%) ⬆️
...functions/integration_tests/cloud_event_handler.cc 100.00% <100.00%> (ø)
.../integration_tests/cloud_event_integration_test.cc 91.94% <100.00%> (-3.68%) ⬇️
...e/cloud/functions/integration_tests/echo_server.cc 100.00% <100.00%> (ø)
...ud/functions/integration_tests/http_conformance.cc 100.00% <100.00%> (ø)
...ed/x64-linux/include/boost/system/system_error.hpp 0.00% <0.00%> (-41.67%) ⬇️
...64-linux/include/boost/asio/detail/throw_error.hpp 66.66% <0.00%> (-16.67%) ⬇️
.../include/boost/asio/ip/basic_resolver_iterator.hpp 87.87% <0.00%> (-9.19%) ⬇️
... and 10 more

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

@coryan
Contributor Author

coryan commented Aug 12, 2021

> LGTM, but some questions so that I understand.
>
> According to https://cloud.google.com/functions/docs/concepts/exec#auto-scaling_and_concurrency
>
> > Each instance of a function handles only one concurrent request at a time. This means that while your code is processing one request, there is no possibility of a second request being routed to the same instance. Thus the original request can use the full amount of resources (CPU and memory) that you requested.
>
> So in the context of Cloud Functions, this parallelism will never be used, is that right?

Probably.

> If CF is all we cared about, we wouldn't need this change.

Yes.

> However, this parallelism may still be useful in cases where this framework is deployed directly to Cloud Run, which by default has a concurrency value of 80. Is that the thinking?
>
> Since I'm guessing that 32 and 64 were chosen as nice round numbers, we might want to consider using 80 somewhere to match Cloud Run's default concurrency.

Doh. I should have looked this up, completely forgot. Will change.

@coryan coryan marked this pull request as ready for review August 12, 2021 17:27
@coryan coryan requested a review from a team as a code owner August 12, 2021 17:27
coryan added 6 commits August 12, 2021 19:48
@coryan coryan merged commit 0784827 into GoogleCloudPlatform:main Aug 12, 2021
@coryan coryan deleted the fix-actually-run-parallel-threads branch August 12, 2021 23:16
Development

Successfully merging this pull request may close these issues.

No parallelism / concurrency when handling jobs
2 participants