revert to use withoutConcurrentExecution #3292
Conversation
Looks good, but when I think about it, we might still need to manage concurrency inside the `use cache`: imagine that the Node.js process receives 10 requests to generate pages in the same space; right now it'll result in 10 API requests to the same space, one for each of the top-level requests.

I think we should manage concurrency at 2 levels (sketched below):
- Wrapping the `use cache`: per-request, using `cache(() => {})`
- Wrapping the API call in the `use cache`: global in the process
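
A minimal sketch of what those two levels could look like, assuming a hypothetical `fetchSpaceFromAPI` call and a plain in-flight promise map as the process-global guard (the real helpers in the codebase may differ):

```ts
import { cache } from 'react';

// Hypothetical API call, standing in for whatever the `use cache` body fetches.
async function fetchSpaceFromAPI(spaceId: string): Promise<unknown> {
    const res = await fetch(`https://api.example.com/v1/spaces/${spaceId}`);
    return res.json();
}

// Level 2 (global in the process): dedupe concurrent calls for the same key
// across every request handled by this Node.js process.
const inFlight = new Map<string, Promise<unknown>>();

function dedupeInProcess<T>(key: string, fn: () => Promise<T>): Promise<T> {
    const existing = inFlight.get(key);
    if (existing) {
        return existing as Promise<T>;
    }
    const promise = fn().finally(() => inFlight.delete(key));
    inFlight.set(key, promise);
    return promise;
}

// The cached data function. The 'use cache' directive only has an effect in a
// Next.js setup that enables it; it is inert in plain Node.js.
async function getSpace(spaceId: string) {
    'use cache';
    return dedupeInProcess(`space:${spaceId}`, () => fetchSpaceFromAPI(spaceId));
}

// Level 1 (per-request): React's cache() memoizes within a single request, so
// rendering 10 pages of the same space in one request calls getSpace once.
export const getSpaceCached = cache(getSpace);
```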
This PR reintroduces `withoutConcurrentExecution`, because on some big pages like the MariaDB one, it will otherwise just crash. To avoid the previous issue on Vercel, we use `React.cache({})` as our global request context so that it doesn't leak to other requests with fluid compute (see the sketch below). It also disables `resolvingAnchorText` on `InlineLink`, because with it the request will just time out if there are too many of them on a single page.
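
A rough sketch of that request-context idea, not the actual implementation: the name `getRequestContext` and the shape of the context are assumptions. Holding the in-flight map behind React's `cache()` scopes it to one request tree, so it cannot leak to other requests sharing the same process under fluid compute.

```ts
import { cache } from 'react';

// Assumed shape of a per-request context. React's cache() returns the same
// object for every call within one request and a fresh object for each new
// request, so nothing stored here leaks between requests that share a process.
const getRequestContext = cache(() => ({
    inFlight: new Map<string, Promise<unknown>>(),
}));

// Sketch of a withoutConcurrentExecution-style wrapper keyed on the
// per-request context instead of a module-level, process-global map.
export async function withoutConcurrentExecution<T>(
    key: string,
    fn: () => Promise<T>
): Promise<T> {
    const { inFlight } = getRequestContext();
    const existing = inFlight.get(key);
    if (existing) {
        return existing as Promise<T>;
    }
    const promise = fn().finally(() => inFlight.delete(key));
    inFlight.set(key, promise);
    return promise;
}
```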