[RFC] purge on kernel terminate, instead of post flush#1286
bendavies wants to merge 1 commit into api-platform:master from bendavies:purge-on-terminate
Conversation
It seems strange to me that you are now purging on every terminate?
```diff
 /**
  * Purges tags collected during this request, and clears the tag list.
  */
-public function postFlush()
+public function onKernelTerminate()
```
If you do that, you are not in the right place: this is in the Doctrine bridge.
Indeed, good point. There would be a better place for it, if the change itself is thought to be good.
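To make the diff above concrete, here is a minimal sketch of what a `kernel.terminate` listener could look like. This is illustrative only: `TagPurgerInterface` and `purgeCollectedTags()` are hypothetical names, not api-platform's actual classes.

```php
<?php

use Symfony\Component\EventDispatcher\EventSubscriberInterface;
use Symfony\Component\HttpKernel\KernelEvents;

final class PurgeOnTerminateSubscriber implements EventSubscriberInterface
{
    private $purger;

    public function __construct(TagPurgerInterface $purger)
    {
        $this->purger = $purger;
    }

    public static function getSubscribedEvents(): array
    {
        // kernel.terminate fires after the response has been sent,
        // so the purge no longer adds latency to the user's request.
        return [KernelEvents::TERMINATE => 'onKernelTerminate'];
    }

    public function onKernelTerminate(): void
    {
        // Purge the tags collected during this request, then clear the list.
        $this->purger->purgeCollectedTags();
    }
}
```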
@Simperfit what do you find strange now? Why would you not purge on terminate? This is where the far more mature https://github.com/FriendsOfSymfony/FOSHttpCacheBundle does it.
An optimization could be to call purge asynchronously, and to wait for it to finish just before kernel.terminate.
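A sketch of that optimization, assuming Guzzle is the HTTP client: purge requests are fired without blocking, and the pending promises are settled on `kernel.terminate`. The `BAN` method and `X-Cache-Tags` header are illustrative assumptions about the Varnish configuration.

```php
<?php

use GuzzleHttp\Client;
use GuzzleHttp\Promise\Utils;

final class AsyncPurger
{
    private $client;
    /** @var \GuzzleHttp\Promise\PromiseInterface[] */
    private $pending = [];

    public function __construct(Client $client)
    {
        $this->client = $client;
    }

    public function purge(array $tags): void
    {
        // Fire the invalidation request asynchronously; don't block the response.
        $this->pending[] = $this->client->requestAsync('BAN', '/', [
            'headers' => ['X-Cache-Tags' => implode(',', $tags)],
        ]);
    }

    public function onKernelTerminate(): void
    {
        // Wait for outstanding purges only after the response has been sent.
        Utils::settle($this->pending)->wait();
        $this->pending = [];
    }
}
```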
Thanks @dunglas. For reference, @jderusse's original comment: #952 (comment)

As far as I can see, there is no race condition here? I can't see an eventuality where the wrong data will be served by Varnish for any considerable length of time.

Secondly, @dunglas, why do you think serving the user an error if a Varnish purge fails is a good thing? There is nothing wrong with their request: it has persisted to the database just fine. It's an implementation detail that the platform is purging Varnish. The user doesn't care. They don't want to know if it fails. If it fails, maybe the platform should be retrying the purge in some way, again, async and transparent to the user.

@dbu, might you have some valuable insight?
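One way to keep a failed purge from turning a successful write into a 500, sketched here under the assumption that Guzzle is the transport: catch the transport error and log it (and optionally re-queue the purge for retry) instead of letting it propagate. Names are illustrative, not api-platform's actual code.

```php
<?php

use GuzzleHttp\Exception\GuzzleException;
use Psr\Log\LoggerInterface;

function purgeSafely(callable $doPurge, LoggerInterface $logger): void
{
    try {
        $doPurge();
    } catch (GuzzleException $e) {
        // The client's write already succeeded; a purge failure is the
        // platform's problem, not theirs. Log it and retry out of band.
        $logger->error('Varnish purge failed: '.$e->getMessage());
    }
}
```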
In summary, the topic is to decrease consistency to gain milliseconds on writes? If yes, I suggest running the write queries asynchronously on the client side (ajax/guzzle/whatever), and letting users who want consistency (like a test suite) choose to wait for the end of the complete transaction.
Hi. I don't know all of the context, so I can just offer some random thoughts.

I guess the question here is how bad serving outdated information is. We recently had a situation with our API where we send message queue notifications of changes, and because the consumers were too fast, they fetched the content before the cache was invalidated. We refactored our system to guarantee that invalidation is done before the notification is sent, but that comes at a real cost of making the requests that change data slower.

On further reflection, we realized that the consumers of such messages (in our case it's only a single consumer for each resource) only need the notification because they maintain some sort of cache of their own (in the end, the notification is kind of like a cache invalidation too...). So the most efficient way to handle this, imho, is for those consumers to send a request that bypasses the cache (either directly to Symfony, or with a no-cache request header that Varnish would respect).

I am sure there are other scenarios where this approach does not work out, but I would assume that the default case is one where a slight delay in invalidation is not a problem. I would probably rather explain this somewhere in the caching documentation, and mention that if you send out notifications and need to guarantee that the cache is invalidated when they are sent, you should at that point manually flush the CacheInvalidator.

As for getting an error when invalidation fails, I also think that as a client of the API, I don't want to have to care about this.
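The bypass idea above can be sketched as follows, assuming Guzzle on the consumer side and a hypothetical endpoint; note that the VCL has to be configured to honor the `Cache-Control` request header for this to work.

```php
<?php

use GuzzleHttp\Client;

$client = new Client(['base_uri' => 'https://api.example.com']);

// The no-cache request header asks the intermediary not to serve a cached
// copy, so the consumer always sees fresh data regardless of invalidation lag.
$response = $client->get('/books/1', [
    'headers' => ['Cache-Control' => 'no-cache'],
]);
```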
closing as old, lmk if I should reopen |
It seems wrong to me to do purging in post flush, as this causes wait time for the user. kernel.terminate seems the right place for this, as it was designed for heavy work after the user has their response. Additionally, if a purge currently fails, the user will receive a 500, which seems wrong, as their request actually succeeded (writing to the API).