This is related to the optimistic resolution of those mutations. It's fundamentally hard to keep a normalised cache predictable, so there are several tricks we apply. While mutations are in flight, their data is typically layered: given mutations 1 and 2, if 2 completes first, it creates a layer that's applied on top of 1. That way you can always predict how the cache updates will apply.

This, however, stands in direct conflict with optimistic updates. Say you have mutations 1 and 2, and an optimistic update has been applied. Both are now assumed to be temporarily applied, and that optimistic data lives on those layers; the cache updates will use the optimistic layer. The trouble starts if only one of the mutations completes. Say the second mutation completes, deletes its layer, and tries to apply its cache update. That update has access to the prior mutation's data (say, it manipulates a list), so it could take data from the still-optimistic mutation 1 and apply it to the now non-optimistic layer 2.

To prevent this, what we actually do is hold a lock on all the fields that the optimistic updates relied on. While that lock is held, no cache updates can run for the locked fields. When the mutations holding the locks complete, all cache updates are flushed simultaneously. The side effect is that, outside of the cache, you can't tell exactly when a mutation was actually sent and its response received. The upside is that the cache never ends up in an unpredictable or mixed state.

There are other solutions we considered, like replaying mutation results and reverting layers as non-optimistic results are swapped in, but those could have worse performance and DX implications. So this is the compromise. The idea (and we're kind of forcing an opinion on people here, I know) is that optimistic mutations will mostly be used for "low stakes" operations.
So we're basically encouraging making errors the primary way to communicate to users that something hasn't been saved or has failed, or not using optimistic mutations at all when the outcome of a mutation is critical. I know that's not the most satisfying answer, but I hope it at least explains what's happening.
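If it helps to picture where the locking comes in, here's a minimal sketch of the kind of Graphcache setup this applies to. The `updateData` mutation, `dataList` query, and field names are illustrative placeholders, not your actual schema:

```ts
import { createClient, fetchExchange, gql } from 'urql';
import { cacheExchange } from '@urql/exchange-graphcache';

// Illustrative query that the mutation's cache updater touches.
const DataListQuery = gql`
  query DataList {
    dataList {
      id
      value
    }
  }
`;

const graphcache = cacheExchange({
  optimistic: {
    // Each in-flight `updateData` mutation gets its own optimistic layer.
    // Graphcache remembers which fields these layers wrote to.
    updateData: (args) => ({
      __typename: 'Data',
      id: args.id as string,
      value: args.value as string,
    }),
  },
  updates: {
    Mutation: {
      // This updater reads and rewrites `dataList`, a field the optimistic
      // layers above may currently "own". While any optimistic mutation
      // touching that field is still in flight, this update is held back
      // and only flushed once all of those mutations have completed.
      updateData: (_result, _args, cache) => {
        cache.updateQuery({ query: DataListQuery }, (data) => {
          // ...merge or reorder the list here...
          return data;
        });
      },
    },
  },
});

const client = createClient({
  url: '/graphql',
  exchanges: [graphcache, fetchExchange],
});
```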
-
I'm experiencing an interesting behaviour with multiple simultaneous mutations that I'd like to understand better. Here's what I'm observing:
I have a React component that can trigger multiple updateData mutations in quick succession (e.g., a user rapidly changing a dataset). Each mutation:
Observed behaviour:
Timeline:
Cache update:
Mutation Callbacks:
Key Questions
a. Is this the expected behaviour? Are multiple mutations supposed to complete their callbacks simultaneously when they finish around the same time?
b. Is urql intentionally batching cache updates or mutation resolutions, or is this happening due to some other mechanism?
c. Is there a way to force mutations to resolve independently so each callback runs as soon as its respective mutation completes?
Use Case Impact
This behaviour causes multiple toast notifications to appear simultaneously, which confuses users.
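For reference, here's a stripped-down sketch of the component in question; the mutation, field names, and the `showToast` helper are placeholders rather than my real code:

```tsx
import * as React from 'react';
import { gql, useMutation } from 'urql';

// Placeholder toast helper standing in for my real notification code.
declare function showToast(message: string): void;

const UpdateData = gql`
  mutation UpdateData($id: ID!, $value: String!) {
    updateData(id: $id, value: $value) {
      id
      value
    }
  }
`;

function DatasetEditor({ id }: { id: string }) {
  const [, updateData] = useMutation(UpdateData);

  // Rapid changes fire several mutations back to back. I'd expect each
  // promise to resolve as soon as its own response arrives, but the
  // callbacks (and therefore the toasts) all fire at the same time.
  const onChange = (value: string) => {
    updateData({ id, value }).then((result) => {
      if (result.error) {
        showToast(`Save failed: ${result.error.message}`);
      } else {
        showToast('Saved');
      }
    });
  };

  return <input onChange={(event) => onChange(event.target.value)} />;
}
```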