[Serializer] Change visibility of getNormalizer and getDenormalizer #24206
Conversation
I find this use case legitimate, and right now I don't see an easy way to achieve this perf optimization (which is quite huge in this project).
It looks sensible to me.
There used to be a cache for this method; it was removed in #17140. But removing that cache introduced a huge performance penalty (the one you suffered).
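For illustration, here is a minimal sketch of the kind of per-type cache that #17140 removed. All class and method names are hypothetical, not Symfony's actual code; it also shows why such a cache is a foot gun when a normalizer's answer depends on the data itself, since the cache key only considers the type:

```php
<?php
// Hypothetical sketch (not real Symfony code) of a per-type cache in front of
// the supportsNormalization() loop. It skips the loop after the first lookup
// for a given class/format pair, but is wrong for normalizers whose support
// depends on the state of $data -- the foot gun mentioned above.

interface NormalizerInterface
{
    public function supportsNormalization($data, ?string $format = null): bool;
}

class CachingNormalizerRegistry
{
    /** @var NormalizerInterface[] */
    private array $normalizers;

    /** @var array<string, NormalizerInterface|false> */
    private array $cache = [];

    public function __construct(array $normalizers)
    {
        $this->normalizers = $normalizers;
    }

    public function getNormalizer($data, ?string $format = null): ?NormalizerInterface
    {
        // The cache key ignores the state of $data: only its type and the format.
        $key = (\is_object($data) ? \get_class($data) : \gettype($data)).'|'.$format;

        if (\array_key_exists($key, $this->cache)) {
            return $this->cache[$key] ?: null;
        }

        foreach ($this->normalizers as $normalizer) {
            if ($normalizer->supportsNormalization($data, $format)) {
                return $this->cache[$key] = $normalizer;
            }
        }

        $this->cache[$key] = false;

        return null;
    }
}
```

With this in place, the second lookup for the same class never calls `supportsNormalization()` again, which is exactly where the time went in the reported profile.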
The reason why we removed the cache is still valid, so this option would be a foot gun, I guess.
@welcoMattic, can you share your profile publicly?
@nicolas-grekas I think it won't help: the previous cache allowed skipping the whole loop, whereas here we still need to loop and to call … Another proposal: introduce a …
We talked with @dunglas and might have found an idea:
Is the vary strategy worth it? It's nice... but feels a bit over-engineered =/ Exposing … here is the easiest, no? Or introduce a new abstraction layer? While at it: …
@ro0NL exposing only partially fixes the problem. From my experience, this performance problem often occurs when you use the Serializer intensively (yes, I have API Platform in mind). Unlike exposing, fixing this problem would benefit every user without requiring custom developments.
What about a new …? So kinda following the same path proposed by @nicolas-grekas, but perhaps easier to implement.
I'd be interested in knowing if a simple approach like #24228 could be enough?
Also … Yet I understand you prefer this extension point over copy/pasting …
@dunglas hmm, sorry, it's a private application. But I can make a new simple Sf app with as many normalizers. Thanks all for discussing this performance issue. I'll try @nicolas-grekas's PR soon.
Hi. In our case it's not enough. That's logical, because our normalizers don't have this cache layer. The only way to make it faster in our case is to allow us to override the … method.
Just blocking merge until #24206 (comment) has been addressed as OK or KO as a more generic fix for the perf issue.
…kas) This PR was merged into the 3.4 branch.

Discussion
----------

[Serializer] Add local cache to normalizers

| Q | A |
| ------------- | --- |
| Branch? | 3.4 |
| Bug fix? | no |
| New feature? | no |
| BC breaks? | no |
| Deprecations? | no |
| Tests pass? | yes |
| Fixed tickets | #24206 |
| License | MIT |
| Doc PR | - |

Should help making the Serializer a bit faster.

Commits
-------

b0c5cf0 [Serializer] Add local cache to normalizers
@welcoMattic @lyrixx @dunglas any OK/KO about #24206 (comment)? Status: waiting for feedback
I'm not sure I follow you. How would you generate a cache id when the normalizer varies on the data? If it's an object, it's quite simple (spl_object_hash), but if it's an array...
We would not. That'd then be the responsibility of the normalizer class. But in your example, you don't vary by "data", but only by "type(data)", which would be covered by the interface, so caching would happen.
OK, so your cache layer will only cache something based on the context?
Almost like HTTP Vary, yes, with 3 possible vary-bys: type(data) and/or context. Yes, state inside the data cannot be considered from this layer.
Oh, but there would still be a benefit when the data is considered: if you say "for this type, it varies by data", then the cache layer will still be able to skip the normalizer for any other non-specified type.
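The vary-by idea discussed in this exchange could look roughly like the following. The interface, constants, and lookup class below are hypothetical sketches, not necessarily the API that eventually landed: a normalizer declares what its `supportsNormalization()` answer depends on, and the lookup layer caches that answer per type/format unless the answer can change with the data's state:

```php
<?php
// Hypothetical sketch of the "vary-by" cache discussed above; none of these
// names are real Symfony API. Normalizers that do not declare their vary-by
// are conservatively re-checked on every lookup; normalizers that vary only
// by type (and/or context) are resolved once per type/format.

interface NormalizerInterface
{
    public function supportsNormalization($data, ?string $format = null): bool;
}

interface VaryAwareNormalizerInterface extends NormalizerInterface
{
    public const VARY_TYPE = 1;    // answer depends only on type(data)
    public const VARY_CONTEXT = 2; // answer also depends on the context
    public const VARY_DATA = 4;    // answer depends on the data's state

    public function getSupportsVary(): int;
}

class VaryAwareNormalizerLookup
{
    /** @var NormalizerInterface[] */
    private array $normalizers;

    /** @var array<string, array{0: NormalizerInterface, 1: bool}[]> */
    private array $cache = [];

    public function __construct(array $normalizers)
    {
        $this->normalizers = $normalizers;
    }

    public function getNormalizer($data, ?string $format = null): ?NormalizerInterface
    {
        $key = (\is_object($data) ? \get_class($data) : \gettype($data)).'|'.$format;

        if (!isset($this->cache[$key])) {
            // First lookup for this type/format: resolve cacheable normalizers
            // once; keep the others as candidates to re-check on every call.
            $this->cache[$key] = [];
            foreach ($this->normalizers as $normalizer) {
                $cacheable = $normalizer instanceof VaryAwareNormalizerInterface
                    && !($normalizer->getSupportsVary() & VaryAwareNormalizerInterface::VARY_DATA);

                if (!$cacheable) {
                    $this->cache[$key][] = [$normalizer, true];
                } elseif ($normalizer->supportsNormalization($data, $format)) {
                    $this->cache[$key][] = [$normalizer, false];
                }
                // Cacheable normalizers that do not support this type are
                // dropped entirely: the "skip" benefit mentioned above.
            }
        }

        foreach ($this->cache[$key] as [$normalizer, $recheck]) {
            if (!$recheck || $normalizer->supportsNormalization($data, $format)) {
                return $normalizer;
            }
        }

        return null;
    }
}
```

Note the conservative default: a normalizer that declares nothing is treated as possibly data-dependent, which avoids the foot gun that caused the cache removal in #17140.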
It looks good to me. We will try to update this PR as soon as possible.
Thanks for all your ideas, I'll work with @lyrixx to update the PR before the feature freeze.
I think it's too late for 3.4; adding this vary-by cache should be done for 4.1 now.
See #24436 |
I'm currently working on a project that has many normalizers — enough to spend almost 20 seconds in the `getDenormalizer` method. I would like to write my own Serializer, and I found out that the `getNormalizer` and `getDenormalizer` methods are private. Here is what I would like to do: …

I also found out that these `$normalizers` are `protected`, which forced me to submit this PR.

Ping @dunglas @lyrixx