Cache Parse() results #26
Merged
Each parse takes ~2ms on my machine, and it's pretty common throughout the life of a running process to parse identical user-agent strings. This adds a very primitive cache, in a similar vein to the cache inside the `urlparse` package.

Before:

After:
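For context, the idea is the same "throw the whole cache away when it fills up" strategy that `urlparse` uses. Below is a minimal standalone sketch using the `MAX_CACHE_SIZE` and `_parsed_cache` names from this description; the `ua_parser` import and the `cached_parse()` wrapper are assumptions for illustration, not the exact code in this PR:

```python
# Minimal sketch of the caching idea; not the exact code from this PR.
from ua_parser import user_agent_parser  # assumed underlying parser

MAX_CACHE_SIZE = 20   # small, fixed upper bound on cached entries
_parsed_cache = {}    # ua_string -> parsed result


def cached_parse(ua_string):
    """Parse a user-agent string, reusing a prior result when possible."""
    cached = _parsed_cache.get(ua_string)
    if cached is not None:
        return cached
    # Like urlparse's cache: once full, clear everything rather than
    # doing any per-entry eviction.
    if len(_parsed_cache) >= MAX_CACHE_SIZE:
        _parsed_cache.clear()
    result = user_agent_parser.Parse(ua_string)
    _parsed_cache[ua_string] = result
    return result
```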
Cache memory overhead:
Given the user agent `Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/46.0.2490.52 Safari/537.36`, we get 280 bytes per parsed object. So a `MAX_CACHE_SIZE` of 20 will incur an overhead of 5,600 bytes. Granted, this ignores the cache key used in the `_parsed_cache` dict, but we're in the ballpark of a few KB total at most. :)