
significantly speed up import time of ua_parser #171


Closed

wants to merge 1 commit into from

Conversation

asottile-sentry

empty python interpreter

$ best-of -n 100 -- python3 -c ''
....................................................................................................
best of 100: 0.0173s

before

$ best-of -n 100 -- python3 -c 'import ua_parser.user_agent_parser'
....................................................................................................
best of 100: 0.1002s

after

$ best-of -n 100 -- python3 -c 'import ua_parser.user_agent_parser'
....................................................................................................
best of 100: 0.0189s
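For reference, `best-of` appears to be a small benchmarking helper that reports the fastest of N runs; a minimal Python stand-in (the real tool's name and exact behavior are inferred, not confirmed by this thread):

```python
import subprocess
import sys
import time

def best_of(n: int, argv: list[str]) -> float:
    """Run argv n times and return the fastest wall-clock duration."""
    best = float("inf")
    for _ in range(n):
        start = time.perf_counter()
        subprocess.run(argv, check=True)
        best = min(best, time.perf_counter() - start)
    return best

# e.g. best_of(100, [sys.executable, "-c", "import ua_parser.user_agent_parser"])
```

Taking the minimum rather than the mean filters out scheduling noise, which is why repeated runs converge on a stable floor.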

@masklinn
Contributor

This seems to be a repeat of #57, and with similar issues: it "fixes" an "issue" which can largely already be fixed (by importing the library itself lazily), and in the process creates a new issue which can't (the cost is now paid at first use rather than on init leading to first-use slowdowns, made worse if preforking).

Lazy initialisation should be more of a possibility if #116 ever gets finished, but for the same reason as #57 I don't think it's a useful consideration for 0.x.
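The "importing the library itself lazily" workaround can be sketched with a stdlib module standing in for `ua_parser` (names here are illustrative):

```python
def parse_price(text):
    # Deferred import: the cost of loading `decimal` is paid on the
    # first call, not when this module is imported. The same pattern
    # applies to `import ua_parser.user_agent_parser` inside whichever
    # function actually needs it.
    import decimal
    return decimal.Decimal(text)
```

Python caches modules in `sys.modules`, so only the first call pays the import cost; subsequent calls do a cheap dictionary lookup.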

@asottile-sentry
Author

Rather than everyone paying the lazy tax, I would hope the library itself would help with that -- understandable though

@asottile-sentry asottile-sentry deleted the speed-up-import-time branch December 27, 2023 13:22
@masklinn masklinn mentioned this pull request Jan 15, 2024
masklinn added a commit to masklinn/uap-python that referenced this pull request Feb 13, 2024
Support is added for lazy builtin matchers (with a separately compiled
file), as well as loading json or yaml files using lazy matchers.

Lazy matchers are very much a tradeoff: they improve import speed, but
slow down run speed, possibly dramatically.

Use them by default for the re2 parser, but not the basic parser:
experimentally, on Python 3.11

- importing the package itself takes ~36ms
- importing the lazy matchers takes ~36ms (including the package, so ~0)
- importing the eager matchers takes ~97ms

the eager matchers have a significant import overhead, *however*
running the bench on the sample file, the lazy matchers cause a
runtime increase of 700~800ms on the basic parser bench, as that ends
up instantiating *every* regex (likely due to match failures).
Relatively this is not huge (~2.5%), but the tradeoff doesn't seem
great, especially since the parser itself is initialized lazily.

The re2 parser does much better, only losing 20~30ms (~1%); this is
likely because it only needs to compile a fraction of the regexes (156
out of 1162 as of regexes.yaml version 0.18), and possibly because it
gets to avoid some of the most expensive to compile ones.

Fixes ua-parser#171, fixes ua-parser#173
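The lazy-matcher tradeoff described in this commit message can be illustrated with a minimal sketch (not uap-python's actual implementation): compilation is deferred to first use, so the cost moves from import time to the first match attempt.

```python
import re

class LazyMatcher:
    """Defer regex compilation to first use -- a sketch of the tradeoff
    described above, not uap-python's actual matcher class."""

    def __init__(self, pattern: str):
        self.pattern = pattern
        self._compiled = None

    def match(self, text: str):
        if self._compiled is None:
            # The compile cost is paid here, at first use, instead of
            # at import time -- hence the first-use slowdowns (and the
            # preforking concern) raised earlier in the review.
            self._compiled = re.compile(self.pattern)
        return self._compiled.match(text)
```

If a workload ends up exercising most patterns (as the basic-parser bench does), every deferred compile still happens, just later, which is where the 700~800ms runtime overhead comes from.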
@masklinn masklinn mentioned this pull request Feb 13, 2024
masklinn added a commit to masklinn/uap-python that referenced this pull request Feb 17, 2024
masklinn added a commit to masklinn/uap-python that referenced this pull request Feb 17, 2024
masklinn added a commit to masklinn/uap-python that referenced this pull request Feb 18, 2024
Add lazy builtin matchers (with a separately compiled file), as well
as loading json or yaml files using lazy matchers.

Lazy matchers are very much a tradeoff: they improve import speed (and
memory consumption until triggered), but slow down run speed, possibly
dramatically:

- importing the package itself takes ~36ms
- importing the lazy matchers takes ~36ms (including the package, so
  ~0) and ~70kB RSS
- importing the eager matchers takes ~97ms and ~780kB RSS
- triggering the instantiation of the lazy matchers adds ~800kB RSS
- running bench on the sample file using the lazy matcher has
  700~800ms overhead compared to the eager matchers

While the lazy matchers are less costly across the board until they're
used, benching the sample file causes the loading of *every* regex
(likely due to matching failures), which adds a 700~800ms overhead
over the eager matchers and increases the RSS by ~800kB (on top of the
original ~70kB).

Thus lazy matchers are not a great default for the basic parser.
Though they might be a good opt-in if the user only ever uses one of
the domains (especially if it's not the devices one as that's by far
the largest).

With the re2 parser however, only 156 of the 1162 regexes get
evaluated, leading to a minor CPU overhead of 20~30ms (1% of bench
time) and a more reasonable memory overhead. Thus use the lazy matcher
for the re2 parser.

On the more net-negative but relatively minor side of things, the
pregenerated lazy matchers file adds 120k to the on-disk requirements
of the library, and ~25k to the wheel archive; this is roughly what
the _regexes and _matchers precompiled files add as well. pyc files
seem to be even bigger (~130k), so the tradeoff is dubious even if
they are slightly faster.

Fixes ua-parser#171, fixes ua-parser#173
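One generic way to keep a package's import cheap while still exposing a precompiled matcher table is a module-level `__getattr__` (PEP 562), which runs only when the attribute is first accessed. A self-contained sketch, with all names illustrative rather than uap-python's actual layout:

```python
import types

def _load_matchers():
    # Stand-in for reading and instantiating the large pregenerated
    # matchers file; in reality this is the expensive step.
    return ["matcher-%d" % i for i in range(3)]

# Simulate a package module whose heavy table is built lazily.
lazy_demo = types.ModuleType("lazy_demo")

def _module_getattr(name):
    # PEP 562: a module's __getattr__ is consulted only when normal
    # attribute lookup fails, i.e. on first access. Caching the result
    # in the module turns later accesses into ordinary lookups.
    if name == "MATCHERS":
        lazy_demo.MATCHERS = _load_matchers()
        return lazy_demo.MATCHERS
    raise AttributeError(name)

lazy_demo.__getattr__ = _module_getattr

assert "MATCHERS" not in vars(lazy_demo)  # nothing built at "import"
assert lazy_demo.MATCHERS == ["matcher-0", "matcher-1", "matcher-2"]
```

In a real package the same effect is achieved by defining `__getattr__` at the top level of `__init__.py`; the cost then shifts from import time to first use, which is exactly the tradeoff weighed above.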
masklinn added a commit that referenced this pull request Feb 18, 2024