
Segmentation fault with auto instrumentation in asgi applications #2030

Open
potatochip opened this issue Aug 10, 2021 · 15 comments
Labels
bug (Something isn't working), triaged

Comments

@potatochip

Describe your environment
I am experiencing a segmentation fault when using auto instrumentation with any ASGI application. I have tested with both Starlette and FastAPI.

Steps to reproduce
Can be created with the following repo. Steps in README.md. https://github.com/potatochip/opentelemetry-segfault-example

What is the expected behavior?
No segfault

What is the actual behavior?
Segfault

Additional context
Load testing sometimes needs to run a couple of times before a segfault occurs, but it usually happens on the first try.

potatochip added the bug label Aug 10, 2021
@owais
Contributor

owais commented Aug 10, 2021

Thanks for the report and reproducible example. Could you please also attach a traceback of the failure?
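(If the crash does not print one on its own, a minimal way to capture a traceback on a segfault is Python's built-in faulthandler module; it can also be enabled without code changes via the PYTHONFAULTHANDLER=1 environment variable. For example, early in the application:)

import faulthandler

# Dump the Python traceback of every thread to stderr if the process
# receives a fatal signal such as SIGSEGV.
faulthandler.enable()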

owais added the triaged label Aug 10, 2021
@potatochip
Author

The container exits with code 139. These are the logs:

[2021-08-10 20:59:12 +0000] [8] [INFO] Starting gunicorn 20.1.0
[2021-08-10 20:59:12 +0000] [8] [INFO] Listening at: http://0.0.0.0:8000 (8)
[2021-08-10 20:59:12 +0000] [8] [INFO] Using worker: uvicorn.workers.UvicornWorker
[2021-08-10 20:59:12 +0000] [14] [INFO] Booting worker with pid: 14
Overriding of current TracerProvider is not allowed
[2021-08-10 20:59:12 +0000] [14] [INFO] Started server process [14]
[2021-08-10 20:59:12 +0000] [14] [INFO] Waiting for application startup.
[2021-08-10 20:59:12 +0000] [14] [INFO] Application startup complete.
Queue is full, likely spans will be dropped.
Queue is full, likely spans will be dropped.
scripts/auto: line 2:     8 Segmentation fault      opentelemetry-instrument gunicorn -c gunicorn_conf.py auto_main:app

@potatochip
Author

The immediate difference I see is that the message "Queue is full, likely spans will be dropped." occurs twice with automatic instrumentation, but only once with manual instrumentation.

@owais
Contributor

owais commented Aug 10, 2021

Does this only happen when auto-instrumenting and not with a manual setup? From the logs, it does look like the tracing pipeline is set up twice. Not sure if that is the issue, though. Can you please try setting OTEL_BSP_MAX_QUEUE_SIZE to a very large value (it's 2048 by default) and see if that makes the segfault go away or take longer to happen? Alternatively, you could set it to a very low number such as 10 and see if it happens almost immediately. It could help us narrow down the issue a bit.
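(For reference, the variable can be set inline on the launch command from the reproduction repo; the exact value below is only illustrative:)

OTEL_BSP_MAX_QUEUE_SIZE=100000 opentelemetry-instrument gunicorn -c gunicorn_conf.py auto_main:app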

@potatochip
Author

potatochip commented Aug 11, 2021

This only happens with auto instrumentation, not manual. I set OTEL_BSP_MAX_QUEUE_SIZE=999999 and the segfault still occurs, with no change in the amount of time it takes. The only difference is that the queue-full message does not appear before the segfault occurs.

@owais
Contributor

owais commented Aug 11, 2021

Thanks. That means the queue is not the issue. My initial naive guess is that somehow we are instrumenting things multiple times and that is causing weird memory access issues. I'll try to reproduce with your example and debug later this week.

@srikanthccv
Member

I tried to run this on my machine and could recreate the issue. Here is the stack trace with more info:

srikanthc@FL-LPT-388s-MacBook-Pro opentelemetry-segfault-example % docker run -it -p 8000:8000 --entrypoint scripts/auto otseg
[2021-08-12 01:21:14 +0000] [7] [INFO] Starting gunicorn 20.1.0
[2021-08-12 01:21:14 +0000] [7] [INFO] Listening at: http://0.0.0.0:8000 (7)
[2021-08-12 01:21:14 +0000] [7] [INFO] Using worker: uvicorn.workers.UvicornWorker
[2021-08-12 01:21:14 +0000] [13] [INFO] Booting worker with pid: 13
WARNING:opentelemetry.trace:Overriding of current TracerProvider is not allowed
[2021-08-12 01:21:14 +0000] [13] [INFO] Started server process [13]
[2021-08-12 01:21:14 +0000] [13] [INFO] Waiting for application startup.
[2021-08-12 01:21:14 +0000] [13] [INFO] Application startup complete.
Fatal Python error: Segmentation fault

Thread 0x00007fc5877fe700 (most recent call first):
  File "/usr/local/lib/python3.8/threading.py", line 306 in wait
  File "/usr/local/lib/python3.8/site-packages/opentelemetry/sdk/trace/export/__init__.py", line 230 in worker
  File "/usr/local/lib/python3.8/threading.py", line 870 in run
  File "/usr/local/lib/python3.8/threading.py", line 932 in _bootstrap_inner
  File "/usr/local/lib/python3.8/threading.py", line 890 in _bootstrap

Thread 0x00007fc58f627740 (most recent call first):
  File "/usr/local/lib/python3.8/site-packages/gunicorn/arbiter.py", line 357 in sleep
  File "/usr/local/lib/python3.8/site-packages/gunicorn/arbiter.py", line 209 in run
  File "/usr/local/lib/python3.8/site-packages/gunicorn/app/base.py", line 72 in run
  File "/usr/local/lib/python3.8/site-packages/gunicorn/app/base.py", line 231 in run
  File "/usr/local/lib/python3.8/site-packages/gunicorn/app/wsgiapp.py", line 67 in run
  File "/usr/local/bin/gunicorn", line 8 in <module>
scripts/auto: line 2:     7 Segmentation fault      opentelemetry-instrument gunicorn -c gunicorn_conf.py auto_main:app

@owais
Contributor

owais commented Aug 12, 2021

opentelemetry-instrument gunicorn -c gunicorn_conf.py auto_main:app

gunicorn_conf.py already sets up the tracing pipeline, and I see opentelemetry-distro is installed as well, meaning the instrument command would definitely set up two pipelines. We should update the docs to not recommend using the instrument command with gunicorn and instead enable the instrumentations in gunicorn_conf.py. I think that should solve this specific issue.

That said, we still want to support setting up multiple pipelines when a user really wants it, so we should figure out whether setting up multiple pipelines is actually what causes the issue and fix it.
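(As a rough sketch of that recommendation: the pipeline can be set up once per worker in a gunicorn post_fork hook instead of wrapping the server with opentelemetry-instrument. The FastAPI instrumentor and console exporter below are placeholders, not necessarily what the reproduction repo uses.)

# gunicorn_conf.py (sketch)
from opentelemetry import trace
from opentelemetry.instrumentation.fastapi import FastAPIInstrumentor
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

def post_fork(server, worker):
    # Runs in each worker after the fork, so the BatchSpanProcessor's export
    # thread is created inside the worker rather than inherited from the arbiter.
    provider = TracerProvider()
    provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
    trace.set_tracer_provider(provider)
    # Enable instrumentation here instead of via the opentelemetry-instrument command.
    FastAPIInstrumentor().instrument()

(The server would then be started as plain gunicorn -c gunicorn_conf.py auto_main:app, with no opentelemetry-instrument wrapper, so only one pipeline exists.)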

@srikanthccv
Member

Interestingly, I am unable to reproduce this issue if I don't use Docker.

@rhymes

rhymes commented Dec 14, 2022

I'm experiencing a similar error in production:

Dec 14 05:57:28 PM  Fatal Python error: Segmentation fault
Dec 14 05:57:28 PM
Dec 14 05:57:28 PM  Thread 0x00007f1e0332f700 (most recent call first):
Dec 14 05:57:28 PM    File "/opt/render/project/python/Python-3.9.15/lib/python3.9/threading.py", line 316 in wait
Dec 14 05:57:28 PM    File "/opt/render/project/src/.venv/lib/python3.9/site-packages/opentelemetry/sdk/trace/export/__init__.py", line 254 in worker
Dec 14 05:57:28 PM    File "/opt/render/project/python/Python-3.9.15/lib/python3.9/threading.py", line 917 in run
Dec 14 05:57:28 PM    File "/opt/render/project/src/.venv/lib/python3.9/site-packages/sentry_sdk/integrations/threading.py", line 67 in run
Dec 14 05:57:28 PM    File "/opt/render/project/python/Python-3.9.15/lib/python3.9/threading.py", line 980 in _bootstrap_inner
Dec 14 05:57:28 PM    File "/opt/render/project/python/Python-3.9.15/lib/python3.9/threading.py", line 937 in _bootstrap
Dec 14 05:57:28 PM
Dec 14 05:57:28 PM  Thread 0x00007f1e0e3e6740 (most recent call first):
Dec 14 05:57:28 PM    File "/opt/render/project/src/.venv/lib/python3.9/site-packages/gunicorn/arbiter.py", line 357 in sleep
Dec 14 05:57:28 PM    File "/opt/render/project/src/.venv/lib/python3.9/site-packages/gunicorn/arbiter.py", line 209 in run
Dec 14 05:57:28 PM    File "/opt/render/project/src/.venv/lib/python3.9/site-packages/gunicorn/app/base.py", line 72 in run
Dec 14 05:57:28 PM    File "/opt/render/project/src/.venv/lib/python3.9/site-packages/gunicorn/app/base.py", line 231 in run
Dec 14 05:57:28 PM    File "/opt/render/project/src/.venv/lib/python3.9/site-packages/gunicorn/app/wsgiapp.py", line 67 in run
Dec 14 05:57:28 PM    File "/opt/render/project/src/.venv/bin/gunicorn", line 8 in <module>

I have seen this quite often during the day, though at irregular intervals. We do have quite a complex instrumentation setup that's entirely manual.

Versions:

  • Python 3.9.15
  • gunicorn 20.1.0 with default settings (so 1 worker process and 1 thread)

And these are the OpenTelemetry packages we use directly:

opentelemetry-api                        1.7.1
opentelemetry-exporter-otlp-proto-grpc   1.7.1
opentelemetry-instrumentation-botocore   0.26b1
opentelemetry-instrumentation-celery     0.26b1
opentelemetry-instrumentation-flask      0.26b1
opentelemetry-instrumentation-redis      0.26b1
opentelemetry-instrumentation-requests   0.26b1
opentelemetry-instrumentation-sqlalchemy 0.26b1
opentelemetry-sdk                        1.7.1

I checked the changelog for the newer versions of the library but I haven't seen anything related.

Any ideas?

@srikanthccv
Member

There wasn't an exact fix, but many related fixes have landed since. Can you check whether this still happens on 1.15.0 and 0.36b0?
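(For reference, one possible way to pin the packages from the list above to those suggested versions, assuming the same package set is still needed:)

pip install --upgrade \
  opentelemetry-api==1.15.0 \
  opentelemetry-sdk==1.15.0 \
  opentelemetry-exporter-otlp-proto-grpc==1.15.0 \
  opentelemetry-instrumentation-botocore==0.36b0 \
  opentelemetry-instrumentation-celery==0.36b0 \
  opentelemetry-instrumentation-flask==0.36b0 \
  opentelemetry-instrumentation-redis==0.36b0 \
  opentelemetry-instrumentation-requests==0.36b0 \
  opentelemetry-instrumentation-sqlalchemy==0.36b0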

@rhymes

rhymes commented Dec 19, 2022

@srikanthccv thanks for the tip! I will attempt to upgrade and report back :-) It'll take a bit, as it's not a smooth upgrade.

@rhymes

rhymes commented Dec 21, 2022

Hi @srikanthccv,

Unfortunately it didn't work. It might be due to OpenTelemetry's interaction with the Sentry SDK within the Flask app; I've also tried inverting the initialization order of the two, but gained nothing. I'll try using a CPU and memory profiler to see if I can get to the bottom of this.

Thanks in the meantime!

@dimaqq
Contributor

dimaqq commented May 30, 2025

I think I have managed to hit the same (or a very similar) issue:

Thread 0x0000704b0f63f6c0 (most recent call first):
  File "/home/dima/.local/share/uv/python/cpython-3.9.22-linux-x86_64-gnu/lib/python3.9/threading.py", line 316 in wait
  File "/code/operator/.tox/unit/lib/python3.9/site-packages/opentelemetry/sdk/trace/export/__init__.py", line 258 in worker
  File "/home/dima/.local/share/uv/python/cpython-3.9.22-linux-x86_64-gnu/lib/python3.9/threading.py", line 917 in run

Full trace:
https://gist.github.com/dimaqq/c0ab0cae051a2177c28ec4a560dff183

My case is similar to the OP's, because the unit tests are distributed by pytest-xdist, meaning that worker processes are spawned, similar to gunicorn.

@dimaqq
Contributor

dimaqq commented May 30, 2025

Given that the issue occurred on older Python versions that no longer accept bug fixes, and could not be reproduced on newer ones, I recommend that this issue be closed.
