Description
Checklist
- I have verified that the issue exists against the `main` branch of Celery.
- This has already been asked to the discussions forum first.
- I have read the relevant section in the contribution guide on reporting bugs.
- I have checked the issues list for similar or identical bug reports.
- I have checked the pull requests list for existing proposed fixes.
- I have checked the commit log to find out if the bug was already fixed in the main branch.
- I have included all related issues and possible duplicate issues in this issue (If there are none, check this box anyway).
- I have tried to reproduce the issue with pytest-celery and added the reproduction script below.
Mandatory Debugging Information
- I have included the output of `celery -A proj report` in the issue (if you are not able to do this, then at least specify the Celery version affected).
- I have verified that the issue exists against the `main` branch of Celery.
- I have included the contents of `pip freeze` in the issue.
- I have included all the versions of all the external dependencies required to reproduce this bug.
Optional Debugging Information
- I have tried reproducing the issue on more than one Python version and/or implementation.
- I have tried reproducing the issue on more than one message broker and/or result backend.
- I have tried reproducing the issue on more than one version of the message broker and/or result backend.
- I have tried reproducing the issue on more than one operating system.
- I have tried reproducing the issue on more than one workers pool.
- I have tried reproducing the issue with autoscaling, retries, ETA/Countdown & rate limits disabled.
- I have tried reproducing the issue after downgrading and/or upgrading Celery and its dependencies.
Related Issues and Possible Duplicates
Related Issues
I couldn't find any
Possible Duplicates
Couldn't find any
Environment & Settings
```
❯ uv run celery --version
5.6.1 (recovery)
```
celery report Output:
```
❯ uv run celery -A app report

software -> celery:5.6.1 (recovery) kombu:5.6.2 py:3.14.0rc2
            billiard:4.2.4 py-amqp:5.3.1
platform -> system:Darwin arch:64bit, Mach-O
            kernel version:25.2.0 imp:CPython
loader   -> celery.loaders.app.AppLoader
settings -> transport:amqp results:disabled

broker_url: 'amqp://guest:********@localhost:5672//'
task_ignore_result: True
deprecated_settings: None
```
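For completeness, the settings above boil down to roughly this app configuration (a sketch on my side; the module and app names are placeholders, not taken from the actual project):

```python
# Hypothetical app.py matching the reported settings (names are placeholders).
from celery import Celery

app = Celery(
    "app",
    broker="amqp://guest:guest@localhost:5672//",  # transport:amqp
)
app.conf.task_ignore_result = True  # results:disabled in the report above
```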
Steps to Reproduce
Required Dependencies
I tested some older versions to see whether this was related to a recent upgrade; I did not test all intermediate versions, versions older than 4.4.7, or individual dependency versions.
- Minimal Python Version: 3.10 + 3.14
- Minimal Celery Version: celery:4.4.7 (cliffs) + 5.6.1
- Minimal Kombu Version: 4.6.11
- Minimal Broker Version: N/A
- Minimal Result Backend Version: N/A
- Minimal OS and/or Kernel Version: macOS 26.2
- Minimal Broker Client Version: py-amqp:2.6.1
- Minimal Result Backend Client Version: N/A
Python Packages
pip freeze Output:
```
altgraph @ file:///AppleInternal/Library/BuildRoots/4~CDlvugAeMYaHcTWIELyAQ2joHAecmlI0GAlKPsw/Library/Caches/com.apple.xbs/Sources/python3/altgraph-0.17.2-py2.py3-none-any.whl
backports-datetime-fromisoformat==2.0.3
cachetools==6.0.0
certifi==2025.4.26
cfgv==3.4.0
chardet==5.2.0
charset-normalizer==3.4.2
colorama==0.4.6
distlib==0.3.7
docutils==0.21.2
exceptiongroup==1.3.0
filelock==3.13.1
flit==3.12.0
flit_core==3.12.0
future @ file:///AppleInternal/Library/BuildRoots/4~CDlvugAeMYaHcTWIELyAQ2joHAecmlI0GAlKPsw/Library/Caches/com.apple.xbs/Sources/python3/future-0.18.2-py3-none-any.whl
identify==2.5.31
idna==3.10
iniconfig==2.1.0
macholib @ file:///AppleInternal/Library/BuildRoots/4~CDlvugAeMYaHcTWIELyAQ2joHAecmlI0GAlKPsw/Library/Caches/com.apple.xbs/Sources/python3/macholib-1.15.2-py2.py3-none-any.whl
# Editable install with no version control (marshmallow==4.0.0)
-e /Users/nate/wave/marshmallow
nodeenv==1.8.0
packaging==25.0
pbr==6.0.0
platformdirs==3.11.0
pluggy==1.6.0
pre-commit==3.5.0
pyproject-api==1.9.1
pytest==8.3.5
PyYAML==6.0.1
requests==2.32.3
simplejson==3.20.1
six @ file:///AppleInternal/Library/BuildRoots/4~CDlvugAeMYaHcTWIELyAQ2joHAecmlI0GAlKPsw/Library/Caches/com.apple.xbs/Sources/python3/six-1.15.0-py2.py3-none-any.whl
stevedore==5.1.0
tomli==2.2.1
tomli_w==1.2.0
tox==4.11.4
typing_extensions==4.13.2
urllib3==2.4.0
virtualenv==20.24.6
virtualenv-clone==0.5.7
virtualenvwrapper==4.8.4
```
Other Dependencies
Details
N/A
Minimally Reproducible Test Case
I have pushed up an example repository here:
Here is a specific gist: https://gist.github.com/compyman/c5c74c7c59088568a39b20203b522be0
There is some other weird behavior that I am trying to explore in that repository, but this is the main issue right now.
Expected Behavior
I would expect that:
- multiple tasks would be able to co-operatively yield resources
- gevent monkey patching would effectively patch the resources used by Celery
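To make the first expectation concrete, here is the kind of cooperative scheduling I mean, illustrated with stdlib asyncio as an analogy (not Celery/gevent code): five tasks that each block on an I/O-style sleep should overlap, finishing in roughly the time of one sleep rather than the sum.

```python
import asyncio
import time

async def fake_task(i: int) -> int:
    # Stands in for a task that blocks on I/O; a cooperative scheduler
    # should switch to the other tasks while this one sleeps.
    await asyncio.sleep(0.1)
    return i

async def main() -> float:
    start = time.monotonic()
    await asyncio.gather(*(fake_task(i) for i in range(5)))
    return time.monotonic() - start

elapsed = asyncio.run(main())
# Cooperative behavior is ~0.1s total, not ~0.5s; what I observe with the
# gevent pool looks like the sequential ~0.5s case.
print(f"{elapsed:.2f}s")
```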
Actual Behavior
Using the gevent worker, tasks execute sequentially: the producer publishes 5 tasks to the message broker, but the worker does not yield the greenthread to let another task run, so the greenthreads run mostly sequentially.
In the gist you can see that sometimes 2 greenthreads will co-operate.
I have been attempting to diagnose this and have noticed some things that may be related:
- The producer queue `app.producer_pool._resource` is not being monkey patched - it is an instance of `queue.LifoQueue` (this class appears to be imported during monkey-patching while looking up a class on the default Celery app, which is a little gnarly).
- If you raise a gevent timeout in the loop, it's possible to get a `raised unexpected: RuntimeError('Semaphore released too many times')` error (I can raise another issue for this).
- Some of this behavior seems related to the `broker_pool_limit`:
  - if the broker limit is 0, then we do always yield greenthreads (I assume because establishing the connection yields)
  - if the broker limit is less than the number of looping tasks, then we yield more often, I assume while waiting for an available producer?
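The first point above (the class being resolved before patching takes effect) can be illustrated with stdlib code alone. This is my guess at the mechanism, not a confirmed trace of Celery's internals: if a module binds `queue.LifoQueue` to a name before gevent swaps the module attribute, the pre-patch class stays in use.

```python
import queue

# A module that runs before monkey patching binds the class directly:
PrePatchLifoQueue = queue.LifoQueue  # early binding, as a pool module might do

# Simulate a monkey patch swapping in a cooperative class (hypothetical
# stand-in for what gevent.monkey does to module attributes):
class FakeGeventLifoQueue(queue.LifoQueue):
    """Stand-in for a gevent-aware queue class."""

queue.LifoQueue = FakeGeventLifoQueue

# Later lookups through the module see the patched class,
# but the early binding does not:
patched = queue.LifoQueue()
unpatched = PrePatchLifoQueue()
print(type(patched).__name__, type(unpatched).__name__)
```

This matches what I see: the producer pool's `_resource` is the pre-patch class even though patching ran.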
Let me know if I should raise this issue in another repo, or split it into multiple issues.
Also, please let me know if there's any other information I can provide.
Thank you!