Replies: 3 comments 1 reply
-
I have the exact same problem and I found there are more people with similar issues (see #8959). I created a repository to reproduce this issue when setting … @mapa17 Were you able to find a solution to this problem? 🙏🏼
-
Hi @humitos, after wasting so much time on Celery I tried to stay away from it as far as I could. Looking at your repo, I think you have a slightly different use case, having some expectations about the duration of the execution of a task. Anyways, I looked into it (wasting some more hours on Celery) and created a PR: humitos/celery-redis-visibility-timeout#1. Maybe this solves your issues. Using the app configuration

```python
# Make sure that tasks are acknowledged only once they are completed
# and that a timeout will trigger a retry
app.conf.task_acks_on_failure_or_timeout = False
app.conf.task_acks_late = True
```

will ensure that a task that timed out is going to be retried. In the case of your example, indefinitely (I think).
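I believe the same options can also be set per task via the decorator; a minimal sketch (the app name, task name, and broker URL are made up for illustration):

```python
from celery import Celery

app = Celery('test', broker='redis://localhost:6379/0')

# Per-task equivalent of the app-level settings above
@app.task(acks_late=True, acks_on_failure_or_timeout=False)
def long_task():
    ...
```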
-
Hi, it's my understanding that while a worker process is running and no task …

So based on your comments I am surprised that you see a difference in your tasks based on …

PS: I cannot remember the details from looking through the Celery codebase, but I think that visibility_timeout defines how frequently the worker notifies the broker that it is executing a specific task. If the broker gets no updates after …
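For reference, visibility_timeout is a Redis transport option set through broker_transport_options; a minimal sketch (the 3600-second value and broker URL are just examples):

```python
from celery import Celery

app = Celery('test', broker='redis://localhost:6379/0')

# If a task is not acknowledged within this many seconds, the Redis
# transport makes it eligible for redelivery to another worker.
app.conf.broker_transport_options = {'visibility_timeout': 3600}
```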
-
Hello Celery community,
I have long-running tasks that are emitted by producers and consumed by workers running in different containers, which can be terminated and their work interrupted at any time. I want each container to be working on only one task at a time, and in case it gets interrupted I want Celery to resubmit the task, so another worker can pick up the job.
In order not to lose tasks and to use Celery for task retries, I am testing a small setup where the producers emit tasks that cause a worker to sleep for 30 seconds.
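A minimal sketch of such a test task (the app name, task name, and broker URL are made up for illustration):

```python
import time

from celery import Celery

app = Celery('test', broker='redis://localhost:6379/0')

@app.task
def long_task():
    time.sleep(30)  # simulate a long-running job
```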
I am using Redis as a broker and the workers are configured to execute with concurrency=1
celeryconfig.py
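A minimal sketch of what the worker configuration might look like, given the description above (the broker URL and the prefetch setting are assumptions):

```python
# Hypothetical worker celeryconfig.py -- broker URL is an assumption.
broker_url = 'redis://localhost:6379/0'
worker_concurrency = 1          # each worker works on one task at a time
worker_prefetch_multiplier = 1  # don't reserve extra tasks ahead of time
```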
The producer is configured with
celeryconfig.py
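A minimal sketch of the producer side; the 1-second visibility_timeout matches the test value mentioned below, and the broker URL is an assumption:

```python
# Hypothetical producer celeryconfig.py -- broker URL is an assumption.
broker_url = 'redis://localhost:6379/0'
# Unacknowledged tasks become eligible for redelivery after this many
# seconds (1 second is only for testing).
broker_transport_options = {'visibility_timeout': 1}
```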
I manually trigger a task on the producer, which sends out a single task for the workers, and observe the unacked hash map and the queue (named long) that holds the tasks.
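One way to watch this is with redis-py (a sketch; the connection parameters are assumptions, and unacked is the default key name used by the Kombu Redis transport):

```python
import redis

r = redis.Redis(host='localhost', port=6379, db=0)
print(r.hgetall('unacked'))     # tasks delivered but not yet acknowledged
print(r.lrange('long', 0, -1))  # tasks still waiting in the 'long' queue
```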
I want to make sure that Celery will resubmit tasks that were started by workers that have been shut down or terminated before finishing.
I therefore kill either the complete worker process (the Celery main process) or only the spawned subprocess, and observe the queues.
I can see that a task is added to unacked as soon as a worker starts working on it, but instead of being resubmitted once visibility_timeout is reached (for test purposes I set this to 1 sec), it remains in unacked much longer before it is eventually re-inserted into the long queue.
Question
So my problem is that I don't know if I set visibility_timeout correctly, as it appears that my settings are ignored.
To make it more concrete: from the documentation I don't understand how broker_transport_options is set correctly in a situation with one Redis broker and multiple producers and consumers. Where, by whom, and in which order must the broker settings be set? I understand this to be a global or broker-specific setting. Do all workers have to have the same value for this setting?
Any clarification is appreciated.