[Provider] LLM-powered smart retry operator using local Ollama #64365
Check: https://cwiki.apache.org/confluence/display/AIRFLOW/AIP-105%3A+Pluggable+Retry+Policies
PR (core): #65450
Hey Airflow Community!
I built a new provider that uses a local LLM (via Ollama) to make
intelligent retry decisions when tasks fail.
The Problem
Airflow's retry mechanism is static: the same wait time and the same retry
count regardless of why the task failed.
The Solution
LLMSmartRetryOperator classifies the error and applies the right strategy:
- rate_limit
- network
- auth
- data_schema
Privacy First
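To make the idea concrete, here is a minimal sketch of the decision logic such an operator might apply. The function names, categories-to-strategy mapping, and the keyword heuristics standing in for the LLM call are all illustrative assumptions, not the provider's actual implementation:

```python
# Hypothetical sketch of the retry-decision step. In the real provider the
# classification would come from a local Ollama model; here it is stubbed
# with keyword heuristics so the strategy mapping can be shown end to end.
from datetime import timedelta

# Assumed mapping from error category to retry strategy.
STRATEGIES = {
    "rate_limit":  {"retry": True,  "delay": timedelta(minutes=5)},   # back off hard
    "network":     {"retry": True,  "delay": timedelta(seconds=30)},  # transient; retry soon
    "auth":        {"retry": False, "delay": None},                   # retrying won't help
    "data_schema": {"retry": False, "delay": None},                   # needs a code/data fix
}

def classify_error(message: str) -> str:
    """Stand-in for the LLM classification step (illustrative heuristics)."""
    msg = message.lower()
    if "429" in msg or "rate limit" in msg:
        return "rate_limit"
    if "timeout" in msg or "connection" in msg:
        return "network"
    if "401" in msg or "unauthorized" in msg:
        return "auth"
    return "data_schema"

def retry_decision(message: str) -> dict:
    """Classify the failure and return the matching retry strategy."""
    category = classify_error(message)
    return {"category": category, **STRATEGIES[category]}
```

For example, `retry_decision("HTTP 429: rate limit exceeded")` would recommend retrying after a long backoff, while an auth failure would not be retried at all.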
100% local inference via Ollama: no data leaves your infrastructure.
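To illustrate the privacy point: Ollama exposes a REST API on the local host (by default at `http://localhost:11434/api/generate`), so the only network hop is loopback. The sketch below only builds the request payload; the model name and prompt wording are assumptions for illustration:

```python
def build_ollama_request(model: str, error_message: str) -> dict:
    """Build the JSON payload for Ollama's local /api/generate endpoint.

    Nothing here touches the network; a caller would POST this payload to
    http://localhost:11434/api/generate, which stays on the local machine.
    """
    prompt = (
        "Classify this Airflow task error into one of: "
        "rate_limit, network, auth, data_schema.\n\nError:\n" + error_message
    )
    return {
        "model": model,    # e.g. a model pulled locally via `ollama pull`
        "prompt": prompt,
        "stream": False,   # request a single JSON response, not a stream
        "format": "json",  # ask Ollama for machine-parseable output
    }
```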
Links
Would love feedback from the community!