Lines 147 to 158 in 05dae77
```go
// Now we know how many bots we have, adjust down the reservation
// We need: 1 (ListBots) + len(bots) (GetListOfDeals per bot)
if e.limiter != nil {
	neededSlots := 1 + len(bots)
	if err := e.limiter.AdjustDown(workflowID, neededSlots); err != nil {
		e.logger.Warn("rate limit adjust down failed", slog.String("error", err.Error()))
	}
}

// Fetch deals per bot concurrently with tier-specific concurrency cap
g, gctx := errgroup.WithContext(ctx)
g.SetLimit(e.produceConcurrency)
```
The new rate-limited producer reserves at most `limit` slots based on `Limiter.Stats()` and only ever reduces that reservation via `AdjustDown`. When `len(bots)` exceeds the per-minute limit, the `GetListOfDeals` calls beyond the first `limit` bots immediately fail with `ErrConsumeExceedsLimit` in the rate-limited client, hit the `logger.Error("list deals for bot")` path, and are silently dropped. Because the loop order is deterministic, the same leading `limit` bots are processed on every resync while the rest never have their deals polled, so those bots stall indefinitely. Either cap the number of bots processed per window or wait for additional capacity instead of issuing calls that are known to exceed the reservation; a sketch of the second option follows.
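One way to realize the "wait for capacity" option is to block inside each worker until the limiter grants a slot, rather than letting the call fail fast. The sketch below is illustrative only: the `Acquire` method, the `dealsClient` interface, and the `Bot`/`Deal`/`producer` types are hypothetical stand-ins for whatever the real `Limiter` and producer expose (the snippet above only shows `Stats`/`AdjustDown`); the errgroup structure mirrors the existing loop.

```go
package producer

import (
	"context"
	"fmt"
	"log/slog"

	"golang.org/x/sync/errgroup"
)

// Hypothetical stand-ins for the repo's real types.
type Bot struct{ ID int }
type Deal struct{ ID int }

type dealsClient interface {
	GetListOfDeals(ctx context.Context, botID int) ([]Deal, error)
}

// blockingLimiter is an assumed extension of the existing Limiter: Acquire
// blocks until a slot becomes available (or ctx is cancelled) instead of
// failing fast with ErrConsumeExceedsLimit.
type blockingLimiter interface {
	Acquire(ctx context.Context, workflowID string) error
}

type producer struct {
	client             dealsClient
	limiter            blockingLimiter
	logger             *slog.Logger
	produceConcurrency int
}

// produceDeals fetches deals for every bot, waiting for rate-limit capacity
// before each call so no bot is silently skipped when len(bots) exceeds the
// per-minute limit.
func (e *producer) produceDeals(ctx context.Context, workflowID string, bots []Bot) error {
	g, gctx := errgroup.WithContext(ctx)
	g.SetLimit(e.produceConcurrency)

	for _, bot := range bots {
		bot := bot // capture for the goroutine (pre-Go 1.22 loop semantics)
		g.Go(func() error {
			// Wait for capacity instead of issuing a call that is known to
			// exceed the current reservation.
			if e.limiter != nil {
				if err := e.limiter.Acquire(gctx, workflowID); err != nil {
					return fmt.Errorf("acquire rate-limit slot: %w", err)
				}
			}
			deals, err := e.client.GetListOfDeals(gctx, bot.ID)
			if err != nil {
				// Return the error so the resync fails loudly rather than
				// dropping this bot's deals silently.
				return fmt.Errorf("list deals for bot %d: %w", bot.ID, err)
			}
			e.logger.Info("fetched deals",
				slog.Int("bot", bot.ID), slog.Int("count", len(deals)))
			return nil
		})
	}
	return g.Wait()
}
```

Blocking inside the goroutine keeps `SetLimit(e.produceConcurrency)` as the cap on in-flight calls while the limiter paces requests across rate windows, so every bot is eventually polled instead of the same leading `limit` bots winning every resync.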