Closed
Description
When a custom model provider returns an error message (e.g. rate limit exceeded), the agent fails with an error:
...\agents\models\openai_chatcompletions.py", line 134, in get_response
f"LLM resp:\n{json.dumps(response.choices[0].message.model_dump(), indent=2)}\n"
~~~~~~~~~~~~~~~~^^^
TypeError: 'NoneType' object is not subscriptable
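The failure happens because `choices` is `null` in the provider's error response, so indexing it raises the TypeError. A minimal guard at that point (a hypothetical sketch, not the actual SDK code) would surface the provider error instead:

```python
# Hypothetical guard before the parsing/logging step in get_response().
# `response` follows the OpenAI ChatCompletion schema; the `error`
# attribute is provider-specific (OpenRouter puts its error details there).
if not response.choices:
    provider_error = getattr(response, "error", None)
    raise RuntimeError(f"Provider returned no choices; error payload: {provider_error}")
```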
I'm using a Gemini model through OpenRouter, and the response looks like this:
{
  "id": null,
  "choices": null,
  "created": null,
  "model": null,
  "object": null,
  "service_tier": null,
  "system_fingerprint": null,
  "usage": null,
  "error": {
    "message": "Rate limit exceeded: google/gemini-2.0-flash-exp/...",
    "code": 429,
    "metadata": {
      "headers": {
        "X-RateLimit-Limit": "4",
        "X-RateLimit-Remaining": "0",
        "X-RateLimit-Reset": "1743142560000"
      },
      "provider_name": "Google AI Studio"
    }
  },
  "user_id": "..."
}
As far as I can see, the Agents SDK doesn't have any mechanism for handling provider errors (including errors from OpenAI), so it would be nice to add one; a rough sketch of what that could look like is below.
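For example, the SDK could inspect the raw response before parsing and raise a dedicated exception. This is only an illustration under my assumptions; the exception class and helper name are made up, and the `error` field layout is the one OpenRouter returns above:

```python
from typing import Any, Optional


class ModelProviderError(Exception):
    """Hypothetical exception raised when the upstream provider returns an
    error payload (e.g. an HTTP 429 rate limit) instead of a completion."""

    def __init__(self, message: str, code: Optional[int] = None,
                 metadata: Optional[dict] = None) -> None:
        super().__init__(message)
        self.code = code
        self.metadata = metadata or {}


def raise_for_provider_error(response: Any) -> None:
    """Inspect a chat-completion response and raise if the provider
    reported an error instead of returning choices.

    OpenRouter (and some other OpenAI-compatible providers) attach an
    `error` object to the body while leaving `choices` as null.
    """
    error = getattr(response, "error", None)
    if error is None and response.choices:
        return
    if isinstance(error, dict):
        raise ModelProviderError(
            error.get("message", "Unknown provider error"),
            code=error.get("code"),
            metadata=error.get("metadata"),
        )
    raise ModelProviderError("Provider returned an empty completion (no choices).")
```

With something like this, a rate-limited call would fail with a typed, catchable exception carrying the 429 code and the rate-limit headers, instead of an opaque `TypeError` from inside the model adapter.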