Gemini API (OpenAI-compatible) gemini-2.5-flash-preview-05-20 causes AttributeError in Agents SDK when tools are specified #744

Open
salah9003 opened this issue May 22, 2025 · 2 comments
Labels
bug Something isn't working

Comments

@salah9003

Please read this first

  • Have you read the custom model provider docs, including the 'Common issues' section? Yes
  • Have you searched for related issues? Yes

Describe the question

When the Agents SDK is used with Google's Gemini API (OpenAI-compatible endpoint https://generativelanguage.googleapis.com/v1beta/openai/) as a custom model provider, specifying the gemini-2.5-flash-preview-05-20 model together with tools on the Agent leads to an AttributeError: 'NoneType' object has no attribute 'model_dump' inside the SDK's openai_chatcompletions.py.

This error occurs because the Gemini API returns a ChatCompletion object where choices[0].message is None. Debugging reveals the finish_reason for this None message is content_filter: OTHER.

Key observations:

  • The AttributeError (and the underlying None message with content_filter: OTHER) does not occur with the older gemini-2.5-flash-preview-04-17 model when tools are specified.
  • The AttributeError does not occur with the gemini-2.5-flash-preview-05-20 model if tools are omitted from the Agent initialization.

This suggests an issue with how the gemini-2.5-flash-preview-05-20 model (via its OpenAI-compatible API) handles requests that include the tools parameter, leading to a response structure that causes an error in the Agents SDK.
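
For reference, the same response shape can be observed without the Agents SDK by calling the endpoint directly with the plain openai client. The snippet below is an illustrative probe rather than part of the original report; the get_weather tool schema is hand-written here as a stand-in for what the SDK would generate from @function_tool:

    import asyncio
    from openai import AsyncOpenAI

    client = AsyncOpenAI(
        base_url="https://generativelanguage.googleapis.com/v1beta/openai/",
        api_key="YOUR_GEMINI_API_KEY",  # replace with a real key
    )

    # Hand-written placeholder schema, standing in for what @function_tool would generate.
    WEATHER_TOOL = {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Gets the current weather.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }

    async def probe() -> None:
        resp = await client.chat.completions.create(
            model="gemini-2.5-flash-preview-05-20",
            messages=[{"role": "user", "content": "What is the weather in Tokyo?"}],
            tools=[WEATHER_TOOL],
        )
        choice = resp.choices[0]
        # On the affected model this reportedly prints a None message
        # and a content_filter finish_reason.
        print("finish_reason:", choice.finish_reason)
        print("message:", choice.message)

    asyncio.run(probe())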

Debug information

  • Agents SDK version: v0.0.16 (approximate; inferred from the User-Agent header during testing)
  • Python version: Python 3.12
  • Relevant Traceback (from initial user report):
    Traceback (most recent call last):
      File "c:\Users\S\Documents\Projects-2\DumbCompUse\agenttest copy.py", line 298, in main
        result = await Runner.run(agent, input=conversation, max_turns=100)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "C:\Users\S\AppData\Roaming\Python\Python312\site-packages\agents\run.py", line 218, in run
        input_guardrail_results, turn_result = await asyncio.gather(
                                               ^^^^^^^^^^^^^^^^^^^^^
      File "C:\Users\S\AppData\Roaming\Python\Python312\site-packages\agents\run.py", line 762, in _run_single_turn
        new_response = await cls._get_new_response(
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "C:\Users\S\AppData\Roaming\Python\Python312\site-packages\agents\run.py", line 921, in _get_new_response
        new_response = await model.get_response(
                       ^^^^^^^^^^^^^^^^^^^^^^^^^
      File "C:\Users\S\AppData\Roaming\Python\Python312\site-packages\agents\models\openai_chatcompletions.py", line 78, in get_response
        f"LLM resp:\n{json.dumps(response.choices[0].message.model_dump(), indent=2)}\n"
                                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    AttributeError: 'NoneType' object has no attribute 'model_dump'
    

Repro steps

  1. Setup: Install the openai-agents and openai packages. Point an OpenAIChatCompletionsModel at Gemini's OpenAI-compatible endpoint with a valid API key.
  2. Script (ensure openai_chatcompletions.py is in its original state, without the manual error handling for None messages):
    import asyncio
    from openai import AsyncOpenAI
    from agents import Agent, Runner, function_tool, OpenAIChatCompletionsModel, set_tracing_disabled
    import logging
    # Set to logging.DEBUG to see underlying API responses if SDK is modified to not crash
    logging.basicConfig(level=logging.INFO) 
    
    BASE_URL = "https://generativelanguage.googleapis.com/v1beta/openai/"
    API_KEY = "YOUR_GEMINI_API_KEY" # Replace with your actual key
    MODEL_WITH_ISSUE = "gemini-2.5-flash-preview-05-20"
    MODEL_WORKING_FINE = "gemini-2.5-flash-preview-04-17"
    
    # Test this model to see the AttributeError
    CURRENT_MODEL_TO_TEST = MODEL_WITH_ISSUE 
    
    client = AsyncOpenAI(base_url=BASE_URL, api_key=API_KEY)
    set_tracing_disabled(True) # Keep it simple
    
    @function_tool
    def get_weather(city: str):
        """Gets the current weather."""
        return f"The weather in {city} is sunny."
    
    async def main():
        print(f"--- Testing: {CURRENT_MODEL_TO_TEST} with tools (expecting AttributeError) ---")
        agent = Agent(
            name="TestBot", 
            model=OpenAIChatCompletionsModel(model=CURRENT_MODEL_TO_TEST, openai_client=client), 
            tools=[get_weather], 
            instructions="Respond in haikus."
        )
        try:
            result = await Runner.run(agent, "What is the weather in Tokyo?")
            print(f"Output (with tools): {result.final_output if result else 'None - this should not be reached if AttributeError occurs'}")
        except AttributeError as e:
            print(f"Caught expected AttributeError: {e}")
        except Exception as e:
            print(f"Caught other exception: {e}")
    
        # For comparison, this works:
        print(f"\n--- Testing: {MODEL_WORKING_FINE} with tools (expected to work) ---")
        agent_ok = Agent(
            name="TestBotOK", 
            model=OpenAIChatCompletionsModel(model=MODEL_WORKING_FINE, openai_client=client), 
            tools=[get_weather], 
            instructions="Respond in haikus."
        )
        result_ok = await Runner.run(agent_ok, "What is the weather in Tokyo?")
        print(f"Output (MODEL_WORKING_FINE with tools): {result_ok.final_output if result_ok else 'None'}")
    
    asyncio.run(main())
  3. Run: Execute the script with CURRENT_MODEL_TO_TEST set to MODEL_WITH_ISSUE.
  4. Observed: The script raises AttributeError: 'NoneType' object has no attribute 'model_dump' at agents\models\openai_chatcompletions.py, line 78. If the SDK is modified to inspect the response just before that line, it shows that response.choices[0].message is None and response.choices[0].finish_reason is content_filter: OTHER. A caller-side stopgap is sketched after this list.
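
As a temporary caller-side stopgap, the crash can be avoided by patching the client so a filtered None message is replaced with an empty assistant message before the SDK calls model_dump(). This is an untested, illustrative sketch, not an endorsed fix; it reuses the BASE_URL and API_KEY names from the repro script and only handles non-streaming responses:

    from openai import AsyncOpenAI
    from openai.types.chat import ChatCompletionMessage

    client = AsyncOpenAI(base_url=BASE_URL, api_key=API_KEY)

    _original_create = client.chat.completions.create

    async def _create_with_guard(*args, **kwargs):
        resp = await _original_create(*args, **kwargs)
        # Streaming responses have no .choices attribute and are passed through untouched.
        for choice in getattr(resp, "choices", None) or []:
            if choice.message is None:
                # Substitute an empty assistant message so the SDK's model_dump() call succeeds.
                choice.message = ChatCompletionMessage(role="assistant", content="")
        return resp

    client.chat.completions.create = _create_with_guard
    # Pass this patched `client` to OpenAIChatCompletionsModel(..., openai_client=client).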

Expected behavior

The gemini-2.5-flash-preview-05-20 model, when used with the Agents SDK via its OpenAI-compatible endpoint and with tools specified, should return a ChatCompletion object where choices[0].message is properly populated (with text content or tool_calls). This would prevent the AttributeError in the SDK and allow for normal tool use, consistent with gemini-2.5-flash-preview-04-17 or when tools are omitted for the 05-20 model. The API should not return a None message with content_filter: OTHER solely due to the presence of the tools parameter for innocuous prompts.
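
Put another way, a response satisfying the expectation above would pass a check along these lines (an illustrative helper, not part of the SDK or the original report):

    from openai.types.chat import ChatCompletion

    def check_expected_shape(resp: ChatCompletion) -> None:
        """Asserts the contract described above for an innocuous prompt sent with tools."""
        choice = resp.choices[0]
        assert choice.message is not None, "message should be populated"
        assert choice.message.content or choice.message.tool_calls, (
            "message should carry text content or tool_calls"
        )
        assert not str(choice.finish_reason).startswith("content_filter"), (
            "an innocuous prompt should not be content-filtered"
        )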

@salah9003 added the bug (Something isn't working) label on May 22, 2025
@yonatanlavy

same happens for me

@rm-openai
Collaborator

looking into it

rm-openai added a commit that referenced this issue May 23, 2025
## Summary
- avoid AttributeError when Gemini API returns `None` for chat message
- return empty output if message is filtered
- add regression test

## Testing
- `make format`
- `make lint`
- `make mypy`
- `make tests`

Towards #744