gemini-3-flash-preview openai compatible tool call error #2806

@shaunyang12

Description


Confirm this is an issue with the Python library and not an underlying OpenAI API

  • This is an issue with the Python library

Describe the bug

When I use Gemini's OpenAI-compatible endpoint to make streaming tool calls, the library raises the following error:
File "/Users/shaun.yang/Desktop/medalsoft/serviceme-next-backend/serviceme/llm_gateway/llm_adapters/base/decorators.py", line 68, in async_gen_wrapper
async for chunk in async_gen:
File "/Users/shaun.yang/Desktop/medalsoft/serviceme-next-backend/serviceme/llm_gateway/llm_adapters/openai/adapter.py", line 105, in _chat_completion_stream_async
async for chunk in stream:
File "/Users/shaun.yang/Desktop/medalsoft/serviceme-next-backend/.venv/lib/python3.11/site-packages/openai/lib/streaming/chat/_completions.py", line 195, in __aiter__
async for item in self._iterator:
File "/Users/shaun.yang/Desktop/medalsoft/serviceme-next-backend/.venv/lib/python3.11/site-packages/openai/lib/streaming/chat/_completions.py", line 241, in __stream__
events_to_fire = self._state.handle_chunk(sse_event)
File "/Users/shaun.yang/Desktop/medalsoft/serviceme-next-backend/.venv/lib/python3.11/site-packages/openai/lib/streaming/chat/_completions.py", line 348, in handle_chunk
return self._build_events(
File "/Users/shaun.yang/Desktop/medalsoft/serviceme-next-backend/.venv/lib/python3.11/site-packages/openai/lib/streaming/chat/_completions.py", line 536, in _build_events
tool_call = tool_calls[tool_call_delta.index]
TypeError: list indices must be integers or slices, not NoneType
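The failing line indexes a list with the delta's `index` field, which Gemini's endpoint appears to leave unset on tool-call deltas; indexing a list with `None` raises exactly this TypeError. A minimal standalone illustration (no network needed):

```python
# Minimal illustration of the failure in _build_events, which evaluates
# tool_calls[tool_call_delta.index] with the index taken from the delta.
tool_calls = []
index = None  # Gemini's endpoint appears to omit the index on deltas

try:
    tool_calls[index]
except TypeError as exc:
    print(exc)  # list indices must be integers or slices, not NoneType
```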

To Reproduce

Use the following code to reproduce the error:

from openai import AsyncClient
async def main():
    client = AsyncClient(
        api_key='<my-apikey>',
        base_url="https://generativelanguage.googleapis.com/v1beta/openai/",
    )
    tools = [
        {
            "type": "function",
            "function": {
                "strict": True,
                "name": "get_weather",
                "description": "Get the weather in a given location",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "location": {
                            "type": "string",
                            "description": "The city and state, e.g. Chicago, IL",
                        },
                        "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                    },
                    "required": ["location"],
                },
            },
        }
    ]

    messages = [{"role": "user", "content": "What's the weather like in Chicago today?"}]
    async with client.chat.completions.stream(
        model="gemini-3-flash-preview",
        messages=messages,
        tools=tools,
        tool_choice="auto",
    ) as stream:
        async for event in stream:
            print(event)

if __name__ == "__main__":
    import asyncio
    asyncio.run(main())
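Until the streaming helper tolerates a missing index, one possible client-side workaround is to call the raw `client.chat.completions.create(..., stream=True)` API and accumulate the tool-call deltas manually, defaulting a `None` index to the most recent call. This is a sketch, not a definitive fix; `accumulate_tool_calls` is a hypothetical helper, not part of the openai package:

```python
def accumulate_tool_calls(chunks):
    """Merge streamed tool-call deltas into complete calls, treating a
    missing (None) index -- as observed from Gemini's endpoint -- as a
    continuation of the most recent tool call."""
    calls = {}  # index -> {"name": ..., "arguments": ...}
    for chunk in chunks:
        delta = chunk.choices[0].delta
        for tc in delta.tool_calls or []:
            idx = tc.index
            if idx is None:  # Gemini may omit the index on later deltas
                idx = max(calls, default=0)
            slot = calls.setdefault(idx, {"name": "", "arguments": ""})
            if tc.function and tc.function.name:
                slot["name"] += tc.function.name
            if tc.function and tc.function.arguments:
                slot["arguments"] += tc.function.arguments
    return [calls[i] for i in sorted(calls)]
```

It would be fed the chunks from `await client.chat.completions.create(model=..., messages=..., tools=..., stream=True)` instead of going through the `.stream()` helper, which is where the index-based list lookup lives.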

OS

macOS

Python version

Python 3.11.13

Library version

openai v2.14.0
