
Streaming: ResponseUsage validation error when token fields are None #1179

@kobol

Description

Please read this first

  • Have you read the docs? Agents SDK docs
  • Have you searched for related issues? Others may have faced similar issues.

Describe the bug

When using streaming responses, some models may return the usage fields (prompt_tokens, completion_tokens, total_tokens) as None on the chat/completions API endpoint. The current implementation passes these values directly to the ResponseUsage Pydantic model, which expects integers, causing a validation error like:

Traceback (most recent call last):
  File "/Users/xxx/PycharmProjects/mai-dxo/src/api/routes/chat.py", line 38, in generate_stream
    async for event in result.stream_events():
  File "/Users/xxx/PycharmProjects/mai-dxo/.venv/lib/python3.12/site-packages/agents/result.py", line 215, in stream_events
    raise self._stored_exception
  File "/Users/xxx/PycharmProjects/mai-dxo/.venv/lib/python3.12/site-packages/agents/run.py", line 723, in _start_streaming
    turn_result = await cls._run_single_turn_streamed(
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/xxx/PycharmProjects/mai-dxo/.venv/lib/python3.12/site-packages/agents/run.py", line 867, in _run_single_turn_streamed
    async for event in model.stream_response(
  File "/Users/xxx/PycharmProjects/mai-dxo/.venv/lib/python3.12/site-packages/agents/models/openai_chatcompletions.py", line 169, in stream_response
    async for chunk in ChatCmplStreamHandler.handle_stream(response, stream):
  File "/Users/xxx/PycharmProjects/mai-dxo/.venv/lib/python3.12/site-packages/agents/models/chatcmpl_stream_handler.py", line 495, in handle_stream
    ResponseUsage(
  File "/Users/xxx/PycharmProjects/mai-dxo/.venv/lib/python3.12/site-packages/pydantic/main.py", line 253, in __init__
    validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pydantic_core._pydantic_core.ValidationError: 3 validation errors for ResponseUsage
input_tokens
  Input should be a valid integer [type=int_type, input_value=None, input_type=NoneType]
    For further information visit https://errors.pydantic.dev/2.11/v/int_type
output_tokens
  Input should be a valid integer [type=int_type, input_value=None, input_type=NoneType]
    For further information visit https://errors.pydantic.dev/2.11/v/int_type
total_tokens
  Input should be a valid integer [type=int_type, input_value=None, input_type=NoneType]
    For further information visit https://errors.pydantic.dev/2.11/v/int_type

Debug information

  • Agents SDK version: v0.2.2
  • Python version: 3.12

Repro steps

  1. Use a model or endpoint that does not return usage tokens in streaming mode, or returns them as null.
  2. Call the streaming API via the SDK (a minimal sketch follows this list).
  3. Observe the above validation error in logs or exceptions.
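
A minimal repro sketch of those steps. This is illustrative only: the agent name, instructions, prompt, and model name are placeholders, and it assumes the agent is routed through the Chat Completions model class named in the traceback (agents/models/openai_chatcompletions.py):

import asyncio

from openai import AsyncOpenAI
from agents import Agent, OpenAIChatCompletionsModel, Runner


async def main() -> None:
    # Route the agent through the /chat/completions streaming path
    # (agents/models/openai_chatcompletions.py in the traceback).
    model = OpenAIChatCompletionsModel(
        model="o3-2025-04-16",
        openai_client=AsyncOpenAI(),
    )
    agent = Agent(name="Assistant", instructions="Answer briefly.", model=model)

    result = Runner.run_streamed(agent, input="Hello")
    # The ValidationError surfaces here once the final chunk arrives with
    # prompt_tokens/completion_tokens/total_tokens set to None.
    async for _event in result.stream_events():
        pass


if __name__ == "__main__":
    asyncio.run(main())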

Correct case: usage is None

2025-07-18 15:00:34,734 DEBUG openai.agents:87: Received chunk: ChatCompletionChunk(id='chatcmpl-BuZMzsMYIKU9XggwEpz4Xtm3Jbfvb', choices=[Choice(delta=ChoiceDelta(content=None, function_call=None, refusal=None, role=None, tool_calls=None), finish_reason='stop', index=0, logprobs=None)], created=1752822029, model='o4-mini-2025-04-16', object='chat.completion.chunk', service_tier=None, system_fingerprint=None, usage=None)

Exception case: sometimes usage is not None, but prompt_tokens is None

2025-07-18 14:56:07,106 DEBUG openai.agents:87: Received chunk: ChatCompletionChunk(id='chatcmpl-BuZIbXa9i6Rp2g886V6IFTwiDw5N6', choices=[Choice(delta=ChoiceDelta(content=None, function_call=None, refusal=None, role=None, tool_calls=None), finish_reason='stop', index=0, logprobs=None)], created=1752821757, model='o3-2025-04-16', object='chat.completion.chunk', service_tier=None, system_fingerprint=None, usage=CompletionUsage(completion_tokens=None, prompt_tokens=None, total_tokens=None, completion_tokens_details=None, prompt_tokens_details=None))

The ResponseUsage construction in agents/models/chatcmpl_stream_handler.py passes the possibly-None counts straight through:

final_response.usage = (
    ResponseUsage(
        input_tokens=usage.prompt_tokens,
        output_tokens=usage.completion_tokens,
        total_tokens=usage.total_tokens,
        output_tokens_details=OutputTokensDetails(
            reasoning_tokens=usage.completion_tokens_details.reasoning_tokens
            if usage.completion_tokens_details
            and usage.completion_tokens_details.reasoning_tokens
            else 0
        ),
        input_tokens_details=InputTokensDetails(
            cached_tokens=usage.prompt_tokens_details.cached_tokens
            if usage.prompt_tokens_details and usage.prompt_tokens_details.cached_tokens
            else 0
        ),
    )
    if usage
    else None
)

Expected behavior

The SDK should gracefully handle missing or None token usage fields, defaulting them to 0 to avoid Pydantic validation errors.

Proposed solution

In agents/models/chatcmpl_stream_handler.py, ensure that every token field passed to ResponseUsage is an integer, using an "or 0" fallback or similar logic.
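
A hedged sketch of that fallback, mirroring the SDK snippet quoted above (illustrative only, not the actual patch; it reuses the same ResponseUsage, InputTokensDetails, and OutputTokensDetails names already in scope there):

final_response.usage = (
    ResponseUsage(
        # Fall back to 0 when the endpoint reports the counts as None.
        input_tokens=usage.prompt_tokens or 0,
        output_tokens=usage.completion_tokens or 0,
        total_tokens=usage.total_tokens or 0,
        output_tokens_details=OutputTokensDetails(
            reasoning_tokens=usage.completion_tokens_details.reasoning_tokens
            if usage.completion_tokens_details
            and usage.completion_tokens_details.reasoning_tokens
            else 0
        ),
        input_tokens_details=InputTokensDetails(
            cached_tokens=usage.prompt_tokens_details.cached_tokens
            if usage.prompt_tokens_details and usage.prompt_tokens_details.cached_tokens
            else 0
        ),
    )
    if usage
    else None
)

This keeps the existing "if usage" guard for the case where usage is None entirely, while coercing individual None counts to 0.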

Environment

  • openai-agents-python version: 0.2.2
  • Python version: 3.12
  • Model/API endpoint: /chat/completions
