Comparing changes
base repository: openai/openai-agents-python (base: v0.2.2)
head repository: openai/openai-agents-python (compare: main)
- 10 commits
- 38 files changed
- 11 contributors
Commits on Jul 17, 2025
72bd289
fdbc618: Update all translated document pages (#1173)

Automated update of translated documentation. Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Commits on Jul 18, 2025
bc32b99
23404e8: fix: ensure ResponseUsage token fields are int, not None (#1181, fixes #1179)

**Problem:** when using streaming responses, some models or API endpoints may return `usage` fields (`prompt_tokens`, `completion_tokens`, `total_tokens`) as `None`, or omit them entirely. The previous implementation passed these values directly to the `ResponseUsage` Pydantic model, which expects integers. This caused a validation error:

    3 validation errors for ResponseUsage
    input_tokens
      Input should be a valid integer [type=int_type, input_value=None, input_type=NoneType]
    output_tokens
      Input should be a valid integer [type=int_type, input_value=None, input_type=NoneType]
    total_tokens
      Input should be a valid integer [type=int_type, input_value=None, input_type=NoneType]

**Solution:** all token fields passed to `ResponseUsage` are now guaranteed to be integers. Any field that is `None` or missing defaults to `0`, via `or 0` fallbacks and explicit `is not None` checks for nested fields.

**Impact:**
- Fixes Pydantic validation errors for streaming responses with missing/`None` usage fields.
- Improves compatibility with OpenAI and third-party models.
- No breaking changes; only adds robustness.

Co-authored-by: thomas <[email protected]>
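The `or 0` fallback described above can be sketched with a minimal, self-contained normalizer. Note that `ResponseUsage` and `normalize_usage` here are illustrative stand-ins, not the SDK's actual types; real usage payloads carry more fields.

```python
# Illustrative sketch of the "or 0" normalization; not the SDK's real code.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ResponseUsage:
    input_tokens: int
    output_tokens: int
    total_tokens: int

def normalize_usage(usage: Optional[dict]) -> ResponseUsage:
    # A missing usage payload, or a field that is None, collapses to 0,
    # so the integer-typed model always validates.
    usage = usage or {}
    return ResponseUsage(
        input_tokens=usage.get("prompt_tokens") or 0,
        output_tokens=usage.get("completion_tokens") or 0,
        total_tokens=usage.get("total_tokens") or 0,
    )
```

With this shape, `normalize_usage(None)` and `normalize_usage({"completion_tokens": None})` both succeed instead of raising a validation error.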
9046577: Add missing guardrail exception import to quickstart (#1161)

docs: add the missing `InputGuardrailTripwireTriggered` import to the quickstart example. Fixes a `NameError` when handling guardrail exceptions by including the required import in the docs.
4b8f40e: fix: fallback to function name for unnamed output_guardrail decorators (#1133)

**Overview:** this PR improves the `output_guardrail` behavior by ensuring a valid name is always assigned to the guardrail, even when the decorator is used without parentheses or without an explicitly provided name.

**Problem:** previously, when `@output_guardrail` was used without a name (and without parentheses), the guardrail's `name` attribute remained `None`. This caused issues at runtime: the guardrail name did not appear in `result.input_guardrail_results`, making guardrail outputs harder to trace and debug. While `OutputGuardrail.get_name()` correctly defaults to the function name when `name` is `None`, the decorator does not use that method, so unless a name is provided explicitly, the `OutputGuardrail` instance holds `None` internally.

**Solution:** the decorator now falls back to the function name when the `name` parameter is not provided, ensuring the guardrail always has a meaningful identifier. This improves downstream behavior such as logging, debugging, and result tracing.

**Behavior before:** `@output_guardrail` on `def validate_output(...)` left the name as `None`.
**Behavior after:** the name becomes `"validate_output"` automatically.

**Why it matters:** this change avoids hidden bugs and inconsistencies in downstream systems (like `guardrail_results`) that rely on guardrail names being defined, and makes behavior consistent whether or not parentheses are used in the decorator.
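The name fallback can be sketched in plain Python. `OutputGuardrail` and `output_guardrail` below are simplified stand-ins for the SDK's, showing only the name fallback and the bare-vs-parenthesized decorator handling:

```python
# Simplified stand-in for the SDK's output_guardrail decorator,
# illustrating the function-name fallback described above.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class OutputGuardrail:
    guardrail_function: Callable
    name: str

def output_guardrail(func: Optional[Callable] = None, *, name: Optional[str] = None):
    def decorator(f: Callable) -> OutputGuardrail:
        # Fall back to the wrapped function's name when none is given.
        return OutputGuardrail(guardrail_function=f, name=name or f.__name__)
    if func is not None:
        # Bare usage: @output_guardrail
        return decorator(func)
    # Parenthesized usage: @output_guardrail(name="...")
    return decorator

@output_guardrail
def validate_output(output: str) -> str:
    return output

print(validate_output.name)  # validate_output
```

Either spelling now yields a named guardrail, so downstream consumers such as result tracing never see `None`.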
4a31bb6: Mark some dataclasses as pydantic dataclasses (#1131)

This is the set of top-level types used by Temporal for serialization across activity boundaries. To ensure that the models contained in these dataclasses are built before use, the dataclasses need to be `pydantic.dataclasses.dataclass` rather than `dataclasses.dataclass`. This fixes cases where the types could not be serialized when the contained types happened not to have been built, which occurred particularly often when model logging was disabled, since enabling logging happened to build the pydantic models as a side effect.
de8accc: fix: Apply strict JSON schema validation in FunctionTool constructor (#1041)

## Summary
Fixes an issue where directly created `FunctionTool` objects fail with OpenAI's Responses API due to a missing `additionalProperties: false` in the JSON schema, while the `@function_tool` decorator works correctly.

## Problem
The documentation example for creating `FunctionTool` objects directly fails with:

```
Error code: 400 - {'error': {'message': "Invalid schema for function 'process_user': In context=(), 'additionalProperties' is required to be supplied and to be false.", 'type': 'invalid_request_error', 'param': 'tools[0].parameters', 'code': 'invalid_function_parameters'}}
```

This creates an inconsistency between `FunctionTool` and `@function_tool` behavior, both of which have `strict_json_schema=True` by default.

## Solution
- Added a `__post_init__` method to the `FunctionTool` dataclass
- Automatically applies `ensure_strict_json_schema()` when `strict_json_schema=True`
- Makes behavior consistent with the `@function_tool` decorator
- Maintains backward compatibility

## Testing
The fix can be verified by running the reproduction case from the issue:

```python
from typing import Any
from pydantic import BaseModel
from agents import RunContextWrapper, FunctionTool, Agent, Runner

class FunctionArgs(BaseModel):
    username: str
    age: int

async def run_function(ctx: RunContextWrapper[Any], args: str) -> str:
    parsed = FunctionArgs.model_validate_json(args)
    return f"{parsed.username} is {parsed.age} years old"

# This now works without a manual ensure_strict_json_schema() call
tool = FunctionTool(
    name="process_user",
    description="Processes extracted user data",
    params_json_schema=FunctionArgs.model_json_schema(),
    on_invoke_tool=run_function,
)

agent = Agent(
    name="Test Agent",
    instructions="You are a test agent",
    tools=[tool],
)

result = Runner.run_sync(agent, "Process user data for John who is 30 years old")
```
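The shape of the fix can be sketched with the standard library alone. `FunctionTool` and `ensure_strict_json_schema` below are simplified stand-ins for the SDK's versions (the real helper enforces more than `additionalProperties`), showing only the `__post_init__` hook described above:

```python
# Simplified stand-in illustrating the __post_init__ approach;
# not the SDK's actual FunctionTool or ensure_strict_json_schema.
from dataclasses import dataclass

def ensure_strict_json_schema(schema: dict) -> dict:
    # Minimal sketch: the Responses API requires additionalProperties=false
    # on strict function schemas.
    schema = dict(schema)
    schema.setdefault("additionalProperties", False)
    return schema

@dataclass
class FunctionTool:
    name: str
    params_json_schema: dict
    strict_json_schema: bool = True

    def __post_init__(self) -> None:
        # Runs on direct construction, so directly created tools get the
        # same strict schema the decorator path already produced.
        if self.strict_json_schema:
            self.params_json_schema = ensure_strict_json_schema(self.params_json_schema)
```

Because `__post_init__` runs for every construction path, a directly built tool no longer needs a manual `ensure_strict_json_schema()` call before being sent to the API.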
7a8866f: Fix image_generator example error on Windows OS (#1180)

A typo in a method name caused the example to fail on Windows.
f20aa40: Update all translated document pages (#1184)

Automated update of translated documentation. Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
This comparison is taking too long to generate. Unfortunately it looks like we can't render this comparison for you right now. It might be too big, or there might be something weird with your repository. You can try running this command locally to see the comparison on your machine:

    git diff v0.2.2...main