Comparing changes
base repository: yangzxstar/openai-agents-python
base: main
head repository: openai/openai-agents-python
compare: main
- 15 commits
- 34 files changed
- 10 contributors
Commits on May 29, 2025
- Add Portkey AI as a tracing provider (openai#785) [d46e2ec]
  This PR adds Portkey AI as a tracing provider. Portkey helps you take your OpenAI Agents from prototype to production by providing:
  - Complete observability of every agent step, tool use, and interaction
  - Built-in reliability with fallbacks, retries, and load balancing
  - Cost tracking and optimization to manage your AI spend
  - Access to 1600+ LLMs through a single integration
  - Guardrails to keep agent behavior safe and compliant
  - Version-controlled prompts for consistent agent performance
  Towards openai#786
- Added RunErrorDetails object for MaxTurnsExceeded exception (openai#743) [7196862]
  ### Summary
  Introduces the `RunErrorDetails` object to expose partial results from a run interrupted by a `MaxTurnsExceeded` exception. In this proposal, `RunErrorDetails` contains all the fields from `RunResult`, with `final_output` set to `None` and `output_guardrail_results` set to an empty list. We can decide to return less information. @rm-openai At the moment the exception doesn't return the `RunErrorDetails` object in streaming mode (see the `_check_errors` function in `agents/result.py`). Do you have any suggestions on how to deal with that?
  ### Test plan
  I have not implemented any tests yet, but if needed I can add a basic test that retrieves the partial data.
  ### Issue number
  This PR is an attempt to solve issue openai#719
  ### Checks
  - [x] I've added new tests (if relevant)
  - [ ] I've added/updated the relevant documentation
  - [x] I've run `make lint` and `make format`
  - [x] I've made sure tests pass
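The idea in this commit can be sketched without the SDK: an exception type that carries partial run data so the caller can recover what happened before the turn limit hit. This is a toy reimplementation for illustration; the names mirror the commit but none of this is the SDK's actual code.

```python
# Toy sketch: an exception carrying partial results from an
# interrupted run. Illustrative only, not the SDK's implementation.
from dataclasses import dataclass, field

@dataclass
class RunErrorDetails:
    new_items: list = field(default_factory=list)
    final_output: object = None           # None when the run was cut short
    output_guardrail_results: list = field(default_factory=list)

class MaxTurnsExceeded(Exception):
    def __init__(self, message, run_data=None):
        super().__init__(message)
        self.run_data = run_data          # partial results travel on the exception

def run(max_turns=3):
    items = []
    for turn in range(100):
        if turn >= max_turns:
            raise MaxTurnsExceeded(
                f"Max turns ({max_turns}) exceeded",
                run_data=RunErrorDetails(new_items=items),
            )
        items.append(f"turn-{turn}")
    return items

try:
    run(max_turns=3)
except MaxTurnsExceeded as e:
    partial = e.run_data.new_items  # work done before the limit was hit
```

Attaching the details object to the exception (rather than returning it) keeps the normal-path `RunResult` API unchanged while still making partial output reachable from an error handler.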
- Commit 47fa8e8
Commits on May 30, 2025
- Small fix for litellm model (openai#789) [b699d9a]
  Small fix: remove `import litellm.types`, since it sits outside the try/except block that guards the litellm import, so the friendly import-error message isn't displayed, and the line isn't actually needed. I was reproducing a GitHub issue and came across this in the process.
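The bug class here is generic: with an optional dependency, every import of that package must live inside the guarded block, or an unguarded line raises a raw `ImportError` before the friendly message can fire. A minimal sketch of the pattern, using a deliberately nonexistent module name as a stand-in for litellm (the install hint in the error message is illustrative):

```python
# Optional-dependency import pattern. Every import of the optional
# package goes inside the try block; code elsewhere checks for None.
try:
    import nonexistent_optional_package as litellm  # stand-in for litellm
except ImportError:
    litellm = None

def require_litellm():
    """Raise a helpful error if the optional dependency is missing."""
    if litellm is None:
        raise RuntimeError(
            "This feature needs an optional dependency; "
            "install it with the appropriate pip extra."
        )
    return litellm
```

A stray `import litellm.types` at module level, outside the try block, would defeat this: the module would fail to import at all instead of degrading gracefully.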
- Fix typo in assertion message for handoff function (openai#780) [16fb29c]
  ### Overview
  This PR fixes a typo in the assert statement within the `handoff` function in `handoffs.py`, changing `'on_input'` to `'on_handoff'` for accuracy and clarity.
  ### Changes
  - Corrected the word `on_input` to `on_handoff` in the docstring.
  ### Motivation
  Clear and correct documentation improves code readability and reduces confusion for users and contributors.
  ### Checklist
  - [x] I have reviewed the docstring after making the change.
  - [x] No functionality is affected.
  - [x] The change follows the repository's contribution guidelines.
- Fix typo: Replace 'two' with 'three' in /docs/mcp.md (openai#757) [0a28d71]
  The documentation in `docs/mcp.md` listed three server types (stdio, HTTP over SSE, Streamable HTTP) but incorrectly stated "two kinds of servers" in the heading. This PR fixes the numerical discrepancy.
  **Changes:**
  - Modified "two kinds of servers" to "three kinds of servers".
  - File: `docs/mcp.md` (line 11).
- Update input_guardrails.py (openai#774) [ad80f78]
  Changed the function comment, as input guardrails only deal with input messages.
- docs: fix typo in docstring for is_strict_json_schema method (openai#775) [6438350]
  ### Overview
  This PR fixes a small typo in the docstring of the `is_strict_json_schema` abstract method of the `AgentOutputSchemaBase` class in `agent_output.py`.
  ### Changes
  - Corrected the word "valis" to "valid" in the docstring.
  ### Motivation
  Clear and correct documentation improves code readability and reduces confusion for users and contributors.
  ### Checklist
  - [x] I have reviewed the docstring after making the change.
  - [x] No functionality is affected.
  - [x] The change follows the repository's contribution guidelines.
- Add comment to handoff_occured misspelling (openai#792) [cfe9099]
  People keep trying to fix this, but it's a breaking change.
Commits on Jun 2, 2025
- Fix openai#777 by handling MCPCall events in RunImpl (openai#799) [3e7b286]
  This pull request resolves openai#777. If you think we should introduce a new item type for MCP call output, please let me know; since other hosted tools use this event, I believe reusing it should be fine.
- Ensure item.model_dump only contains JSON serializable types (openai#801) [775d3e2]
- Don't cache agent tools during a run (openai#803) [d4c7a23]
  ### Summary
  Towards openai#767. We were caching the list of tools for an agent, so if you did `agent.tools.append(...)` from a tool call, the next call to the model wouldn't include the new tool. This is a bug.
  ### Test Plan
  Unit tests. Note that MCP tools are now listed each time the agent runs (users can still cache `list_tools` themselves).
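The staleness bug described above comes down to snapshotting a mutable list at run start instead of re-reading it each turn. A minimal, SDK-free illustration (the `Agent` class here is a stand-in, not the library's):

```python
# Toy illustration of why caching the tools list is a bug:
# a snapshot taken at run start never sees mid-run mutations.
class Agent:
    def __init__(self, tools):
        self.tools = tools

agent = Agent(tools=["search"])

cached = list(agent.tools)        # snapshot, as the buggy code did
agent.tools.append("calculator")  # e.g. appended from inside a tool call

stale_view = "calculator" in cached       # False: snapshot is stale
fresh_view = "calculator" in agent.tools  # True: re-reading sees the new tool
```

Re-reading `agent.tools` on every model call trades a little repeated work for correctness, which is why the fix moves the (possibly expensive) caching decision to the user for MCP `list_tools`.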
- Only start tracing worker thread on first span/trace (openai#804) [995af4d]
  Closes openai#796. We shouldn't start a busy-waiting thread if there aren't any traces.
  Test plan:
  ```
  import threading
  assert threading.active_count() == 1
  import agents
  assert threading.active_count() == 1
  ```
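The lazy-start idea generalizes beyond tracing: create the background worker on first use rather than at import or construction time. A small sketch of the pattern (illustrative, not the tracing module's actual code):

```python
import threading

class LazyProcessor:
    """Starts its worker thread only when the first item arrives."""

    def __init__(self):
        self._thread = None
        self._lock = threading.Lock()
        self._items = []

    def _ensure_thread(self):
        # Double-checked start under a lock so concurrent first
        # submissions create exactly one worker thread.
        with self._lock:
            if self._thread is None:
                self._thread = threading.Thread(target=self._run, daemon=True)
                self._thread.start()

    def _run(self):
        pass  # a real exporter would drain a queue of spans here

    def submit(self, item):
        self._items.append(item)
        self._ensure_thread()  # thread exists only after first submit

proc = LazyProcessor()
before = proc._thread is None       # no thread at construction
proc.submit("span-1")
after = proc._thread is not None    # started on first use
```

This keeps `import agents` free of side effects: a process that never traces never pays for the worker thread, which is exactly what the commit's test plan asserts.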
Commits on Jun 3, 2025
- Add is_enabled to FunctionTool (openai#808) [4046fcb]
  ### Summary
  Allows a user to do `function_tool(is_enabled=<some_callable>)`; the callable is invoked when the agent runs. This lets you dynamically enable/disable a tool based on the context/environment. The meta-goal is to make `Agent` effectively immutable; that enables some nice things down the line, and it lets you dynamically modify the tools list without mutating the agent.
  ### Test Plan
  Unit tests
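The mechanism can be sketched in plain Python: each tool carries an `is_enabled` flag that may be a bool or a callable evaluated against the run context, and the runner filters on it each turn. This is a toy model of the idea, not the SDK's actual `FunctionTool` class or signature:

```python
# Toy sketch of an is_enabled hook: the callable is evaluated at run
# time, so tools switch on/off per context without mutating the agent.
from dataclasses import dataclass
from typing import Callable, Union

@dataclass(frozen=True)  # frozen: the tool itself stays immutable
class Tool:
    name: str
    is_enabled: Union[bool, Callable[[dict], bool]] = True

def enabled_tools(tools, context):
    """Return the names of tools enabled for this run's context."""
    out = []
    for t in tools:
        flag = t.is_enabled(context) if callable(t.is_enabled) else t.is_enabled
        if flag:
            out.append(t.name)
    return out

tools = [
    Tool("search"),
    Tool("admin_panel", is_enabled=lambda ctx: ctx.get("role") == "admin"),
]
user_view = enabled_tools(tools, {"role": "user"})
admin_view = enabled_tools(tools, {"role": "admin"})
```

Because the decision is a pure function of the context, the agent and its tool objects can stay immutable while the effective tool list still varies per run.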
Commits on Jun 4, 2025
- Commit 204bec1