

Comparing changes

Choose two branches to see what’s changed or to start a new pull request. You can also learn more about diff comparisons.

base repository: yangzxstar/openai-agents-python
base: main
head repository: openai/openai-agents-python
compare: main
  • 15 commits
  • 34 files changed
  • 10 contributors

Commits on May 29, 2025

  1. Add Portkey AI as a tracing provider (openai#785)

    This PR adds Portkey AI as a tracing provider. Portkey helps you take
    your OpenAI agents from prototype to production.
    
    Portkey turns your experimental OpenAI Agents into production-ready
    systems by providing:
    
    - Complete observability of every agent step, tool use, and interaction
    - Built-in reliability with fallbacks, retries, and load balancing
    - Cost tracking and optimization to manage your AI spend
    - Access to 1600+ LLMs through a single integration
    - Guardrails to keep agent behavior safe and compliant
    - Version-controlled prompts for consistent agent performance
    
    
    Towards openai#786
    siddharthsambharia-portkey authored May 29, 2025 (commit d46e2ec)
  2. Added RunErrorDetails object for MaxTurnsExceeded exception (openai#743)

    ### Summary
    
    Introduced the `RunErrorDetails` object to get partial results from a
    run interrupted by a `MaxTurnsExceeded` exception. In this proposal the
    `RunErrorDetails` object contains all the fields from `RunResult`, with
    `final_output` set to `None` and `output_guardrail_results` set to an
    empty list. We can decide to return less information.
    
    @rm-openai At the moment the exception doesn't return the
    `RunErrorDetails` object in streaming mode (see the `_check_errors`
    function in `agents/result.py`). Do you have any suggestions on how to
    handle it?
    
    ### Test plan
    
    I have not implemented any tests currently, but if needed I can
    implement a basic test to retrieve partial data.
    
    ### Issue number
    
    This PR is an attempt to solve issue openai#719 
    
    ### Checks
    
    - [x] I've added new tests (if relevant)
    - [ ] I've added/updated the relevant documentation
    - [x] I've run `make lint` and `make format`
    - [x] I've made sure tests pass
    DanieleMorotti authored May 29, 2025 (commit 7196862)
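The proposal above can be sketched in a few lines. This is a hypothetical, self-contained illustration: the `run_with_limit` helper and the exception's `run_data` argument are invented for the example and are not the SDK's actual API, though the field names follow the PR description.

```python
from dataclasses import dataclass, field
from typing import Any, Optional

@dataclass
class RunErrorDetails:
    input: Any
    new_items: list = field(default_factory=list)
    raw_responses: list = field(default_factory=list)
    final_output: Optional[Any] = None            # None: the run never finished
    output_guardrail_results: list = field(default_factory=list)

class MaxTurnsExceeded(Exception):
    def __init__(self, message, run_data=None):
        super().__init__(message)
        self.run_data = run_data                  # partial results survive the error

def run_with_limit(turns_needed, max_turns):
    items = []
    for turn in range(turns_needed):
        if turn >= max_turns:
            raise MaxTurnsExceeded(
                f"Max turns ({max_turns}) exceeded",
                run_data=RunErrorDetails(input="hi", new_items=items),
            )
        items.append(f"turn-{turn}")
    return items

try:
    run_with_limit(turns_needed=5, max_turns=2)
except MaxTurnsExceeded as e:
    print(e.run_data.new_items)                   # ['turn-0', 'turn-1']
    print(e.run_data.final_output)                # None
```

The key point is that the caller still gets the items generated before the limit was hit, instead of losing them with the exception.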
  3. Commit 47fa8e8

Commits on May 30, 2025

  1. Small fix for litellm model (openai#789)

    Small fix:
    
    Removed `import litellm.types`: it sits outside the try/except block
    that guards the litellm import, so the friendly import error message
    isn't displayed, and the line isn't actually needed. I came across
    this while reproducing a GitHub issue.
    robtinn authored May 30, 2025 (commit b699d9a)
  2. Fix typo in assertion message for handoff function (openai#780)

    ### Overview
    
    This PR fixes a typo in the assert statement within the `handoff`
    function in `handoffs.py`, changing `'on_input'` to `'on_handoff'` for
    accuracy and clarity.
    
    ### Changes
    
    - Corrected the word “on_input” to “on_handoff” in the docstring.
    
    ### Motivation
    
    Clear and correct documentation improves code readability and reduces
    confusion for users and contributors.
    
    ### Checklist
    
    - [x] I have reviewed the docstring after making the change.
    - [x] No functionality is affected.
    - [x] The change follows the repository’s contribution guidelines.
    Rehan-Ul-Haq authored May 30, 2025 (commit 16fb29c)
  3. Fix typo: Replace 'two' with 'three' in /docs/mcp.md (openai#757)

    The documentation in `docs/mcp.md` listed three server types (stdio,
    HTTP over SSE, Streamable HTTP) but incorrectly stated "two kinds of
    servers" in the heading. This PR fixes the numerical discrepancy.
    
    **Changes:** 
    
    - Modified from "two kinds of servers" to "three kinds of servers". 
    - File: `docs/mcp.md` (line 11).
    luochang212 authored May 30, 2025 (commit 0a28d71)
  4. Update input_guardrails.py (openai#774)

    Changed the function comment, as input_guardrails only deals with
    input messages.
    venkatnaveen7 authored May 30, 2025 (commit ad80f78)
  5. docs: fix typo in docstring for is_strict_json_schema method (openai#775)

    ### Overview
    
    This PR fixes a small typo in the docstring of the
    `is_strict_json_schema` abstract method of the `AgentOutputSchemaBase`
    class in `agent_output.py`.
    
    ### Changes
    
    - Corrected the word “valis” to “valid” in the docstring.
    
    ### Motivation
    
    Clear and correct documentation improves code readability and reduces
    confusion for users and contributors.
    
    ### Checklist
    
    - [x] I have reviewed the docstring after making the change.
    - [x] No functionality is affected.
    - [x] The change follows the repository’s contribution guidelines.
    Rehan-Ul-Haq authored May 30, 2025 (commit 6438350)
  6. Add comment to handoff_occured misspelling (openai#792)

    People keep trying to fix this, but it's a breaking change.
    rm-openai authored May 30, 2025 (commit cfe9099)

Commits on Jun 2, 2025

  1. Fix openai#777 by handling MCPCall events in RunImpl (openai#799)

    This pull request resolves openai#777; If you think we should introduce a new
    item type for MCP call output, please let me know. As other hosted tools
    use this event, I believe using the same should be good to go tho.
    seratch authored Jun 2, 2025 (commit 3e7b286)
  2. Ensure item.model_dump only contains JSON serializable types (openai#801)

    The EmbeddedResource from an MCP tool call contains a field of type
    AnyUrl that is not JSON-serializable. To avoid a serialization error,
    use item.model_dump(mode="json") to ensure a JSON-serializable return
    value.
    westhood authored Jun 2, 2025 (commit 775d3e2)
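A minimal reproduction of the idea, assuming pydantic v2 and using a stand-in model rather than MCP's real EmbeddedResource class:

```python
import json
from pydantic import AnyUrl, BaseModel

# Stand-in for a model that carries an AnyUrl field, like EmbeddedResource.
class Resource(BaseModel):
    uri: AnyUrl
    text: str

item = Resource(uri="https://example.com/doc", text="hello")

# Default dump keeps the AnyUrl instance, which json.dumps rejects.
try:
    json.dumps(item.model_dump())
    default_serializable = True
except TypeError:
    default_serializable = False

# mode="json" coerces every field to JSON-native types (AnyUrl -> str).
dumped = json.dumps(item.model_dump(mode="json"))
print(default_serializable, dumped)
```

With `mode="json"`, every field is converted to a JSON-native type before serialization, so the round trip succeeds.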
  3. Don't cache agent tools during a run (openai#803)

    ### Summary:
    Towards openai#767. We were caching the list of tools for an agent, so
    if you did `agent.tools.append(...)` from a tool call, the next call to
    the model wouldn't include the new tool. This is a bug.
    
    ### Test Plan:
    Unit tests. Note that MCP tools are now listed each time the agent runs
    (users can still cache `list_tools` themselves, however).
    rm-openai authored Jun 2, 2025 (commit d4c7a23)
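The caching bug and the fix can be sketched generically; the classes below are illustrative stand-ins, not the SDK's real internals.

```python
class Agent:
    def __init__(self, tools):
        self.tools = tools

class CachingRunner:                  # old behavior: snapshot taken once
    def __init__(self, agent):
        self._cached = list(agent.tools)
    def tools_for_model(self):
        return self._cached           # stale after agent.tools.append(...)

class FreshRunner:                    # fixed behavior: re-read every turn
    def __init__(self, agent):
        self.agent = agent
    def tools_for_model(self):
        return list(self.agent.tools)

agent = Agent(tools=["search"])
buggy, fixed = CachingRunner(agent), FreshRunner(agent)
agent.tools.append("calculator")      # e.g. appended from inside a tool call
print(buggy.tools_for_model())        # ['search'] -- the new tool is missing
print(fixed.tools_for_model())        # ['search', 'calculator']
```

Re-reading the tool list on every model call trades a little repeated work for correctness, which is why the PR notes that MCP tools are now listed each run.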
  4. Only start tracing worker thread on first span/trace (openai#804)

    Closes openai#796. We shouldn't start a busy-waiting thread if there
    aren't any traces.
    
    Test plan
    ```
    import threading
    assert threading.active_count() == 1
    import agents
    assert threading.active_count() == 1
    ```
    rm-openai authored Jun 2, 2025 (commit 995af4d)
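The lazy-start pattern the fix describes might look roughly like this (a sketch, not the SDK's actual tracing code):

```python
import threading
import time

class LazyTraceWorker:
    """Start the background export thread on the first span, not at
    import or construction time, so idle processes spawn no thread."""
    def __init__(self):
        self._lock = threading.Lock()
        self._thread = None
        self._spans = []

    def export(self, span):
        self._spans.append(span)
        self._ensure_started()

    def _ensure_started(self):
        # Double-start is prevented by checking under the lock.
        with self._lock:
            if self._thread is None:
                self._thread = threading.Thread(target=self._drain, daemon=True)
                self._thread.start()

    def _drain(self):
        while True:
            time.sleep(0.1)  # placeholder for the real periodic flush/export

worker = LazyTraceWorker()
print(worker._thread)        # None: constructing adds no thread
worker.export("span-1")      # first span lazily spawns the worker
print(worker._thread.is_alive())
```

This mirrors the test plan above: merely importing the library leaves the thread count unchanged, and the worker appears only once something is actually traced.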

Commits on Jun 3, 2025

  1. Add is_enabled to FunctionTool (openai#808)

    ### Summary:
    Allows a user to do `function_tool(is_enabled=<some_callable>)`; the
    callable is called when the agent runs.
    
    This allows you to dynamically enable/disable a tool based on the
    context/env.
    
    The meta-goal is to allow `Agent` to be effectively immutable. That
    enables some nice things down the line, and this allows you to
    dynamically modify the tools list without mutating the agent.
    
    ### Test Plan:
    Unit tests
    rm-openai authored Jun 3, 2025 (commit 4046fcb)
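A rough sketch of the `is_enabled` idea; the real SDK signature may differ (for example, the callable may also receive the agent):

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict

@dataclass
class FunctionTool:
    name: str
    # Predicate evaluated per run; enabled by default.
    is_enabled: Callable[[Dict[str, Any]], bool] = lambda ctx: True

def enabled_tools(tools, ctx):
    # The runner filters per run, so the agent itself never mutates.
    return [t for t in tools if t.is_enabled(ctx)]

search = FunctionTool("search")
admin_only = FunctionTool(
    "delete_user", is_enabled=lambda ctx: ctx.get("role") == "admin"
)

print([t.name for t in enabled_tools([search, admin_only], {"role": "user"})])
print([t.name for t in enabled_tools([search, admin_only], {"role": "admin"})])
```

Because the filtering happens at run time from the context, the agent's tool list never needs to be mutated, which is the immutability goal the PR describes.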

Commits on Jun 4, 2025

  1. v0.0.17 (openai#809)

    bump version
    rm-openai authored Jun 4, 2025 (commit 204bec1)