[pull] main from openai:main #9

Open · wants to merge 266 commits into base: main from openai:main

Conversation

@pull pull bot commented Mar 19, 2025

See Commits and Changes for more details.


Created by pull[bot] (v2.0.0-alpha.1)

Can you help keep this open source service alive? 💖 Please sponsor : )

@pull pull bot added the ⤵️ pull label Mar 19, 2025
rm-openai and others added 19 commits March 18, 2025 21:55
## Context
By default, the outputs of tools are sent to the LLM again. The LLM gets
to read the outputs, and produce a new response. There are cases where
this is not desired:
1. Every tool call results in another round trip, and sometimes the output of
the tool is enough.
2. If you force tool use (via the model setting `tool_choice=required`),
the agent will just loop infinitely.

This enables you to have different behavior, e.g. use the first tool
output as the final output, or write a custom function to process tool
results and potentially produce an output.
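
A minimal sketch of what this enables, assuming a `tool_use_behavior` parameter with a `"stop_on_first_tool"` option (names inferred from this description, so check the merged API for the exact spelling):

```python
from agents import Agent, function_tool

@function_tool
def get_weather(city: str) -> str:
    """Return a canned weather report without needing a second LLM round trip."""
    return f"It is always sunny in {city}."

# Assumed parameter/value names: the first tool call's output becomes the run's
# final output instead of being fed back to the LLM for another response.
agent = Agent(
    name="Weather agent",
    tools=[get_weather],
    tool_use_behavior="stop_on_first_tool",
)
```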

## Test plan
Added new tests and ran existing tests
Also added examples.


Closes #117
Comet Opik added support for Agent SDK tracing and should be included.
Have gotten feedback that examples are somewhat buried in the GitHub
docs. Adding a new page after the quickstart.
DanieleMorotti and others added 30 commits May 14, 2025 11:31
When an image is given as input, the code tries to access the 'detail' key,
which may not be present, as noted in #159.

With this pull request, it now reads the key if present and otherwise sets
the value to `None`.
@pakrym-oai or @rm-openai let me know if you want any changes.
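
Roughly, the fix amounts to reading the key defensively instead of indexing (illustrative snippet with a made-up input dict, not the exact converter code):

```python
# Hypothetical image-input dict; the real one comes from the user's input items.
image_item = {"type": "input_image", "image_url": "https://example.com/cat.png"}

# Before: image_item["detail"] raises KeyError when the key is absent.
detail = image_item.get("detail")  # now falls back to None

```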
**Purpose**  
Allow arbitrary `extra_body` parameters (e.g. `cached_content`) to be
forwarded into the LiteLLM call. Useful for context caching in Gemini
models
([docs](https://ai.google.dev/gemini-api/docs/caching?lang=python)).

**Example usage**  
```python
import os
from agents import Agent, ModelSettings
from agents.extensions.models.litellm_model import LitellmModel

cache_name = "cachedContents/34jopukfx5di"  # previously stored context

gemini_model = LitellmModel(
    model="gemini/gemini-1.5-flash-002",
    api_key=os.getenv("GOOGLE_API_KEY")
)

agent = Agent(
    name="Cached Gemini Agent",
    model=gemini_model,
    model_settings=ModelSettings(
        extra_body={"cached_content": cache_name}
    )
)
```
Added missing word "be" in prompt instructions.

This is unlikely to change the agent's functionality in most cases, but
clear prompt language is a best practice.
Adding an AGENTS.md file for Codex use
Added the `instructions` attribute to the MCP servers to solve #704 .

Let me know if you want to add an example to the documentation.
PR to enhance the `Usage` object and related logic to support more
granular token accounting, matching the details available in the [OpenAI
Responses API](https://platform.openai.com/docs/api-reference/responses).
Specifically, it:

- Adds `input_tokens_details` and `output_tokens_details` fields to the
`Usage` dataclass, storing detailed token breakdowns (e.g.,
`cached_tokens`, `reasoning_tokens`).
- Flows this change through
- Updates and extends tests to match
- Adds a test for the `Usage.add` method

### Motivation
- Aligns the SDK’s usage with the latest OpenAI responses API Usage
object
- Supports downstream use cases that require fine-grained token usage
data (e.g., billing, analytics, optimization) requested by startups
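
A rough sketch of the extended shape (field names follow the Responses API usage details; an approximation, not the SDK's exact definition):

```python
from dataclasses import dataclass, field

@dataclass
class InputTokensDetails:
    cached_tokens: int = 0

@dataclass
class OutputTokensDetails:
    reasoning_tokens: int = 0

@dataclass
class Usage:
    requests: int = 0
    input_tokens: int = 0
    input_tokens_details: InputTokensDetails = field(default_factory=InputTokensDetails)
    output_tokens: int = 0
    output_tokens_details: OutputTokensDetails = field(default_factory=OutputTokensDetails)
    total_tokens: int = 0

    def add(self, other: "Usage") -> None:
        """Accumulate another request's usage, including the detailed breakdowns."""
        self.requests += other.requests
        self.input_tokens += other.input_tokens
        self.output_tokens += other.output_tokens
        self.total_tokens += other.total_tokens
        self.input_tokens_details.cached_tokens += other.input_tokens_details.cached_tokens
        self.output_tokens_details.reasoning_tokens += other.output_tokens_details.reasoning_tokens
```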

---------

Co-authored-by: Wulfie Bain <[email protected]>
## Summary
- avoid infinite recursion in visualization by tracking visited agents
- test cycle detection in graph utility

## Testing
- `make mypy`
- `make tests` 

Resolves #668
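
A minimal sketch of the visited-set idea, assuming each agent exposes a `handoffs` list of other agents (simplified; not the SDK's actual `draw_graph` code):

```python
def collect_edges(agent, visited=None, edges=None):
    """Walk the handoff graph, recording edges while tolerating cycles."""
    visited = visited if visited is not None else set()
    edges = edges if edges is not None else []
    if agent.name in visited:
        return edges  # already expanded: stop here to avoid infinite recursion on cycles
    visited.add(agent.name)
    for target in getattr(agent, "handoffs", []):
        edges.append((agent.name, target.name))
        collect_edges(target, visited, edges)
    return edges
```
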
## Summary
- mention MCPServerStreamableHttp in MCP server docs
- document CodeInterpreterTool, HostedMCPTool, ImageGenerationTool and
LocalShellTool
- update Japanese translations
## Summary
- avoid AttributeError when Gemini API returns `None` for chat message
- return empty output if message is filtered
- add regression test

## Testing
- `make format`
- `make lint`
- `make mypy`
- `make tests`

Towards #744
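
Roughly, the defensive handling looks like this (illustrative sketch, not the converter's actual code):

```python
def extract_output(response) -> list:
    """Tolerate providers (e.g. Gemini) that return no chat message at all."""
    choice = response.choices[0] if getattr(response, "choices", None) else None
    message = getattr(choice, "message", None) if choice else None
    if message is None:
        # The message was filtered or omitted: return an empty output instead of
        # raising AttributeError on message.content further down.
        return []
    return [{"role": "assistant", "content": message.content or ""}]
```
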
This PR adds Portkey AI as a tracing provider. Portkey helps you take
your OpenAI agents from prototype to production.

Portkey turns your experimental OpenAI Agents into production-ready
systems by providing:

- Complete observability of every agent step, tool use, and interaction
- Built-in reliability with fallbacks, retries, and load balancing
- Cost tracking and optimization to manage your AI spend
- Access to 1600+ LLMs through a single integration
- Guardrails to keep agent behavior safe and compliant
- Version-controlled prompts for consistent agent performance


Towards #786
### Summary

Introduced the `RunErrorDetails` object to get partial results from a
run interrupted by a `MaxTurnsExceeded` exception. In this proposal the
`RunErrorDetails` object contains all the fields from `RunResult`, with
`final_output` set to `None` and `output_guardrail_results` set to an
empty list. We can decide to return less information.
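
A hedged sketch of how a caller might recover the partial results (the `run_data` attribute name is an assumption based on this description, not a confirmed API):

```python
from agents import Agent, Runner
from agents.exceptions import MaxTurnsExceeded

agent = Agent(name="Assistant", instructions="Answer using your tools.")

try:
    result = Runner.run_sync(agent, "Do a long multi-step task.", max_turns=2)
except MaxTurnsExceeded as exc:
    details = getattr(exc, "run_data", None)  # assumed: RunErrorDetails attached to the exception
    if details is not None:
        print(details.input)       # original input
        print(details.new_items)   # items generated before the turn limit was hit
```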

@rm-openai At the moment the exception doesn't return the
`RunErrorDetails` object in streaming mode. Do you have any suggestions
on how to deal with that in the `_check_errors` function of
`agents/result.py`?

### Test plan

I have not implemented any tests currently, but if needed I can
implement a basic test to retrieve partial data.

### Issue number

This PR is an attempt to solve issue #719 

### Checks

- [x] I've added new tests (if relevant)
- [ ] I've added/updated the relevant documentation
- [x] I've run `make lint` and `make format`
- [x] I've made sure tests pass
Small fix:

Removing `import litellm.types`: it sits outside the try/except block that
guards the litellm import, so the friendly import-error message isn't
displayed, and the line isn't actually needed anyway. I was reproducing a
GitHub issue and came across this in the process.
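
For context, the guarded import in the LiteLLM extension looks roughly like this (sketch; the SDK's actual error message differs):

```python
try:
    import litellm
except ImportError as exc:
    raise ImportError(
        "litellm is required to use LitellmModel; install the optional dependency, "
        'e.g. `pip install "openai-agents[litellm]"`'
    ) from exc

# A bare `import litellm.types` placed outside this block would raise its own
# ImportError first, bypassing the friendly message above.
```
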
### Overview

This PR fixes a typo in the assert statement within the `handoff`
function in `handoffs.py`, changing `'on_input'` to `'on_handoff'` for
accuracy and clarity.

### Changes

- Corrected the word “on_input” to “on_handoff” in the docstring.

### Motivation

Clear and correct documentation improves code readability and reduces
confusion for users and contributors.

### Checklist

- [x] I have reviewed the docstring after making the change.
- [x] No functionality is affected.
- [x] The change follows the repository’s contribution guidelines.
The documentation in `docs/mcp.md` listed three server types (stdio,
HTTP over SSE, Streamable HTTP) but incorrectly stated "two kinds of
servers" in the heading. This PR fixes the numerical discrepancy.

**Changes:** 

- Modified from "two kinds of servers" to "three kinds of servers". 
- File: `docs/mcp.md` (line 11).
Changed the function comment, as `input_guardrails` only deals with input
messages.
### Overview

This PR fixes a small typo in the docstring of the
`is_strict_json_schema` abstract method of the `AgentOutputSchemaBase`
class in `agent_output.py`.

### Changes

- Corrected the word “valis” to “valid” in the docstring.

### Motivation

Clear and correct documentation improves code readability and reduces
confusion for users and contributors.

### Checklist

- [x] I have reviewed the docstring after making the change.
- [x] No functionality is affected.
- [x] The change follows the repository’s contribution guidelines.
People keep trying to fix this, but it's a breaking change.
This pull request resolves #777; if you think we should introduce a new
item type for MCP call output, please let me know. Since other hosted tools
use this event, I believe reusing the same one should be fine.
The `EmbeddedResource` from an MCP tool call contains a field of type
`AnyUrl` that is not JSON-serializable. To avoid this exception, use
`item.model_dump(mode="json")` to ensure a JSON-serializable return value.
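
A small illustration of the failure and the fix, using a pydantic model with an `AnyUrl` field as a stand-in for the MCP type:

```python
import json
from pydantic import AnyUrl, BaseModel

class ResourceLike(BaseModel):  # stand-in for MCP's EmbeddedResource contents
    uri: AnyUrl
    text: str

item = ResourceLike(uri="https://example.com/doc.txt", text="hello")

# json.dumps(item.model_dump()) fails: the AnyUrl value is not JSON-serializable.
payload = item.model_dump(mode="json")  # URLs (and similar types) become plain strings
print(json.dumps(payload))
```
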
### Summary:
Towards #767. We were caching the list of tools for an agent, so if you
did `agent.tools.append(...)` from a tool call, the next call to the
model wouldn't include the new tool. This is a bug.

### Test Plan:
Unit tests. Note that now MCP tools are listed each time the agent runs
(users can still cache the `list_tools` however).
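
A sketch of the kind of scenario this fixes, using hypothetical tools; the point is that a tool call mutates `agent.tools` mid-run:

```python
from agents import Agent, function_tool

@function_tool
def search(query: str) -> str:
    """Pretend to search the web."""
    return f"results for {query}"

@function_tool
def enable_search() -> str:
    """Dynamically register the search tool on the running agent."""
    agent.tools.append(search)  # before this fix, a cached tool list hid the new tool
    return "search tool enabled"

agent = Agent(name="Dynamic agent", tools=[enable_search])
# With the fix, the model call after enable_search runs will also see `search`.
```
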
Closes #796. Shouldn't start a busy-waiting thread if there aren't any
traces.

Test plan
```
import threading
assert threading.active_count() == 1
import agents
assert threading.active_count() == 1
```