Commit 333858b: Merge branch 'main' into patch-1 (2 parents: 7e85c03 + 697f647)

18 files changed: +354, -42 lines
Lines changed: 18 additions & 0 deletions
@@ -0,0 +1,18 @@
+### Summary
+
+<!-- Please give a short summary of the change and the problem this solves. -->
+
+### Test plan
+
+<!-- Please explain how this was tested -->
+
+### Issue number
+
+<!-- For example: "Closes #1234" -->
+
+### Checks
+
+- [ ] I've added new tests (if relevant)
+- [ ] I've added/updated the relevant documentation
+- [ ] I've run `make lint` and `make format`
+- [ ] I've made sure tests pass

README.md

Lines changed: 4 additions & 1 deletion
@@ -47,9 +47,11 @@ print(result.final_output)
 
 (_If running this, ensure you set the `OPENAI_API_KEY` environment variable_)
 
+(_For Jupyter notebook users, see [hello_world_jupyter.py](examples/basic/hello_world_jupyter.py)_)
+
 ## Handoffs example
 
-```py
+```python
 from agents import Agent, Runner
 import asyncio
 
@@ -146,6 +148,7 @@ The Agents SDK automatically traces your agent runs, making it easy to track and
 - [Comet Opik](https://www.comet.com/docs/opik/tracing/integrations/openai_agents)
 - [Keywords AI](https://docs.keywordsai.co/integration/development-frameworks/openai-agent)
 - [Logfire](https://logfire.pydantic.dev/docs/integrations/llms/openai/#openai-agents)
+- [Scorecard](https://docs.scorecard.io/docs/documentation/features/tracing#openai-agents-sdk-integration)
 
 For more details about how to customize or disable tracing, see [Tracing](http://openai.github.io/openai-agents-python/tracing).

docs/models.md

Lines changed: 35 additions & 15 deletions
@@ -53,21 +53,41 @@ async def main():
 
 ## Using other LLM providers
 
-Many providers also support the OpenAI API format, which means you can pass a `base_url` to the existing OpenAI model implementations and use them easily. `ModelSettings` is used to configure tuning parameters (e.g., temperature, top_p) for the model you select.
+You can use other LLM providers in 3 ways (examples [here](https://github.com/openai/openai-agents-python/tree/main/examples/model_providers/)):
 
-```python
-external_client = AsyncOpenAI(
-    api_key="EXTERNAL_API_KEY",
-    base_url="https://api.external.com/v1/",
-)
+1. [`set_default_openai_client`][agents.set_default_openai_client] is useful in cases where you want to globally use an instance of `AsyncOpenAI` as the LLM client. This is for cases where the LLM provider has an OpenAI-compatible API endpoint, and you can set the `base_url` and `api_key`. See a configurable example in [examples/model_providers/custom_example_global.py](https://github.com/openai/openai-agents-python/tree/main/examples/model_providers/custom_example_global.py).
+2. [`ModelProvider`][agents.models.interface.ModelProvider] is at the `Runner.run` level. This lets you say "use a custom model provider for all agents in this run". See a configurable example in [examples/model_providers/custom_example_provider.py](https://github.com/openai/openai-agents-python/tree/main/examples/model_providers/custom_example_provider.py).
+3. [`Agent.model`][agents.agent.Agent.model] lets you specify the model on a specific Agent instance. This enables you to mix and match different providers for different agents. See a configurable example in [examples/model_providers/custom_example_agent.py](https://github.com/openai/openai-agents-python/tree/main/examples/model_providers/custom_example_agent.py).
+
+In cases where you do not have an API key from `platform.openai.com`, we recommend disabling tracing via `set_tracing_disabled()`, or setting up a [different tracing processor](tracing.md).
+
+!!! note
+
+    In these examples, we use the Chat Completions API/model, because most LLM providers don't yet support the Responses API. If your LLM provider does support it, we recommend using Responses.
+
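The three levels added above form a precedence chain: a model set on a specific agent wins over a run-level provider, which wins over the global default. As a conceptual sketch only (pure Python; `FakeAgent`, `resolve_model`, and the model names are illustrative, not the SDK's internal code):

```python
# Conceptual sketch of the three configuration levels described above.
# NOT the SDK's actual resolution logic; all names here are illustrative.

from dataclasses import dataclass
from typing import Optional


@dataclass
class FakeAgent:
    name: str
    model: Optional[str] = None  # per-agent override (way 3)


def resolve_model(
    agent: FakeAgent,
    run_provider_model: Optional[str] = None,  # run-level ModelProvider (way 2)
    global_default: str = "default-model",     # global default client (way 1)
) -> str:
    """Most specific setting wins: agent > run > global."""
    if agent.model is not None:
        return agent.model
    if run_provider_model is not None:
        return run_provider_model
    return global_default


# An agent with its own model ignores run-level and global settings.
print(resolve_model(FakeAgent("a", model="my-local-model"), "provider-model"))
# An agent without one falls back to the run-level provider, then the global default.
print(resolve_model(FakeAgent("b"), "provider-model"))
print(resolve_model(FakeAgent("c")))
```

This is why mixing providers per agent (way 3) composes cleanly with a global default (way 1): each agent only opts out where it specifies its own model.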
+## Common issues with using other LLM providers
+
+### Tracing client error 401
+
+If you get errors related to tracing, this is because traces are uploaded to OpenAI servers, and you don't have an OpenAI API key. You have three options to resolve this:
+
+1. Disable tracing entirely: [`set_tracing_disabled(True)`][agents.set_tracing_disabled].
+2. Set an OpenAI key for tracing: [`set_tracing_export_api_key(...)`][agents.set_tracing_export_api_key]. This API key will only be used for uploading traces, and must be from [platform.openai.com](https://platform.openai.com/).
+3. Use a non-OpenAI trace processor. See the [tracing docs](tracing.md#custom-tracing-processors).
+
+### Responses API support
+
+The SDK uses the Responses API by default, but most other LLM providers don't yet support it. You may see 404s or similar errors as a result. To resolve this, you have two options:
+
+1. Call [`set_default_openai_api("chat_completions")`][agents.set_default_openai_api]. This works if you are setting `OPENAI_API_KEY` and `OPENAI_BASE_URL` via environment variables.
+2. Use [`OpenAIChatCompletionsModel`][agents.models.openai_chatcompletions.OpenAIChatCompletionsModel]. There are examples [here](https://github.com/openai/openai-agents-python/tree/main/examples/model_providers/).
+
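The decision those two options encode is simple enough to sketch as a tiny selector (illustrative only; `supports_responses` is a hypothetical capability flag, not a real SDK attribute):

```python
# Illustrative sketch of choosing between the Responses API and Chat Completions.
# `supports_responses` is a hypothetical flag, not an SDK field.

def select_api(supports_responses: bool) -> str:
    """Prefer the Responses API; fall back to Chat Completions otherwise."""
    return "responses" if supports_responses else "chat_completions"


print(select_api(True))   # providers with Responses API support
print(select_api(False))  # most other providers today
```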
+### Structured outputs support
+
+Some model providers don't have support for [structured outputs](https://platform.openai.com/docs/guides/structured-outputs). This sometimes results in an error that looks something like this:
 
-spanish_agent = Agent(
-    name="Spanish agent",
-    instructions="You only speak Spanish.",
-    model=OpenAIChatCompletionsModel(
-        model="EXTERNAL_MODEL_NAME",
-        openai_client=external_client,
-    ),
-    model_settings=ModelSettings(temperature=0.5),
-)
 ```
+BadRequestError: Error code: 400 - {'error': {'message': "'response_format.type' : value is not one of the allowed values ['text','json_object']", 'type': 'invalid_request_error'}}
+```
+
+This is a shortcoming of some model providers: they support JSON outputs, but don't allow you to specify the `json_schema` to use for the output. We are working on a fix for this, but we suggest relying on providers that do have support for JSON schema output, because otherwise your app will often break because of malformed JSON.
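To see why `json_object`-only support is fragile, here is a minimal pure-Python sketch (illustrative; `parse_weather` and the required keys are invented for this example, and real apps would typically use a validation library such as pydantic): a provider can return syntactically valid JSON that still violates the shape your app expects, a failure that schema-enforced output would have prevented at generation time.

```python
# Minimal sketch: valid JSON is not the same as schema-conforming JSON.
# Names and keys here are illustrative only.

import json

REQUIRED_KEYS = {"city", "temperature_c"}  # the shape our app expects


def parse_weather(raw: str) -> dict:
    data = json.loads(raw)  # raises ValueError (JSONDecodeError) on malformed JSON
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        # With json_object-only providers, this failure surfaces in YOUR app;
        # with json_schema support, the provider would not emit it at all.
        raise ValueError(f"missing keys: {sorted(missing)}")
    return data


print(parse_weather('{"city": "Tokyo", "temperature_c": 21}'))
```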

docs/tracing.md

Lines changed: 2 additions & 1 deletion
@@ -50,7 +50,7 @@ async def main():
 
     with trace("Joke workflow"): # (1)!
         first_result = await Runner.run(agent, "Tell me a joke")
-        second_result = await Runner.run(agent, f"Rate this joke: {first_output.final_output}")
+        second_result = await Runner.run(agent, f"Rate this joke: {first_result.final_output}")
         print(f"Joke: {first_result.final_output}")
         print(f"Rating: {second_result.final_output}")
 ```
@@ -93,5 +93,6 @@ External trace processors include:
 - [AgentOps](https://docs.agentops.ai/v1/integrations/agentssdk)
 - [Braintrust](https://braintrust.dev/docs/guides/traces/integrations#openai-agents-sdk)
 - [Comet Opik](https://www.comet.com/docs/opik/tracing/integrations/openai_agents)
+- [Scorecard](https://docs.scorecard.io/docs/documentation/features/tracing#openai-agents-sdk-integration)
 - [Keywords AI](https://docs.keywordsai.co/integration/development-frameworks/openai-agent)
 - [Pydantic Logfire](https://logfire.pydantic.dev/docs/integrations/llms/openai/#openai-agents)

examples/basic/hello_world_jupyter.py

Lines changed: 11 additions & 0 deletions
@@ -0,0 +1,11 @@
+from agents import Agent, Runner
+
+agent = Agent(name="Assistant", instructions="You are a helpful assistant")
+
+# Intended for Jupyter notebooks where there's an existing event loop
+result = await Runner.run(agent, "Write a haiku about recursion in programming.")  # type: ignore[top-level-await]  # noqa: F704
+print(result.final_output)
+
+# Code within code loops,
+# Infinite mirrors reflect—
+# Logic folds on self.

examples/model_providers/README.md

Lines changed: 19 additions & 0 deletions
@@ -0,0 +1,19 @@
+# Custom LLM providers
+
+The examples in this directory demonstrate how you might use a non-OpenAI LLM provider. To run them, first set a base URL, API key and model.
+
+```bash
+export EXAMPLE_BASE_URL="..."
+export EXAMPLE_API_KEY="..."
+export EXAMPLE_MODEL_NAME="..."
+```
+
+Then run the examples, e.g.:
+
+```
+python examples/model_providers/custom_example_provider.py
+
+Loops within themselves,
+Function calls its own being,
+Depth without ending.
+```
Lines changed: 55 additions & 0 deletions
@@ -0,0 +1,55 @@
+import asyncio
+import os
+
+from openai import AsyncOpenAI
+
+from agents import Agent, OpenAIChatCompletionsModel, Runner, function_tool, set_tracing_disabled
+
+BASE_URL = os.getenv("EXAMPLE_BASE_URL") or ""
+API_KEY = os.getenv("EXAMPLE_API_KEY") or ""
+MODEL_NAME = os.getenv("EXAMPLE_MODEL_NAME") or ""
+
+if not BASE_URL or not API_KEY or not MODEL_NAME:
+    raise ValueError(
+        "Please set EXAMPLE_BASE_URL, EXAMPLE_API_KEY, EXAMPLE_MODEL_NAME via env var or code."
+    )
+
+"""This example uses a custom provider for a specific agent. Steps:
+1. Create a custom OpenAI client.
+2. Create a `Model` that uses the custom client.
+3. Set the `model` on the Agent.
+
+Note that in this example, we disable tracing under the assumption that you don't have an API key
+from platform.openai.com. If you do have one, you can either set the `OPENAI_API_KEY` env var
+or call set_tracing_export_api_key() to set a tracing specific key.
+"""
+client = AsyncOpenAI(base_url=BASE_URL, api_key=API_KEY)
+set_tracing_disabled(disabled=True)
+
+# An alternate approach that would also work:
+# PROVIDER = OpenAIProvider(openai_client=client)
+# agent = Agent(..., model="some-custom-model")
+# Runner.run(agent, ..., run_config=RunConfig(model_provider=PROVIDER))
+
+
+@function_tool
+def get_weather(city: str):
+    print(f"[debug] getting weather for {city}")
+    return f"The weather in {city} is sunny."
+
+
+async def main():
+    # This agent will use the custom LLM provider
+    agent = Agent(
+        name="Assistant",
+        instructions="You only respond in haikus.",
+        model=OpenAIChatCompletionsModel(model=MODEL_NAME, openai_client=client),
+        tools=[get_weather],
+    )
+
+    result = await Runner.run(agent, "What's the weather in Tokyo?")
+    print(result.final_output)
+
+
+if __name__ == "__main__":
+    asyncio.run(main())
Lines changed: 63 additions & 0 deletions
@@ -0,0 +1,63 @@
+import asyncio
+import os
+
+from openai import AsyncOpenAI
+
+from agents import (
+    Agent,
+    Runner,
+    function_tool,
+    set_default_openai_api,
+    set_default_openai_client,
+    set_tracing_disabled,
+)
+
+BASE_URL = os.getenv("EXAMPLE_BASE_URL") or ""
+API_KEY = os.getenv("EXAMPLE_API_KEY") or ""
+MODEL_NAME = os.getenv("EXAMPLE_MODEL_NAME") or ""
+
+if not BASE_URL or not API_KEY or not MODEL_NAME:
+    raise ValueError(
+        "Please set EXAMPLE_BASE_URL, EXAMPLE_API_KEY, EXAMPLE_MODEL_NAME via env var or code."
+    )
+
+
+"""This example uses a custom provider for all requests by default. We do three things:
+1. Create a custom client.
+2. Set it as the default OpenAI client, and don't use it for tracing.
+3. Set the default API as Chat Completions, as most LLM providers don't yet support Responses API.
+
+Note that in this example, we disable tracing under the assumption that you don't have an API key
+from platform.openai.com. If you do have one, you can either set the `OPENAI_API_KEY` env var
+or call set_tracing_export_api_key() to set a tracing specific key.
+"""
+
+client = AsyncOpenAI(
+    base_url=BASE_URL,
+    api_key=API_KEY,
+)
+set_default_openai_client(client=client, use_for_tracing=False)
+set_default_openai_api("chat_completions")
+set_tracing_disabled(disabled=True)
+
+
+@function_tool
+def get_weather(city: str):
+    print(f"[debug] getting weather for {city}")
+    return f"The weather in {city} is sunny."
+
+
+async def main():
+    agent = Agent(
+        name="Assistant",
+        instructions="You only respond in haikus.",
+        model=MODEL_NAME,
+        tools=[get_weather],
+    )
+
+    result = await Runner.run(agent, "What's the weather in Tokyo?")
+    print(result.final_output)
+
+
+if __name__ == "__main__":
+    asyncio.run(main())
Lines changed: 77 additions & 0 deletions
@@ -0,0 +1,77 @@
+from __future__ import annotations
+
+import asyncio
+import os
+
+from openai import AsyncOpenAI
+
+from agents import (
+    Agent,
+    Model,
+    ModelProvider,
+    OpenAIChatCompletionsModel,
+    RunConfig,
+    Runner,
+    function_tool,
+    set_tracing_disabled,
+)
+
+BASE_URL = os.getenv("EXAMPLE_BASE_URL") or ""
+API_KEY = os.getenv("EXAMPLE_API_KEY") or ""
+MODEL_NAME = os.getenv("EXAMPLE_MODEL_NAME") or ""
+
+if not BASE_URL or not API_KEY or not MODEL_NAME:
+    raise ValueError(
+        "Please set EXAMPLE_BASE_URL, EXAMPLE_API_KEY, EXAMPLE_MODEL_NAME via env var or code."
+    )
+
+
+"""This example uses a custom provider for some calls to Runner.run(), and direct calls to OpenAI for
+others. Steps:
+1. Create a custom OpenAI client.
+2. Create a ModelProvider that uses the custom client.
+3. Use the ModelProvider in calls to Runner.run(), only when we want to use the custom LLM provider.
+
+Note that in this example, we disable tracing under the assumption that you don't have an API key
+from platform.openai.com. If you do have one, you can either set the `OPENAI_API_KEY` env var
+or call set_tracing_export_api_key() to set a tracing specific key.
+"""
+client = AsyncOpenAI(base_url=BASE_URL, api_key=API_KEY)
+set_tracing_disabled(disabled=True)
+
+
+class CustomModelProvider(ModelProvider):
+    def get_model(self, model_name: str | None) -> Model:
+        return OpenAIChatCompletionsModel(model=model_name or MODEL_NAME, openai_client=client)
+
+
+CUSTOM_MODEL_PROVIDER = CustomModelProvider()
+
+
+@function_tool
+def get_weather(city: str):
+    print(f"[debug] getting weather for {city}")
+    return f"The weather in {city} is sunny."
+
+
+async def main():
+    agent = Agent(name="Assistant", instructions="You only respond in haikus.", tools=[get_weather])
+
+    # This will use the custom model provider
+    result = await Runner.run(
+        agent,
+        "What's the weather in Tokyo?",
+        run_config=RunConfig(model_provider=CUSTOM_MODEL_PROVIDER),
+    )
+    print(result.final_output)
+
+    # If you uncomment this, it will use OpenAI directly, not the custom provider
+    # result = await Runner.run(
+    #     agent,
+    #     "What's the weather in Tokyo?",
+    # )
+    # print(result.final_output)
+
+
+if __name__ == "__main__":
+    asyncio.run(main())

pyproject.toml

Lines changed: 1 addition & 1 deletion
@@ -1,6 +1,6 @@
 [project]
 name = "openai-agents"
-version = "0.0.3"
+version = "0.0.4"
 description = "OpenAI Agents SDK"
 readme = "README.md"
 requires-python = ">=3.9"

src/agents/__init__.py

Lines changed: 10 additions & 4 deletions
@@ -92,13 +92,19 @@
 from .usage import Usage
 
 
-def set_default_openai_key(key: str) -> None:
-    """Set the default OpenAI API key to use for LLM requests and tracing. This is only necessary if
-    the OPENAI_API_KEY environment variable is not already set.
+def set_default_openai_key(key: str, use_for_tracing: bool = True) -> None:
+    """Set the default OpenAI API key to use for LLM requests (and optionally tracing). This is
+    only necessary if the OPENAI_API_KEY environment variable is not already set.
 
     If provided, this key will be used instead of the OPENAI_API_KEY environment variable.
+
+    Args:
+        key: The OpenAI key to use.
+        use_for_tracing: Whether to also use this key to send traces to OpenAI. Defaults to True.
+            If False, you'll either need to set the OPENAI_API_KEY environment variable or call
+            set_tracing_export_api_key() with the API key you want to use for tracing.
     """
-    _config.set_default_openai_key(key)
+    _config.set_default_openai_key(key, use_for_tracing)
 
 
 def set_default_openai_client(client: AsyncOpenAI, use_for_tracing: bool = True) -> None:

src/agents/_config.py

Lines changed: 6 additions & 3 deletions
@@ -5,15 +5,18 @@
 from .tracing import set_tracing_export_api_key
 
 
-def set_default_openai_key(key: str) -> None:
-    set_tracing_export_api_key(key)
+def set_default_openai_key(key: str, use_for_tracing: bool) -> None:
     _openai_shared.set_default_openai_key(key)
 
+    if use_for_tracing:
+        set_tracing_export_api_key(key)
+
 
 def set_default_openai_client(client: AsyncOpenAI, use_for_tracing: bool) -> None:
+    _openai_shared.set_default_openai_client(client)
+
     if use_for_tracing:
         set_tracing_export_api_key(client.api_key)
-    _openai_shared.set_default_openai_client(client)
 
 
 def set_default_openai_api(api: Literal["chat_completions", "responses"]) -> None:
