Commit d4c20b7

Translate the documentation into Chinese, covering the agents, configuration, context management, examples, and guardrails sections

1 parent d3be189 commit d4c20b7

File tree

5 files changed (+147, -148 lines)


docs/zh/agents.md: 30 additions & 28 deletions
````diff
@@ -1,14 +1,14 @@
-# Agents
+# 智能体
 
-Agents are the core building block in your apps. An agent is a large language model (LLM), configured with instructions and tools.
+智能体是应用程序中的核心构建模块。一个智能体就是一个配置了指令和工具的大型语言模型(LLM)。
 
-## Basic configuration
+## 基础配置
 
-The most common properties of an agent you'll configure are:
+智能体最常配置的属性包括:
 
-- `instructions`: also known as a developer message or system prompt.
-- `model`: which LLM to use, and optional `model_settings` to configure model tuning parameters like temperature, top_p, etc.
-- `tools`: Tools that the agent can use to achieve its tasks.
+- `instructions`: 也称为开发者消息或系统提示。
+- `model`: 使用的LLM模型,以及可选的`model_settings`,用于配置temperature、top_p等模型调优参数。
+- `tools`: 智能体可用于完成任务的工具。
 
 ```python
 from agents import Agent, ModelSettings, function_tool
````
````diff
@@ -27,7 +27,7 @@ agent = Agent(
 
 ## Context
 
-Agents are generic on their `context` type. Context is a dependency-injection tool: it's an object you create and pass to `Runner.run()`, that is passed to every agent, tool, handoff etc, and it serves as a grab bag of dependencies and state for the agent run. You can provide any Python object as the context.
+智能体的`context`类型是泛型的。上下文是一个依赖注入工具:它是你创建并传递给`Runner.run()`的对象,会传递给每个智能体、工具、交接等,作为智能体运行的依赖和状态的容器。你可以提供任何Python对象作为上下文。
 
 ```python
 @dataclass
````
````diff
@@ -43,9 +43,9 @@ agent = Agent[UserContext](
 )
 ```
 
-## Output types
+## 输出类型
 
-By default, agents produce plain text (i.e. `str`) outputs. If you want the agent to produce a particular type of output, you can use the `output_type` parameter. A common choice is to use [Pydantic](https://docs.pydantic.dev/) objects, but we support any type that can be wrapped in a Pydantic [TypeAdapter](https://docs.pydantic.dev/latest/api/type_adapter/) - dataclasses, lists, TypedDict, etc.
+默认情况下,智能体产生纯文本(即`str`)输出。如果你希望智能体产生特定类型的输出,可以使用`output_type`参数。常见的选择是使用[Pydantic](https://docs.pydantic.dev/)对象,但我们支持任何可以用Pydantic [TypeAdapter](https://docs.pydantic.dev/latest/api/type_adapter/)包装的类型:数据类、列表、TypedDict等。
 
 ```python
 from pydantic import BaseModel
````
````diff
@@ -66,11 +66,13 @@ agent = Agent(
 
 !!! note
 
-    When you pass an `output_type`, that tells the model to use [structured outputs](https://platform.openai.com/docs/guides/structured-outputs) instead of regular plain text responses.
+    当你传递`output_type`时,这会告诉模型使用[结构化输出](https://platform.openai.com/docs/guides/structured-outputs)而不是常规的纯文本响应。
 
 ## Handoffs
 
-Handoffs are sub-agents that the agent can delegate to. You provide a list of handoffs, and the agent can choose to delegate to them if relevant. This is a powerful pattern that allows orchestrating modular, specialized agents that excel at a single task. Read more in the [handoffs](handoffs.md) documentation.
+交接(Handoffs)是智能体可以委托的子智能体。你提供一个交接列表,智能体可以在相关时选择委托给它们。这是一种强大的模式,可以协调模块化、专业化的智能体,每个智能体都擅长单一任务。
+
+更多信息请参阅[交接](handoffs.md)文档。
 
 ```python
 from agents import Agent
````
````diff
@@ -89,9 +91,9 @@ triage_agent = Agent(
 )
 ```
 
-## Dynamic instructions
+## 动态指令
 
-In most cases, you can provide instructions when you create the agent. However, you can also provide dynamic instructions via a function. The function will receive the agent and context, and must return the prompt. Both regular and `async` functions are accepted.
+在大多数情况下,你可以在创建智能体时提供指令。但是,你也可以通过函数提供动态指令。该函数将接收智能体和上下文,并且必须返回提示。常规函数和`async`函数均受支持。
 
 ```python
 def dynamic_instructions(
````
````diff
@@ -106,17 +108,17 @@ agent = Agent[UserContext](
 )
 ```
 
-## Lifecycle events (hooks)
+## 生命周期事件(钩子)
 
-Sometimes, you want to observe the lifecycle of an agent. For example, you may want to log events, or pre-fetch data when certain events occur. You can hook into the agent lifecycle with the `hooks` property. Subclass the [`AgentHooks`][agents.lifecycle.AgentHooks] class, and override the methods you're interested in.
+有时,你可能想要观察智能体的生命周期。例如,你可能想要记录事件,或在某些事件发生时预取数据。你可以通过`hooks`属性钩入智能体生命周期。子类化[`AgentHooks`][agents.lifecycle.AgentHooks]类,并重写你感兴趣的方法。
 
-## Guardrails
+## 防护栏
 
-Guardrails allow you to run checks/validations on user input, in parallel to the agent running. For example, you could screen the user's input for relevance. Read more in the [guardrails](guardrails.md) documentation.
+防护栏允许你在智能体运行的同时对用户输入运行检查/验证。例如,你可以筛选用户输入的相关性。更多信息请参阅[防护栏](guardrails.md)文档。
 
-## Cloning/copying agents
+## 克隆/复制智能体
 
-By using the `clone()` method on an agent, you can duplicate an Agent, and optionally change any properties you like.
+通过在智能体上使用`clone()`方法,你可以复制一个智能体,并可选地更改任何你想要的属性。
 
 ```python
 pirate_agent = Agent(
````
````diff
@@ -131,17 +133,17 @@ robot_agent = pirate_agent.clone(
 )
 ```
 
-## Forcing tool use
+## 强制使用工具
 
-Supplying a list of tools doesn't always mean the LLM will use a tool. You can force tool use by setting [`ModelSettings.tool_choice`][agents.model_settings.ModelSettings.tool_choice]. Valid values are:
+提供工具列表并不总是意味着LLM会使用工具。你可以通过设置[`ModelSettings.tool_choice`][agents.model_settings.ModelSettings.tool_choice]来强制使用工具。有效值为:
 
-1. `auto`, which allows the LLM to decide whether or not to use a tool.
-2. `required`, which requires the LLM to use a tool (but it can intelligently decide which tool).
-3. `none`, which requires the LLM to _not_ use a tool.
-4. Setting a specific string e.g. `my_tool`, which requires the LLM to use that specific tool.
+1. `auto`,允许LLM决定是否使用工具。
+2. `required`,要求LLM必须使用工具(但可以智能地决定使用哪个工具)。
+3. `none`,要求LLM不使用工具。
+4. 设置特定字符串如`my_tool`,要求LLM必须使用该特定工具。
 
 !!! note
 
-    To prevent infinite loops, the framework automatically resets `tool_choice` to "auto" after a tool call. This behavior is configurable via [`agent.reset_tool_choice`][agents.agent.Agent.reset_tool_choice]. The infinite loop is because tool results are sent to the LLM, which then generates another tool call because of `tool_choice`, ad infinitum.
+    为了防止无限循环,框架在工具调用后会自动将`tool_choice`重置为"auto"。这种行为可以通过[`agent.reset_tool_choice`][agents.agent.Agent.reset_tool_choice]配置。之所以会出现无限循环,是因为工具结果被发送给LLM,LLM又由于`tool_choice`生成另一个工具调用,如此循环往复。
 
-    If you want the Agent to completely stop after a tool call (rather than continuing with auto mode), you can set [`Agent.tool_use_behavior="stop_on_first_tool"`] which will directly use the tool output as the final response without further LLM processing.
+    如果你希望智能体在工具调用后完全停止(而不是继续使用自动模式),你可以设置[`Agent.tool_use_behavior="stop_on_first_tool"`],这将直接使用工具输出作为最终响应,而不进行进一步的LLM处理。
````
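The four `tool_choice` settings translated in the hunk above can be illustrated with a small, framework-free sketch. The `pick_tool` helper is purely illustrative (the real selection happens inside the SDK's model layer), but its branches mirror the documented semantics:

```python
from typing import Optional

def pick_tool(tool_choice: str, tools: list[str], model_pick: Optional[str]) -> Optional[str]:
    """Illustrates the four documented tool_choice values.

    tool_choice -- "auto", "required", "none", or a specific tool name
    tools       -- names of the tools supplied to the agent
    model_pick  -- the tool the LLM would choose on its own (None = no tool)
    """
    if tool_choice == "none":
        return None                    # the LLM must not use a tool
    if tool_choice == "auto":
        return model_pick              # the LLM decides; may be None
    if tool_choice == "required":
        # Some tool must be used; falling back to the first tool here is a
        # simplification, the real model is constrained to choose one itself.
        return model_pick or tools[0]
    return tool_choice                 # a specific name forces that tool

print(pick_tool("none", ["get_weather"], "get_weather"))   # None
print(pick_tool("required", ["get_weather"], None))        # get_weather
print(pick_tool("my_tool", ["my_tool"], None))             # my_tool
```

Note how, per the `!!! note` in the hunk, the framework resets `tool_choice` back to `"auto"` after a call precisely so the `"required"` and specific-name branches cannot fire forever.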

docs/zh/config.md: 22 additions & 23 deletions
````diff
@@ -1,16 +1,16 @@
-# Configuring the SDK
+# SDK配置指南
 
-## API keys and clients
+## API密钥与客户端配置
 
-By default, the SDK looks for the `OPENAI_API_KEY` environment variable for LLM requests and tracing, as soon as it is imported. If you are unable to set that environment variable before your app starts, you can use the [set_default_openai_key()][agents.set_default_openai_key] function to set the key.
+默认情况下,SDK会在导入时自动查找`OPENAI_API_KEY`环境变量,用于LLM请求和追踪。如果无法在应用启动前设置该环境变量,可以使用[set_default_openai_key()][agents.set_default_openai_key]函数设置密钥。
 
 ```python
 from agents import set_default_openai_key
 
 set_default_openai_key("sk-...")
 ```
 
-Alternatively, you can also configure an OpenAI client to be used. By default, the SDK creates an `AsyncOpenAI` instance, using the API key from the environment variable or the default key set above. You can change this by using the [set_default_openai_client()][agents.set_default_openai_client] function.
+此外,您也可以配置自定义的OpenAI客户端。默认情况下,SDK会创建一个`AsyncOpenAI`实例,使用环境变量中的API密钥或上述设置的默认密钥。您可以通过[set_default_openai_client()][agents.set_default_openai_client]函数修改此行为。
 
 ```python
 from openai import AsyncOpenAI
````
````diff
@@ -20,75 +20,74 @@ custom_client = AsyncOpenAI(base_url="...", api_key="...")
 set_default_openai_client(custom_client)
 ```
 
-Finally, you can also customize the OpenAI API that is used. By default, we use the OpenAI Responses API. You can override this to use the Chat Completions API by using the [set_default_openai_api()][agents.set_default_openai_api] function.
+最后,您还可以自定义使用的OpenAI API。默认情况下,我们使用OpenAI Responses API。您可以通过[set_default_openai_api()][agents.set_default_openai_api]函数切换为Chat Completions API。
 
 ```python
 from agents import set_default_openai_api
 
 set_default_openai_api("chat_completions")
 ```
 
-## Tracing
+## 追踪功能
 
-Tracing is enabled by default. It uses the OpenAI API keys from the section above by default (i.e. the environment variable or the default key you set). You can specifically set the API key used for tracing by using the [`set_tracing_export_api_key`][agents.set_tracing_export_api_key] function.
+追踪功能默认启用,默认使用上一节中的OpenAI API密钥(即环境变量或您设置的默认密钥)。您可以使用[`set_tracing_export_api_key`][agents.set_tracing_export_api_key]函数专门设置用于追踪的API密钥。
 
 ```python
 from agents import set_tracing_export_api_key
 
 set_tracing_export_api_key("sk-...")
 ```
 
-You can also disable tracing entirely by using the [`set_tracing_disabled()`][agents.set_tracing_disabled] function.
+您也可以通过[`set_tracing_disabled()`][agents.set_tracing_disabled]函数完全禁用追踪功能。
 
 ```python
 from agents import set_tracing_disabled
 
 set_tracing_disabled(True)
 ```
 
-## Debug logging
+## 调试日志
 
-The SDK has two Python loggers without any handlers set. By default, this means that warnings and errors are sent to `stdout`, but other logs are suppressed.
+SDK内置两个未设置处理器的Python日志记录器。默认情况下,这意味着警告和错误会输出到`stdout`,而其他日志会被抑制。
 
-To enable verbose logging, use the [`enable_verbose_stdout_logging()`][agents.enable_verbose_stdout_logging] function.
+要启用详细日志输出,请使用[`enable_verbose_stdout_logging()`][agents.enable_verbose_stdout_logging]函数。
 
 ```python
 from agents import enable_verbose_stdout_logging
 
 enable_verbose_stdout_logging()
 ```
 
-Alternatively, you can customize the logs by adding handlers, filters, formatters, etc. You can read more in the [Python logging guide](https://docs.python.org/3/howto/logging.html).
+或者,您也可以通过添加处理器、过滤器、格式化程序等自定义日志。更多信息请参阅[Python日志指南](https://docs.python.org/3/howto/logging.html)。
 
 ```python
 import logging
 
-logger = logging.getLogger("openai.agents") # or openai.agents.tracing for the Tracing logger
+logger = logging.getLogger("openai.agents") # 或使用openai.agents.tracing获取追踪日志记录器
 
-# To make all logs show up
+# 显示所有日志
 logger.setLevel(logging.DEBUG)
-# To make info and above show up
+# 显示info及以上级别日志
 logger.setLevel(logging.INFO)
-# To make warning and above show up
+# 显示warning及以上级别日志
 logger.setLevel(logging.WARNING)
-# etc
+# 等等
 
-# You can customize this as needed, but this will output to `stderr` by default
+# 您可以根据需要自定义,默认会输出到`stderr`
 logger.addHandler(logging.StreamHandler())
 ```
 
-### Sensitive data in logs
+### 日志中的敏感数据
 
-Certain logs may contain sensitive data (for example, user data). If you want to disable this data from being logged, set the following environment variables.
+某些日志可能包含敏感数据(例如用户数据)。如果您希望禁止记录这些数据,请设置以下环境变量。
 
-To disable logging LLM inputs and outputs:
+禁止记录LLM输入输出:
 
 ```bash
 export OPENAI_AGENTS_DONT_LOG_MODEL_DATA=1
 ```
 
-To disable logging tool inputs and outputs:
+禁止记录工具输入输出:
 
 ```bash
 export OPENAI_AGENTS_DONT_LOG_TOOL_DATA=1
-```
````
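The two `OPENAI_AGENTS_DONT_LOG_*` variables in the hunk above act as simple on/off flags read from the environment. A minimal sketch of that pattern follows; the helper name and the exact truthiness rule are assumptions for illustration, not the SDK's internal implementation:

```python
import os

def logging_disabled(flag_name: str) -> bool:
    # Hypothetical check: treat any non-empty value ("1", "true", ...)
    # as "suppress this category of log data".
    return bool(os.environ.get(flag_name, ""))

# Mirrors `export OPENAI_AGENTS_DONT_LOG_MODEL_DATA=1` from the docs.
os.environ["OPENAI_AGENTS_DONT_LOG_MODEL_DATA"] = "1"
print(logging_disabled("OPENAI_AGENTS_DONT_LOG_MODEL_DATA"))  # True
```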

docs/zh/context.md: 27 additions & 27 deletions
````diff
@@ -1,29 +1,29 @@
-# Context management
+# 上下文管理
 
-Context is an overloaded term. There are two main classes of context you might care about:
+"上下文"(Context)是一个多义词。在开发中主要涉及两类上下文:
 
-1. Context available locally to your code: this is data and dependencies you might need when tool functions run, during callbacks like `on_handoff`, in lifecycle hooks, etc.
-2. Context available to LLMs: this is data the LLM sees when generating a response.
+1. 本地代码可用的上下文:指工具函数运行时、`on_handoff`等回调中或生命周期钩子中可能需要的数据和依赖项。
+2. LLM可用的上下文:指LLM生成响应时能看到的数据。
 
-## Local context
+## 本地上下文
 
-This is represented via the [`RunContextWrapper`][agents.run_context.RunContextWrapper] class and the [`context`][agents.run_context.RunContextWrapper.context] property within it. The way this works is:
+通过[`RunContextWrapper`][agents.run_context.RunContextWrapper]类和其中的[`context`][agents.run_context.RunContextWrapper.context]属性实现。工作机制如下:
 
-1. You create any Python object you want. A common pattern is to use a dataclass or a Pydantic object.
-2. You pass that object to the various run methods (e.g. `Runner.run(..., **context=whatever**))`.
-3. All your tool calls, lifecycle hooks etc will be passed a wrapper object, `RunContextWrapper[T]`, where `T` represents your context object type which you can access via `wrapper.context`.
+1. 创建任意Python对象,常用模式是使用dataclass或Pydantic对象。
+2. 将该对象传递给各种运行方法(例如`Runner.run(..., **context=whatever**)`)。
+3. 所有工具调用、生命周期钩子等都会收到一个包装器对象`RunContextWrapper[T]`,其中`T`表示您的上下文对象类型,可以通过`wrapper.context`访问。
 
-The **most important** thing to be aware of: every agent, tool function, lifecycle etc for a given agent run must use the same _type_ of context.
+**最重要**的一点是:给定代理运行中的每个代理、工具函数、生命周期等必须使用相同_类型_的上下文。
 
-You can use the context for things like:
+上下文可用于以下场景:
 
-- Contextual data for your run (e.g. things like a username/uid or other information about the user)
-- Dependencies (e.g. logger objects, data fetchers, etc)
-- Helper functions
+- 运行时的上下文数据(例如用户名/用户ID或其他用户信息)
+- 依赖项(例如日志记录器对象、数据获取器等)
+- 辅助函数
 
-!!! danger "Note"
+!!! danger "注意"
 
-    The context object is **not** sent to the LLM. It is purely a local object that you can read from, write to and call methods on it.
+    上下文对象**不会**发送给LLM。它只是一个本地对象,您可以读取、写入并调用其方法。
 
 ```python
 import asyncio
````
````diff
@@ -61,17 +61,17 @@ if __name__ == "__main__":
     asyncio.run(main())
 ```
 
-1. This is the context object. We've used a dataclass here, but you can use any type.
-2. This is a tool. You can see it takes a `RunContextWrapper[UserInfo]`. The tool implementation reads from the context.
-3. We mark the agent with the generic `UserInfo`, so that the typechecker can catch errors (for example, if we tried to pass a tool that took a different context type).
-4. The context is passed to the `run` function.
-5. The agent correctly calls the tool and gets the age.
+1. 这是上下文对象。我们在此使用了dataclass,但您可以使用任何类型。
+2. 这是一个工具函数。可以看到它接收`RunContextWrapper[UserInfo]`参数,工具实现从上下文中读取数据。
+3. 我们用泛型`UserInfo`标记代理,以便类型检查器能捕获错误(例如,如果我们尝试传递使用不同上下文类型的工具)。
+4. 上下文被传递给`run`函数。
+5. 代理正确调用工具并获取年龄信息。
 
-## Agent/LLM context
+## Agent/LLM上下文
 
-When an LLM is called, the **only** data it can see is from the conversation history. This means that if you want to make some new data available to the LLM, you must do it in a way that makes it available in that history. There are a few ways to do this:
+当调用LLM时,它**唯一**能看到的数据来自对话历史。这意味着如果您想让LLM获取新数据,必须通过某种方式将其加入对话历史。有几种实现方法:
 
-1. You can add it to the Agent `instructions`. This is also known as a "system prompt" or "developer message". System prompts can be static strings, or they can be dynamic functions that receive the context and output a string. This is a common tactic for information that is always useful (for example, the user's name or the current date).
-2. Add it to the `input` when calling the `Runner.run` functions. This is similar to the `instructions` tactic, but allows you to have messages that are lower in the [chain of command](https://cdn.openai.com/spec/model-spec-2024-05-08.html#follow-the-chain-of-command).
-3. Expose it via function tools. This is useful for _on-demand_ context - the LLM decides when it needs some data, and can call the tool to fetch that data.
-4. Use retrieval or web search. These are special tools that are able to fetch relevant data from files or databases (retrieval), or from the web (web search). This is useful for "grounding" the response in relevant contextual data.
+1. 可以将其添加到Agent的`instructions`中。这也被称为"系统提示"或"开发者消息"。系统提示可以是静态字符串,也可以是接收上下文并输出字符串的动态函数。这是处理始终有用的信息(例如用户名或当前日期)的常用策略。
+2. 在调用`Runner.run`函数时将其添加到`input`中。这与`instructions`策略类似,但允许您提供在[命令链](https://cdn.openai.com/spec/model-spec-2024-05-08.html#follow-the-chain-of-command)中位置较低的消息。
+3. 通过函数工具公开它。这对_按需_上下文很有用:LLM决定何时需要某些数据,并可以调用工具来获取该数据。
+4. 使用检索或网络搜索。这些是特殊工具,能够从文件或数据库(检索)或从网络(网络搜索)获取相关数据。这对于将响应"锚定"在相关上下文数据中很有用。
````
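Method 1 in the list above (injecting always-useful context into the system prompt) can be sketched without the SDK. `UserContext` and `dynamic_instructions` are illustrative names only; in the real SDK the dynamic-instructions function also receives the agent, and the context arrives wrapped in `RunContextWrapper`:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class UserContext:
    # Illustrative stand-in for the object you would pass to
    # Runner.run(..., context=...).
    username: str

def dynamic_instructions(ctx: UserContext) -> str:
    # Builds a system prompt so the user's name and today's date are
    # always part of the conversation history the LLM sees.
    return (
        f"You are a helpful assistant. The user's name is {ctx.username}. "
        f"Today's date is {date.today().isoformat()}."
    )

print(dynamic_instructions(UserContext(username="Alice")))
```

Because the prompt is rebuilt on every run, per-user and per-day facts stay current without the tool-call round trip that method 3 (on-demand context) requires.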

0 commit comments