## Basic configuration

A basic agent with instructions, a model, and a function tool:

```python
from agents import Agent, ModelSettings, function_tool


@function_tool
def get_weather(city: str) -> str:
    """Returns weather info for the specified city."""
    return f"The weather in {city} is sunny"


agent = Agent(
    name="Haiku agent",
    instructions="Always respond in haiku form",
    model="o3-mini",
    tools=[get_weather],
)
```
## Context
Agents are generic on their `context` type. Context is a dependency-injection tool: it's an object you create and pass to `Runner.run()`; it is passed to every agent, tool, and handoff, and serves as a grab bag of dependencies and state for the agent run. You can provide any Python object as the context.
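For example, a minimal sketch (the `UserContext` type and its fields are illustrative):

```python
from dataclasses import dataclass

from agents import Agent


@dataclass
class UserContext:
    uid: str
    is_pro_user: bool


# Agents are generic over the context type; tools and hooks
# for this agent receive a RunContextWrapper[UserContext].
agent = Agent[UserContext](
    name="Support agent",
    instructions="Help the user with their questions.",
)
```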
## Output types

By default, agents produce plain text (i.e. `str`) outputs. If you want the agent to produce a particular type of output, you can use the `output_type` parameter. A common choice is to use [Pydantic](https://docs.pydantic.dev/) objects, but we support any type that can be wrapped in a Pydantic [TypeAdapter](https://docs.pydantic.dev/latest/api/type_adapter/) - dataclasses, lists, TypedDict, etc.
When you pass an `output_type`, that tells the model to use [structured outputs](https://platform.openai.com/docs/guides/structured-outputs) instead of regular plain text responses.
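For example, a sketch using a Pydantic model as the output type (the `CalendarEvent` model is illustrative):

```python
from pydantic import BaseModel

from agents import Agent


class CalendarEvent(BaseModel):
    name: str
    date: str
    participants: list[str]


agent = Agent(
    name="Calendar extractor",
    instructions="Extract calendar events from text",
    output_type=CalendarEvent,
)
```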
## Handoffs

Handoffs are sub-agents that the agent can delegate to. You provide a list of handoffs, and the agent can choose to delegate to them if relevant. This is a powerful pattern that allows orchestrating modular, specialized agents that excel at a single task. Read more in the [handoffs](handoffs.md) documentation.
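For example, a sketch of a triage agent delegating to illustrative specialist agents:

```python
from agents import Agent

booking_agent = Agent(
    name="Booking agent",
    instructions="Help the user with booking questions.",
)
refund_agent = Agent(
    name="Refund agent",
    instructions="Help the user with refund questions.",
)

# The triage agent can hand the conversation off to either specialist.
triage_agent = Agent(
    name="Triage agent",
    instructions="Route the user to the right specialist agent.",
    handoffs=[booking_agent, refund_agent],
)
```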
## Dynamic instructions

In most cases, you can provide instructions when you create the agent. However, you can also provide dynamic instructions via a function. The function will receive the agent and context, and must return the prompt. Both regular and `async` functions are accepted.
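For example, a minimal sketch (the `UserContext` dataclass is illustrative):

```python
from dataclasses import dataclass

from agents import Agent, RunContextWrapper


@dataclass
class UserContext:
    name: str


def dynamic_instructions(
    context: RunContextWrapper[UserContext], agent: Agent[UserContext]
) -> str:
    # Build the system prompt from the run's context object.
    return f"The user's name is {context.context.name}. Help them with their questions."


agent = Agent[UserContext](
    name="Assistant",
    instructions=dynamic_instructions,
)
```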
## Lifecycle events (hooks)

Sometimes, you want to observe the lifecycle of an agent. For example, you may want to log events, or pre-fetch data when certain events occur. You can hook into the agent lifecycle with the `hooks` property. Subclass the [`AgentHooks`][agents.lifecycle.AgentHooks] class, and override the methods you're interested in.
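A sketch of simple logging hooks, assuming `on_start`/`on_end` methods on the base class (check [`AgentHooks`][agents.lifecycle.AgentHooks] for the exact method set and signatures):

```python
from typing import Any

from agents import Agent, RunContextWrapper
from agents.lifecycle import AgentHooks


class LoggingAgentHooks(AgentHooks):
    # Called when this agent starts handling a run.
    async def on_start(self, context: RunContextWrapper, agent: Agent) -> None:
        print(f"{agent.name} started")

    # Called when this agent produces its final output.
    async def on_end(self, context: RunContextWrapper, agent: Agent, output: Any) -> None:
        print(f"{agent.name} finished with output: {output}")


agent = Agent(name="Assistant", hooks=LoggingAgentHooks())
```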
## Guardrails

Guardrails allow you to run checks/validations on user input, in parallel to the agent running. For example, you could screen the user's input for relevance. Read more in the [guardrails](guardrails.md) documentation.
## Forcing tool use

Supplying a list of tools doesn't always mean the LLM will use a tool. You can force tool use by setting [`ModelSettings.tool_choice`][agents.model_settings.ModelSettings.tool_choice]. Valid values are:

1. `auto`, which allows the LLM to decide whether or not to use a tool.
2. `required`, which requires the LLM to use a tool (but it can intelligently decide which tool).
3. `none`, which requires the LLM to _not_ use a tool.
4. Setting a specific string e.g. `my_tool`, which requires the LLM to use that specific tool.
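For example, a sketch that forces tool use (the `get_weather` tool is illustrative):

```python
from agents import Agent, ModelSettings, function_tool


@function_tool
def get_weather(city: str) -> str:
    return f"The weather in {city} is sunny"


agent = Agent(
    name="Weather agent",
    instructions="Answer weather questions.",
    tools=[get_weather],
    model_settings=ModelSettings(tool_choice="required"),
)
```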
!!! note

    To prevent infinite loops, the framework automatically resets `tool_choice` to "auto" after a tool call; otherwise, the tool result is sent to the LLM, which, because of `tool_choice`, generates another tool call, ad infinitum. This behavior is configurable via [`agent.reset_tool_choice`][agents.agent.Agent.reset_tool_choice].

    If you want the agent to stop completely after a tool call (rather than continuing in auto mode), you can set `Agent.tool_use_behavior="stop_on_first_tool"`, which uses the tool output directly as the final response without further LLM processing.
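For example, a sketch of stopping after the first tool call (reusing an illustrative `get_weather` tool):

```python
from agents import Agent, function_tool


@function_tool
def get_weather(city: str) -> str:
    return f"The weather in {city} is sunny"


# The first tool result becomes the final output; no further LLM turn runs.
agent = Agent(
    name="Weather agent",
    tools=[get_weather],
    tool_use_behavior="stop_on_first_tool",
)
```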
## API keys and clients

By default, the SDK looks for the `OPENAI_API_KEY` environment variable for LLM requests and tracing, as soon as it is imported. If you are unable to set that environment variable before your app starts, you can use the [set_default_openai_key()][agents.set_default_openai_key] function to set the key.
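For example (the key value is a placeholder):

```python
from agents import set_default_openai_key

set_default_openai_key("sk-...")
```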
Alternatively, you can also configure an OpenAI client to be used. By default, the SDK creates an `AsyncOpenAI` instance, using the API key from the environment variable or the default key set above. You can change this by using the [set_default_openai_client()][agents.set_default_openai_client] function.
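For example, a sketch pointing the SDK at a custom client (the base URL and key are placeholders):

```python
from openai import AsyncOpenAI

from agents import set_default_openai_client

custom_client = AsyncOpenAI(base_url="http://localhost:8000/v1", api_key="sk-...")
set_default_openai_client(custom_client)
```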
Finally, you can also customize the OpenAI API that is used. By default, we use the OpenAI Responses API. You can override this to use the Chat Completions API by using the [set_default_openai_api()][agents.set_default_openai_api] function.
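For example, assuming the API is selected by name:

```python
from agents import set_default_openai_api

set_default_openai_api("chat_completions")
```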
## Tracing

Tracing is enabled by default. It uses the OpenAI API keys from the section above by default (i.e. the environment variable or the default key you set). You can specifically set the API key used for tracing by using the [`set_tracing_export_api_key`][agents.set_tracing_export_api_key] function.
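For example (the key value is a placeholder):

```python
from agents import set_tracing_export_api_key

set_tracing_export_api_key("sk-...")
```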
## Debug logging

The SDK has two Python loggers without any handlers set. By default, this means that warnings and errors are sent to `stdout`, but other logs are suppressed.
Alternatively, you can customize the logs by adding handlers, filters, formatters, etc. You can read more in the [Python logging guide](https://docs.python.org/3/howto/logging.html).
```python
import logging

logger = logging.getLogger("openai.agents")  # or openai.agents.tracing for the Tracing logger

# To make all logs show up
logger.setLevel(logging.DEBUG)
# You can customize this as needed, but this will output to `stderr` by default
logger.addHandler(logging.StreamHandler())
```
### Sensitive data in logs
Certain logs may contain sensitive data (for example, user data). If you want to prevent this data from being logged, set the following environment variables.
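To disable logging LLM inputs and outputs:

```bash
export OPENAI_AGENTS_DONT_LOG_MODEL_DATA=1
```

To disable logging tool inputs and outputs:

```bash
export OPENAI_AGENTS_DONT_LOG_TOOL_DATA=1
```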
# Context management
Context is an overloaded term. There are two main classes of context you might care about:
1. Context available locally to your code: this is data and dependencies you might need when tool functions run, during callbacks like `on_handoff`, in lifecycle hooks, etc.
2. Context available to LLMs: this is data the LLM sees when generating a response.

## Local context
This is represented via the [`RunContextWrapper`][agents.run_context.RunContextWrapper] class and the [`context`][agents.run_context.RunContextWrapper.context] property within it. The way this works is:
1. You create any Python object you want. A common pattern is to use a dataclass or a Pydantic object.
2. You pass that object to the various run methods (e.g. `Runner.run(..., context=whatever)`).
3. All your tool calls, lifecycle hooks, etc. will be passed a wrapper object, `RunContextWrapper[T]`, where `T` represents your context object type, which you can access via `wrapper.context`.
You can use the context for things like:

- Contextual data for your run (e.g. things like a username/uid or other information about the user)
- Dependencies (e.g. logger objects, data fetchers, etc.)
- Helper functions
!!! danger "Note"

    The context object is **not** sent to the LLM. It is purely a local object that you can read from, write to, and call methods on.
```python
import asyncio
from dataclasses import dataclass

from agents import Agent, RunContextWrapper, Runner, function_tool


@dataclass
class UserInfo:  # (1)!
    name: str
    uid: int


@function_tool
async def fetch_user_age(wrapper: RunContextWrapper[UserInfo]) -> str:  # (2)!
    """Fetch the age of the user."""
    return f"The user {wrapper.context.name} is 47 years old"


async def main():
    user_info = UserInfo(name="John", uid=123)

    agent = Agent[UserInfo](  # (3)!
        name="Assistant",
        tools=[fetch_user_age],
    )

    result = await Runner.run(  # (4)!
        starting_agent=agent,
        input="What is the age of the user?",
        context=user_info,
    )

    print(result.final_output)  # (5)!


if __name__ == "__main__":
    asyncio.run(main())
```
1. This is the context object. We've used a dataclass here, but you can use any type.
2. This is a tool. You can see it takes a `RunContextWrapper[UserInfo]`. The tool implementation reads from the context.
3. We mark the agent with the generic `UserInfo`, so that the typechecker can catch errors (for example, if we tried to pass a tool that took a different context type).
4. The context is passed to the `run` function.
5. The agent correctly calls the tool and gets the age.
## Agent/LLM context

When an LLM is called, the **only** data it can see is from the conversation history. This means that if you want to make some new data available to the LLM, you must do it in a way that makes it available in that history. There are a few ways to do this:
1. You can add it to the Agent `instructions`. This is also known as a "system prompt" or "developer message". System prompts can be static strings, or they can be dynamic functions that receive the context and output a string. This is a common tactic for information that is always useful (for example, the user's name or the current date).
2. Add it to the `input` when calling the `Runner.run` functions. This is similar to the `instructions` tactic, but allows you to have messages that are lower in the [chain of command](https://cdn.openai.com/spec/model-spec-2024-05-08.html#follow-the-chain-of-command).
3. Expose it via function tools. This is useful for _on-demand_ context - the LLM decides when it needs some data, and can call the tool to fetch that data.
4. Use retrieval or web search. These are special tools that are able to fetch relevant data from files or databases (retrieval), or from the web (web search). This is useful for "grounding" the response in relevant contextual data.