2. Agent Loop
At its most basic, an agent loop is a loop that:
- processes user input
- decides to either call a tool or respond to the user
- if a tool is selected, calls the tool and gets the response
- loops until no tools are selected and a final response is generated
Using pseudo-code, we can represent this as:
```ts
let userInput = getUserInput();
let messages = [{ role: "user", content: userInput }];
let response;

while (true) {
  response = await llm(messages, tools);
  messages.push({
    role: "assistant",
    content: response.content,
    tool_calls: response.tool_calls,
  });

  if (response.tool_calls) {
    for (const toolCall of response.tool_calls) {
      const toolResponse = await callTool(toolCall, tools);
      messages.push({
        role: "tool",
        content: toolResponse,
        tool_call_id: toolCall.id,
      });
    }
  } else {
    break;
  }
}

console.log(response.content);
```
Let’s explore this in more detail, first with a basic implementation, then using workflows.
Basic Implementation
Let’s start by implementing the agent loop using the same structure as our pseudo-code, but with real functions. This will help us understand the core logic before we structure it with workflows.
Note that we’re using type aliases for the OpenAI API types to improve readability.
We’ll use `2a-agent-loop.ts` as the filename.
```ts
import { OpenAI } from "openai";
import {
  ChatCompletionMessage as Message,
  ChatCompletionMessageParam as InputMessage,
  ChatCompletionMessageFunctionToolCall as ToolCall,
  ChatCompletionTool as Tool,
} from "openai/resources/chat/completions";

// Initialize OpenAI client
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

// Define available tools
const tools: Tool[] = [
  {
    type: "function" as const,
    function: {
      name: "get_weather",
      description: "Get the current weather for a location",
      parameters: {
        type: "object",
        properties: {
          location: {
            type: "string",
            description: "The city and state, e.g. San Francisco, CA",
          },
        },
        required: ["location"],
      },
    },
  },
];

// LLM function - handles the AI reasoning
async function llm(messages: InputMessage[], tools: Tool[]): Promise<Message> {
  const completion = await openai.chat.completions.create({
    model: "gpt-4.1-mini",
    messages,
    tools,
    tool_choice: "auto",
  });

  const message = completion.choices[0]?.message;
  if (!message) {
    throw new Error("No response from LLM");
  }

  return message;
}

// Tool calling function - executes the requested tools
async function callTool(toolCall: ToolCall): Promise<string> {
  const toolName = toolCall.function.name;
  const toolInput = JSON.parse(toolCall.function.arguments);

  // Execute the requested tool
  switch (toolName) {
    case "get_weather": {
      // Mock weather API call
      const location = toolInput.location;
      return `The weather in ${location} is sunny and 72°F`;
    }
    default:
      return `Unknown tool: ${toolName}`;
  }
}

// Now implement our agent loop
async function runAgentLoop(userInput: string) {
  const messages: InputMessage[] = [{ role: "user", content: userInput }];

  while (true) {
    const response = await llm(messages, tools);

    // Add the assistant's response to the conversation
    messages.push(response);

    if (response.tool_calls) {
      // Process each tool call
      for (const toolCall of response.tool_calls) {
        if (toolCall.type !== "function") {
          throw new Error("Unsupported tool call type");
        }
        const toolResponse = await callTool(toolCall);
        messages.push({
          role: "tool",
          content: toolResponse,
          tool_call_id: toolCall.id,
        });
      }
    } else {
      // No tools needed, we have our final response
      return response.content;
    }
  }
}

// Run the agent
const result = await runAgentLoop("What's the weather in San Francisco?");
console.log(result);
```
This implementation follows our pseudo-code closely, but with real OpenAI API calls. The logic is straightforward:
- Call the LLM with current conversation
- If it wants to use tools, execute them and add results to conversation
- If no tools, return the final response
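To make the loop’s data flow concrete, here is roughly how the `messages` array evolves over a single tool round trip. The IDs and wording are illustrative; the actual values come from the model:

```ts
// Illustrative trace of `messages` after one full round trip
const exampleTrace: InputMessage[] = [
  { role: "user", content: "What's the weather in San Francisco?" },
  // 1st LLM call: the model requests a tool instead of answering
  {
    role: "assistant",
    content: null,
    tool_calls: [
      {
        id: "call_abc123", // example ID, assigned by the API
        type: "function",
        function: {
          name: "get_weather",
          arguments: '{"location":"San Francisco, CA"}',
        },
      },
    ],
  },
  // Our loop executes the tool and appends its result
  {
    role: "tool",
    tool_call_id: "call_abc123",
    content: "The weather in San Francisco, CA is sunny and 72°F",
  },
  // 2nd LLM call: no tool_calls, so the loop returns this content
  { role: "assistant", content: "It's sunny and 72°F in San Francisco." },
];
```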
Converting to Workflows
Now let’s see how workflows can help us structure this same logic. Workflows provide several benefits:
- Event-driven: Each step is triggered by events, making the flow more explicit
- Composable: We can easily add new handlers or modify existing ones
- Streaming: We can stream events and responses in real-time
- Scalable: Multiple handlers can process events concurrently
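If you haven’t seen workflows before, here’s a minimal, self-contained sketch of the primitives we’ll use below (`createWorkflow`, `workflowEvent`, `handle`, `sendEvent`, and the context’s event stream); the greeting events are just placeholders:

```ts
import { createWorkflow, workflowEvent } from "@llamaindex/workflow-core";

// Events carry typed payloads between handlers
const greetEvent = workflowEvent<string>();
const doneEvent = workflowEvent<string>();

const wf = createWorkflow();

// A handler reacts to one event type and emits others
wf.handle([greetEvent], (context, { data }) => {
  context.sendEvent(doneEvent.with(`Hello, ${data}!`));
});

// Create a context, seed the first event, and wait for the last one
const { stream, sendEvent } = wf.createContext();
sendEvent(greetEvent.with("world"));
const events = await stream.until(doneEvent).toArray();
console.log(events[events.length - 1].data); // "Hello, world!"
```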
Scaffolding our workflow
Let’s convert our agent loop to use workflows. We’ll use `2b-agent-loop-workflow.ts` as the filename.
1. Introducing events
We’ll keep the same imports, tools, and helper functions (`llm()` and `callTool()`).
Additionally, we define the events that represent key points in the loop: user input, tool calls, tool responses, and the final response:
```ts
import { createWorkflow, workflowEvent } from "@llamaindex/workflow-core";
import { OpenAI } from "openai";
import {
  ChatCompletionMessage as Message,
  ChatCompletionMessageParam as InputMessage,
  ChatCompletionMessageFunctionToolCall as ToolCall,
  ChatCompletionTool as Tool,
  ChatCompletionToolMessageParam as ToolResponseMessage,
} from "openai/resources/chat/completions";

const workflow = createWorkflow();

// Define our events
const userInputEvent = workflowEvent<{
  messages: InputMessage[];
}>();
const toolCallEvent = workflowEvent<{
  toolCall: ToolCall;
}>();
const toolResponseEvent = workflowEvent<ToolResponseMessage>();
const finalResponseEvent = workflowEvent<string>();

// Initialize OpenAI client (same as before)
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

// Define available tools (same as before)
const tools = [...];

// Same LLM function as before
async function llm(messages: InputMessage[], tools: Tool[]): Promise<Message> {
  ...
}

// Same tool calling function as before
async function callTool(toolCall: ToolCall): Promise<string> {
  ...
}
```
2. Adding a handler for user input
We’re adding a handler for the `userInputEvent` that emits tool call events if there are any tools to call, or a final response otherwise:
```ts
// Handler for processing user input and LLM responses
workflow.handle([userInputEvent], async (context, { data }) => {
  const { sendEvent, stream } = context; // stream is used in step 4
  const { messages } = data;

  try {
    // Use our same llm() function
    const response = await llm(messages, tools);

    // Add the assistant's response to the conversation
    const updatedMessages = [...messages, response];

    // Check if the LLM wants to call tools
    if (response.tool_calls && response.tool_calls.length > 0) {
      // Send tool call events for each requested tool
      for (const toolCall of response.tool_calls) {
        if (toolCall.type !== "function") {
          throw new Error("Unsupported tool call type");
        }
        sendEvent(
          toolCallEvent.with({
            toolCall,
          }),
        );
      }
      // TODO: Collect the tool responses here (we'll add this code in step 4)
    } else {
      sendEvent(finalResponseEvent.with(response.content || ""));
    }
  } catch (error) {
    console.error("Error calling LLM:", error);
    sendEvent(finalResponseEvent.with("Error processing request"));
  }
});
```
Note that we’re still missing the code to collect the tool responses; we’ll add it in step 4.
3. Adding a handler for tool calls
We’re adding a handler for the `toolCallEvent` events sent in the previous step. The handler executes each tool and emits a tool response event for each result:
```ts
// Handler for executing tool calls
workflow.handle([toolCallEvent], async (context, { data }) => {
  const { sendEvent } = context;
  const { toolCall } = data;

  try {
    // Use our same callTool() function
    const toolResponse = await callTool(toolCall);

    // Send the tool response back
    sendEvent(
      toolResponseEvent.with({
        role: "tool",
        content: toolResponse,
        tool_call_id: toolCall.id,
      }),
    );
  } catch (error) {
    console.error(`Error executing tool ${toolCall.function.name}:`, error);
    sendEvent(
      toolResponseEvent.with({
        role: "tool",
        content: `Error executing ${toolCall.function.name}: ${error}`,
        tool_call_id: toolCall.id,
      }),
    );
  }
});
```
4. Collecting tool response events
In step 2, we left out the code that collects the tool responses sent in step 3. We’ll add it now by updating the `userInputEvent` handler, inserting code right after the `toolCallEvent` events are sent:
```ts
workflow.handle([userInputEvent], async (context, { data }) => {
  // Keep the existing code...

  if (response.tool_calls && response.tool_calls.length > 0) {
    // Keep the existing code...

    // Collect ALL tool responses before continuing
    const expectedToolCount = response.tool_calls.length;
    const toolResponses: Array<ToolResponseMessage> = [];

    // Listen for tool responses until we have all of them
    await stream.filter(toolResponseEvent).forEach((responseEvent) => {
      toolResponses.push(responseEvent.data);

      // Once we have all responses, continue the conversation
      if (toolResponses.length === expectedToolCount) {
        // Add tool response messages
        const finalMessages = [...updatedMessages, ...toolResponses];

        // Continue the loop with the updated conversation
        sendEvent(userInputEvent.with({ messages: finalMessages }));
        return; // Exit the forEach to stop listening
      }
    });
  } else {
    // Keep the existing code...
  }
  // Keep the existing code...
});
```
The code watches the event stream for `toolResponseEvent` events. Each time one arrives, it adds the response to the `toolResponses` array. Once all tool responses have arrived, it sends a new `userInputEvent` with the updated conversation history.
Note that it would be cleaner to store the tool responses and add a dedicated handler to collect them; we’ll do that in the next section, when we add state.
5. Running the workflow
Finally, to run the workflow, we create a context, seed the first `userInputEvent`, and await the `finalResponseEvent`.
```ts
const { stream, sendEvent } = workflow.createContext();

sendEvent(
  userInputEvent.with({
    messages: [
      { role: "user", content: "What's the weather in San Francisco?" },
    ],
  }),
);

const result = await stream.until(finalResponseEvent).toArray();
console.log(result[result.length - 1].data);
```

Because `until(finalResponseEvent)` ends the stream at the final response event, that event is always the last element of the array, which is why we log `result[result.length - 1].data`.
Key Differences
Notice how the workflow version accomplishes the same thing as our basic implementation, but with these key differences:
- Event-driven flow: Instead of a `while` loop, each step triggers the next through events
- Separation of concerns: LLM reasoning and tool execution are handled by separate event handlers
- Async coordination: The workflow handles waiting for multiple tool responses before continuing
- Streaming capability: Events can be streamed in real-time to clients
- Same core logic: We kept the same `llm()` and `callTool()` functions, just integrated them into the workflow
The workflow approach makes it easier to:
- Add logging or monitoring at each step
- Handle errors at different points in the flow
- Stream partial results to users
- Scale individual components (e.g., run tool calls in parallel, as sketched below)
- Compose with other workflows
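To see what parallel tool calls would cost us in the basic implementation, here’s a minimal sketch using `Promise.all` to replace the sequential `for` loop in `runAgentLoop()`:

```ts
// Sketch: run all requested tool calls concurrently in the basic loop
// (replaces the sequential for-loop inside runAgentLoop)
if (response.tool_calls) {
  const toolMessages = await Promise.all(
    response.tool_calls.map(async (toolCall) => {
      if (toolCall.type !== "function") {
        throw new Error("Unsupported tool call type");
      }
      return {
        role: "tool" as const,
        content: await callTool(toolCall),
        tool_call_id: toolCall.id,
      };
    }),
  );
  messages.push(...toolMessages);
}
```

The workflow version needs no such change: every `toolCallEvent` is dispatched to its own handler invocation, so the tool calls can already run concurrently without extra coordination code.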
For the complete working example, see `demo/express/2b-agent-loop-workflow.ts`.
Next Steps
Next, we’ll cover adding state to our agent loop! This will help us keep track of errors, and even share state between tools.