Summary
When using an LlmAgent with tools inside a custom agent's _run_async_impl method, the agent loses all context and conversation history after receiving a tool response. This makes it impossible for the agent to properly analyze tool results or continue the conversation coherently.
Environment
- Google ADK version: 1.5.0
- Python version: 3.10+
- Framework: FastAPI with async orchestration
- Related packages:
  - google-generativeai
  - litellm 1.70.2
Problem Description
According to the Google ADK documentation, when implementing custom orchestrators, we should pass dynamic information to sub-agents through session state variables rather than appending Events to the session. This approach works well for simple LLM calls, but breaks down completely once the LlmAgent has tools.
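For reference, the state-based pattern we are following looks roughly like this (a minimal sketch; the model name and instruction are illustrative, and I'm assuming the {task_info} placeholder is resolved from session state via instruction templating as described in the docs):

# Minimal sketch of the documented state-based pattern (illustrative names).
# The orchestrator writes into session state; the sub-agent reads it through
# instruction templating instead of receiving a new user Event.
from google.adk.agents import LlmAgent

task_summarizer = LlmAgent(
    name="task_summarizer",
    model="gemini-2.0-flash",  # any supported model
    # {task_info} is expected to be filled in from ctx.session.state at call time
    instruction="You are a helpful specialist. Current task: {task_info}",
)
# With no tools attached, this pattern behaves as expected;
# adding tools=[...] is where the context loss starts.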
Current Behavior
- The custom agent calls an LlmAgent with tools
- The agent correctly identifies which tool to call and executes it
- After receiving the tool response, the agent acts as if it has no memory of:
  - The original user request
  - Why it called the tool
  - The tool response itself
- The agent keeps looping, calling the same tool repeatedly without progressing
Expected Behavior
The LlmAgent should maintain context across tool calls, understanding:
- The original task/question
- Why it invoked the tool
- The tool's response
- How to proceed based on the tool result
Code Example
Here's a simplified version of how we're using LlmAgent inside our custom agent:
from typing import AsyncGenerator

from google.adk.agents import BaseAgent
from google.adk.agents.invocation_context import InvocationContext
from google.adk.events import Event


class MyCustomAgent(BaseAgent):
    async def _run_async_impl(self, ctx: InvocationContext) -> AsyncGenerator[Event, None]:
        # Following ADK documentation: passing info via state, not events
        ctx.session.state["task_info"] = "Find all calendar events for tomorrow"

        # Call specialist agent with tools
        calendar_agent = self.agents["calendar_specialist"]  # LlmAgent with calendar tools
        async for event in calendar_agent.run_async(ctx):
            # The agent calls the tool correctly,
            # but after the tool response it's completely lost
            yield event
The specialist agent configuration:
from google.adk.models.lite_llm import LiteLlm


class CalendarSpecialistAgent(BaseSpecialistAgent):  # our in-house base class for LlmAgent specialists
    def __init__(self):
        super().__init__(
            name="calendar_specialist",
            model=LiteLlm(model="gpt-4o-mini"),
            tools=[calendar_tool],  # Tool for Google Calendar operations
        )
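For completeness, calendar_tool is essentially a plain function tool. A simplified stand-in is below; the real implementation calls the Google Calendar API, and the return shape here is illustrative:

# Simplified stand-in for our real calendar tool; ADK wraps plain Python
# functions passed via tools=[...] as function tools automatically.
def calendar_tool(date: str) -> dict:
    """Return the calendar events scheduled for the given ISO date."""
    # The real implementation queries Google Calendar; stubbed here for reproduction.
    return {
        "status": "success",
        "events": [
            {"title": "Team sync", "start": f"{date}T09:00:00+00:00"},
            {"title": "1:1 with manager", "start": f"{date}T14:00:00+00:00"},
        ],
    }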
Reproduction Steps
- Create a custom agent extending BaseAgent
- In _run_async_impl, set task information in ctx.session.state (as per the documentation)
- Create an LlmAgent with at least one tool
- Call the agent from the orchestrator (a minimal driver sketch follows this list)
- Observe that after tool execution, the agent loses all context
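The minimal driver we use to reproduce this follows the quickstart pattern. Exact Runner and session-service signatures may differ slightly across ADK 1.x releases, so treat this as a sketch:

# Hedged reproduction driver: wires the custom orchestrator into a Runner
# with an in-memory session service and sends a single user message.
import asyncio

from google.adk.runners import Runner
from google.adk.sessions import InMemorySessionService
from google.genai import types


async def main():
    session_service = InMemorySessionService()
    orchestrator = MyCustomAgent(name="orchestrator")  # sub-agent wiring omitted for brevity
    runner = Runner(agent=orchestrator, app_name="repro", session_service=session_service)

    session = await session_service.create_session(app_name="repro", user_id="u1")
    message = types.Content(
        role="user",
        parts=[types.Part(text="What's on my calendar tomorrow?")],
    )
    # After the first tool response, the specialist starts re-calling the tool in a loop.
    async for event in runner.run_async(
        user_id="u1", session_id=session.id, new_message=message
    ):
        print(event.author, event.content)


asyncio.run(main())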
Analysis
This behavior suggests that when using the pattern recommended in the documentation (passing info via state rather than events), the LlmAgent doesn't maintain conversation history between:
- Initial request → tool invocation
- Tool response → final analysis
It's as if each interaction is completely isolated, similar to calling a stateless LLM endpoint multiple times without passing conversation history.
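To make that analogy concrete, here is roughly the history a chat-completions-style model needs in order to reason about a tool result, versus what the looping behavior suggests it is effectively getting. This is an illustration of the general contract, not a claim about ADK internals:

# Illustrative only: the history a tool-using model generally needs
# to analyze a tool result and move on.
expected_history = [
    {"role": "user", "content": "Find all calendar events for tomorrow"},
    {"role": "assistant", "tool_calls": [{"name": "calendar_tool", "args": {"date": "..."}}]},
    {"role": "tool", "name": "calendar_tool", "content": '{"events": [...]}'},
    # -> model can now summarize the events and answer the original request
]

# What the looping behavior suggests the model effectively sees:
observed_history = [
    {"role": "tool", "name": "calendar_tool", "content": '{"events": [...]}'},
    # -> no user request, no record of the prior tool call, so it calls the tool again
]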
Impact
This issue makes it impossible to build complex multi-agent systems where:
- Agents need to use tools to gather information
- The orchestrator needs to maintain control over the conversation flow
- Dynamic information needs to be passed to agents without polluting the event history
Possible Workarounds Attempted
Manually managing context via state: I attempted to work around the problem by modifying the system prompt to provide context dynamically through session state variables, updating those variables via agent lifecycle callbacks. This approach is difficult to implement correctly, and it undermines much of the value of using Google's ADK if I end up hand-coding all of the agent's behavioral logic; a rough sketch of the attempt follows.
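In this sketch, build_task_brief is a hypothetical helper that assembles the orchestrator's context from state, and the callback signature and instruction-templating behavior are my reading of the ADK 1.5 docs rather than something confirmed to work here:

# Rough sketch of the attempted workaround (signatures are assumptions based
# on the ADK 1.5 callback docs; build_task_brief is a hypothetical helper).
from google.adk.agents import LlmAgent
from google.adk.agents.callback_context import CallbackContext
from google.adk.models.lite_llm import LiteLlm


def refresh_task_context(callback_context: CallbackContext, llm_request):
    # Re-read orchestration data from session state before every model call.
    callback_context.state["task_info"] = build_task_brief(callback_context.state)
    return None  # returning None lets the normal request proceed


calendar_agent = LlmAgent(
    name="calendar_specialist",
    model=LiteLlm(model="gpt-4o-mini"),
    instruction="Task context: {task_info}",  # resolved from session state each turn
    tools=[calendar_tool],
    before_model_callback=refresh_task_context,
)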
Suggested Solution
The LlmAgent should internally maintain state and context when executing tools, even when the parent custom agent uses the state-based pattern for passing information.
Additional Context
This issue is critical for building production-ready AI assistants that need to follow complex orchestration patterns.
Related Issues
- Could not find similar issues in the repository
Labels Suggestion
bug
documentation
llm-agent
orchestrator
tools
Note: I'm happy to provide more detailed code examples or assist with testing potential fixes. This issue is blocking our production deployment of a multi-agent assistant system.