
fix: prevent panic when LLMAgent has no Model configured #280

Merged
rakyll merged 1 commit into google:main from apzuk3:main on Nov 13, 2025

Conversation

apzuk3 (Contributor) commented Nov 13, 2025

Fixes #279

Since there is a valid use case for initializing an agent without providing a model, it did not make sense to validate during initialization. Added lazy validation instead.
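A minimal sketch of the lazy-validation idea, assuming illustrative type and method names (the actual adk-go types and signatures differ):

```go
package main

import "fmt"

// Model is a stand-in for the model interface; names here are
// assumptions, not the exact adk-go API.
type Model interface {
	GenerateContent(prompt string) (string, error)
}

// LLMAgent may legitimately be constructed without a Model.
type LLMAgent struct {
	Name  string
	Model Model
}

// callLLM validates lazily: a missing Model only becomes an error when a
// call is actually attempted, instead of panicking on a nil dereference.
func (a *LLMAgent) callLLM(prompt string) (string, error) {
	if a.Model == nil {
		return "", fmt.Errorf("agent %q has no Model configured", a.Name)
	}
	return a.Model.GenerateContent(prompt)
}

func main() {
	agent := &LLMAgent{Name: "helper"} // valid: no model at construction time
	if _, err := agent.callLLM("hello"); err != nil {
		fmt.Println("error:", err) // clear error instead of a panic
	}
}
```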

google-cla bot commented Nov 13, 2025

Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

View this failed invocation of the CLA check for more information.

For the most up to date status, view the checks section at the bottom of the pull request.


Summary of Changes

Hello @apzuk3, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses a potential panic in the LLMAgent when it attempts a call without a language model (Model) configured. Previously, this scenario caused a runtime error. The change introduces lazy validation: the model's presence is checked only when an LLM call is actually initiated, preventing the panic and yielding a clear error for users who initialize agents without an immediate model.

Highlights

  • Panic Prevention: Implemented a check within the callLLM function to prevent a runtime panic if the LLMAgent is invoked without a Model configured.
  • Lazy Validation: Shifted the validation for the presence of a Model from initialization to the point of its first use (callLLM), allowing for valid use cases where an agent might be initialized without an immediate model.

gemini-code-assist bot (Contributor) left a comment

Code Review

This pull request aims to prevent a panic when an LLMAgent is used without a configured model by adding a lazy validation. While the intention is correct, the current implementation of the check prevents a valid use case where BeforeModelCallbacks (for example, to implement caching) could provide a response, making a model unnecessary. I've suggested moving the nil-check to be truly lazy—executing it just before the model is actually called. This ensures the original panic is fixed while preserving the flexibility of using model-less agents with callbacks.

Add nil check in Flow.callLLM() after BeforeModelCallbacks but before
accessing Model.GenerateContent(). This prevents a nil pointer dereference
panic when an llmagent is created without a Model configuration.

The check is positioned after BeforeModelCallbacks to allow callbacks that
return cached responses to short-circuit execution without requiring a Model.

This provides a clear error message instead of a cryptic segmentation fault:
'agent %q has no Model configured; ensure Model is set in llmagent.Config'

Preserves valid use cases:
- Testing agent metadata/structure without running the agent
- BeforeAgentCallbacks that short-circuit before agent execution
- BeforeModelCallbacks that return cached responses

Fixes panic: runtime error: invalid memory address or nil pointer
dereference when sub-agents without Models were invoked.
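The commit message above can be sketched as follows, with the nil-check placed after the BeforeModelCallbacks loop so a caching callback can short-circuit without a Model; all names are illustrative assumptions, not the real adk-go signatures:

```go
package main

import "fmt"

// Response and the types below are hypothetical stand-ins for the
// adk-go equivalents described in the commit message.
type Response struct{ Text string }

// BeforeModelCallback may return a non-nil Response to short-circuit
// the model call (e.g. a cache hit).
type BeforeModelCallback func(prompt string) *Response

type Model interface {
	GenerateContent(prompt string) (*Response, error)
}

type Flow struct {
	AgentName string
	Model     Model
	Callbacks []BeforeModelCallback
}

func (f *Flow) callLLM(prompt string) (*Response, error) {
	// 1. Run BeforeModelCallbacks first: a cache hit returns here and
	//    never needs a Model at all.
	for _, cb := range f.Callbacks {
		if resp := cb(prompt); resp != nil {
			return resp, nil
		}
	}
	// 2. Only now is a missing Model an error (truly lazy validation),
	//    replacing the nil pointer dereference with a clear message.
	if f.Model == nil {
		return nil, fmt.Errorf("agent %q has no Model configured; ensure Model is set in llmagent.Config", f.AgentName)
	}
	return f.Model.GenerateContent(prompt)
}

func main() {
	cache := map[string]*Response{"hi": {Text: "cached hello"}}
	f := &Flow{
		AgentName: "cacher",
		Callbacks: []BeforeModelCallback{func(p string) *Response { return cache[p] }},
	}
	resp, err := f.callLLM("hi") // served from the cache; nil Model is fine
	fmt.Println(resp.Text, err)
}
```

This ordering is what preserves the model-less use cases listed above: the error path is only reached when no callback produced a response.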
rakyll merged commit f1d3cc5 into google:main on Nov 13, 2025
1 check passed
rakyll (Member) commented Nov 13, 2025

Thank you!

davidli2010 pushed a commit to davidli2010/adk-go that referenced this pull request Feb 4, 2026
Successfully merging this pull request may close these issues.

Panic when model is not provided
