LiteLLM giving error with OpenAI models and Grafana's MCP server #929

@umangk-2410

Description

The bug
I was integrating OpenAI's gpt-4o model with ADK using LiteLLM, but it gave me an error about the schema of one of the tools from Grafana's MCP server.

To Reproduce
Steps to reproduce the behavior:

  1. Defining the Agent
root_agent = LlmAgent(
    model=LiteLlm(model="openai/gpt-4o"),
    name='grafana_retrieval_agent',
    description="Helps user retrieve metrics and logs from Grafana Datasources.",
    instruction=REVISED_RETRIEVAL_PROMPT,
    tools=tools,
    output_key="retrieval_agent_output"
)
  2. Defining the MCP server
async def grafana_mcp():
    try:
        print("Attempting to connect to Grafana MCP Server")
        # from_server returns the fetched tools plus an AsyncExitStack
        # that must later be closed to shut the stdio connection down.
        tools, exit_stack = await MCPToolset.from_server(
            connection_params=StdioServerParameters(
                command="mcp-grafana",
                args=[],
                env={
                    "GRAFANA_URL": "http://localhost:3000",
                    "GRAFANA_API_KEY": GRAFANA_API_KEY
                }
            )
        )
        return tools, exit_stack
    except Exception as e:
        print(f"Failed to connect to Grafana MCP Server: {e}")
        return [], None
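
For reference, the exit_stack returned above is what later tears down the stdio connection, and anyio expects it to be closed in the same task that opened it (that is what the "cancel scope" RuntimeError in the log below complains about). A minimal standalone usage sketch, under that assumption:

import asyncio

async def main():
    # Open and close the MCP connection within a single task/coroutine.
    tools, exit_stack = await grafana_mcp()
    print(f"Fetched {len(tools)} tools from MCP server.")
    await exit_stack.aclose()

asyncio.run(main())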
  3. Defining the run logic
    async def run(self, query: str):
        session_service = InMemorySessionService()
        session = session_service.create_session(
            state={}, app_name='mcp_grafana_app', user_id='user_fs'
        )

        print(f"\nUser Query for Retrieval Agent:\n '{query}'")
        content = types.Content(role='user', parts=[types.Part(text=query)])

        root_agent, exit_stack = await self.init_agent()
        runner = Runner(
            app_name='mcp_grafana_app',
            agent=root_agent,
            session_service=session_service,
        )

        print("Running Retrieval Agent...")
        events_async = runner.run_async(
            session_id=session.id, user_id=session.user_id, new_message=content
        )

        output = None
        async for event in events_async:
            # print(event.content.parts[0])
            text = event.content.parts[0].text
            function_response = event.content.parts[0].function_response
            # if text is not None:
                # print(text)
            if function_response is not None:
                print(function_response.response["result"])
            output = text

        print("Closing MCP server connection...")
        await exit_stack.aclose()
        print("Cleanup complete.")

        return output
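
The run coroutine above awaits self.init_agent(), which isn't shown in the report. Presumably it awaits grafana_mcp() from step 2 and wraps the returned tools into the LlmAgent from step 1; a hypothetical reconstruction:

    async def init_agent(self):
        # Hypothetical sketch -- not part of the original report.
        tools, exit_stack = await grafana_mcp()
        root_agent = LlmAgent(
            model=LiteLlm(model="openai/gpt-4o"),
            name='grafana_retrieval_agent',
            description="Helps user retrieve metrics and logs from Grafana Datasources.",
            instruction=REVISED_RETRIEVAL_PROMPT,
            tools=tools,
            output_key="retrieval_agent_output"
        )
        return root_agent, exit_stack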
  4. The error
Attempting to connect to Grafana MCP Server
time=2025-05-25T18:36:14.513+05:30 level=INFO msg="Starting Grafana MCP server using stdio transport"
time=2025-05-25T18:36:14.513+05:30 level=INFO msg="Using Grafana configuration" url=http://localhost:3000 api_key_set=true
Fetched 26 tools from MCP server.
Running Retrieval Agent...

Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm._turn_on_debug()'.

an error occurred during closing of asynchronous generator <async_generator object stdio_client at 0x72332c9fbd80>
asyncgen: <async_generator object stdio_client at 0x72332c9fbd80>
  + Exception Group Traceback (most recent call last):
  |   File "/home/ubermenchh/work/AIOps-Agent/.venv/lib/python3.12/site-packages/anyio/_backends/_asyncio.py", line 772, in __aexit__
  |     raise BaseExceptionGroup(
  | BaseExceptionGroup: unhandled errors in a TaskGroup (1 sub-exception)
  +-+---------------- 1 ----------------
    | Traceback (most recent call last):
    |   File "/home/ubermenchh/work/AIOps-Agent/.venv/lib/python3.12/site-packages/mcp/client/stdio/__init__.py", line 173, in stdio_client
    |     yield read_stream, write_stream
    | GeneratorExit
    +------------------------------------

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/ubermenchh/work/AIOps-Agent/.venv/lib/python3.12/site-packages/mcp/client/stdio/__init__.py", line 167, in stdio_client
    anyio.create_task_group() as tg,
    ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubermenchh/work/AIOps-Agent/.venv/lib/python3.12/site-packages/anyio/_backends/_asyncio.py", line 778, in __aexit__
    if self.cancel_scope.__exit__(type(exc), exc, exc.__traceback__):
       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubermenchh/work/AIOps-Agent/.venv/lib/python3.12/site-packages/anyio/_backends/_asyncio.py", line 457, in __exit__
    raise RuntimeError(
RuntimeError: Attempted to exit cancel scope in a different task than it was entered in
An error occurred: litellm.BadRequestError: OpenAIException - Invalid schema for function 'list_alert_rules': 'STRING' is not valid under any of the given schemas.

Expected behavior

  • When using Gemini models with ADK, the process runs smoothly without any error. The error only appears when I integrate through LiteLLM.
  • I also tried Gemini models with LiteLLM, and that worked perfectly as well.
  • I also tried this with OpenAI's SDK and it worked perfectly there too. The issue only occurs with ADK.
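
The BadRequestError itself points at the schema translation: the MCP tool's parameter schema apparently reaches OpenAI with a Gemini-style uppercase type name ('STRING'), while OpenAI's function-calling validator only accepts lowercase JSON Schema types ('string', 'object', ...). A minimal workaround sketch, assuming the tool parameter schemas are plain dicts that can be sanitized before the agent is built (normalize_schema_types is a hypothetical helper, not part of ADK or LiteLLM):

def normalize_schema_types(schema):
    # Recursively lowercase any 'type' values so e.g. 'STRING' becomes
    # the JSON Schema spelling 'string' that OpenAI expects.
    if isinstance(schema, dict):
        return {
            key: (value.lower() if key == "type" and isinstance(value, str)
                  else normalize_schema_types(value))
            for key, value in schema.items()
        }
    if isinstance(schema, list):
        return [normalize_schema_types(item) for item in schema]
    return schema

Applied to each tool's schema right after grafana_mcp() returns, this would let the same tool set pass OpenAI's validation.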

Desktop (please complete the following information):

  • OS: Arch Linux
  • Python version (python -V): 3.12.9
  • ADK version (pip show google-adk): 0.1.0

Labels: models [Component] Issues related to model support