MCP Tool Calls Fail Due to Unvalidated LLM Arguments (Concatenated JSON / Empty Inputs) #12953
Replies: 1 comment
I agree with adding validation at the execution boundary, but I would be careful with automatic normalization of malformed arguments. The safe contract should probably be: validate first, and only repair when the result is unambiguous.

Blindly splitting concatenated objects can change semantics. In your example it might be fine, but for side-effecting tools it could accidentally turn one bad call into two real actions. Same with merging into a single object. A nice middle ground is an adapter policy per tool.
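The per-tool adapter policy mentioned above could look something like the following sketch. The policy names, tool names (other than `CORTEX_AGENT_RUN` from this report), and the `policyFor` helper are all illustrative assumptions, not LibreChat APIs:

```typescript
// Hypothetical per-tool argument policy: each tool opts in to a
// normalization strategy instead of a single global default.
type ArgPolicy = "reject" | "split" | "merge";

const toolPolicies: Record<string, ArgPolicy> = {
  // Read-only tools may be safe to split into multiple calls.
  CORTEX_AGENT_RUN: "split",
  // Side-effecting tools (hypothetical example) should fail fast.
  send_email: "reject",
};

function policyFor(tool: string): ArgPolicy {
  // Unknown tools get the safest behavior: reject malformed arguments.
  return toolPolicies[tool] ?? "reject";
}
```

Defaulting unknown tools to `reject` keeps the safe contract: repair only happens where a tool has explicitly opted in.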
## Summary

When using MCP tools with strict schemas (e.g. `{ "text": "string" }`), LibreChat forwards malformed tool arguments produced by the LLM without validation or normalization. This leads to downstream failures such as schema-mismatch rejections from the MCP server and failed tool calls.
## Environment

Tool: `CORTEX_AGENT_RUN`

## Reproduction
User prompt (multi-intent), combining the two questions that appear in the tool call below:

> What was the gill mortality for last year? How can I get access to Snowflake?
Observed LLM tool call output:

```json
{
  "tool_calls": [
    {
      "function": {
        "arguments": "{\"text\": \"What was the gill mortality for last year?\"}{\"text\": \"How can I get access to Snowflake?\"}"
      }
    }
  ]
}
```

## Problem
The `arguments` field contains two JSON objects concatenated into a single string. This is:

- not a single valid JSON value (there are two root objects)
- not a JSON array
- not conformant with the tool's `{ "text": "string" }` schema
LibreChat forwards this as-is to the MCP server, which rejects it due to schema mismatch.
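The failure mode is easy to reproduce outside LibreChat; a single `JSON.parse` call rejects the concatenated payload:

```typescript
// The concatenated payload from the tool call above is not parseable
// as one JSON value: parsing stops after the first root object.
const raw =
  '{"text": "What was the gill mortality for last year?"}' +
  '{"text": "How can I get access to Snowflake?"}';

let error: unknown = null;
try {
  JSON.parse(raw); // throws: unexpected character after the first object
} catch (e) {
  error = e;
}
console.log(error instanceof SyntaxError); // true
```

An MCP server doing strict schema validation fails the same way, which is why the rejection surfaces downstream rather than in LibreChat.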
## Expected Behavior
LibreChat should not blindly forward malformed tool arguments.
Instead, it should:
### Option A (preferred)

Detect concatenated JSON objects and split them into separate argument sets:

- `{"text": "..."}{"text": "..."}` → two objects

Then validate each object against the tool schema and execute them as separate tool calls (or surface them to the user for confirmation).
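A minimal sketch of the splitting step, assuming a brace-depth scan rather than a regex so that braces inside string values are handled correctly (the function name is illustrative, not an existing LibreChat helper):

```typescript
// Split a string containing concatenated JSON objects ("{...}{...}")
// into the individual JSON texts. Tracks string/escape state so that
// braces inside string values do not confuse the depth counter.
function splitConcatenatedJson(raw: string): string[] {
  const parts: string[] = [];
  let depth = 0;
  let start = -1;
  let inString = false;
  let escaped = false;
  for (let i = 0; i < raw.length; i++) {
    const ch = raw[i];
    if (inString) {
      if (escaped) escaped = false;
      else if (ch === "\\") escaped = true;
      else if (ch === '"') inString = false;
      continue;
    }
    if (ch === '"') { inString = true; continue; }
    if (ch === "{") {
      if (depth === 0) start = i;
      depth++;
    } else if (ch === "}") {
      depth--;
      if (depth === 0 && start >= 0) {
        parts.push(raw.slice(start, i + 1));
        start = -1;
      }
    }
  }
  return parts;
}
```

For example, `splitConcatenatedJson('{"text":"a"}{"text":"b"}')` yields two strings, each of which can then be parsed and validated independently.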
### Option B

Attempt safe normalization, e.g. merging the concatenated objects into a single object or wrapping them in an array, then re-validating the result against the tool schema before forwarding.
### Option C

Fail early with a clear validation error, e.g.:

> Tool call rejected: `arguments` is not valid JSON (multiple root objects detected).
## Why this matters

Because LibreChat sits between the LLM and the MCP server, it is the ideal layer to handle this safely: it sees every tool call before execution and knows each tool's schema, so it can catch malformed arguments before they cause confusing downstream failures.
## Suggested Implementation

At the tool execution boundary:

1. Validate `arguments` as JSON.
2. If invalid: detect the `{...}{...}` pattern (regex or brace-depth scan).
3. Either: split and re-validate each object (Option A), or fail with a clear validation error (Option C).
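The boundary check above could be sketched as follows. This is an assumption about shape only, not LibreChat's actual code; `normalizeToolArguments` and `ToolArgs` are hypothetical names:

```typescript
// Sketch of a pre-execution guard: parse the arguments, fall back to
// detecting and repairing the "{...}{...}" pattern, and fail with a
// clear error otherwise.
type ToolArgs = Record<string, unknown>;

function normalizeToolArguments(raw: string): ToolArgs[] {
  try {
    return [JSON.parse(raw) as ToolArgs]; // happy path: one valid object
  } catch {
    // Cheap structural check for concatenated root objects.
    if (/^\s*\{[\s\S]*\}\s*\{[\s\S]*\}\s*$/.test(raw)) {
      // Turn "{a}{b}" into the parseable array "[{a},{b}]". Note this
      // simple replace can misfire if a string value contains "}{",
      // so a brace-depth splitter is safer in production.
      const candidate = `[${raw.replace(/\}\s*\{/g, "},{")}]`;
      try {
        return JSON.parse(candidate) as ToolArgs[];
      } catch {
        /* fall through to the error below */
      }
    }
    throw new Error(`Tool arguments are not valid JSON: ${raw.slice(0, 80)}`);
  }
}
```

Each returned object would then still be checked against the tool's schema before execution, so the repair never bypasses validation.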
## Proposal

Add a tool argument validation + normalization layer before MCP execution. This would:

- stop malformed arguments from reaching MCP servers
- turn opaque schema-mismatch failures into clear, actionable errors
- make multi-intent prompts behave predictably
Thanks!