
Conversation


@roomote roomote bot commented Jan 30, 2026

Related GitHub Issue

Closes: #11071

Roo Code Task Context (Optional)

N/A

Description

This PR fixes the root cause of GLM4.5 (and other models) getting stuck in infinite file read loops when used via LM Studio or OpenAI-compatible endpoints.

Root Cause Analysis:

The NativeToolCallParser maintains two tracking systems:

  1. rawChunkTracker - tracks tool calls by stream index as chunks arrive
  2. streamingToolCalls - tracks active tool calls by ID after tool_call_start is emitted

When a model sends a tool call ID without a name (common with GLM4.5), the tool call is tracked in rawChunkTracker but never "started" (no tool_call_start event emitted, not added to streamingToolCalls).
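The dual-tracking lifecycle described above can be sketched as follows. This is an illustrative sketch only: `TrackedCall`, `processToolCallDelta`, and the field names are hypothetical stand-ins for the parser's internal state, not the actual Roo Code types.

```typescript
interface TrackedCall {
  id?: string;
  name?: string;
  hasStarted: boolean; // true once tool_call_start has been emitted
}

const rawChunkTracker = new Map<number, TrackedCall>();
const streamingToolCalls = new Map<string, TrackedCall>();

function processToolCallDelta(
  index: number,
  delta: { id?: string; name?: string },
): void {
  const state = rawChunkTracker.get(index) ?? { hasStarted: false };
  if (delta.id) state.id = delta.id;
  if (delta.name) state.name = delta.name;
  rawChunkTracker.set(index, state);

  // tool_call_start fires only once both id and name are known; only then
  // does the call enter streamingToolCalls. A GLM4.5-style chunk with an id
  // but no name leaves the call tracked here but never "started".
  if (!state.hasStarted && state.id && state.name) {
    state.hasStarted = true;
    streamingToolCalls.set(state.id, state);
  }
}
```

A call that only ever receives an id therefore sits in `rawChunkTracker` with `hasStarted` false and is absent from `streamingToolCalls`.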

The bug was in processFinishReason(): it emitted tool_call_end for ALL tracked calls in rawChunkTracker, including those that never had tool_call_start emitted. This caused finalizeStreamingToolCall() to fail with "Attempting to finalize unknown tool call" because the call was never in streamingToolCalls.

The Fix:

Added a hasStarted check in processFinishReason() to only emit tool_call_end events for tool calls that actually had their tool_call_start event emitted. Tool calls without names are now skipped with a diagnostic warning.
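A minimal sketch of the guard, with hypothetical types; the real `processFinishReason()` in `NativeToolCallParser.ts` has a richer signature, and `TrackedCall`/`ToolCallEndEvent` are illustrative names.

```typescript
interface TrackedCall {
  id: string;
  name?: string;
  hasStarted: boolean;
}

interface ToolCallEndEvent {
  type: "tool_call_end";
  id: string;
}

function processFinishReason(
  finishReason: string,
  rawChunkTracker: Map<number, TrackedCall>,
): ToolCallEndEvent[] {
  if (finishReason !== "tool_calls") return [];

  const events: ToolCallEndEvent[] = [];
  for (const call of rawChunkTracker.values()) {
    // The fix: only emit tool_call_end for calls that actually had a
    // tool_call_start emitted; unstarted calls get a warning instead.
    if (!call.hasStarted) {
      console.warn(
        `[NativeToolCallParser] Skipping unstarted tool call: ${call.id}`,
      );
      continue;
    }
    events.push({ type: "tool_call_end", id: call.id });
  }
  return events;
}
```

With this guard, `finalizeStreamingToolCall()` is never asked to finalize a call that `streamingToolCalls` has no record of.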

Test Procedure

  • Added 5 new test cases in NativeToolCallParser.spec.ts:
    1. Verifies tool_call_end is emitted only for started tool calls
    2. Verifies unstarted tool calls (no name) do NOT get tool_call_end events
    3. Tests mixed scenarios with both started and unstarted tool calls
    4. Tests non-tool_calls finish reasons return empty array
    5. Tests empty tracker returns empty array

Run tests:

cd src && npx vitest run core/assistant-message/__tests__/NativeToolCallParser.spec.ts

All 17 tests pass.

Pre-Submission Checklist

  • Issue Linked: This PR is linked to an approved GitHub Issue (see "Related GitHub Issue" above).
  • Scope: My changes are focused on the linked issue (one major feature/fix per PR).
  • Self-Review: I have performed a thorough self-review of my code.
  • Testing: New and/or updated tests have been added to cover my changes (if applicable).
  • Documentation Impact: I have considered if my changes require documentation updates (see "Documentation Updates" section below).
  • Contribution Guidelines: I have read and agree to the Contributor Guidelines.

Screenshots / Videos

N/A - Backend fix, no UI changes.

Documentation Updates

  • No documentation updates are required.

Additional Notes

This fix addresses the symptoms described in issue #11071 where users saw repeated console warnings:

[NativeToolCallParser] Attempting to finalize unknown tool call: call_...

The fix ensures proper synchronization between the dual tracking systems by respecting the hasStarted flag that indicates whether a tool call lifecycle was properly initiated.


Important

Fixes NativeToolCallParser to prevent tool_call_end events for unstarted tool calls, adding checks and tests for correct behavior.

  • Behavior:
    • Fixes processFinishReason() in NativeToolCallParser.ts to emit tool_call_end only for tool calls with hasStarted true.
    • Logs a warning for unstarted tool calls without names.
  • Tests:
    • Adds 5 test cases in NativeToolCallParser.spec.ts to verify correct tool_call_end emission and logging behavior.
    • Tests include scenarios for started, unstarted, mixed tool calls, and non-tool call finish reasons.

This description was created by Ellipsis for 24e8d92.

This fixes an issue where GLM4.5 and other models get stuck in infinite
tool call retry loops when using LM Studio or OpenAI-compatible endpoints.

Root cause: processFinishReason() was emitting tool_call_end events for
ALL tracked tool calls, even those that never had a tool_call_start
emitted (due to missing tool name). This caused finalizeStreamingToolCall
to fail silently, resulting in no tool_result being sent to the model.

The fix ensures processFinishReason() only emits tool_call_end for tool
calls where hasStarted=true, keeping the rawChunkTracker and
streamingToolCalls maps synchronized.

Also adds diagnostic logging when tool calls are tracked but never
started, helping debug malformed tool call patterns from various models.

roomote bot commented Jan 30, 2026


Review complete. No issues found.

The implementation correctly adds a hasStarted check in processFinishReason() to only emit tool_call_end events for tool calls that actually had their tool_call_start event emitted. This fix aligns with the existing behavior in finalizeRawChunks() and properly addresses the synchronization issue between the dual tracking systems (rawChunkTracker and streamingToolCalls).

Test coverage is comprehensive with 5 new test cases covering all relevant scenarios.


roomote bot pushed a commit that referenced this pull request Jan 30, 2026
…1071

This PR combines:
1. PR #11093 fix: NativeToolCallParser processFinishReason hasStarted check
2. GLM model detection utility for LM Studio and OpenAI-compatible providers
3. mergeToolResultText optimization for GLM models
4. Disable parallel_tool_calls for GLM models
5. GLM-4.7 thinking parameter support
6. Diagnostic logging for GLM detection

Closes #11071


Successfully merging this pull request may close these issues.

[BUG] GLM4.5 via LMStudio as well as via an OpenAI-compatible endpoint stuck repeating file reads
