
Conversation

@begelundmuller (Contributor) commented Nov 5, 2025

  • Adds instrumentation for tracking LLM input and output token usage
  • Emits a telemetry event named ai_completion when an LLM tool call loop finishes

Note that this ai_completion event is different from the existing ai_message event:

  • ai_message: one event per user/LLM/tool/progress message; use this to understand conversation length/actions
  • ai_completion: one event per LLM interaction; use this to understand token usage (see the sketch after this list)

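For illustration only, here is a minimal Go sketch (not the code from this PR) of emitting a single ai_completion event with aggregated token usage when the tool call loop finishes. The Emitter interface, CompletionUsage struct, and field names are assumptions made for the sketch, not APIs from this repository:

```go
package aiexample

import "go.uber.org/zap"

// CompletionUsage is an assumed shape for aggregated token counts; the
// actual PR may track these differently.
type CompletionUsage struct {
	InputTokens  int
	OutputTokens int
	Iterations   int
}

// Emitter stands in for whatever telemetry client emits events; it is a
// hypothetical interface for this sketch.
type Emitter interface {
	Emit(name string, attrs map[string]any)
}

// emitAICompletion fires one ai_completion event per LLM interaction
// (as opposed to ai_message, which fires once per message) and logs a
// debug line with the same counts.
func emitAICompletion(e Emitter, logger *zap.Logger, u CompletionUsage) {
	e.Emit("ai_completion", map[string]any{
		"input_tokens":  u.InputTokens,
		"output_tokens": u.OutputTokens,
		"iterations":    u.Iterations,
	})
	logger.Debug("completion finished",
		zap.Int("input_tokens", u.InputTokens),
		zap.Int("output_tokens", u.OutputTokens),
		zap.Int("iterations", u.Iterations),
	)
}
```
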
Checklist:

  • Covered by tests
  • Ran it and it works as intended
  • Reviewed the diff before requesting a review
  • Checked for unhandled edge cases
  • Linked the issues it closes
  • Checked if the docs need to be updated. If so, create a separate Linear DOCS issue
  • Intend to cherry-pick into the release branch
  • I'm proud of this work!

@begelundmuller self-assigned this Nov 5, 2025
@ericpgreen2 (Contributor) left a comment

Looks good! One finding below.

```go
)
defer func() {
	s.logger.Debug("completion finished",
		zap.Int("iterations", opts.MaxIterations),
```

Looks like this should log iterations, not opts.MaxIterations?

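For context, a hedged sketch of the change this comment suggests, assuming iterations is the loop counter maintained by the surrounding function:

```go
// Sketch of the suggested fix (variable names assumed from the snippet):
// log the loop's observed iteration count rather than the configured
// ceiling. opts.MaxIterations could still be logged under its own key
// if that limit remains useful to see.
defer func() {
	s.logger.Debug("completion finished",
		zap.Int("iterations", iterations),
	)
}()
```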