fix(langchain): report token counts when trace content is enabled #2899


Merged · 1 commit merged into main on May 7, 2025

Conversation

@galkleinman (Contributor) commented on May 7, 2025

Important

_set_chat_response() in callback_handler.py now always calculates and sets token counts, independent of prompt content tracing.

  • Behavior:
    • _set_chat_response() in callback_handler.py now calculates token counts regardless of the should_send_prompts() result (see the sketch after this list).
    • Token counts (input_tokens, output_tokens, total_tokens, cache_read_tokens) are always set as span attributes when they are greater than zero.
  • Misc:
    • Removed the early return in _set_chat_response() that was based on should_send_prompts().
    • Adjusted the logic so token details are processed independently of prompt content tracing.
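
A minimal sketch of the resulting control flow, assuming simplified span-attribute names and a hypothetical should_send_prompts() stand-in; the real instrumentation uses its own helpers and semantic-convention constants:

```python
# Minimal sketch of the new flow -- not the exact instrumentation code.
# Attribute names and the should_send_prompts() stand-in are illustrative.
import os


def should_send_prompts() -> bool:
    # Hypothetical stand-in for the content-tracing privacy switch.
    return os.getenv("TRACELOOP_TRACE_CONTENT", "true").lower() == "true"


def _set_chat_response(span, response) -> None:
    input_tokens = output_tokens = total_tokens = 0
    for generations in response.generations:
        for generation in generations:
            usage = getattr(generation.message, "usage_metadata", None) or {}
            input_tokens += usage.get("input_tokens", 0)
            output_tokens += usage.get("output_tokens", 0)
            total_tokens += usage.get("total_tokens", 0)

    # Token counts are now recorded unconditionally (no early return),
    # but only when they are greater than zero.
    if total_tokens > 0:
        span.set_attribute("gen_ai.usage.prompt_tokens", input_tokens)
        span.set_attribute("gen_ai.usage.completion_tokens", output_tokens)
        span.set_attribute("llm.usage.total_tokens", total_tokens)

    # Only the prompt/completion *content* stays behind the privacy gate.
    if not should_send_prompts():
        return
    # ... set message-content span attributes here ...
```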

This description was created by Ellipsis for 7be2996.

@ellipsis-dev (bot) left a comment


Important

Looks good to me! πŸ‘

Reviewed everything up to 7be2996 in 1 minute and 54 seconds.
  • Reviewed 132 lines of code in 1 file.
  • Skipped 0 files when reviewing.
  • Skipped posting 4 draft comments; view them below.
  • Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/opentelemetry-instrumentation-langchain/opentelemetry/instrumentation/langchain/callback_handler.py:163
  • Draft comment:
    Removal of the early return (if not should_send_prompts()) ensures token counts are always recorded. Confirm this is intentional and that no sensitive prompt data is recorded when prompts should be suppressed.
  • Reason this comment was not posted:
    Comment looked like it was already resolved.
2. packages/opentelemetry-instrumentation-langchain/opentelemetry/instrumentation/langchain/callback_handler.py:208
  • Draft comment:
    Consider using isinstance(generation.message.content, str) instead of 'is str' to accurately detect string content (see the snippet after this list).
  • Reason this comment was not posted:
    Comment was on unchanged code.
3. packages/opentelemetry-instrumentation-langchain/opentelemetry/instrumentation/langchain/callback_handler.py:163
  • Draft comment:
    Removing the early return now always processes token counts even when prompts are disabled. Confirm this is the intended behavior.
  • Reason this comment was not posted:
    Confidence changes required: 33% <= threshold 50%
4. packages/opentelemetry-instrumentation-langchain/opentelemetry/instrumentation/langchain/callback_handler.py:209
  • Draft comment:
    Consider replacing 'if generation.message.content is str:' with isinstance(generation.message.content, str) for correct type checking.
  • Reason this comment was not posted:
    Comment was on unchanged code.
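
For context on the repeated isinstance suggestion: in Python, `x is str` tests identity against the `str` type object itself, so it is False even for real strings. A quick illustration:

```python
content = "hello"

# `is str` compares the value against the type object itself,
# so it is False even for an actual string instance.
print(content is str)            # False
# isinstance() is the correct type check.
print(isinstance(content, str))  # True
```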

Workflow ID: wflow_TWTaKPpvbgPVJp9n

You can customize Ellipsis by changing your verbosity settings, reacting with πŸ‘ or πŸ‘Ž, replying to comments, or adding code review rules.

@galkleinman galkleinman closed this May 7, 2025
@galkleinman galkleinman reopened this May 7, 2025
@galkleinman galkleinman merged commit ed287f8 into main May 7, 2025
16 checks passed