
Conversation

Contributor

@hassiebp commented Sep 19, 2025

Important

Fixes token calculation in parseUsageDetails in parseOpenAI.ts to ensure accurate token usage reporting by subtracting detailed token counts and preventing negative values.

  • Behavior:
    • Fixes token calculation in parseUsageDetails in parseOpenAI.ts by subtracting detailed token counts from input_tokens and output_tokens.
    • Ensures token counts do not go below zero using Math.max().
  • Functions:
    • Modifies parseUsageDetails to handle input_tokens_details and output_tokens_details correctly.
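The described fix can be sketched as follows. This is a minimal illustration, not the actual Langfuse source: the function and type names (`parseUsageDetailsSketch`, `ResponseUsage`) are hypothetical, while the field names (`input_tokens`, `output_tokens`, `input_tokens_details.cached_tokens`, `output_tokens_details.reasoning_tokens`) come from OpenAI's Responses API usage object.

```typescript
// Hypothetical sketch of the fix: detailed token counts are subtracted
// from the totals, and Math.max() prevents negative results.

interface ResponseUsage {
  input_tokens: number;
  output_tokens: number;
  input_tokens_details?: { cached_tokens?: number };
  output_tokens_details?: { reasoning_tokens?: number };
}

function parseUsageDetailsSketch(usage: ResponseUsage) {
  const cached = usage.input_tokens_details?.cached_tokens ?? 0;
  const reasoning = usage.output_tokens_details?.reasoning_tokens ?? 0;

  return {
    // Subtract the detailed counts so components are not double-counted,
    // clamping at zero so a malformed response cannot yield negative usage.
    input: Math.max(0, usage.input_tokens - cached),
    input_cached_tokens: cached,
    output: Math.max(0, usage.output_tokens - reasoning),
    output_reasoning_tokens: reasoning,
  };
}

// Example: 100 input tokens of which 40 were cached -> input = 60.
const parsed = parseUsageDetailsSketch({
  input_tokens: 100,
  output_tokens: 50,
  input_tokens_details: { cached_tokens: 40 },
  output_tokens_details: { reasoning_tokens: 10 },
});
```

The clamping matters because the totals and the detailed breakdowns arrive from the API independently; without `Math.max()`, an inconsistent response would propagate negative token counts into billing and analytics.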

This description was created by Ellipsis for 3e3e24a.

Disclaimer: Experimental PR review

Greptile Summary

Updated On: 2025-09-19 12:49:24 UTC

This PR modifies the parseUsageDetails function in packages/openai/src/parseOpenAI.ts to handle OpenAI's response usage format more accurately. The change specifically addresses how token counts are calculated when using the newer input_tokens/output_tokens format versus the legacy prompt_tokens/completion_tokens format.

The key modification introduces subtraction logic for the input_tokens/output_tokens branch (lines 140-156), where detailed token breakdowns are subtracted from the main token counts before returning them as 'input' and 'output' fields. This suggests that OpenAI's newer API format includes overlapping counts where the main token count represents the sum of all components, including the detailed breakdowns, requiring subtraction to avoid double-counting.

However, this creates an asymmetric handling pattern between the two API response formats. The prompt_tokens/completion_tokens branch (lines 114-116) continues to return raw values without any subtraction, while the input_tokens/output_tokens branch now performs complex subtraction calculations. This divergent approach indicates the developer is responding to specific API behavior differences between OpenAI's legacy and newer response formats.
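The asymmetry between the two branches can be illustrated with a short sketch. This is an assumption about the shape of the logic, not the actual `parseUsageDetails` implementation; only the usage field names are taken from OpenAI's API.

```typescript
// Hypothetical sketch of the two response-format branches.

type Usage =
  | { prompt_tokens: number; completion_tokens: number } // legacy Chat Completions format
  | {
      input_tokens: number;
      output_tokens: number;
      output_tokens_details?: { reasoning_tokens?: number };
    }; // newer Responses format

function parseUsage(usage: Usage) {
  if ("prompt_tokens" in usage) {
    // Legacy branch: raw values are passed through unchanged.
    return { input: usage.prompt_tokens, output: usage.completion_tokens };
  }
  // Newer branch: detailed counts are assumed to be included in the totals,
  // so they are subtracted (and clamped) to avoid double-counting.
  const reasoning = usage.output_tokens_details?.reasoning_tokens ?? 0;
  return {
    input: usage.input_tokens,
    output: Math.max(0, usage.output_tokens - reasoning),
    output_reasoning_tokens: reasoning,
  };
}

const legacy = parseUsage({ prompt_tokens: 10, completion_tokens: 5 });
const modern = parseUsage({
  input_tokens: 100,
  output_tokens: 50,
  output_tokens_details: { reasoning_tokens: 20 },
});
```

Under this reading, the divergence is deliberate: the legacy format reports disjoint counts, while the newer format reports totals that already include the detailed breakdowns.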

The function is part of the OpenAI integration layer that parses usage statistics from OpenAI API responses and transforms them into Langfuse's internal UsageDetails format, making accurate token counting critical for billing and analytics purposes.

Confidence score: 2/5

  • This PR introduces potentially risky asymmetric logic that could lead to inconsistent token counting across different OpenAI API response formats
  • Score reflects concern about the complex subtraction logic and lack of documentation explaining why different formats require different handling approaches
  • The parseUsageDetails function requires careful attention due to the introduction of asymmetric token calculation logic that differs significantly between API response formats


vercel bot commented Sep 19, 2025

The latest updates on your projects. Learn more about Vercel for GitHub.

Project: langfuse-js | Deployment: Ready | Preview: Ready | Updated (UTC): Sep 19, 2025 0:49am

Contributor

@greptile-apps bot left a comment


1 file reviewed, no comments


@hassiebp merged commit 882146b into main Sep 19, 2025
8 checks passed
@hassiebp deleted the fix-openai-responses-usage branch September 19, 2025 14:41