fix(openai): parse response usage #641
Merged
Important

Fixes token calculation in `parseUsageDetails` in `parseOpenAI.ts` to ensure accurate token usage reporting by subtracting detailed token counts and preventing negative values.

- Fixes token calculation in `parseUsageDetails` in `parseOpenAI.ts` by subtracting detailed token counts from `input_tokens` and `output_tokens`.
- Uses `Math.max()` to prevent negative token values.
- Updates `parseUsageDetails` to handle `input_tokens_details` and `output_tokens_details` correctly.

This description was created by for 3e3e24a. You can customize this summary. It will automatically update as commits are pushed.
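The description above can be sketched as follows. This is a minimal illustration, not the actual patch: the `ResponseUsage` shape mirrors the fields named in the PR (`input_tokens`, `output_tokens`, `input_tokens_details`, `output_tokens_details`), while the `parseUsage` helper and its detail-field choices (`cached_tokens`, `reasoning_tokens`) are assumptions for the sketch.

```typescript
// Assumed shape of the newer OpenAI usage object referenced in this PR.
interface ResponseUsage {
  input_tokens: number;
  output_tokens: number;
  input_tokens_details?: { cached_tokens?: number };
  output_tokens_details?: { reasoning_tokens?: number };
}

// Sketch of the fix: subtract the detailed breakdowns from the top-level
// counts (which already include them), and clamp with Math.max() so a
// malformed payload can never produce negative usage values.
function parseUsage(usage: ResponseUsage) {
  const cached = usage.input_tokens_details?.cached_tokens ?? 0;
  const reasoning = usage.output_tokens_details?.reasoning_tokens ?? 0;
  return {
    input: Math.max(0, usage.input_tokens - cached),
    input_cached: cached,
    output: Math.max(0, usage.output_tokens - reasoning),
    output_reasoning: reasoning,
  };
}
```

For example, `input_tokens: 100` with `cached_tokens: 20` yields `input: 80` plus `input_cached: 20`, so the parts sum back to the original total instead of double-counting.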
Disclaimer: Experimental PR review
Greptile Summary
Updated On: 2025-09-19 12:49:24 UTC
This PR modifies the `parseUsageDetails` function in `packages/openai/src/parseOpenAI.ts` to handle OpenAI's response usage format more accurately. The change specifically addresses how token counts are calculated when using the newer `input_tokens`/`output_tokens` format versus the legacy `prompt_tokens`/`completion_tokens` format.

The key modification introduces subtraction logic for the `input_tokens`/`output_tokens` branch (lines 140-156), where detailed token breakdowns are subtracted from the main token counts before returning them as 'input' and 'output' fields. This suggests that OpenAI's newer API format includes overlapping counts, where the main token count represents the sum of all components, including the detailed breakdowns, so subtraction is required to avoid double-counting.

However, this creates an asymmetric handling pattern between the two API response formats. The `prompt_tokens`/`completion_tokens` branch (lines 114-116) continues to return raw values without any subtraction, while the `input_tokens`/`output_tokens` branch now performs the subtraction calculations. This divergent approach indicates the developer is responding to specific behavioral differences between OpenAI's legacy and newer response formats.

The function is part of the OpenAI integration layer that parses usage statistics from OpenAI API responses and transforms them into Langfuse's internal `UsageDetails` format, making accurate token counting critical for billing and analytics purposes.

Confidence score: 2/5

- The `parseUsageDetails` function requires careful attention due to the asymmetric token calculation logic that now differs significantly between the two API response formats.
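The asymmetry described in the summary can be sketched in a few lines. This is an illustrative reconstruction, not the code from the PR: the union types and the `parseUsageDetailsSketch` helper are hypothetical, and only the field names (`prompt_tokens`, `completion_tokens`, `input_tokens`, `output_tokens`, `input_tokens_details`) come from the discussion above.

```typescript
// Hypothetical shapes for the two usage formats discussed in the review.
type LegacyUsage = {
  prompt_tokens: number;
  completion_tokens: number;
};
type NewUsage = {
  input_tokens: number;
  output_tokens: number;
  input_tokens_details?: { cached_tokens?: number };
};

// Sketch of the asymmetric branching: the legacy branch returns raw
// values untouched, while the newer branch subtracts the detailed
// breakdowns to avoid double-counting overlapping totals.
function parseUsageDetailsSketch(usage: LegacyUsage | NewUsage) {
  if ("prompt_tokens" in usage) {
    // Legacy format: counts are already disjoint, so pass them through.
    return { input: usage.prompt_tokens, output: usage.completion_tokens };
  }
  // Newer format: top-level counts include the detail fields.
  const cached = usage.input_tokens_details?.cached_tokens ?? 0;
  return {
    input: Math.max(0, usage.input_tokens - cached),
    input_cached: cached,
    output: usage.output_tokens,
  };
}
```

The divergence is deliberate rather than accidental under this reading: only the newer format reports overlapping totals, so only that branch needs the subtraction.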