fix(openai): handle async response api input and outputs #1301
Important

Fixes compatibility with OpenAI's AsyncResponses API by updating conditional checks in `langfuse/openai.py` to handle both sync (`Responses`) and async (`AsyncResponses`) response objects, and adjusts `max_completion_tokens` handling in `modelParameters` (lines 408-414, 451-453).
Disclaimer: Experimental PR review
Greptile Summary
This PR fixes a compatibility issue with OpenAI's AsyncResponses API in the Langfuse OpenAI integration. The OpenAI Python SDK version 1.66.0 introduced separate `Responses` (sync) and `AsyncResponses` (async) objects for the response API endpoints, but the existing Langfuse code was only checking for `Responses` objects. The fix extends four key conditional checks throughout the OpenAI integration to handle both object types.
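The shape of the fix can be sketched as follows. This is an illustrative stand-in, not the actual code in `langfuse/openai.py`: the class names mirror the sync/async split in the OpenAI SDK, but the real conditionals may differ in detail.

```python
class Responses:
    """Stand-in for the OpenAI SDK's sync responses client."""


class AsyncResponses:
    """Stand-in for the async responses client introduced in openai >= 1.66.0."""


def uses_response_api(resource) -> bool:
    # Before the fix, only the sync class was checked, so async calls
    # fell through to the default chat-completion logic. Checking both
    # classes routes sync and async calls down the same response-API path.
    return isinstance(resource, (Responses, AsyncResponses))
```

Each of the four conditional checks mentioned above would be widened in this way, so both client variants receive identical input/output extraction.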
Without this fix, async response API calls would incorrectly fall through to the default chat completion logic, leading to improper input/output extraction and potentially incorrect trace data in Langfuse. The change maintains backward compatibility while ensuring that both sync and async response API calls receive identical treatment in terms of prompt extraction, output processing, and streaming response handling.
This change fits into the broader OpenAI integration pattern where different API endpoints (chat completions vs. response APIs) require different handling logic, and the code uses the `resource.object` property to determine the appropriate processing path.

Confidence score: 5/5
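A minimal sketch of that dispatch idea, assuming a response object carries an `object` field distinguishing the endpoints; the branch labels are illustrative, not Langfuse's actual return values:

```python
from types import SimpleNamespace


def pick_processing_path(resource) -> str:
    # Inspect the `object` field to decide which extraction logic applies:
    # response-API objects report "response", chat completions report
    # something else (e.g. "chat.completion").
    if getattr(resource, "object", None) == "response":
        return "response-api"
    return "chat-completion"


# Usage: a response-API result and a chat completion take different paths.
print(pick_processing_path(SimpleNamespace(object="response")))         # response-api
print(pick_processing_path(SimpleNamespace(object="chat.completion")))  # chat-completion
```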