Conversation

@hassiebp (Contributor) commented Aug 18, 2025

Important

Fixes compatibility with OpenAI's AsyncResponses API by updating conditional checks in langfuse/openai.py to handle both sync and async response objects.

  • Behavior:
    • Updates conditional checks in langfuse/openai.py to handle both Responses and AsyncResponses objects.
    • Ensures prompt extraction uses 'input' for async responses (lines 394-395).
    • Corrects output processing for async responses (lines 675-676).
    • Maintains consistent handling for sync and async response streaming (lines 924-926, 995-997).
  • Misc:
    • Adds handling for max_completion_tokens in modelParameters (lines 408-414, 451-453); a sketch follows below.
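A minimal sketch of what that parameter handling could look like, assuming the integration collects call kwargs into a modelParameters dict; the function name and the set of tracked keys are illustrative, not the actual code in langfuse/openai.py:

```python
def extract_model_parameters(kwargs: dict) -> dict:
    """Collect model parameters for the trace, including the newer
    max_completion_tokens alongside max_tokens. Tracked keys are assumed."""
    tracked = ("temperature", "top_p", "max_tokens", "max_completion_tokens")
    return {k: v for k, v in kwargs.items() if k in tracked and v is not None}
```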

This description was created by Ellipsis for c61b70b.


Disclaimer: Experimental PR review

Greptile Summary

This PR fixes a compatibility issue with OpenAI's AsyncResponses API in the Langfuse OpenAI integration. OpenAI's Python SDK 1.66.0 introduced separate Responses (sync) and AsyncResponses (async) resource objects for the Responses API endpoints, but the existing Langfuse code checked only for the sync Responses object.
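For reference, both resource classes ship side by side in the SDK; this import path holds for recent openai releases (>= 1.66.0) but is worth verifying against the installed version:

```python
from openai.resources.responses import AsyncResponses, Responses
```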

The fix extends four key conditional checks throughout the OpenAI integration to handle both object types (the first is sketched after this list):

  • Prompt extraction logic (lines 394-395): Ensures async response API calls extract prompts from the 'input' parameter rather than treating them as chat completions
  • Response parsing logic (lines 675-676): Ensures async responses use the correct output processing specific to response APIs
  • Sync streaming finalization (lines 924-926): Maintains consistent handling for both sync and async response streaming
  • Async streaming finalization (lines 995-997): Ensures async streaming responses are processed with the same logic as sync ones
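A minimal sketch of the first check, assuming resource.object carries the resource class name as described later in this summary; the helper name and exact string values are hypothetical:

```python
def extract_prompt(resource, kwargs: dict):
    # Response-API calls (sync or async) carry their prompt in `input`,
    # while chat completions use `messages`. Before this fix only the
    # sync "Responses" value matched, so async calls fell through.
    if resource.object in ("Responses", "AsyncResponses"):
        return kwargs.get("input")
    return kwargs.get("messages")
```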

Without this fix, async response API calls would incorrectly fall through to the default chat completion logic, leading to improper input/output extraction and potentially incorrect trace data in Langfuse. The change maintains backward compatibility while ensuring that both sync and async response API calls receive identical treatment in terms of prompt extraction, output processing, and streaming response handling.
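For illustration, this is the kind of call that previously produced empty input/output in a Langfuse trace and is traced correctly after this fix; the model name is an arbitrary example:

```python
import asyncio

from langfuse.openai import AsyncOpenAI  # Langfuse drop-in for openai.AsyncOpenAI

async def main() -> None:
    client = AsyncOpenAI()
    response = await client.responses.create(
        model="gpt-4o-mini",  # example model
        input="Summarize distributed tracing in one sentence.",
    )
    # With the fix, this input/output pair also lands on the Langfuse trace.
    print(response.output_text)

asyncio.run(main())
```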

This change fits into the broader OpenAI integration pattern where different API endpoints (chat completions vs. response APIs) require different handling logic, and the code uses the resource.object property to determine the appropriate processing path.
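A sketch of that dispatch on the output side (lines 675-676), with simple attribute access standing in for Langfuse's internal parsing helpers; this is a simplified assumption, not the actual implementation:

```python
def parse_output(resource, openai_response):
    if resource.object in ("Responses", "AsyncResponses"):
        # Response-API results expose the generated text via output_text.
        return getattr(openai_response, "output_text", None)
    # Chat completions expose it via choices[0].message.
    return openai_response.choices[0].message.content
```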

Confidence score: 5/5

  • This PR is safe to merge with minimal risk
  • Score reflects a simple, targeted fix that addresses a specific compatibility issue without introducing complex logic changes
  • No files require special attention as the changes are straightforward conditional additions

@greptile-apps bot left a comment
1 file reviewed, no comments


@hassiebp linked an issue Aug 18, 2025 that may be closed by this pull request
@hassiebp merged commit c1b8393 into main Aug 18, 2025
7 of 10 checks passed
@hassiebp deleted the fix-openai-async-responses branch Aug 18, 2025 13:34
Development

Successfully merging this pull request may close these issues.

  • bug: max-token are not displayed using pydantic-ai
  • bug: langfuse show input, output are empty with AsyncOpenAI + responses api