fix: requests to the GPT-5 series should not include `top_p` or `temperature` when reasoning effort is not none
#10800
Conversation
@cy948 is attempting to deploy a commit to the LobeHub OSS Team on Vercel. A member of the Team first needs to authorize it.
Reviewer's Guide

Adjusts OpenAI GPT-5 reasoning payload construction so that `temperature` and `top_p` are omitted (instead of forced to 1) whenever `reasoning_effort` is not `'none'`, aligning with OpenAI parameter compatibility requirements.

Flow diagram for the updated GPT-5 reasoning payload pruning logic:

```mermaid
flowchart TD
    Start([Start pruneReasoningPayload])
    A[Receive ChatStreamPayload payload]
    B[Determine shouldStream from payload]
    C[Determine isEffortNone from payload.reasoning_effort]
    Start --> A --> B --> C
    C -->|isEffortNone is true| D[Set temperature to payload.temperature]
    C -->|isEffortNone is false| E[Set temperature to undefined]
    C -->|isEffortNone is true| F[Set top_p to payload.top_p]
    C -->|isEffortNone is false| G[Set top_p to undefined]
    B --> H[Set stream to shouldStream]
    B --> I{shouldStream and stream_options present?}
    I -->|yes| J[Include stream_options]
    I -->|no| K[Omit stream_options]
    D --> L[Build pruned payload]
    E --> L
    F --> L
    G --> L
    H --> L
    J --> L
    K --> L
    L --> End([Return pruned payload])
```
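The flow above can be sketched in TypeScript. The function name `pruneReasoningPayload` comes from the PR, but this trimmed-down `ChatStreamPayload` shape and the exact return construction are assumptions made for illustration, not the real model-runtime types:

```typescript
// Illustrative sketch of the pruning flow described above. The real
// ChatStreamPayload type lives in lobe-chat's model-runtime package;
// this reduced shape is an assumption for the example.
interface ChatStreamPayload {
  model: string;
  reasoning_effort?: 'none' | 'minimal' | 'low' | 'medium' | 'high';
  temperature?: number;
  top_p?: number;
  stream?: boolean;
  stream_options?: { include_usage?: boolean };
}

function pruneReasoningPayload(payload: ChatStreamPayload): ChatStreamPayload {
  const shouldStream = payload.stream ?? true;
  const isEffortNone = payload.reasoning_effort === 'none';
  const { stream_options, ...rest } = payload;

  return {
    ...rest,
    // temperature/top_p are only accepted when reasoning effort is 'none',
    // so they become undefined (and are omitted on serialization) otherwise.
    temperature: isEffortNone ? payload.temperature : undefined,
    top_p: isEffortNone ? payload.top_p : undefined,
    stream: shouldStream,
    // stream_options is only forwarded when actually streaming.
    ...(shouldStream && stream_options ? { stream_options } : {}),
  };
}
```

The key change versus the previous behavior is returning `undefined` rather than forcing the fields to 1 when reasoning effort is active.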
Hey there - I've reviewed your changes - here's some feedback:
- The inline comment mentions `logprobs`, but the pruning logic only handles `temperature` and `top_p`; consider either also handling `logprobs` here (if applicable) or updating the comment to match the actual behavior.
- If there are other code paths constructing payloads for the GPT-5 series (e.g., non-stream or non-chat builders), ensure they apply the same `reasoning_effort`-based pruning so behavior is consistent across request types.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- The inline comment mentions `logprobs` but the pruning logic only handles `temperature` and `top_p`; consider either also handling `logprobs` here (if applicable) or updating the comment to match the actual behavior.
- If there are other code paths constructing payloads for GPT-5 series (e.g., non-stream or non-chat builders), ensure they apply the same `reasoning_effort`-based pruning so behavior is consistent across request types.
## Individual Comments
### Comment 1
<location> `packages/model-runtime/src/core/contextBuilders/openai.ts:160-164` </location>
<code_context>
+ /**
+ * In openai docs: https://platform.openai.com/docs/guides/latest-model#gpt-5-2-parameter-compatibility
+ * Fields like `top_p`, `temperature` and `logprobs` only supported to
+ * GPT-5 series (e.g. 5-mini 5-nano ) when reasoning effort is none
+ */
+ temperature: isEffortNone ? payload.temperature : undefined,
</code_context>
<issue_to_address>
**nitpick (typo):** Minor wording/punctuation tweak in the new OpenAI docs comment.
The parenthetical is missing punctuation. Please update to something like `GPT-5 series (e.g. 5-mini, 5-nano)` or `5-mini and 5-nano` for clarity.
```suggestion
/**
* In OpenAI docs: https://platform.openai.com/docs/guides/latest-model#gpt-5-2-parameter-compatibility
* Fields like `top_p`, `temperature` and `logprobs` are only supported
* for the GPT-5 series (e.g. 5-mini, 5-nano) when reasoning effort is none.
*/
```
</issue_to_address>
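One hypothetical way to act on the reviewer's `logprobs` note is to extend the same effort-based guard to all three tunables. The helper name and shape below are illustrative assumptions, not part of the PR diff:

```typescript
// Hypothetical helper extending the PR's guard to logprobs as well,
// per the review suggestion. Names here are illustrative assumptions.
interface Gpt5Tunables {
  temperature?: number;
  top_p?: number;
  logprobs?: boolean;
}

function pruneGpt5Tunables<T extends Gpt5Tunables>(payload: T, isEffortNone: boolean): T {
  // When reasoning effort is 'none', GPT-5 accepts the sampling tunables as-is.
  if (isEffortNone) return payload;
  // Otherwise drop them entirely instead of pinning them to defaults.
  const { temperature, top_p, logprobs, ...rest } = payload;
  return rest as T;
}
```

Centralizing the guard in one helper would also address the second review point: any payload builder (stream or non-stream) can call it and get consistent behavior.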
Codecov Report

All modified and coverable lines are covered by tests.

Additional details and impacted files:

```
@@ Coverage Diff @@
##             next   #10800   +/-   ##
=======================================
  Coverage   80.31%   80.31%
=======================================
  Files         980      980
  Lines       66983    66983
  Branches     8813     8813
=======================================
  Hits        53800    53800
  Misses      13183    13183
```

Flags with carried forward coverage won't be shown.
❤️ Great PR @cy948 ❤️ The growth of the project is inseparable from user feedback and contributions; thanks for your contribution! If you are interested in the LobeHub developer community, please join our Discord and then DM @arvinxx or @canisminor1990. They will invite you to our private developer channel, where we talk about lobe-chat development and share AI news from around the world.
## [Version 2.0.0-next.173](v2.0.0-next.172...v2.0.0-next.173)

Released on **2025-12-16**

#### Bug Fixes

- **misc**: Request to gpt5 series should not with `top_p`, temperature when reasoning effort is not none.

<details>
<summary><kbd>Improvements and Fixes</kbd></summary>

#### What's fixed

* **misc**: Request to gpt5 series should not with `top_p`, temperature when reasoning effort is not none, closes [#10800](#10800) ([b4ad470](b4ad470))

</details>
This PR is included in version 2.0.0-next.173. Your semantic-release bot.
### [Version 1.145.1](v1.145.0...v1.145.1)

Released on **2025-12-16**

#### Bug Fixes

- **misc**: Request to gpt5 series should not with `top_p`, temperature when reasoning effort is not none.

#### Styles

- **misc**: Update GPT-5.2 models, update i18n.

<details>
<summary><kbd>Improvements and Fixes</kbd></summary>

#### What's fixed

* **misc**: Request to gpt5 series should not with `top_p`, temperature when reasoning effort is not none, closes [lobehub#10800](https://github.com/jaworldwideorg/OneJA-Bot/issues/10800) ([b4ad470](b4ad470))

#### Styles

* **misc**: Update GPT-5.2 models, closes [lobehub#10749](https://github.com/jaworldwideorg/OneJA-Bot/issues/10749) ([0446127](0446127))
* **misc**: Update i18n, closes [lobehub#10759](https://github.com/jaworldwideorg/OneJA-Bot/issues/10759) ([24cae77](24cae77))

</details>
Change Type

Related Issue

Description of Change

- `packages/model-runtime/src/core/contextBuilders/openai.ts`: prune parameters when reasoning effort is enabled (gpt-5-2-parameter-compatibility | openai.com)
- `packages/model-runtime/src/providers/github/index.test.ts`: should not use temperature with reasoning model

How to Test
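Setting the pruned fields to `undefined` (rather than 1) matters because `JSON.stringify` drops `undefined`-valued keys, so the parameters never reach the OpenAI API at all. A quick sketch with illustrative payload values:

```typescript
// JSON.stringify omits object keys whose value is undefined, so undefined
// temperature/top_p disappear from the serialized request body entirely.
// The payload values here are illustrative, not from the PR.
const prunedPayload = {
  model: 'gpt-5-mini',
  reasoning_effort: 'high',
  temperature: undefined,
  top_p: undefined,
  messages: [{ role: 'user', content: 'hello' }],
};

const body = JSON.stringify(prunedPayload);
// body contains model, reasoning_effort, and messages,
// but no "temperature" or "top_p" keys.
```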
Screenshots / Videos

Additional Information
Summary by Sourcery
Bug Fixes: