
Conversation

@cy948 (Contributor) commented Dec 16, 2025

πŸ’» Change Type

  • ✨ feat
  • πŸ› fix
  • ♻️ refactor
  • πŸ’„ style
  • πŸ‘· build
  • ⚑️ perf
  • βœ… test
  • πŸ“ docs
  • πŸ”¨ chore

πŸ”— Related Issue

πŸ”€ Description of Change

  • πŸ› packages/model-runtime/src/core/contextBuilders/openai.ts: prune parameter when enable reasoning effort. gpt-5-2-parameter-compatibility | openai.com
  • πŸ§ͺ packages/model-runtime/src/providers/github/index.test.ts: should not use temperature with reasoning model

πŸ§ͺ How to Test

  • Tested on preview
  • Added/updated tests
  • No tests needed

πŸ“Έ Screenshots / Videos

Before / After screenshots (images not reproduced here).

πŸ“ Additional Information

Summary by Sourcery

Bug Fixes:

  • Avoid sending temperature and top_p parameters to GPT-5 series models when reasoning effort is not none, aligning with OpenAI API requirements.

vercel bot commented Dec 16, 2025

@cy948 is attempting to deploy a commit to the LobeHub OSS Team on Vercel.

A member of the Team first needs to authorize it.

dosubot bot added the "size:S" label (This PR changes 10-29 lines, ignoring generated files) on Dec 16, 2025
sourcery-ai bot (Contributor) commented Dec 16, 2025

Reviewer's Guide

Adjusts OpenAI GPT-5 reasoning payload construction so that temperature and top_p are omitted (instead of forced to 1) whenever reasoning_effort is not 'none', aligning with OpenAI parameter compatibility requirements.

Flow diagram for updated GPT5 reasoning payload pruning logic

flowchart TD
  Start([Start pruneReasoningPayload])
  A[Receive ChatStreamPayload payload]
  B[Determine shouldStream from payload]
  C[Determine isEffortNone from payload.reasoning_effort]

  Start --> A --> B --> C

  C -->|isEffortNone is true| D[Set temperature to payload.temperature]
  C -->|isEffortNone is false| E[Set temperature to undefined]

  C -->|isEffortNone is true| F[Set top_p to payload.top_p]
  C -->|isEffortNone is false| G[Set top_p to undefined]

  B --> H[Set stream to shouldStream]
  B --> I{shouldStream and stream_options present?}
  I -->|yes| J[Include stream_options]
  I -->|no| K[Omit stream_options]

  D --> L[Build pruned payload]
  E --> L
  F --> L
  G --> L
  H --> L
  J --> L
  K --> L

  L --> End([Return pruned payload])
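The flow above corresponds roughly to the following sketch. Field names follow the diagram and the guide; this is an illustrative reconstruction, not the exact source of `openai.ts`:

```typescript
// Sketch of the pruning logic described by the flow diagram above.
interface ChatStreamPayload {
  model: string;
  stream?: boolean;
  stream_options?: { include_usage?: boolean };
  temperature?: number;
  top_p?: number;
  reasoning_effort?: 'none' | 'low' | 'medium' | 'high';
}

function pruneReasoningPayload(payload: ChatStreamPayload): ChatStreamPayload {
  const { stream_options, ...rest } = payload;
  const shouldStream = payload.stream ?? true;
  const isEffortNone = payload.reasoning_effort === 'none';

  return {
    ...rest,
    // Per https://platform.openai.com/docs/guides/latest-model#gpt-5-2-parameter-compatibility
    // GPT-5 series models only accept temperature/top_p when reasoning effort
    // is 'none', so the fields are omitted instead of being forced to 1.
    temperature: isEffortNone ? payload.temperature : undefined,
    top_p: isEffortNone ? payload.top_p : undefined,
    stream: shouldStream,
    ...(shouldStream && stream_options ? { stream_options } : {}),
  };
}
```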

File-Level Changes

Change: Modify pruning of the reasoning payload for GPT-5 models to drop `temperature` and `top_p` when reasoning effort is enabled.
Details:
  β€’ Keep existing logic for `stream` and `stream_options` based on `shouldStream`.
  β€’ Replace fallback values of 1 for `temperature` and `top_p` with omission (`undefined`) when reasoning effort is not 'none'.
  β€’ Document the GPT-5 parameter compatibility constraint in a block comment with a link to the OpenAI docs.
Files: packages/model-runtime/src/core/contextBuilders/openai.ts

Possibly linked issues

  • #(not provided): PR prunes top_p and temperature when reasoning effort is set, matching GPT-5.2 parameter compatibility issue.


gru-agent bot (Contributor) commented Dec 16, 2025

⏳ Processing in progress

sourcery-ai bot (Contributor) left a comment

Hey there - I've reviewed your changes - here's some feedback:

  • The inline comment mentions logprobs but the pruning logic only handles temperature and top_p; consider either also handling logprobs here (if applicable) or updating the comment to match the actual behavior.
  • If there are other code paths constructing payloads for GPT-5 series (e.g., non-stream or non-chat builders), ensure they apply the same reasoning_effort-based pruning so behavior is consistent across request types.
## Individual Comments

### Comment 1
<location> `packages/model-runtime/src/core/contextBuilders/openai.ts:160-164` </location>
<code_context>
+    /**
+     *  In openai docs: https://platform.openai.com/docs/guides/latest-model#gpt-5-2-parameter-compatibility
+     *  Fields like `top_p`, `temperature` and `logprobs` only supported to
+     *  GPT-5 series (e.g. 5-mini 5-nano ) when reasoning effort is none
+     */
+    temperature: isEffortNone ? payload.temperature : undefined,
</code_context>

<issue_to_address>
**nitpick (typo):** Minor wording/punctuation tweak in the new OpenAI docs comment.

The parenthetical is missing punctuation. Please update to something like `GPT-5 series (e.g. 5-mini, 5-nano)` or `5-mini and 5-nano` for clarity.

```suggestion
    /**
     *  In OpenAI docs: https://platform.openai.com/docs/guides/latest-model#gpt-5-2-parameter-compatibility
     *  Fields like `top_p`, `temperature` and `logprobs` are only supported
     *  for the GPT-5 series (e.g. 5-mini, 5-nano) when reasoning effort is none.
     */
```
</issue_to_address>


codecov bot commented Dec 16, 2025

Codecov Report

βœ… All modified and coverable lines are covered by tests.
βœ… Project coverage is 80.31%. Comparing base (a32e0cc) to head (2752c27).
⚠️ Report is 6 commits behind head on next.

Additional details and impacted files
@@           Coverage Diff           @@
##             next   #10800   +/-   ##
=======================================
  Coverage   80.31%   80.31%           
=======================================
  Files         980      980           
  Lines       66983    66983           
  Branches     8813     8813           
=======================================
  Hits        53800    53800           
  Misses      13183    13183           
Flag Coverage Ξ”
app 73.10% <ΓΈ> (ΓΈ)
database 98.25% <ΓΈ> (ΓΈ)
packages/agent-runtime 98.08% <ΓΈ> (ΓΈ)
packages/context-engine 91.61% <ΓΈ> (ΓΈ)
packages/conversation-flow 98.05% <ΓΈ> (ΓΈ)
packages/electron-server-ipc 93.76% <ΓΈ> (ΓΈ)
packages/file-loaders 92.21% <ΓΈ> (ΓΈ)
packages/model-bank 100.00% <ΓΈ> (ΓΈ)
packages/model-runtime 89.60% <100.00%> (ΓΈ)
packages/prompts 79.17% <ΓΈ> (ΓΈ)
packages/python-interpreter 96.50% <ΓΈ> (ΓΈ)
packages/utils 95.31% <ΓΈ> (ΓΈ)
packages/web-crawler 96.81% <ΓΈ> (ΓΈ)

Flags with carried forward coverage won't be shown. Click here to find out more.

Components Coverage Ξ”
Store 73.05% <ΓΈ> (ΓΈ)
Services 56.44% <ΓΈ> (ΓΈ)
Server 75.19% <ΓΈ> (ΓΈ)
Libs 39.30% <ΓΈ> (ΓΈ)
Utils 82.05% <ΓΈ> (ΓΈ)

@arvinxx changed the title to "πŸ› fix: request to gpt5 series should not include top_p, temperature when reasoning effort is not none" on Dec 16, 2025
vercel bot commented Dec 16, 2025

The latest updates on your projects. Learn more about Vercel for GitHub.

Project: lobehub | Deployment: Ready | Review: Preview, Comment | Updated (UTC): Dec 16, 2025 4:30am

@arvinxx merged commit b4ad470 into lobehub:next on Dec 16, 2025
28 of 29 checks passed
@lobehubbot (Member) commented
❀️ Great PR @cy948 ❀️

The growth of the project is inseparable from user feedback and contributions, so thank you for your contribution! If you are interested in the lobehub developer community, please join our Discord and DM @arvinxx or @canisminor1990; they will invite you to our private developer channel, where we discuss lobe-chat development and share AI news from around the world.

lobehubbot pushed a commit that referenced this pull request Dec 16, 2025
## [Version&nbsp;2.0.0-next.173](v2.0.0-next.172...v2.0.0-next.173)
<sup>Released on **2025-12-16**</sup>

#### πŸ› Bug Fixes

- **misc**: Request to gpt5 series should not include `top_p`, `temperature` when reasoning effort is not none.

<br/>

<details>
<summary><kbd>Improvements and Fixes</kbd></summary>

#### What's fixed

* **misc**: Request to gpt5 series should not include `top_p`, `temperature` when reasoning effort is not none, closes [#10800](#10800) ([b4ad470](b4ad470))

</details>

<div align="right">

[![](https://img.shields.io/badge/-BACK_TO_TOP-151515?style=flat-square)](#readme-top)

</div>
@lobehubbot (Member) commented
πŸŽ‰ This PR is included in version 2.0.0-next.173 πŸŽ‰

The release is available on:

Your semantic-release bot πŸ“¦πŸš€

JamieStivala pushed a commit to jaworldwideorg/OneJA-Bot that referenced this pull request Dec 16, 2025
### [Version&nbsp;1.145.1](v1.145.0...v1.145.1)
<sup>Released on **2025-12-16**</sup>

#### πŸ› Bug Fixes

- **misc**: Request to gpt5 series should not include `top_p`, `temperature` when reasoning effort is not none.

#### πŸ’„ Styles

- **misc**: Update GPT-5.2 models, update i18n.

<br/>

<details>
<summary><kbd>Improvements and Fixes</kbd></summary>

#### What's fixed

* **misc**: Request to gpt5 series should not include `top_p`, `temperature` when reasoning effort is not none, closes [lobehub#10800](https://github.com/jaworldwideorg/OneJA-Bot/issues/10800) ([b4ad470](b4ad470))

#### Styles

* **misc**: Update GPT-5.2 models, closes [lobehub#10749](https://github.com/jaworldwideorg/OneJA-Bot/issues/10749) ([0446127](0446127))
* **misc**: Update i18n, closes [lobehub#10759](https://github.com/jaworldwideorg/OneJA-Bot/issues/10759) ([24cae77](24cae77))

</details>

<div align="right">

[![](https://img.shields.io/badge/-BACK_TO_TOP-151515?style=flat-square)](#readme-top)

</div>

Labels

released on @next, size:S (This PR changes 10-29 lines, ignoring generated files)

Projects

None yet

Development

Successfully merging this pull request may close these issues.

3 participants