fix(model-bank): fix ZenMux model IDs by adding provider prefixes #11947
Conversation
@iBenzene is attempting to deploy a commit to the LobeHub OSS Team on Vercel. A member of the Team first needs to authorize it.
Reviewer's Guide

Updates ZenMux GPT-5.2 model configurations to use fully-qualified, provider-prefixed model IDs and normalizes the GPT-5.2 Chat entry for consistent routing and naming.

Sequence diagram for ZenMux routing with provider-prefixed model IDs:

```mermaid
sequenceDiagram
    actor Client
    participant RoutingLayer
    participant ZenMuxModelBank
    participant OpenAIProvider
    Client->>RoutingLayer: sendRequest(modelId: openai/gpt-5.2, payload)
    RoutingLayer->>ZenMuxModelBank: resolveModelCard(modelId: openai/gpt-5.2)
    ZenMuxModelBank-->>RoutingLayer: AIChatModelCard(provider: openai, id: openai/gpt-5.2)
    RoutingLayer->>OpenAIProvider: forwardRequest(modelId: gpt-5.2, payload)
    OpenAIProvider-->>RoutingLayer: response
    RoutingLayer-->>Client: response
    Client->>RoutingLayer: sendRequest(modelId: openai/gpt-5.2-chat, payload)
    RoutingLayer->>ZenMuxModelBank: resolveModelCard(modelId: openai/gpt-5.2-chat)
    ZenMuxModelBank-->>RoutingLayer: AIChatModelCard(provider: openai, id: openai/gpt-5.2-chat)
    RoutingLayer->>OpenAIProvider: forwardRequest(modelId: gpt-5.2-chat, payload)
    OpenAIProvider-->>RoutingLayer: response
    RoutingLayer-->>Client: response
```
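The resolution step in the diagram above can be sketched roughly as follows. This is an illustrative sketch only: the names `ModelCard` and `resolveModelCard` mirror the diagram, not lobe-chat's actual API.

```typescript
// Hypothetical sketch of resolving a provider-prefixed model ID.
// Names (ModelCard, resolveModelCard) are illustrative, not lobe-chat's real API.
interface ModelCard {
  id: string; // fully-qualified, e.g. "openai/gpt-5.2"
  provider: string; // e.g. "openai"
  model: string; // bare model name forwarded upstream, e.g. "gpt-5.2"
}

function resolveModelCard(modelId: string): ModelCard {
  const slash = modelId.indexOf('/');
  if (slash === -1) {
    // Without a provider prefix, the routing layer cannot pick an upstream.
    throw new Error(`Model ID "${modelId}" is missing a provider prefix`);
  }
  return {
    id: modelId,
    provider: modelId.slice(0, slash),
    model: modelId.slice(slash + 1),
  };
}

console.log(resolveModelCard('openai/gpt-5.2'));
// { id: 'openai/gpt-5.2', provider: 'openai', model: 'gpt-5.2' }
```

This makes the bug in the PR concrete: an unprefixed ID like `gpt-5.2` has no provider component to split off, so resolution cannot succeed reliably.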
Class diagram for updated ZenMux GPT-5.2 chat model configurations:

```mermaid
classDiagram
    class AIChatModelCard {
      +string id
      +string displayName
      +string description
      +boolean enabled
      +number maxOutput
      +number contextWindowTokens
    }
    class GPT_5_2_Flagship {
      +id = openai/gpt-5.2
      +displayName = GPT-5.2
      +description = GPT-5.2 is a flagship model for coding and agentic workflows with stronger reasoning and long-context performance.
      +enabled = true
      +maxOutput = 128000
    }
    class GPT_5_2_Pro {
      +id = openai/gpt-5.2-pro
      +displayName = GPT-5.2 pro
      +description = GPT-5.2 Pro: a smarter, more precise GPT-5.2 variant (Responses API only), suited for harder problems and longer multi-turn reasoning.
      +enabled = true
      +maxOutput = 128000
    }
    class GPT_5_2_Chat {
      +id = openai/gpt-5.2-chat
      +displayName = GPT-5.2 Chat
      +description = GPT-5.2 Chat is the ChatGPT variant for experiencing the newest conversation improvements.
      +enabled = true
      +maxOutput = 16384
      +contextWindowTokens = 128000
    }
    AIChatModelCard <|-- GPT_5_2_Flagship
    AIChatModelCard <|-- GPT_5_2_Pro
    AIChatModelCard <|-- GPT_5_2_Chat
```
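In configuration terms, the change amounts to entries along these lines in `zenmux.ts`. This is a simplified sketch whose field names follow the class diagram above, not necessarily the exact `AIChatModelCard` type in the repository, and the descriptions are abbreviated.

```typescript
// Simplified sketch of the updated ZenMux entries; the shape mirrors the
// class diagram above, not necessarily the project's exact AIChatModelCard type.
interface AIChatModelCard {
  id: string;
  displayName: string;
  description: string;
  enabled: boolean;
  maxOutput: number;
  contextWindowTokens?: number;
}

const zenmuxChatModels: AIChatModelCard[] = [
  {
    id: 'openai/gpt-5.2',
    displayName: 'GPT-5.2',
    description: 'Flagship model for coding and agentic workflows.',
    enabled: true,
    maxOutput: 128_000,
  },
  {
    id: 'openai/gpt-5.2-pro',
    displayName: 'GPT-5.2 pro',
    description: 'Smarter, more precise GPT-5.2 variant (Responses API only).',
    enabled: true,
    maxOutput: 128_000,
  },
  {
    id: 'openai/gpt-5.2-chat',
    displayName: 'GPT-5.2 Chat',
    description: 'ChatGPT variant with the newest conversation improvements.',
    enabled: true,
    maxOutput: 16_384,
    contextWindowTokens: 128_000,
  },
];

// After the fix, every ID is fully qualified with its provider prefix.
console.log(zenmuxChatModels.every((m) => m.id.startsWith('openai/')));
```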
Hey - I've left some high level feedback:

- Since the GPT-5.2 Chat ID changed from `gpt-5.2-chat-latest` to `openai/gpt-5.2-chat`, double-check any consumers that may still be referencing the old ID (e.g., routing config, presets, or feature flags) and consider providing a backward-compatible alias if those references are external or hard to update.
- If provider prefixes like `openai/` are used in multiple places, it may be worth centralizing them (e.g., as a constant or enum) to avoid divergence if provider naming ever changes.
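One way to act on both suggestions at once: centralize the prefix as a constant and keep a legacy-alias map so old IDs such as `gpt-5.2-chat-latest` still resolve. Entirely a sketch under those assumptions; none of these names exist in lobe-chat.

```typescript
// Sketch of a centralized provider prefix plus a backward-compatible alias
// map, per the review feedback. All names here are hypothetical.
const OPENAI_PREFIX = 'openai/';

// Old, unprefixed IDs mapped onto the new fully-qualified ones.
const LEGACY_MODEL_ALIASES: Record<string, string> = {
  'gpt-5.2-chat-latest': `${OPENAI_PREFIX}gpt-5.2-chat`,
};

// Resolve an incoming ID to its canonical, provider-prefixed form.
function canonicalizeModelId(id: string): string {
  return LEGACY_MODEL_ALIASES[id] ?? id;
}

console.log(canonicalizeModelId('gpt-5.2-chat-latest')); // "openai/gpt-5.2-chat"
console.log(canonicalizeModelId('openai/gpt-5.2')); // unchanged: "openai/gpt-5.2"
```

Routing then calls `canonicalizeModelId` once at the boundary, so external references to the old ID keep working while the model bank itself stores only prefixed IDs.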
💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: b7760c40ef

ℹ️ About Codex in GitHub

Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you:

- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".
❤️ Great PR @iBenzene ❤️ The growth of the project is inseparable from user feedback and contributions, so thanks for your contribution! If you are interested in the lobehub developer community, please join our discord and then DM @arvinxx or @canisminor1990. They will invite you to our private developer channel, where we discuss lobe-chat development and share AI newsletters from around the world.
### [Version 2.0.9](v2.0.8...v2.0.9) <sup>Released on **2026-01-29**</sup>

#### 🐛 Bug Fixes

- **model-bank**: Fix ZenMux model IDs by adding provider prefixes.

<details>
<summary><kbd>Improvements and Fixes</kbd></summary>

#### What's fixed

* **model-bank**: Fix ZenMux model IDs by adding provider prefixes, closes [#11947](#11947) ([17f8a5c](17f8a5c))

</details>
🎉 This PR is included in version 2.0.9 🎉

The release is available on:

Your semantic-release bot 📦🚀
### [Version 1.157.1](v1.157.0...v1.157.1) <sup>Released on **2026-01-29**</sup>

#### 🐛 Bug Fixes

- **model-bank**: Fix ZenMux model IDs by adding provider prefixes.
- **misc**: Add ExtendParamsTypeSchema for enhanced model settings.

#### 💄 Styles

- **misc**: Fix group task render.

<details>
<summary><kbd>Improvements and Fixes</kbd></summary>

#### What's fixed

* **model-bank**: Fix ZenMux model IDs by adding provider prefixes, closes [lobehub#11947](https://github.com/jaworldwideorg/OneJA-Bot/issues/11947) ([17f8a5c](17f8a5c))
* **misc**: Add ExtendParamsTypeSchema for enhanced model settings, closes [lobehub#11437](https://github.com/jaworldwideorg/OneJA-Bot/issues/11437) ([f58c980](f58c980))

#### Styles

* **misc**: Fix group task render, closes [lobehub#11952](https://github.com/jaworldwideorg/OneJA-Bot/issues/11952) ([b8ef02e](b8ef02e))

</details>
💻 Change Type

🔗 Related Issue

📝 Description of Change
Currently, the model IDs defined in the ZenMux configuration (`zenmux.ts`) do not include the required provider prefix. For example, model IDs are specified as `gpt-5.2` instead of the expected fully-qualified format `openai/gpt-5.2`. As a result, the routing layer cannot reliably determine the underlying provider for these models, which may lead to incorrect provider resolution or failed requests.
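The failure mode described above is easy to surface mechanically: any ID without a `/` separator is ambiguous to the router. A minimal, illustrative check (the ID list below is an example, not the real `zenmux.ts` contents):

```typescript
// Illustrative check: flag model IDs that lack a provider prefix and would
// therefore be ambiguous to the routing layer. Example data, not zenmux.ts.
const modelIds = ['gpt-5.2', 'openai/gpt-5.2', 'openai/gpt-5.2-chat'];

const unprefixed = modelIds.filter((id) => !id.includes('/'));
console.log(unprefixed); // [ 'gpt-5.2' ] — the ambiguous ID this PR fixes
```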
This PR addresses the issue by:

- Adding the `provider/` prefix (e.g. `openai/`) to all applicable ZenMux model IDs.

🧪 How to Test
- Start the application with the updated ZenMux configuration.
- Verify that model routing correctly resolves the provider for prefixed model IDs (e.g. `openai/gpt-5.2`).
- Send test requests to GPT-5.2 Chat models and confirm that requests are routed to the correct provider without errors.
Tested locally
Added/updated tests
No tests needed
📸 Screenshots / Videos
N/A (no UI changes)
📝 Additional Information
Summary by Sourcery
Update ZenMux chat model identifiers to use fully-qualified provider-prefixed IDs for GPT-5.2 variants and align the GPT-5.2 Chat metadata with the new naming.
Bug Fixes:
Enhancements: