Add featherless integration #6510
base: dev
Conversation
Caution

Changes requested ❌

Reviewed everything up to 4709536 in 1 minute and 57 seconds.

- Reviewed 82 lines of code in 2 files
- Skipped 1 file when reviewing
- Skipped posting 2 draft comments; view those below
1. `extensions/engine-management-extension/models/featherless.json:1`
   - Draft comment: Models are well structured; ensure each model's `inference_params` match Featherless API expectations.
   - Reason this comment was not posted: the required confidence change (33%) was below the 50% threshold.
2. `extensions/engine-management-extension/resources/featherless.json:13`
   - Draft comment: Duplicated `top_p` condition in the `transform_req` template; consider removing the duplicate for clarity.
   - Reason this comment was not posted: the comment looked like it was already resolved.

Workflow ID: wflow_aq5yUwoexLIPC2ep
```json
  },
  "transform_resp": {
    "chat_completions": {
      "template": "{{tojson(input_request)}}"
```
transform_resp template uses 'input_request'; verify if this should process the API response instead.
```diff
- "template": "{{tojson(input_request)}}"
+ "template": "{{tojson(input_response)}}"
```
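To illustrate the difference the suggestion makes, here is a minimal stand-in renderer; the engine's real renderer is inja-style inside cortex, so `renderTemplate` and the sample response below are hypothetical, for illustration only:

```typescript
// Hand-rolled stand-in for the engine's template renderer; the real one
// is inja-style inside cortex. For illustration only.
function renderTemplate(template: string, ctx: Record<string, unknown>): string {
  // Replace each {{tojson(name)}} with the JSON-serialized context value.
  return template.replace(/\{\{\s*tojson\((\w+)\)\s*\}\}/g, (_m, name: string) =>
    JSON.stringify(ctx[name])
  )
}

// A sample (hypothetical) chat-completions API response:
const apiResponse = { choices: [{ message: { role: 'assistant', content: 'hi' } }] }

// With the suggested fix, transform_resp serializes the API *response*
// rather than echoing the original request back to the caller:
const body = renderTemplate('{{tojson(input_response)}}', { input_response: apiResponse })
console.log(body)
```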
Hi @RichelleJi, thanks a lot for your contribution 🙌🙌 Could you please rebase your branch on dev and push again? Once that's done, we'll help run the build and review it again.
Pull Request Overview
This PR adds Featherless as a new model provider with API integration and support for multiple AI models.
- Add Featherless provider configuration with API endpoints for model retrieval and chat completions
- Include 3 initial model definitions: Kimi-K2-Instruct, Meta-Llama-3-8B, and QwQ-32B
- Add provider branding assets and utility functions for the new provider
Reviewed Changes
Copilot reviewed 6 out of 7 changed files in this pull request and generated 2 comments.
Show a summary per file
| File | Description |
|---|---|
| web/utils/modelEngine.ts | Adds Featherless provider support to logo, title, and description functions |
| web/utils/modelEngine.test.ts | Adds test coverage for the new Featherless provider utilities |
| extensions/engine-management-extension/resources/featherless.json | Defines Featherless API configuration with endpoints and request transformations |
| extensions/engine-management-extension/models/featherless.json | Defines 3 initial Featherless model configurations |
| extensions/engine-management-extension/engines.mjs | Imports and registers Featherless provider and models |
| core/src/types/model/modelEntity.ts | Adds Featherless to InferenceEngine enum and updates model type |
Comments suppressed due to low confidence (1)

web/utils/modelEngine.test.ts:1

- The test returns a string literal instead of asserting the result. It should use `expect(result).toBe('images/ModelProvider/featherless.svg')` to properly test the function output.
```typescript
import { EngineManager, InferenceEngine, LocalOAIEngine } from '@janhq/core'
```
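A minimal sketch of the asserting pattern the suppressed comment recommends; `getEngineLogo` here is a hypothetical stand-in, not Jan's actual utility:

```typescript
// Hypothetical stand-in for the utility under test.
function getEngineLogo(engine: string): string {
  return `images/ModelProvider/${engine}.svg`
}

// Returning the expected string from the test body verifies nothing; the
// result must be asserted (with Jest: expect(result).toBe(...)).
const result = getEngineLogo('featherless')
if (result !== 'images/ModelProvider/featherless.svg') {
  throw new Error(`unexpected logo path: ${result}`)
}
```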
```typescript
  cortex_llamacpp = 'llama-cpp',
  cortex_onnx = 'onnxruntime',
  cortex_tensorrtllm = 'tensorrt-llm',
  engine?: string
```
Copilot AI · Sep 29, 2025
Invalid enum syntax - cannot have optional property declaration inside enum. This line should be removed as it appears to be leftover from a merge conflict or editing error.
```diff
- engine?: string
```
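For context, a sketch of the corrected shape: the string-valued members stay in the enum, and the optional `engine` field lives on an interface or type instead (the `ModelRuntimeParams` name below is illustrative, not Jan's actual type layout):

```typescript
enum InferenceEngine {
  cortex_llamacpp = 'llama-cpp',
  cortex_onnx = 'onnxruntime',
  cortex_tensorrtllm = 'tensorrt-llm',
}

interface ModelRuntimeParams {
  // `engine?: string` is a property declaration; it is only valid inside
  // an interface, type alias, or class body — never inside an enum.
  engine?: string
}

const params: ModelRuntimeParams = { engine: InferenceEngine.cortex_llamacpp }
```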
```diff
  * The model engine.
  */
- engine: string
+ engine: InferenceEngine
```
Copilot AI · Sep 29, 2025
Breaking change: The engine property type changed from string to InferenceEngine enum. This could break existing code that uses string values not present in the enum. Consider using a union type like engine: InferenceEngine | string for backward compatibility.
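A sketch of the union Copilot suggests (the trimmed-down enum below is illustrative):

```typescript
enum InferenceEngine {
  nitro = 'nitro',
  featherless = 'featherless',
}

interface ModelEntity {
  engine: InferenceEngine | string
}

// Existing code that stored an arbitrary engine string keeps compiling:
const legacy: ModelEntity = { engine: 'my-custom-engine' }
// New code can still use the enum:
const modern: ModelEntity = { engine: InferenceEngine.featherless }
```

One caveat: TypeScript widens `InferenceEngine | string` to plain `string`, so editor autocomplete for the enum members is lost; the common `InferenceEngine | (string & {})` idiom preserves suggestions while keeping the same backward compatibility.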
Add Featherless as a new model provider with support for multiple AI models.
Changes
Important
Add Featherless as a new model provider with API integration and initial models.
- `featherless.json` in `models` for the Featherless provider with 3 models: `Kimi-K2-Instruct`, `Meta-Llama-3-8B`, `QwQ-32B`.
- `resources/featherless.json` with endpoints for model retrieval and chat completions.
- `Kimi-K2-Instruct`: 1T parameter chat model with advanced reasoning.
- `Meta-Llama-3-8B`: 8B parameter model for text generation.
- `QwQ-32B`: 32B parameter model for question-answering.