feat: integrate ReqLLM with AshAI #137
Conversation
- Added the `req_llm` dependency to `mix.exs`.
- Refactored the `AshAi` module to use `ReqLLM` for handling tools and model interactions.
- Introduced new model configuration options for LLM usage.
- Updated tool creation logic to support `ReqLLM.Tool` structures and callbacks.
- Added tests for `ReqLLM.Tool` creation and callback execution.

This update enhances the integration with the `ReqLLM` library, allowing for more flexible tool management and improved functionality in the `AshAi` module.
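As a rough sketch of the tool-creation change described above, a `ReqLLM.Tool` with a callback might look something like the following. The constructor and option names (`:name`, `:description`, `:parameter_schema`, `:callback`) are assumptions based on this PR's description, not a verified ReqLLM API:

```elixir
# Hypothetical sketch: a tool that AshAi could hand to ReqLLM.
# Option names are assumptions from the PR text, not confirmed API.
tool =
  ReqLLM.Tool.new!(
    name: "list_artists",
    description: "Lists artists matching a filter",
    parameter_schema: [
      filter: [type: :string, doc: "Filter expression"]
    ],
    callback: fn %{"filter" => filter} ->
      # In AshAi this callback would invoke an Ash action;
      # here we just echo the argument back.
      {:ok, "Artists matching #{filter}"}
    end
  )
```

The interesting part for AshAi is the `:callback`, which lets each generated tool close over the Ash action it should run when the model calls it.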
- Changed the prompt macro to accept model specifications as strings instead of LangChain model instances.
- Updated the README to reflect the new model configuration format and examples.
- Refactored prompt processing to support new message formats, including structured outputs and dynamic content.
- Removed deprecated adapter files and streamlined the prompt adapter logic.
- Added tests for new prompt handling features and ensured backward compatibility with legacy prompt formats.

This update enhances the flexibility and usability of the prompt system within the AshAi framework.
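A before/after sketch of the prompt macro change described above. The exact macro shape and the `"provider:model"` spec string format are assumptions based on AshAi's documented style and ReqLLM's conventions:

```elixir
# Hypothetical before/after for the prompt change (unverified shapes).

# Before: a LangChain chat model instance
run prompt(
  LangChain.ChatModels.ChatOpenAI.new!(%{model: "gpt-4o"}),
  tools: false
)

# After: a ReqLLM-style "provider:model" spec string
run prompt("openai:gpt-4o", tools: false)
```

Moving to plain spec strings means the model choice can live in configuration rather than requiring a LangChain struct at compile time.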
- Introduced the `AshAi.EmbeddingModels.ReqLLM` module for generating embeddings using the ReqLLM library.
- Updated the `generate/2` callback to return a list of vectors instead of binaries.
- Added configuration options for model selection and dimensions.
- Implemented tests for dimensions retrieval, embedding generation, and error handling.

This update enhances the embedding capabilities of the AshAi framework by integrating with the ReqLLM library.
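A minimal sketch of what using the new embedding module might look like, assuming a `use`-based configuration with `:model` and `:dimensions` options; the option names and module shape are inferred from the PR text, not a verified API:

```elixir
# Hypothetical usage sketch (option names are assumptions).
defmodule MyApp.OpenAiEmbedding do
  use AshAi.EmbeddingModels.ReqLLM,
    model: "openai:text-embedding-3-small",
    dimensions: 1536
end

# Per the PR description, generate/2 now returns a list of vectors
# (lists of floats) rather than binaries:
{:ok, [_vector | _]} = MyApp.OpenAiEmbedding.generate(["hello world"], [])
```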
This PR breaks the ability to use Bumblebee for inference. I would like to see a future where local models served with Bumblebee are still supported.
@mikehostetler curious to hear if there is a roadmap for Bumblebee integration in ReqLLM?
Added an issue: agentjido/req_llm#203. Just haven't gotten to it yet. There's been some great work to add a vLLM provider, and I imagine Bumblebee isn't far behind that.
Awesome 🥳 I don't think it's reasonable for us to maintain two backing LLM tools, so likely we'd want to wait until that support is added to move forward here. In my ideal world, we can lean pretty hard into the underlying ReqLLM abstraction as a universal adapter, so it will have to be about as universal as can be.
WORK IN PROGRESS - FEEDBACK WELCOME
Contributor checklist
Leave anything that you believe does not apply unchecked.