
Conversation

@mikehostetler

WORK IN PROGRESS - FEEDBACK WELCOME

  • Added req_llm dependency to mix.exs.
  • Refactored AshAi module to utilize ReqLLM for handling tools and model interactions.
  • Introduced new model configuration options for LLM usage.
  • Updated tool creation logic to support ReqLLM.Tool structures and callbacks.
  • Added tests for ReqLLM.Tool creation and callback execution.

This update integrates the ReqLLM library into the AshAi module, allowing for more flexible tool management; a rough sketch of the new tool shape follows.
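For anyone following along, here is a minimal sketch of the direction. The option names mirror req_llm's documented tool-calling examples, but treat the specifics (option names, version requirement) as assumptions rather than this PR's final API:

```elixir
# In mix.exs (version requirement is illustrative):
#   {:req_llm, "~> 1.0"}

# A ReqLLM.Tool with a callback, mirroring req_llm's tool-calling examples.
weather =
  ReqLLM.tool(
    name: "get_weather",
    description: "Get the current weather for a location",
    parameter_schema: [
      location: [type: :string, required: true, doc: "City name"]
    ],
    callback: fn %{location: location} ->
      {:ok, "15 degrees and sunny in #{location}"}
    end
  )

# Model specs are plain "provider:model" strings, so tools can be passed
# straight into a text-generation call:
{:ok, response} =
  ReqLLM.generate_text("openai:gpt-4o", "What's the weather in Paris?",
    tools: [weather]
  )
```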

Contributor checklist

Leave anything that you believe does not apply unchecked.

  • I accept the AI Policy, or AI was not used in the creation of this PR.
  • Bug fixes include regression tests
  • Chores
  • Documentation changes
  • Features include unit/acceptance tests
  • Refactoring
  • Update dependencies

- Changed the prompt macro to accept model specifications as strings instead of LangChain model instances.
- Updated README to reflect new model configuration format and examples.
- Refactored prompt processing to support new message formats, including structured outputs and dynamic content.
- Removed deprecated adapter files and streamlined the prompt adapter logic.
- Added tests for new prompt handling features and ensured backward compatibility with legacy prompt formats.

This update makes the prompt system within the AshAi framework more flexible and easier to use; a before/after sketch of the prompt macro change follows.
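A hedged before/after sketch of the macro change. The action shape is based on ash_ai's existing prompt-backed actions; the string model spec is the new part, and the exact final signature may differ:

```elixir
# Before: a LangChain model instance
# run prompt(LangChain.ChatModels.ChatOpenAI.new!(%{model: "gpt-4o"}), tools: false)

# After: a "provider:model" string handled by ReqLLM
action :analyze_sentiment, :atom do
  constraints one_of: [:positive, :negative]
  argument :text, :string, allow_nil?: false

  run prompt("openai:gpt-4o", tools: false)
end
```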
- Introduced `AshAi.EmbeddingModels.ReqLLM` module for generating embeddings using the ReqLLM library.
- Updated the `generate/2` callback to return a list of vectors instead of binaries.
- Added configuration options for model selection and dimensions.
- Implemented tests for dimensions retrieval, embedding generation, and error handling.

This update extends the embedding capabilities of the AshAi framework by integrating with the ReqLLM library; a configuration sketch follows.
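A configuration sketch under stated assumptions: the module-plus-options tuple and the option names below are guesses based on the description above, not confirmed API:

```elixir
# In a resource that vectorizes its content:
vectorize do
  full_text do
    text fn record -> record.body end
  end

  # Assumed {module, opts} shape; `model` and `dimensions` are the new
  # configuration options mentioned above.
  embedding_model {AshAi.EmbeddingModels.ReqLLM,
    model: "openai:text-embedding-3-small",
    dimensions: 1536}
end

# The generate/2 callback now returns a list of vectors (lists of floats)
# rather than binaries:
{:ok, [vector | _]} =
  AshAi.EmbeddingModels.ReqLLM.generate(["hello world"],
    model: "openai:text-embedding-3-small"
  )
```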
@marcnnn

marcnnn commented Nov 8, 2025

This PR breaks the feature to use Bumblebee for inference.

To solve that:

  1. LangChain could use req_llm for inference endpoint support.
  2. req_llm could add Bumblebee support.
  3. ash_ai could then use both.

I would like to see a future where local models served with Bumblebee can be used.

@zachdaniel
Contributor

@mikehostetler curious to hear if there is a roadmap for integration with Bumblebee for ReqLLM?

@mikehostetler
Author

@zachdaniel @marcnnn

Added an issue: agentjido/req_llm#203

Just haven't gotten to it yet. There's been some great work to add a vLLM provider and I imagine Bumblebee isn't far behind that.

@zachdaniel
Contributor

Awesome πŸ₯³ I don't think it's reasonable for us to maintain two backing LLM tools, so likely we'd want to wait until that support is added to move forward here. In my ideal world, we can lean pretty hard into the underlying ReqLLM abstraction as a universal adapter, so it will have to be about as universal as can be πŸ˜†
