A high-performance, modular AI chat solution for Jetson™ edge devices. It integrates Ollama with the Meta Llama 3.2 3B model for LLM inference, FastAPI-based Langchain middleware, and OpenWebUI.
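The stack described above layers a FastAPI middleware between OpenWebUI and Ollama serving Llama 3.2 3B. A minimal sketch of that forwarding layer is below; the endpoint path and `OLLAMA_URL` default are Ollama's documented defaults, but the function names and request flow are illustrative assumptions, not taken from the project itself.

```python
# Sketch of a middleware layer that forwards chat prompts to a local
# Ollama server running Llama 3.2 3B. A FastAPI route would simply
# wrap generate() below. Names here are hypothetical.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default port
MODEL = "llama3.2:3b"  # Meta Llama 3.2 3B, as pulled via `ollama pull`


def build_request(prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return {"model": MODEL, "prompt": prompt, "stream": False}


def generate(prompt: str) -> str:
    """Send the prompt to Ollama and return the model's response text."""
    body = json.dumps(build_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

In the full solution, a FastAPI app (optionally with LangChain prompt templating) would expose a `/chat` route calling `generate()`, and OpenWebUI would point at that route instead of talking to Ollama directly.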
Updated Sep 1, 2025 · Shell
The edgellm topic covers projects focused on running LLMs at the edge, on devices such as Raspberry Pi and Jetson boards; the name highlights a project's edge-computing nature and AI capabilities.