AI Copilot for Vim/NeoVim
Updated Feb 28, 2025 - Python
A high-performance API server that exposes OpenAI-compatible endpoints for MLX models. Built with Python and the FastAPI framework, it offers an efficient, scalable, and user-friendly way to run MLX-based vision and language models locally behind an OpenAI-compatible interface.
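An OpenAI-compatible endpoint accepts the same JSON request body as OpenAI's chat completions API, so any OpenAI client can talk to the local server. A minimal sketch of building such a request body, using only the standard library; the server URL and model name are placeholder assumptions, not taken from the project:

```python
import json

# Hypothetical local address; OpenAI-compatible servers conventionally
# expose a /v1/chat/completions route (an assumption for this sketch).
BASE_URL = "http://localhost:8000/v1/chat/completions"

def build_chat_request(model: str, prompt: str, max_tokens: int = 128) -> bytes:
    """Serialize an OpenAI-style chat completion request body."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "stream": False,
    }
    return json.dumps(payload).encode("utf-8")

# Example: the same body works against OpenAI's API or a local MLX server.
body = build_chat_request("mlx-community/Mistral-7B-Instruct-v0.3-4bit", "Hello!")
print(json.loads(body)["messages"][0]["role"])  # → user
```

Because the wire format matches, existing OpenAI SDKs can be pointed at the local server simply by overriding their base URL.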
Build an Autonomous Web3 AI Trading Agent (BASE + Uniswap V4 example)
Unified management and routing for llama.cpp and MLX models with web dashboard.
Various LLM resources and experiments
Add MLX support to Pydantic AI through LM Studio or mlx-lm, run MLX compatible HF models on Apple silicon.
Reinforcement learning for text generation on MLX (Apple Silicon)
Federated Fine-Tuning of LLMs on Apple Silicon with Flower.ai and MLX-LM
LLM inference on Apple Silicon Macs using the Apple MLX framework.
MLX inference service compatible with the OpenAI API, built on MLX-LM and MLX-VLM.
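On the client side, a response from an OpenAI-compatible service has a fixed shape: a `choices` list whose entries carry an assistant `message`, plus token `usage` counts. A minimal sketch of extracting the reply text; the response body below is a hand-written example in the standard OpenAI format, not real server output:

```python
import json

# Hand-written example of an OpenAI-style chat completion response
# (the format an OpenAI-compatible MLX service would return).
raw = json.dumps({
    "id": "chatcmpl-123",
    "object": "chat.completion",
    "choices": [
        {"index": 0,
         "message": {"role": "assistant", "content": "Hello from MLX!"},
         "finish_reason": "stop"},
    ],
    "usage": {"prompt_tokens": 5, "completion_tokens": 4, "total_tokens": 9},
})

def extract_reply(body: str) -> str:
    """Pull the assistant text out of the first choice."""
    return json.loads(body)["choices"][0]["message"]["content"]

print(extract_reply(raw))  # → Hello from MLX!
```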
Fine-tuning open-source LLMs for the coreference resolution task using mlx-lm.