This repository provides a crowd-sourced collection of LLM/SLM proxy endpoints, powered by LiteLLM, so that everyday developers can experiment with AI without worrying about token limits or infrastructure costs. The endpoints aggregate compute from volunteered machines (home PCs, laptops, edge devices, and more) to offer free access, whenever possible, for prototyping and learning.
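Since LiteLLM proxies expose an OpenAI-compatible API, any standard client can talk to these endpoints. Below is a minimal sketch using the official `openai` Python SDK; the base URL, API key, and model name are placeholders, not real endpoints from this repository, so substitute values from the endpoint list.

```python
# Minimal sketch of querying a community proxy endpoint.
# NOTE: base_url, api_key, and model are illustrative placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="https://example-proxy.example.com/v1",  # hypothetical endpoint URL
    api_key="sk-placeholder",  # many community proxies accept any non-empty key
)

response = client.chat.completions.create(
    model="llama-3.1-8b-instruct",  # hypothetical model name
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)
```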
Most developers cannot afford GPU instances or paid API credits just to learn or test ideas. This project solves that by:
- Crowd-sourcing inference capacity
- Providing open-access API endpoints
- Allowing developers to prototype without worrying about token budgets
- Making compute a community resource, not a barrier
We welcome all contributions:
- Add new proxy endpoints
- Share scripts for running nodes
- Add support for vLLM / LM Studio / llama.cpp (see the node sketch after this list)
- Improve routing logic
- Add monitoring or observability
- Submit documentation or examples
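For contributors who want to run a node, the rough shape is usually: start a local OpenAI-compatible server (vLLM, LM Studio, and llama.cpp's `llama-server` all provide one), then route requests to it through LiteLLM. The sketch below assumes such a server on `localhost:8000`; the model name, port, and `api_base` are illustrative, not this project's actual configuration.

```python
# Sketch: routing a request through LiteLLM to a local OpenAI-compatible
# server. Model name, port, and api_base are assumptions for illustration;
# adapt them to however your node is actually served.
from litellm import completion

response = completion(
    model="openai/my-local-model",        # "openai/" prefix = generic OpenAI-compatible backend
    api_base="http://localhost:8000/v1",  # e.g. a vLLM or llama.cpp server
    api_key="none",                       # local servers typically ignore the key
    messages=[{"role": "user", "content": "ping"}],
)
print(response.choices[0].message.content)
```

In practice a shared node would be wired into the proxy's configuration rather than called directly like this; treat it as a starting point for experimentation.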
This project is meant ONLY for educational and experimental use. There is:
- No SLA or uptime guarantee
- No guarantee of response quality
- No guarantee of privacy
- No production-grade reliability

Use these endpoints entirely at your own risk. Sensitive, private, or business-critical data must never be sent to them.