JavaScript implementation of LiteLLM.

```shell
npm install litellm
```
Basic usage:

```javascript
import { completion } from 'litellm';

process.env['OPENAI_API_KEY'] = 'your-openai-key';

const response = await completion({
  model: 'gpt-3.5-turbo',
  messages: [{ content: 'Hello, how are you?', role: 'user' }],
});

// The response follows the OpenAI response shape.
console.log(response.choices[0].message.content);
```
```javascript
// or stream the results
const stream = await completion({
  model: 'gpt-3.5-turbo',
  messages: [{ content: 'Hello, how are you?', role: 'user' }],
  stream: true,
});

for await (const part of stream) {
  process.stdout.write(part.choices[0]?.delta?.content || '');
}
```

We aim to support all the features that the LiteLLM Python package supports:
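Each streamed part carries an incremental delta, and the full reply is the concatenation of those deltas. A minimal sketch with a mocked stream (the chunk shape is assumed from the loop above, not taken from the library's internals):

```javascript
// Mocked async iterable standing in for the stream returned by completion().
// Chunk shape assumed: { choices: [{ delta: { content?: string } }] }.
async function* mockStream() {
  yield { choices: [{ delta: { content: 'Hello' } }] };
  yield { choices: [{ delta: { content: ', ' } }] };
  yield { choices: [{ delta: { content: 'world' } }] };
  yield { choices: [{ delta: {} }] }; // a final chunk may carry no content
}

// Accumulate the deltas into the full reply.
async function collect(stream) {
  let text = '';
  for await (const part of stream) {
    text += part.choices[0]?.delta?.content || '';
  }
  return text;
}

(async () => {
  console.log(await collect(mockStream())); // prints "Hello, world"
})();
```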
- Standardised completions
- Standardised embeddings
- Standardised input params 🚧 - List is here
- Caching ❌
- Proxy ❌
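Standardised embeddings are returned in the OpenAI-compatible shape, assumed here to be `{ data: [{ embedding: number[] }] }`. Once extracted, vectors are commonly compared with cosine similarity; this helper is an illustrative sketch, not something exported by `litellm`:

```javascript
// Cosine similarity between two embedding vectors (illustrative helper,
// not part of the litellm API).
function cosineSimilarity(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// A response in the assumed OpenAI-compatible shape:
const response = { data: [{ embedding: [1, 0, 0] }, { embedding: [0.6, 0.8, 0] }] };
const [a, b] = response.data.map((d) => d.embedding);
console.log(cosineSimilarity(a, b).toFixed(2)); // prints "0.60"
```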
| Provider | Completion | Streaming | Embedding |
|---|---|---|---|
| openai | ✅ | ✅ | ✅ |
| cohere | ✅ | ✅ | ❌ |
| anthropic | ✅ | ✅ | ❌ |
| ollama | ✅ | ✅ | ✅ |
| ai21 | ✅ | ✅ | ❌ |
| replicate | ✅ | ✅ | ❌ |
| deepinfra | ✅ | ✅ | ❌ |
| mistral | ✅ | ✅ | ✅ |
| huggingface | ❌ | ❌ | ❌ |
| together_ai | ❌ | ❌ | ❌ |
| openrouter | ❌ | ❌ | ❌ |
| vertex_ai | ❌ | ❌ | ❌ |
| palm | ❌ | ❌ | ❌ |
| baseten | ❌ | ❌ | ❌ |
| azure | ❌ | ❌ | ❌ |
| sagemaker | ❌ | ❌ | ❌ |
| bedrock | ❌ | ❌ | ❌ |
| vllm | ❌ | ❌ | ❌ |
| nlp_cloud | ❌ | ❌ | ❌ |
| aleph alpha | ❌ | ❌ | ❌ |
| petals | ❌ | ❌ | ❌ |
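Models are routed to a provider by the prefix of the model string (e.g. `ollama/llama2`), with unprefixed model names going to OpenAI. This is a convention assumed from the LiteLLM Python package; the sketch below illustrates the idea and is not the library's actual routing code:

```javascript
// Sketch of LiteLLM-style provider routing: read the provider from the
// model string prefix, defaulting to openai (assumed convention).
function parseModel(model) {
  const i = model.indexOf('/');
  if (i === -1) return { provider: 'openai', model };
  return { provider: model.slice(0, i), model: model.slice(i + 1) };
}

console.log(parseModel('gpt-3.5-turbo')); // provider: 'openai'
console.log(parseModel('ollama/llama2')); // provider: 'ollama', model: 'llama2'
```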
To develop locally, clone the repository, install the dependencies, and run the unit tests:

```shell
git clone https://github.com/zya/litellmjs.git
npm install
npm t
```
To run the E2E tests, first copy the example env file:

```shell
cp .example.env .env
```

Then fill in the variables with your API keys so the E2E tests can reach the providers:

```
OPENAI_API_KEY=<Your OpenAI API key>
....
```

Then run the command below to run the tests:

```shell
npm run test:e2e
```