Tags: daureg/litellm
test: update test to not use gemini-pro; Google removed it
Gemini-2.5-flash improvements (BerriAI#10198)
* fix(vertex_and_google_ai_studio_gemini.py): allow thinking budget = 0. Fixes BerriAI#10121
* fix(vertex_and_google_ai_studio_gemini.py): handle nuance in counting exclusive vs. inclusive tokens. Addresses BerriAI#10141 (comment)
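The thinking-budget fix addresses a classic falsy-zero pitfall: a guard like `if budget:` silently drops a budget of 0, even though 0 is a valid value (thinking disabled). A minimal sketch of the corrected pattern, with a hypothetical helper name, not litellm's actual code:

```python
from typing import Optional


def map_thinking_config(budget: Optional[int]) -> dict:
    """Build a Gemini-style generation-config fragment.

    A budget of 0 is meaningful (disable thinking), so we must compare
    against None rather than rely on truthiness.
    """
    config = {}
    if budget is not None:  # `if budget:` would wrongly skip budget == 0
        config["thinkingConfig"] = {"thinkingBudget": budget}
    return config
```

The only behavioral change from the buggy pattern is the `is not None` check; everything else is unchanged.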
test(utils.py): handle scenario where text tokens + reasoning tokens … (BerriAI#10165)
* test(utils.py): handle scenario where text tokens + reasoning tokens set, but reasoning tokens not charged separately. Addresses BerriAI#10141 (comment)
* fix(vertex_and_google_ai_studio.py): only set content if non-empty str
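The counting nuance here: some providers report completion tokens inclusive of reasoning tokens, others report the two separately. One way to tell them apart is to check which combination reconciles with the reported total. A hedged sketch of that idea, with assumed field names rather than litellm's actual usage schema:

```python
def split_usage(prompt: int, completion: int, reasoning: int, total: int) -> dict:
    """Infer whether `completion` already includes `reasoning` tokens.

    Inclusive counting:  prompt + completion == total
    Exclusive counting:  prompt + completion + reasoning == total
    """
    if prompt + completion == total:
        # Reasoning tokens are folded into the completion count.
        text = completion - reasoning
    else:
        # Reasoning tokens are billed/reported separately.
        text = completion
    return {"text_tokens": text, "reasoning_tokens": reasoning}
```

Either way, the visible text tokens come out the same; only the bookkeeping differs.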
[Feat] Support for all litellm providers on Responses API (works with Codex) - Anthropic, Bedrock API, VertexAI, Ollama (BerriAI#10132)
* transform request
* basic handler for LiteLLMCompletionTransformationHandler
* complete transform litellm to responses api
* fixes to test
* fix stream=True
* fix streaming iterator
* fixes for transformation
* fixes for anthropic codex support
* fix pass response_api_optional_params
* test anthropic responses api tools
* update responses types
* working codex with litellm
* add session handler
* fixes streaming iterator
* fix handler
* add litellm codex example
* fix code quality
* test fix
* docs litellm codex
* litellm codex doc
* docs openai codex with litellm
* docs litellm openai codex
* litellm codex
* linting fixes for transforming responses API
* fix import error
* fix responses api test
* add sync iterator support for responses api
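The bridge above works by transforming a Responses-API-style request into a chat-completions request, running it through litellm's normal completion path for any provider, and mapping the result back. A toy sketch of the request-side mapping only; the shape is illustrative and not the internals of LiteLLMCompletionTransformationHandler:

```python
def responses_to_chat(request: dict) -> dict:
    """Map a Responses-API-shaped payload onto chat-completions messages."""
    messages = []
    # "instructions" plays the role of a system prompt.
    if request.get("instructions"):
        messages.append({"role": "system", "content": request["instructions"]})
    user_input = request.get("input")
    if isinstance(user_input, str):
        messages.append({"role": "user", "content": user_input})
    elif isinstance(user_input, list):
        # Assume list items are already message-shaped dicts.
        messages.extend(user_input)
    return {"model": request.get("model"), "messages": messages}
```

A transformation like this is what lets a Responses-API client such as Codex talk to Anthropic, Bedrock, VertexAI, or Ollama backends that only expose a chat-completions interface.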
fix(llm_http_handler.py): fix fake streaming (BerriAI#10061)
* fix(llm_http_handler.py): fix fake streaming. Allows groq to work with llm_http_handler
* fix(groq.py): migrate groq to openai like config. Ensures json mode handling works correctly
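"Fake streaming" here means serving `stream=True` callers against a backend that returned a complete response: the full completion is fetched once and re-emitted as chunks so the caller's iterator interface keeps working. A minimal sketch of the idea, not llm_http_handler's actual code:

```python
from typing import Iterator


def fake_stream(full_text: str, chunk_size: int = 8) -> Iterator[str]:
    """Yield an already-complete response in fixed-size chunks.

    Callers that expect an iterator of deltas (stream=True) consume this
    exactly as they would a real streaming response.
    """
    for i in range(0, len(full_text), chunk_size):
        yield full_text[i : i + chunk_size]
```

Joining the chunks must reproduce the original text exactly, which is the invariant the fix protects.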