Tags: snachx/litellm


v1.69.0.dev1

[Fix]: /messages - allow using dynamic AWS params (BerriAI#10769)

* fix: dynamic AWS params added for the /messages routes (a sketch follows this entry)

* Update tests/pass_through_unit_tests/test_anthropic_messages_passthrough.py

Co-authored-by: Copilot <[email protected]>

---------

Co-authored-by: Copilot <[email protected]>
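
litellm accepts AWS credentials as per-request keyword arguments (aws_access_key_id, aws_secret_access_key, aws_region_name) instead of requiring environment variables; this fix extends that behavior to the /messages passthrough routes. A minimal sketch of the per-request credential pattern, shown via litellm.completion with placeholder model name and credentials:

```python
# Minimal sketch: per-request ("dynamic") AWS credentials in litellm.
# The model name and credential values are placeholders.
import litellm

response = litellm.completion(
    model="bedrock/anthropic.claude-3-sonnet-20240229-v1:0",  # placeholder
    messages=[{"role": "user", "content": "Hello"}],
    aws_access_key_id="AKIA...",   # placeholder, not a real key
    aws_secret_access_key="...",   # placeholder
    aws_region_name="us-east-1",
)
print(response.choices[0].message.content)
```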

v1.69.1-nightly

bump: version 1.69.0 → 1.69.1

v1.69.0-stable

Litellm staging 05 10 2025 - openai pdf url support + sagemaker chat content length error fix (BerriAI#10724)

* Support PDF URLs to OpenAI (BerriAI#10640); a conversion sketch follows this list

* fix(gpt_transformation.py): support PDF URL input to OpenAI

pass as base64, since OpenAI doesn't accept PDFs by URL

* fix(openai.py): support async message transformation

allows an async GET request to convert the URL to base64

* fix(gpt_transformation.py): fix linting errors and use common components across sync + async flows

* fix: fix linting errors

* fix(openai.py): pop correct var

* Fix SageMaker chat calls - content length error (BerriAI#10607); a request-signing sketch follows this list

* fix(sagemaker_chat/): support passing dynamic aws params

these were previously ignored

* refactor(sagemaker/chat): more refactoring

* fix(sagemaker_chat/): make sure streaming is correctly handled post-refactor

* refactor: more refactoring to support using the signed JSON string

* fix(sagemaker/chat): working sync streaming post refactor

* fix(sagemaker/chat): support async streaming post refactor

* fix(llm_http_handler.py): await async function

* fix: remove print statements

* test: update test

* test: update test

* fix(llm_http_handler.py): retain passing data as a JSON string

* test: update test

* fix(base_model_iterator.py): fix linting error

* test: test auth

* fix: fix linting error

* test: update test

* test: update translation test

* fix(gpt_transformation.py): handle awaitable/non-awaitable object

* fix: handle async flow for message transformation on OpenAI-compatible APIs

* test: cleanup testing

* test: update test

* test(test_router.py): use model with higher quota

* test: simplify test

* test: update test
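
Two sketches of the mechanisms described above. First, the PDF-URL-to-base64 conversion: OpenAI doesn't accept PDFs by URL, so the bytes are fetched and inlined as base64. A minimal sketch using httpx; the helper name is made up for illustration:

```python
# Minimal sketch: fetch a PDF from a URL and inline it as base64, since
# OpenAI expects PDF content to arrive as encoded bytes rather than a URL.
import base64
import httpx

def pdf_url_to_base64(url: str) -> str:
    resp = httpx.get(url)
    resp.raise_for_status()
    return base64.b64encode(resp.content).decode("utf-8")
```

Second, the content-length fix: SigV4 signs the exact request body, so the JSON must be serialized once and that same string must be both signed and sent; re-serializing after signing can change the byte length and break the signature. A minimal sketch using botocore's signer, assuming the standard SageMaker runtime endpoint URL shape:

```python
# Minimal sketch: SigV4-sign a SageMaker invocation over the exact
# serialized JSON body, then send that same string unchanged.
import json
import boto3
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest

def signed_sagemaker_request(payload: dict, endpoint_name: str, region: str):
    body = json.dumps(payload)  # serialize once; sign and send this string
    url = (
        f"https://runtime.sagemaker.{region}.amazonaws.com/"
        f"endpoints/{endpoint_name}/invocations"
    )
    request = AWSRequest(method="POST", url=url, data=body,
                         headers={"Content-Type": "application/json"})
    credentials = boto3.Session().get_credentials()
    SigV4Auth(credentials, "sagemaker", region).add_auth(request)
    return url, dict(request.headers), body
```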

v1.69.0-nightly

test: update test to handle rate limit error

v1.68.2.dev6

fix local testing with enterprise pip

v1.68.2-nightly

fix: test_team_update_sc_2

v1.68.1.dev4

Add New Perplexity Models (BerriAI#10652)

* add new perplexity models

* update backup json

* updated deep research

* updated backup json
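
New models land in litellm's model_prices_and_context_window.json plus its backup copy, which is why both files are updated above. A rough sketch of what one entry looks like, written as a Python dict; the key names mirror existing entries, but the model id and numbers are placeholders, not the real Perplexity pricing:

```python
# Rough sketch of a model cost map entry (placeholder values throughout).
PERPLEXITY_ENTRY = {
    "perplexity/sonar-deep-research": {   # placeholder model id
        "max_tokens": 128000,             # placeholder context window
        "input_cost_per_token": 2e-06,    # placeholder price
        "output_cost_per_token": 8e-06,   # placeholder price
        "litellm_provider": "perplexity",
        "mode": "chat",
    }
}
```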

v1.68.1.dev2

fix debug logs

v1.68.1.dev1

bump to 1.68.1

v1.68.1-nightly

Add bedrock llama4 pricing + handle llama4 templating on bedrock invoke route (BerriAI#10582)

* build(model_prices_and_context_window.json): add bedrock llama4 models to model cost map

* fix template conversion for Llama 4 models in Bedrock (BerriAI#10557); a templating sketch follows this entry

* test: add testing to repro BerriAI#10557

* test: add unit testing

* test(test_main.py): refactor where test is kept

---------

Co-authored-by: aswny <[email protected]>
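
On Bedrock's raw invoke route, litellm has to render OpenAI-style messages into the model's own prompt template, and the fix above corrects that rendering for Llama 4. A generic sketch of the conversion step, using Llama-3-style special tokens purely for illustration; Llama 4 defines its own template, which is exactly what the fix addresses:

```python
# Generic sketch: render chat messages into a Llama-style raw prompt for the
# Bedrock invoke route. The special tokens are the Llama 3 ones, shown only
# to illustrate the templating step.
def messages_to_llama_prompt(messages: list[dict]) -> str:
    prompt = "<|begin_of_text|>"
    for m in messages:
        prompt += (
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n"
            f"{m['content']}<|eot_id|>"
        )
    # Leave an open assistant header so the model generates the reply.
    prompt += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return prompt

print(messages_to_llama_prompt([{"role": "user", "content": "Hi"}]))
```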