Tags: 7flash/litellm

v1.20.9
(ci/cd) run again

v1.20.8 (verified)
Merge pull request BerriAI#1729 from BerriAI/litellm_max_tokens_check

fix(utils.py): support checking if user defined max tokens exceeds model limit
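
The gist of that check can be sketched against litellm's public model table: `litellm.model_cost` is the in-memory copy of model_prices_and_context_window.json, and each entry carries a `max_tokens` limit. The helper name, model name, and requested value below are illustrative, not the actual utils.py code.

```python
# Minimal sketch of a user-defined max_tokens check, assuming litellm.model_cost
# (loaded from model_prices_and_context_window.json) maps model names to entries
# that include a "max_tokens" limit. Not the actual utils.py implementation.
import litellm

def check_requested_max_tokens(model: str, requested_max_tokens: int) -> None:
    model_info = litellm.model_cost.get(model)
    if not model_info or "max_tokens" not in model_info:
        return  # unknown model: nothing to validate against
    limit = model_info["max_tokens"]
    if requested_max_tokens > limit:
        raise ValueError(
            f"max_tokens={requested_max_tokens} exceeds the {limit}-token limit for {model}"
        )

check_requested_max_tokens("gpt-3.5-turbo", requested_max_tokens=2048)  # illustrative values
```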

v1.20.7 (verified)
Update model_prices_and_context_window.json

v1.20.6 (verified)
Merge pull request BerriAI#1693 from BerriAI/litellm_cache_controls_for_keys

Fixes for model cost check and streaming

v1.20.5 (verified)
Merge pull request BerriAI#1657 from eslamkarim/patch-1

Change quota project to the correct project being used for the call
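
For context, the quota project is the Google Cloud project billed for API quota, and google-auth lets you rebind it on a credentials object. A rough sketch of that idea, using only google-auth's public API (the function name and project id below are hypothetical, not the actual patch from eslamkarim/patch-1):

```python
# Hedged sketch: bind the quota project to the project actually used for the call,
# rather than whatever project the Application Default Credentials default to.
# `credentials_for_call` and "my-vertex-project" are hypothetical, for illustration.
import google.auth

def credentials_for_call(call_project: str):
    creds, _adc_default_project = google.auth.default(
        scopes=["https://www.googleapis.com/auth/cloud-platform"]
    )
    # with_quota_project() returns a copy of the credentials bound to call_project.
    return creds.with_quota_project(call_project)

creds = credentials_for_call("my-vertex-project")
```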

v1.20.3
(test) fix

v1.20.2
(docs) use admin UI

v1.20.1
(fix) auth google

v1.20.0
test(test_caching.py): fix cache test if embedding call is fast

v1.19.6 (verified)
Update ghcr_deploy.yml