Tags: 7flash/litellm
Merge pull request BerriAI#1729 from BerriAI/litellm_max_tokens_check: fix(utils.py): support checking if user-defined max tokens exceed the model limit
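The check described in BerriAI#1729 can be illustrated with litellm's public model-metadata helper; the sketch below is an assumption-laden illustration, not the PR's actual utils.py code, and the wrapper name validate_max_tokens is hypothetical.

```python
# A minimal sketch of a max-tokens guard, assuming litellm's public
# get_max_tokens() helper; validate_max_tokens() is a hypothetical name,
# not the function added in BerriAI#1729.
import litellm


def validate_max_tokens(model: str, requested: int) -> None:
    """Raise if the caller requests more tokens than the model supports."""
    # On recent litellm versions, get_max_tokens() returns an integer limit.
    limit = litellm.get_max_tokens(model)
    if requested > limit:
        raise ValueError(
            f"max_tokens={requested} exceeds {model}'s limit of {limit}"
        )


validate_max_tokens("gpt-3.5-turbo", 1024)     # within the limit, no error
# validate_max_tokens("gpt-3.5-turbo", 10**6)  # would raise ValueError
```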
Merge pull request BerriAI#1693 from BerriAI/litellm_cache_controls_for_keys: fixes for the model cost check and streaming
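For context on the branch name, litellm's caching docs describe per-request cache controls; which controls a given proxy key may send is the policy question such a branch addresses. The sketch below only shows the documented request-side controls and is not the patch itself.

```python
# A minimal sketch of litellm's per-request cache controls, assuming the
# import path documented for litellm around this period; no proxy-key logic
# from the PR is reproduced here.
import litellm
from litellm.caching import Cache

litellm.cache = Cache()  # default in-memory cache

response = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "hello"}],
    cache={"no-cache": True},  # bypass the cache read for this one call
)
```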
Merge pull request BerriAI#1657 from eslamkarim/patch-1: change the quota project to the correct project being used for the call
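BerriAI#1657 concerns which Google Cloud project is billed for quota on the call. The sketch below shows the general google-auth mechanism involved (credentials.with_quota_project); the project ID is a placeholder, and this is not the patch itself.

```python
# A minimal sketch of the google-auth mechanism: bill quota to the project
# actually used for the call rather than the ADC default project.
# "my-vertex-project" is a placeholder, not a value from the PR.
import google.auth

credentials, default_project = google.auth.default()
credentials = credentials.with_quota_project("my-vertex-project")
```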