Commit ac55773

fix: route bedrock/ model ids via LiteLLM Bedrock adapter (#89)
_resolve_llm_params treated any non-anthropic/openai prefix as an HF router model and wrapped it as 'openai/<id>' pointed at router.huggingface.co. 'bedrock/...' therefore fell through to the HF router, which rejected it with 'model does not exist'. Add a dedicated bedrock/ branch that passes the model id through as-is, so LiteLLM's Converse adapter picks it up using the standard AWS env credentials.
1 parent 2fac9ff commit ac55773

1 file changed

Lines changed: 8 additions & 0 deletions

File tree

‎agent/core/llm_params.py‎

@@ -154,6 +154,14 @@ def _resolve_llm_params(
         params["output_config"] = {"effort": level}
         return params
 
+    if model_name.startswith("bedrock/"):
+        # LiteLLM routes ``bedrock/...`` through the Converse adapter, which
+        # picks up AWS credentials from the standard env vars
+        # (``AWS_ACCESS_KEY_ID`` / ``AWS_SECRET_ACCESS_KEY`` / ``AWS_REGION``).
+        # The Anthropic thinking/effort shape is not forwarded through Converse
+        # the same way, so we leave it off for now.
+        return {"model": model_name}
+
     if model_name.startswith("openai/"):
         params = {"model": model_name}
         if reasoning_effort:
