Use Anthropic clients (like Claude Code) with Gemini or OpenAI backends.
A proxy server that lets you use Anthropic clients with Gemini or OpenAI models via LiteLLM.
- OpenAI API key
- Google AI Studio (Gemini) API key (if using the Google provider)
- `uv` installed
Install directly as a uv tool for easy command-line access:

```shell
# Install from GitHub
uv tool install git+https://github.com/samhavens/claude-code-proxy.git

# Or install from a local clone
git clone https://github.com/samhavens/claude-code-proxy.git
cd claude-code-proxy
uv tool install .
```

This gives you the `anthropic-proxy` and `claude-proxy` commands globally!
Alternatively, set up manually.

Clone this repository:

```shell
git clone https://github.com/samhavens/claude-code-proxy.git
cd claude-code-proxy
```

Install uv (if you haven't already):

```shell
curl -LsSf https://astral.sh/uv/install.sh | sh
```

(`uv` will handle dependencies based on `pyproject.toml` when you run the server.)
Configure environment variables. Copy the example environment file:

```shell
cp .env.example .env
```

Edit `.env` and fill in your API keys and model configurations:

- `ANTHROPIC_API_KEY`: (Optional) Needed only if proxying to Anthropic models.
- `OPENAI_API_KEY`: Your OpenAI API key (required if using the default OpenAI preference or as a fallback).
- `GEMINI_API_KEY`: Your Google AI Studio (Gemini) API key (required if `PREFERRED_PROVIDER=google`).
- `PREFERRED_PROVIDER` (Optional): Set to `openai` (default) or `google`. This determines the primary backend for mapping `haiku`/`sonnet`.
- `BIG_MODEL` (Optional): The model to map `sonnet` requests to. Defaults to `gpt-4.1` (if `PREFERRED_PROVIDER=openai`) or `gemini-2.5-pro-preview-03-25`.
- `SMALL_MODEL` (Optional): The model to map `haiku` requests to. Defaults to `gpt-4.1-mini` (if `PREFERRED_PROVIDER=openai`) or `gemini-2.0-flash`.
- `DEFAULT_SYSTEM_MESSAGE` (Optional): A system message appended to any existing system message. Useful for adding proxy-specific instructions or context.
Mapping logic:

- If `PREFERRED_PROVIDER=openai` (default), `haiku`/`sonnet` map to `SMALL_MODEL`/`BIG_MODEL` prefixed with `openai/`.
- If `PREFERRED_PROVIDER=google`, `haiku`/`sonnet` map to `SMALL_MODEL`/`BIG_MODEL` prefixed with `gemini/` if those models are in the server's known `GEMINI_MODELS` list (otherwise the mapping falls back to OpenAI).
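The mapping rules above can be sketched as follows. This is a minimal illustration, not the server's actual code: the function name `map_model` and the hard-coded default values are assumptions mirroring the README's defaults.

```python
import os

# Mirrors the server's known GEMINI_MODELS list (illustrative subset).
GEMINI_MODELS = {"gemini-2.5-pro-preview-03-25", "gemini-2.0-flash"}

def map_model(requested: str) -> str:
    """Map an Anthropic alias (haiku/sonnet) to a prefixed backend model."""
    provider = os.environ.get("PREFERRED_PROVIDER", "openai")
    if "sonnet" in requested:
        default = "gpt-4.1" if provider == "openai" else "gemini-2.5-pro-preview-03-25"
        target = os.environ.get("BIG_MODEL", default)
    elif "haiku" in requested:
        default = "gpt-4.1-mini" if provider == "openai" else "gemini-2.0-flash"
        target = os.environ.get("SMALL_MODEL", default)
    else:
        return requested  # non-alias models pass through unchanged
    if provider == "google" and target in GEMINI_MODELS:
        return f"gemini/{target}"
    return f"openai/{target}"  # OpenAI mapping, also the fallback
```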
If you installed as a tool:

```shell
# Start the server with default settings
anthropic-proxy

# Or with custom options
anthropic-proxy --port 8082 --reload

# See all options
anthropic-proxy --help
```

If you're using the manual setup:

```shell
uv run uvicorn server:app --host 0.0.0.0 --port 8082 --reload
```

(`--reload` is optional, for development.)
Install Claude Code (if you haven't already):

```shell
npm install -g @anthropic-ai/claude-code
```

Connect to your proxy:

```shell
ANTHROPIC_BASE_URL=http://localhost:8082 claude
```

That's it! Your Claude Code client will now use the configured backend models through the proxy.
For even faster access, add these aliases to your shell config (`.bashrc`, `.zshrc`, etc.):

Method 1: plain aliases with a fixed delay:

```shell
# Add to your ~/.zshrc or ~/.bashrc
alias oai-claude='(PREFERRED_PROVIDER=openai anthropic-proxy &) && sleep 2 && ANTHROPIC_BASE_URL=http://localhost:8082 claude'
alias gemini-claude='(PREFERRED_PROVIDER=google anthropic-proxy &) && sleep 2 && ANTHROPIC_BASE_URL=http://localhost:8082 claude'
```

Method 2: aliases using `start-claude`:

```shell
# Add to your ~/.zshrc or ~/.bashrc
alias oai-claude='start-claude --provider openai'
alias gemini-claude='start-claude --provider google'
alias custom-claude='start-claude --provider openai --big-model gpt-4o --small-model gpt-4o-mini'
```

Method 3: run `start-claude` directly:

```shell
# Start with OpenAI backend (waits for the server to be ready)
start-claude --provider openai

# Start with Gemini backend
start-claude --provider google

# Start with custom models
start-claude --provider openai --big-model gpt-4o --small-model gpt-4o-mini

# Start with a custom system message
start-claude --provider openai --system-message "You are running through a proxy. Be aware you're using OpenAI models."

# See all options
start-claude --help
```

Usage:

```shell
# Any of these work:
oai-claude
gemini-claude
custom-claude
start-claude --provider openai
```

Note: Methods 2 and 3 use the `start-claude` command, which waits for the server to be ready before launching Claude, making it more reliable than fixed delays.
The proxy automatically maps Claude model aliases to either OpenAI or Gemini models based on the configured provider and models:

| Claude Model | Default Mapping | When BIG_MODEL/SMALL_MODEL is a Gemini model |
|---|---|---|
| haiku | openai/gpt-4.1-mini | gemini/[model-name] |
| sonnet | openai/gpt-4.1 | gemini/[model-name] |
The following OpenAI models are supported with automatic openai/ prefix handling:
- o3
- o3-mini
- o1
- o1-mini
- o1-pro
- gpt-4.5-preview
- gpt-4o
- gpt-4o-audio-preview
- chatgpt-4o-latest
- gpt-4o-mini
- gpt-4o-mini-audio-preview
- gpt-4.1
- gpt-4.1-mini
The following Gemini models are supported with automatic gemini/ prefix handling:
- gemini-2.5-pro-preview-03-25
- gemini-2.0-flash
The proxy automatically adds the appropriate prefix to model names:

- OpenAI models get the `openai/` prefix
- Gemini models get the `gemini/` prefix
- `BIG_MODEL` and `SMALL_MODEL` get the appropriate prefix based on whether they appear in the OpenAI or Gemini model lists

For example:

- `gpt-4o` becomes `openai/gpt-4o`
- `gemini-2.5-pro-preview-03-25` becomes `gemini/gemini-2.5-pro-preview-03-25`
- When `BIG_MODEL` is set to a Gemini model, Claude Sonnet maps to `gemini/[model-name]`
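The prefixing rule amounts to a lookup against the two model lists above. A hedged sketch (the function name `add_prefix` is illustrative; the sets mirror the lists in this README, not necessarily the server's exact tables):

```python
# Supported models, as listed above (illustrative copies).
OPENAI_MODELS = {
    "o3", "o3-mini", "o1", "o1-mini", "o1-pro", "gpt-4.5-preview",
    "gpt-4o", "gpt-4o-audio-preview", "chatgpt-4o-latest",
    "gpt-4o-mini", "gpt-4o-mini-audio-preview", "gpt-4.1", "gpt-4.1-mini",
}
GEMINI_MODELS = {"gemini-2.5-pro-preview-03-25", "gemini-2.0-flash"}

def add_prefix(model: str) -> str:
    """Prepend the provider prefix that LiteLLM uses for routing."""
    if model in OPENAI_MODELS:
        return f"openai/{model}"
    if model in GEMINI_MODELS:
        return f"gemini/{model}"
    return model  # unknown models pass through unchanged
```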
Control the mapping using environment variables in your `.env` file or directly:

Example 1: Default (use OpenAI)

No changes needed in `.env` beyond API keys, or ensure:

```shell
OPENAI_API_KEY="your-openai-key"
GEMINI_API_KEY="your-google-key" # Needed if PREFERRED_PROVIDER=google
# PREFERRED_PROVIDER="openai" # Optional, it's the default
# BIG_MODEL="gpt-4.1" # Optional, it's the default
# SMALL_MODEL="gpt-4.1-mini" # Optional, it's the default
```

Example 2: Prefer Google

```shell
GEMINI_API_KEY="your-google-key"
OPENAI_API_KEY="your-openai-key" # Needed for fallback
PREFERRED_PROVIDER="google"
# BIG_MODEL="gemini-2.5-pro-preview-03-25" # Optional, it's the default for Google pref
# SMALL_MODEL="gemini-2.0-flash" # Optional, it's the default for Google pref
```

Example 3: Use specific OpenAI models

```shell
OPENAI_API_KEY="your-openai-key"
GEMINI_API_KEY="your-google-key"
PREFERRED_PROVIDER="openai"
BIG_MODEL="gpt-4o" # Example specific model
SMALL_MODEL="gpt-4o-mini" # Example specific model
```

Example 4: Add default system messages

```shell
OPENAI_API_KEY="your-openai-key"
DEFAULT_SYSTEM_MESSAGE="You are running through a proxy that translates between Anthropic and OpenAI APIs. Please be aware that you're actually using OpenAI's models, not Claude."
SMALL_SYSTEM_MESSAGE="You are a helpful assistant running through a proxy. Keep responses concise and focused."
```

Example 5: Enable message history logging

```shell
OPENAI_API_KEY="your-openai-key"
LOG_MESSAGE_HISTORY="true"
LOG_LEVEL="INFO"
```

The proxy automatically selects the appropriate system message based on the model:
- Large models (Claude Sonnet, GPT-4, etc.): use `DEFAULT_SYSTEM_MESSAGE`
- Small models (Claude Haiku, GPT-4o-mini, etc.): use `SMALL_SYSTEM_MESSAGE`

Model detection logic:

- Models with "haiku" or "mini" in the name use `SMALL_SYSTEM_MESSAGE`
- All other models use `DEFAULT_SYSTEM_MESSAGE`

Examples:

- `claude-3-5-sonnet` → uses `DEFAULT_SYSTEM_MESSAGE`
- `claude-3-5-haiku` → uses `SMALL_SYSTEM_MESSAGE`
- `gpt-4o` → uses `DEFAULT_SYSTEM_MESSAGE`
- `gpt-4o-mini` → uses `SMALL_SYSTEM_MESSAGE`
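The detection rule is a simple substring check. A sketch under that assumption (the function name `pick_system_message` is illustrative, not the server's actual API):

```python
def pick_system_message(model: str, default_msg: str, small_msg: str) -> str:
    """Select the system message by model size: 'haiku' or 'mini' => small."""
    name = model.lower()
    if "haiku" in name or "mini" in name:
        return small_msg
    return default_msg
```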
| Variable | Description | Default |
|---|---|---|
| `ANTHROPIC_API_KEY` | Your Anthropic API key (only needed if proxying to Anthropic models) | None |
| `OPENAI_API_KEY` | Your OpenAI API key | Required |
| `GEMINI_API_KEY` | Your Google Gemini API key (required if `PREFERRED_PROVIDER=google`) | None |
| `PREFERRED_PROVIDER` | Preferred provider (`openai` or `google`) | `openai` |
| `BIG_MODEL` | Model to use for large requests | `gpt-4.1` |
| `SMALL_MODEL` | Model to use for small requests | `gpt-4.1-mini` |
| `DEFAULT_SYSTEM_MESSAGE` | Default system message for large models (Claude Sonnet, GPT-4, etc.) | None |
| `SMALL_SYSTEM_MESSAGE` | Default system message for small models (Claude Haiku, GPT-4o-mini, etc.) | None |
| `LOG_MESSAGE_HISTORY` | Enable full message history logging | `false` |
| `LOG_LEVEL` | Logging level (DEBUG, INFO, WARN, ERROR) | `WARN` |
This proxy works by:

- Receiving requests in Anthropic's API format
- Translating the requests to OpenAI-compatible format via LiteLLM
- Sending the translated request to the configured backend (OpenAI or Gemini)
- Converting the response back to Anthropic format
- Returning the formatted response to the client

The proxy handles both streaming and non-streaming responses, maintaining compatibility with all Claude clients.
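As an illustration of the conversion step, repackaging an OpenAI-style chat completion into Anthropic's response shape might look roughly like this. This is a simplified sketch, not the proxy's actual code; the input dict mimics the shape of an OpenAI chat completion (e.g. what `litellm.completion(...)` returns once dumped to a dict), and the output covers only the text-content case:

```python
def to_anthropic_response(completion: dict) -> dict:
    """Repackage an OpenAI-style chat completion in Anthropic's Messages shape.

    Only handles plain text content; tool calls and streaming are omitted.
    """
    text = completion["choices"][0]["message"]["content"]
    return {
        "id": completion["id"],
        "type": "message",
        "role": "assistant",
        "content": [{"type": "text", "text": text}],
    }
```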
If you notice missing spaces after periods or other spacing issues in the model's output (e.g., words run together where a space should follow punctuation), this is likely coming from the source model's output itself, not from the proxy processing. The proxy passes text content through directly without modification.
This can happen with:
- OpenAI models that have different text formatting than Claude
- Model-specific quirks in how certain models handle punctuation and spacing
- Tokenization differences between models
Workaround: You can add a system message to instruct the model about proper spacing and formatting.
The proxy includes comprehensive message history logging for debugging and monitoring:
- Full Request/Response Logging: Logs original requests, converted requests, and responses
- Request Summaries: Provides concise summaries with key metrics
- Response Summaries: Shows token usage, content length, and tool calls
- Truncation: Automatically truncates long content for readability
- Request IDs: Each request gets a unique ID for easy tracking
Set these environment variables to enable logging:

```shell
LOG_MESSAGE_HISTORY="true"
LOG_LEVEL="INFO"
```

When enabled, you'll see logs like:

```
[req_a1b2c3d4] Original Request: {"model": "claude-3.5-sonnet", "messages": [...]}
[req_a1b2c3d4] Request Summary: {"request_id": "req_a1b2c3d4", "original_model": "claude-3.5-sonnet", ...}
[req_a1b2c3d4] Converted Request: {"model": "openai/gpt-4.1", "messages": [...]}
[req_a1b2c3d4] Response Summary: {"content_length": 150, "has_tool_calls": false, ...}
[req_a1b2c3d4] Converted Response: {"id": "msg_...", "content": [...]}
```
- Debugging: Track request/response flow to identify issues
- Monitoring: Monitor token usage and response patterns
- Development: Understand how the proxy transforms requests
- Troubleshooting: Identify model-specific issues or formatting problems
Contributions are welcome! Please feel free to submit a Pull Request.