
Anthropic API Proxy for Gemini & OpenAI Models 🔄

Use Anthropic clients (like Claude Code) with Gemini or OpenAI backends. 🤝

A proxy server that lets you use Anthropic clients with Gemini or OpenAI models via LiteLLM. 🌉


Quick Start ⚡

Prerequisites

  • OpenAI API key 🔑 (required for the default OpenAI provider or as the fallback)
  • Google AI Studio (Gemini) API key (if using the Google provider) 🔑
  • uv installed.

Setup 🛠️

Option 1: Install as a Tool (Recommended) ⚡

Install directly as a uv tool for easy command-line access:

# Install from GitHub
uv tool install git+https://github.com/samhavens/claude-code-proxy.git

# Or install from a local clone
git clone https://github.com/samhavens/claude-code-proxy.git
cd claude-code-proxy
uv tool install .

This gives you the anthropic-proxy and claude-proxy commands globally!

Option 2: Manual Setup 🔧

  1. Clone this repository:

    git clone https://github.com/samhavens/claude-code-proxy.git
    cd claude-code-proxy
  2. Install uv (if you haven't already):

    curl -LsSf https://astral.sh/uv/install.sh | sh

    (uv will handle dependencies based on pyproject.toml when you run the server)

  3. Configure Environment Variables: Copy the example environment file:

    cp .env.example .env

    Edit .env and fill in your API keys and model configurations:

    • ANTHROPIC_API_KEY: (Optional) Needed only if proxying to Anthropic models.
    • OPENAI_API_KEY: Your OpenAI API key (required when using the default OpenAI preference or as the fallback).
    • GEMINI_API_KEY: Your Google AI Studio (Gemini) API key (required if PREFERRED_PROVIDER=google).
    • PREFERRED_PROVIDER (Optional): Set to openai (default) or google. This determines the primary backend for mapping haiku/sonnet.
    • BIG_MODEL (Optional): The model to map sonnet requests to. Defaults to gpt-4.1 (if PREFERRED_PROVIDER=openai) or gemini-2.5-pro-preview-03-25 (if PREFERRED_PROVIDER=google).
    • SMALL_MODEL (Optional): The model to map haiku requests to. Defaults to gpt-4.1-mini (if PREFERRED_PROVIDER=openai) or gemini-2.0-flash (if PREFERRED_PROVIDER=google).
    • DEFAULT_SYSTEM_MESSAGE (Optional): A system message appended to any existing system message. Useful for adding proxy-specific instructions or context.
    • SMALL_SYSTEM_MESSAGE (Optional): Like DEFAULT_SYSTEM_MESSAGE, but used for small models (see Model-Specific System Messages below).

    Mapping Logic:

    • If PREFERRED_PROVIDER=openai (default), haiku/sonnet map to SMALL_MODEL/BIG_MODEL prefixed with openai/.
    • If PREFERRED_PROVIDER=google, haiku/sonnet map to SMALL_MODEL/BIG_MODEL prefixed with gemini/ when those models are in the server's known GEMINI_MODELS list; otherwise the mapping falls back to OpenAI. A sketch of this logic follows the list.
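
    In sketch form, the mapping above amounts to something like the following Python. The names here (map_claude_model, and the contents of GEMINI_MODELS) are illustrative assumptions, not necessarily the server's actual identifiers:

    # Hypothetical sketch of the alias-mapping rule described above.
    GEMINI_MODELS = {"gemini-2.5-pro-preview-03-25", "gemini-2.0-flash"}

    def map_claude_model(requested: str, provider: str,
                         big_model: str, small_model: str) -> str:
        if "haiku" in requested:
            target = small_model
        elif "sonnet" in requested:
            target = big_model
        else:
            return requested  # non-alias models pass through (see prefix handling below)
        if provider == "google" and target in GEMINI_MODELS:
            return f"gemini/{target}"
        return f"openai/{target}"  # default and fallback: route through OpenAI

    # e.g. map_claude_model("claude-3-5-sonnet", "openai", "gpt-4.1", "gpt-4.1-mini")
    # returns "openai/gpt-4.1"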

Running the Server 🚀

If you installed as a tool:

# Start the server with default settings
anthropic-proxy

# Or with custom options
anthropic-proxy --port 8082 --reload

# See all options
anthropic-proxy --help

If you're using manual setup:

uv run uvicorn server:app --host 0.0.0.0 --port 8082 --reload

(--reload is optional, for development)

Using with Claude Code 🎮

  1. Install Claude Code (if you haven't already):

    npm install -g @anthropic-ai/claude-code
  2. Connect to your proxy:

    ANTHROPIC_BASE_URL=http://localhost:8082 claude
  3. That's it! Your Claude Code client will now use the configured backend models (OpenAI by default) through the proxy. 🎯

Quick Start Aliases 🚀

For even faster access, add these aliases to your shell config (.bashrc, .zshrc, etc.):

Method 1: Simple Aliases (Fixed Delay)

# Add to your ~/.zshrc or ~/.bashrc
alias oai-claude='(PREFERRED_PROVIDER=openai anthropic-proxy &) && sleep 2 && ANTHROPIC_BASE_URL=http://localhost:8082 claude'
alias gemini-claude='(PREFERRED_PROVIDER=google anthropic-proxy &) && sleep 2 && ANTHROPIC_BASE_URL=http://localhost:8082 claude'

Method 2: Smart Aliases (Waits for Server)

# Add to your ~/.zshrc or ~/.bashrc
alias oai-claude='start-claude --provider openai'
alias gemini-claude='start-claude --provider google'
alias custom-claude='start-claude --provider openai --big-model gpt-4o --small-model gpt-4o-mini'

Method 3: Direct Commands

# Start with OpenAI backend (waits for server to be ready)
start-claude --provider openai

# Start with Gemini backend
start-claude --provider google

# Start with custom models
start-claude --provider openai --big-model gpt-4o --small-model gpt-4o-mini

# Start with custom system message
start-claude --provider openai --system-message "You are running through a proxy. Be aware you're using OpenAI models."

# See all options
start-claude --help

Usage:

# Any of these work:
oai-claude
gemini-claude
custom-claude
start-claude --provider openai

Note: Method 2 and 3 use the start-claude command which waits for the server to be ready before launching Claude, making it more reliable than fixed delays.
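
For reference, the "waits for the server" behavior can be approximated with a simple port poll before launching the client. This is a minimal sketch of the idea, not the actual start-claude implementation:

import os
import socket
import subprocess
import time

def wait_for_port(host="localhost", port=8082, timeout=30.0):
    # Poll until the proxy accepts TCP connections, or give up after `timeout` seconds.
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1):
                return True
        except OSError:
            time.sleep(0.2)
    return False

proxy = subprocess.Popen(["anthropic-proxy"])  # start the proxy in the background
if wait_for_port():
    subprocess.run(["claude"], env={**os.environ, "ANTHROPIC_BASE_URL": "http://localhost:8082"})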

Model Mapping 🗺️

The proxy automatically maps the Claude haiku and sonnet aliases to either OpenAI or Gemini models, based on the configured provider and BIG_MODEL/SMALL_MODEL values:

Claude Model   Default Mapping        When BIG_MODEL/SMALL_MODEL is a Gemini model
haiku          openai/gpt-4.1-mini    gemini/[model-name]
sonnet         openai/gpt-4.1         gemini/[model-name]

Supported Models

OpenAI Models

The following OpenAI models are supported with automatic openai/ prefix handling:

  • o3
  • o3-mini
  • o1
  • o1-mini
  • o1-pro
  • gpt-4.5-preview
  • gpt-4o
  • gpt-4o-audio-preview
  • chatgpt-4o-latest
  • gpt-4o-mini
  • gpt-4o-mini-audio-preview
  • gpt-4.1
  • gpt-4.1-mini

Gemini Models

The following Gemini models are supported with automatic gemini/ prefix handling:

  • gemini-2.5-pro-preview-03-25
  • gemini-2.0-flash

Model Prefix Handling

The proxy automatically adds the appropriate prefix to model names:

  • OpenAI models get the openai/ prefix
  • Gemini models get the gemini/ prefix
  • The BIG_MODEL and SMALL_MODEL will get the appropriate prefix based on whether they're in the OpenAI or Gemini model lists

For example:

  • gpt-4o becomes openai/gpt-4o
  • gemini-2.5-pro-preview-03-25 becomes gemini/gemini-2.5-pro-preview-03-25
  • When BIG_MODEL is set to a Gemini model, Claude Sonnet will map to gemini/[model-name]
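
A compact way to picture this rule (again a hedged sketch; OPENAI_MODELS and GEMINI_MODELS stand in for the server's known-model lists, abbreviated here):

# Hypothetical sketch of the prefixing rule described above.
OPENAI_MODELS = {"gpt-4.1", "gpt-4.1-mini", "gpt-4o", "gpt-4o-mini", "o1", "o3"}
GEMINI_MODELS = {"gemini-2.5-pro-preview-03-25", "gemini-2.0-flash"}

def add_prefix(model: str) -> str:
    if model.startswith(("openai/", "gemini/")):
        return model  # already prefixed
    if model in GEMINI_MODELS:
        return f"gemini/{model}"
    if model in OPENAI_MODELS:
        return f"openai/{model}"
    return model  # unknown models are left untouched

assert add_prefix("gpt-4o") == "openai/gpt-4o"
assert add_prefix("gemini-2.5-pro-preview-03-25") == "gemini/gemini-2.5-pro-preview-03-25"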

Customizing Model Mapping

Control the mapping using environment variables in your .env file or directly:

Example 1: Default (Use OpenAI)

No changes needed in .env beyond API keys, or ensure:

OPENAI_API_KEY="your-openai-key"
GEMINI_API_KEY="your-google-key" # Needed if PREFERRED_PROVIDER=google
# PREFERRED_PROVIDER="openai" # Optional, it's the default
# BIG_MODEL="gpt-4.1" # Optional, it's the default
# SMALL_MODEL="gpt-4.1-mini" # Optional, it's the default

Example 2: Prefer Google

GEMINI_API_KEY="your-google-key"
OPENAI_API_KEY="your-openai-key" # Needed for fallback
PREFERRED_PROVIDER="google"
# BIG_MODEL="gemini-2.5-pro-preview-03-25" # Optional, it's the default for Google pref
# SMALL_MODEL="gemini-2.0-flash" # Optional, it's the default for Google pref

Example 3: Use Specific OpenAI Models

OPENAI_API_KEY="your-openai-key"
GEMINI_API_KEY="your-google-key"
PREFERRED_PROVIDER="openai"
BIG_MODEL="gpt-4o" # Example specific model
SMALL_MODEL="gpt-4o-mini" # Example specific model

Example 4: Add Default System Messages

OPENAI_API_KEY="your-openai-key"
DEFAULT_SYSTEM_MESSAGE="You are running through a proxy that translates between Anthropic and OpenAI APIs. Please be aware that you're actually using OpenAI's models, not Claude."
SMALL_SYSTEM_MESSAGE="You are a helpful assistant running through a proxy. Keep responses concise and focused."

Example 5: Enable Message History Logging

OPENAI_API_KEY="your-openai-key"
LOG_MESSAGE_HISTORY="true"
LOG_LEVEL="INFO"

Model-Specific System Messages 🎯

The proxy automatically selects the appropriate system message based on the model:

  • Large Models (Claude Sonnet, GPT-4, etc.): Use DEFAULT_SYSTEM_MESSAGE
  • Small Models (Claude Haiku, GPT-4o-mini, etc.): Use SMALL_SYSTEM_MESSAGE

Model Detection Logic:

  • Models with "haiku" or "mini" in the name use SMALL_SYSTEM_MESSAGE
  • All other models use DEFAULT_SYSTEM_MESSAGE

Examples:

  • claude-3-5-sonnet → Uses DEFAULT_SYSTEM_MESSAGE
  • claude-3-5-haiku → Uses SMALL_SYSTEM_MESSAGE
  • gpt-4o → Uses DEFAULT_SYSTEM_MESSAGE
  • gpt-4o-mini → Uses SMALL_SYSTEM_MESSAGE
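
Combined with the append behavior described in the setup section, the selection logic might look roughly like this (hypothetical function name; the server's internals may differ):

import os

def pick_system_message(model, existing=None):
    # Models with "haiku" or "mini" in the name count as small; everything else is large.
    if "haiku" in model or "mini" in model:
        extra = os.environ.get("SMALL_SYSTEM_MESSAGE")
    else:
        extra = os.environ.get("DEFAULT_SYSTEM_MESSAGE")
    if not extra:
        return existing
    # The proxy appends its message to any system message the client already sent.
    return f"{existing}\n\n{extra}" if existing else extra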

Environment Variables Reference 📋

Variable                 Description                                                           Default
ANTHROPIC_API_KEY        Anthropic API key (needed only when proxying to Anthropic models)    None
OPENAI_API_KEY           OpenAI API key (required for the OpenAI provider or as fallback)     Required
GEMINI_API_KEY           Google Gemini API key (required if PREFERRED_PROVIDER=google)        None
PREFERRED_PROVIDER       Preferred provider (openai or google)                                 openai
BIG_MODEL                Model to map sonnet requests to                                       gpt-4.1
SMALL_MODEL              Model to map haiku requests to                                        gpt-4.1-mini
DEFAULT_SYSTEM_MESSAGE   System message for large models (Claude Sonnet, GPT-4, etc.)          None
SMALL_SYSTEM_MESSAGE     System message for small models (Claude Haiku, GPT-4o-mini, etc.)     None
LOG_MESSAGE_HISTORY      Enable full message history logging                                   false
LOG_LEVEL                Logging level (DEBUG, INFO, WARN, ERROR)                              WARN

How It Works 🧩

This proxy works by:

  1. Receiving requests in Anthropic's API format 📥
  2. Translating the requests to OpenAI format via LiteLLM 🔄
  3. Sending the translated request to the configured backend (OpenAI or Gemini) 📤
  4. Converting the response back to Anthropic format 🔄
  5. Returning the formatted response to the client ✅

The proxy handles both streaming and non-streaming responses, maintaining compatibility with all Claude clients. 🌊
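
Because the client-facing side speaks Anthropic's Messages API, you can smoke-test a running proxy with any Anthropic-format request. A minimal example (assumes the proxy is on port 8082; depending on your setup you may also need the usual x-api-key and anthropic-version headers):

import requests

resp = requests.post(
    "http://localhost:8082/v1/messages",
    json={
        "model": "claude-3-5-sonnet-20241022",  # contains "sonnet", so it maps to BIG_MODEL
        "max_tokens": 128,
        "messages": [{"role": "user", "content": "Say hello"}],
    },
)
print(resp.status_code, resp.json())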

Known Issues ⚠️

Text Spacing Issues

If you notice missing spaces after periods or other spacing issues in the model's output (e.g., "yep—set" instead of "yep—set "), this is likely coming from the source model's output itself, not from the proxy processing. The proxy passes text content through directly without modification.

This can happen with:

  • OpenAI models that have different text formatting than Claude
  • Model-specific quirks in how certain models handle punctuation and spacing
  • Tokenization differences between models

Workaround: You can add a system message to instruct the model about proper spacing and formatting.

Message History Logging 📝

The proxy includes comprehensive message history logging for debugging and monitoring:

Features

  • Full Request/Response Logging: Logs original requests, converted requests, and responses
  • Request Summaries: Provides concise summaries with key metrics
  • Response Summaries: Shows token usage, content length, and tool calls
  • Truncation: Automatically truncates long content for readability
  • Request IDs: Each request gets a unique ID for easy tracking

Enabling Logging

Set these environment variables to enable logging:

LOG_MESSAGE_HISTORY="true"
LOG_LEVEL="INFO"

Log Output

When enabled, you'll see logs like:

๐Ÿ“ [req_a1b2c3d4] Original Request: {"model": "claude-3.5-sonnet", "messages": [...]}
๐Ÿ“Š [req_a1b2c3d4] Request Summary: {"request_id": "req_a1b2c3d4", "original_model": "claude-3.5-sonnet", ...}
๐Ÿ“ [req_a1b2c3d4] Converted Request: {"model": "openai/gpt-4.1", "messages": [...]}
๐Ÿ“Š [req_a1b2c3d4] Response Summary: {"content_length": 150, "has_tool_calls": false, ...}
๐Ÿ“ [req_a1b2c3d4] Converted Response: {"id": "msg_...", "content": [...]}

Use Cases

  • Debugging: Track request/response flow to identify issues
  • Monitoring: Monitor token usage and response patterns
  • Development: Understand how the proxy transforms requests
  • Troubleshooting: Identify model-specific issues or formatting problems

Contributing 🤝

Contributions are welcome! Please feel free to submit a Pull Request. 🎁
