
Claude Code Proxy To DeepSeek

This is a proxy server that supports multiple large language models, with particular support for DeepSeek models, mainly because DeepSeek is comparatively inexpensive.

Notes

  1. By default, Claude Code requires a Claude account login on first launch; you will need to complete that step yourself.
  2. After startup, select the API billing mode.

Environment Variable Configuration

API Keys

# DeepSeek API Key
DEEPSEEK_API_KEY=your_deepseek_api_key

# Other optional API keys
OPENAI_API_KEY=your_openai_api_key
ANTHROPIC_API_KEY=your_anthropic_api_key
GEMINI_API_KEY=your_gemini_api_key

Model Configuration

# Preferred provider (default: deepseek)
PREFERRED_PROVIDER=deepseek

# Big model (default: deepseek-reasoner)
BIG_MODEL=deepseek-reasoner

# Small model (default: deepseek-chat)
SMALL_MODEL=deepseek-chat

Supported DeepSeek Models

The following DeepSeek models are currently supported:

  • deepseek-chat: general-purpose chat model
  • deepseek-reasoner: advanced reasoning model

Model Mapping

The system automatically maps Anthropic model names to DeepSeek models:

  • claude-3-haiku → deepseek-chat
  • claude-3-sonnet → deepseek-reasoner
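
A minimal sketch of how such a mapping might look (illustrative only; the names below are assumptions for this README, not the proxy's actual code):

# Illustrative sketch of the Anthropic -> DeepSeek mapping described above.
# The dict and function names are assumptions, not the proxy's actual code.
DEEPSEEK_MODEL_MAP = {
    "haiku": "deepseek-chat",       # small/fast tier
    "sonnet": "deepseek-reasoner",  # large/reasoning tier
}

def map_to_deepseek(anthropic_model: str) -> str:
    """Map an Anthropic model name to a DeepSeek model, or return it unchanged."""
    for alias, deepseek_model in DEEPSEEK_MODEL_MAP.items():
        if alias in anthropic_model:
            return deepseek_model
    return anthropic_model

# map_to_deepseek("claude-3-haiku-20240307") -> "deepseek-chat"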

Starting the Server

uv run uvicorn server:app --host 0.0.0.0 --port 8082 --reload
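
Once the server is running, you can sanity-check it by sending an Anthropic-format request directly. The sketch below assumes the proxy exposes the standard Anthropic /v1/messages endpoint on port 8082 and that the requests package is installed:

# Sanity-check sketch; assumes the proxy serves the standard Anthropic
# /v1/messages endpoint on localhost:8082 (adjust if your setup differs).
import requests

resp = requests.post(
    "http://localhost:8082/v1/messages",
    headers={"x-api-key": "dummy", "anthropic-version": "2023-06-01"},
    json={
        "model": "claude-3-haiku-20240307",  # mapped to deepseek-chat by the proxy
        "max_tokens": 256,
        "messages": [{"role": "user", "content": "Hello"}],
    },
)
print(resp.status_code)
print(resp.json())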

Notes

  1. Make sure the correct API key is set.
  2. DeepSeek is used as the preferred provider by default.
  3. The model mapping can be changed via environment variables.
  4. Streaming responses and tool calls are supported.
  5. Fully compatible with the Anthropic API format.

Anthropic API Proxy for Gemini & OpenAI Models 🔄

Use Anthropic clients (like Claude Code) with Gemini or OpenAI backends. 🤝

A proxy server that lets you use Anthropic clients with Gemini or OpenAI models via LiteLLM. 🌉


Quick Start ⚡

Prerequisites

  • OpenAI API key 🔑
  • Google AI Studio (Gemini) API key (if using Google provider) 🔑
  • uv installed.

Setup 🛠️

  1. Clone this repository:

    git clone https://github.com/1rgs/claude-code-openai.git
    cd claude-code-openai
  2. Install uv (if you haven't already):

    curl -LsSf https://astral.sh/uv/install.sh | sh

    (uv will handle dependencies based on pyproject.toml when you run the server)

  3. Configure Environment Variables: Copy the example environment file:

    cp .env.example .env

    Edit .env and fill in your API keys and model configurations:

    • ANTHROPIC_API_KEY: (Optional) Needed only if proxying to Anthropic models.
    • OPENAI_API_KEY: Your OpenAI API key (Required if using the default OpenAI preference or as fallback).
    • GEMINI_API_KEY: Your Google AI Studio (Gemini) API key (Required if PREFERRED_PROVIDER=google).
    • PREFERRED_PROVIDER (Optional): Set to openai (default) or google. This determines the primary backend for mapping haiku/sonnet.
    • BIG_MODEL (Optional): The model to map sonnet requests to. Defaults to gpt-4.1 (if PREFERRED_PROVIDER=openai) or gemini-2.5-pro-preview-03-25.
    • SMALL_MODEL (Optional): The model to map haiku requests to. Defaults to gpt-4.1-mini (if PREFERRED_PROVIDER=openai) or gemini-2.0-flash.

    Mapping Logic:

    • If PREFERRED_PROVIDER=openai (default), haiku/sonnet map to SMALL_MODEL/BIG_MODEL prefixed with openai/.
    • If PREFERRED_PROVIDER=google, haiku/sonnet map to SMALL_MODEL/BIG_MODEL prefixed with gemini/ if those models are in the server's known GEMINI_MODELS list (otherwise falls back to OpenAI mapping).
  4. Run the server:

    uv run uvicorn server:app --host 0.0.0.0 --port 8082 --reload

    (--reload is optional, for development)

Using with Claude Code 🎮

  1. Install Claude Code (if you haven't already):

    npm install -g @anthropic-ai/claude-code
  2. Connect to your proxy:

    ANTHROPIC_BASE_URL=http://localhost:8082 claude
  3. That's it! Your Claude Code client will now use the configured backend models (defaulting to Gemini) through the proxy. 🎯

Model Mapping 🗺️

The proxy automatically maps the Claude model aliases to OpenAI or Gemini models based on the configured BIG_MODEL and SMALL_MODEL:

Claude Model | Default Mapping     | When BIG_MODEL/SMALL_MODEL is a Gemini model
haiku        | openai/gpt-4.1-mini | gemini/[model-name]
sonnet       | openai/gpt-4.1      | gemini/[model-name]

Supported Models

OpenAI Models

The following OpenAI models are supported with automatic openai/ prefix handling:

  • o3-mini
  • o1
  • o1-mini
  • o1-pro
  • gpt-4.5-preview
  • gpt-4o
  • gpt-4o-audio-preview
  • chatgpt-4o-latest
  • gpt-4o-mini
  • gpt-4o-mini-audio-preview
  • gpt-4.1
  • gpt-4.1-mini

Gemini Models

The following Gemini models are supported with automatic gemini/ prefix handling:

  • gemini-2.5-pro-preview-03-25
  • gemini-2.0-flash

Model Prefix Handling

The proxy automatically adds the appropriate prefix to model names:

  • OpenAI models get the openai/ prefix
  • Gemini models get the gemini/ prefix
  • The BIG_MODEL and SMALL_MODEL will get the appropriate prefix based on whether they're in the OpenAI or Gemini model lists

For example:

  • gpt-4o becomes openai/gpt-4o
  • gemini-2.5-pro-preview-03-25 becomes gemini/gemini-2.5-pro-preview-03-25
  • When BIG_MODEL is set to a Gemini model, Claude Sonnet will map to gemini/[model-name]
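
As an illustration of the rules above (a sketch based on this README, not the server's actual code; the model lists are abbreviated):

# Illustrative sketch of the prefixing rules above; the lists and function
# name are assumptions based on this README, not the server's actual code.
OPENAI_MODELS = {"gpt-4.1", "gpt-4.1-mini", "gpt-4o", "gpt-4o-mini", "o1", "o3-mini"}
GEMINI_MODELS = {"gemini-2.5-pro-preview-03-25", "gemini-2.0-flash"}

def add_provider_prefix(model: str) -> str:
    """Prefix a bare model name with openai/ or gemini/ so LiteLLM can route it."""
    if model.startswith(("openai/", "gemini/")):
        return model                      # already prefixed
    if model in GEMINI_MODELS:
        return f"gemini/{model}"
    if model in OPENAI_MODELS:
        return f"openai/{model}"
    return model                          # unknown models pass through unchanged

# add_provider_prefix("gpt-4o")           -> "openai/gpt-4o"
# add_provider_prefix("gemini-2.0-flash") -> "gemini/gemini-2.0-flash"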

Customizing Model Mapping

Control the mapping using environment variables in your .env file or directly:

Example 1: Default (Use OpenAI)

No changes needed in .env beyond API keys, or ensure:

OPENAI_API_KEY="your-openai-key"
GEMINI_API_KEY="your-google-key" # Needed if PREFERRED_PROVIDER=google
# PREFERRED_PROVIDER="openai" # Optional, it's the default
# BIG_MODEL="gpt-4.1" # Optional, it's the default
# SMALL_MODEL="gpt-4.1-mini" # Optional, it's the default

Example 2: Prefer Google

GEMINI_API_KEY="your-google-key"
OPENAI_API_KEY="your-openai-key" # Needed for fallback
PREFERRED_PROVIDER="google"
# BIG_MODEL="gemini-2.5-pro-preview-03-25" # Optional, it's the default for Google pref
# SMALL_MODEL="gemini-2.0-flash" # Optional, it's the default for Google pref

Example 3: Use Specific OpenAI Models

OPENAI_API_KEY="your-openai-key"
GEMINI_API_KEY="your-google-key"
PREFERRED_PROVIDER="openai"
BIG_MODEL="gpt-4o" # Example specific model
SMALL_MODEL="gpt-4o-mini" # Example specific model

How It Works 🧩

This proxy works by:

  1. Receiving requests in Anthropic's API format 📥
  2. Translating the requests to OpenAI format via LiteLLM 🔄
  3. Sending the translated request to OpenAI 📤
  4. Converting the response back to Anthropic format 🔄
  5. Returning the formatted response to the client ✅

The proxy handles both streaming and non-streaming responses, maintaining compatibility with all Claude clients. 🌊
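
As a rough sketch of that flow (heavily simplified; the endpoint path, field handling, and hard-coded model are assumptions, and the real server also handles streaming, tool calls, system prompts, and error mapping):

# Heavily simplified sketch of the flow above; not the server's actual code.
from fastapi import FastAPI, Request
import litellm

app = FastAPI()

@app.post("/v1/messages")
async def messages(request: Request):
    body = await request.json()                       # 1. Anthropic-format request
    openai_messages = [                               # 2. translate to OpenAI format
        {"role": m["role"], "content": m["content"]}
        for m in body["messages"]
    ]
    response = await litellm.acompletion(             # 3. send via LiteLLM
        model="openai/gpt-4.1",                       #    (the mapped model goes here)
        messages=openai_messages,
        max_tokens=body.get("max_tokens", 1024),
    )
    return {                                          # 4./5. convert back to Anthropic format
        "type": "message",
        "role": "assistant",
        "content": [{"type": "text", "text": response.choices[0].message.content}],
        "stop_reason": "end_turn",
    }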

Contributing 🤝

Contributions are welcome! Please feel free to submit a Pull Request. 🎁
