Use Claude Code with OpenAI, Azure, Databricks and More 🤝
A proxy server that mirrors Claude Code's interface to OpenAI models, Azure OpenAI deployments, and other compatible providers. 🌉
Clone the repository and install locally:
```bash
# Clone the repository
git clone https://github.com/ericmichael/claude-mirror.git
cd claude-mirror

# Run the install script (recommended)
./install.sh
```

The install script will:
- Install the package using your current Python environment
- Create a launcher script that works across Python environments
- Offer to install the launcher globally to make it available from any terminal
You can also install manually if preferred:
```bash
# With uv (recommended)
uv pip install -e .

# Or with pip
pip install -e .
```

If you install manually, you may need to ensure your Python environment is activated when running commands, or use the Python module syntax:
```bash
python -m claude_mirror.cli
```

Once installed, run the interactive setup:
```bash
claude-mirror --setup
```

This will guide you through creating a configuration file with your API keys and model mappings. After setup, run:
```bash
claude-mirror
```

Prerequisites:

- OpenAI API key 🔑
- Optional: Azure OpenAI or Databricks endpoints
Quick start:

1. Install the package:

   ```bash
   # Clone the repository
   git clone https://github.com/ericmichael/claude-mirror.git
   cd claude-mirror

   # Run the install script
   ./install.sh
   ```
2. Run the interactive setup:

   ```bash
   claude-mirror --setup
   ```

   This will guide you through creating a configuration file at `~/.claude-mirror/config.yaml`. Alternatively, you can manually create a configuration file:

   ```yaml
   # Provider configuration
   providers:
     # OpenAI configuration
     openai:
       api_key: your_openai_api_key_here

     # Optional: Azure OpenAI configuration
     # azure:
     #   api_key: your_azure_api_key_here
     #   endpoint: your-instance.openai.azure.com
     #   api_version: 2023-05-15

     # Optional: Databricks configuration
     # databricks:
     #   token: your_databricks_token_here
     #   host: adb-12345678901234.12.azuredatabricks.net

   # Direct mapping from model categories to provider-specific models
   model_categories:
     # Claude models will map to these categories
     large: openai/gpt-4o       # Claude-3-Sonnet maps to this
     small: openai/gpt-4o-mini  # Claude-3-Haiku maps to this
   ```
3. Run Claude Mirror:

   ```bash
   # Normal usage (no arguments needed)
   claude-mirror

   # Interactive setup to create/update config
   claude-mirror --setup

   # Debug mode with detailed logs
   claude-mirror --debug

   # Use a specific config file
   claude-mirror --config /path/to/my/config.yaml
   ```
4. Install Claude Code (if you haven't already):

   ```bash
   npm install -g @anthropic-ai/claude-code
   ```
5. The `claude-mirror` command handles everything (see the sketch below):

   - Starts the proxy server
   - Configures the environment
   - Launches Claude Code connected to the proxy
   - Automatically shuts down the proxy when you exit Claude
That's it! Your Claude Code client will now use your configured models through the proxy. 🎯
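Under the hood, the lifecycle in step 5 amounts to roughly the following. This is an illustrative Python sketch, not the project's actual code: the `claude-mirror-server` command and port are hypothetical, and the environment handling is an assumption based on Claude Code's `ANTHROPIC_BASE_URL` variable.

```python
import os
import subprocess

def launch_claude_via_proxy(proxy_url: str = "http://localhost:8082") -> None:
    """Start the proxy, run Claude Code against it, then clean up (sketch)."""
    # Hypothetical server command and port; the real launcher's entry point
    # and listen address may differ.
    proxy = subprocess.Popen(["claude-mirror-server"])
    env = os.environ.copy()
    # Claude Code honors ANTHROPIC_BASE_URL, so pointing it at the local
    # proxy redirects all API traffic through the configured providers.
    env["ANTHROPIC_BASE_URL"] = proxy_url
    try:
        subprocess.run(["claude"], env=env)  # blocks until you exit Claude Code
    finally:
        proxy.terminate()  # automatic shutdown when Claude exits
```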
The proxy follows strict rules with no fallbacks:
- Category-based Mapping:
  - `large` and `small` categories in `config.yaml` MUST be defined
  - Claude Sonnet models → map to the "large" category
  - Claude Haiku models → map to the "small" category
- Direct Model References:
  - Each category maps directly to a provider/model in the format: `provider/model-name`
  - For example: `large: openai/gpt-4o` means Claude-3-Sonnet requests will use OpenAI's GPT-4o
  - No default fallbacks or silent provider selection
- Error Conditions (see the sketch below):
  - Missing required categories results in clear errors
  - Invalid model format results in clear errors
  - Missing configuration values in `config.yaml` result in clear errors
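To make these failure modes concrete, strict validation in this spirit might look like the following. This is an illustrative sketch, not the project's actual code; the function name and error messages are hypothetical.

```python
REQUIRED_CATEGORIES = ("large", "small")

def validate(config: dict) -> None:
    """Reject a config that violates the strict rules (illustrative sketch)."""
    categories = config.get("model_categories", {})
    for name in REQUIRED_CATEGORIES:
        if name not in categories:
            raise ValueError(f"model_categories.{name} is required in config.yaml")
        target = categories[name]
        # Every category must be an explicit provider/model reference.
        if "/" not in target:
            raise ValueError(
                f"model_categories.{name} must use provider/model format, got {target!r}"
            )
        provider = target.split("/", 1)[0]
        if provider not in config.get("providers", {}):
            raise ValueError(f"provider {provider!r} is not configured under providers")
```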
The configuration is based on two simple concepts:
- Providers: Where to get models from (OpenAI, Azure, etc.)
- Model Categories: How Claude models map to your chosen provider models
The simplest setup uses OpenAI models:
```yaml
providers:
  openai:
    api_key: your_openai_api_key_here
    # Optional: base_url for alternative OpenAI-compatible APIs
    # base_url: http://localhost:8000/v1

model_categories:
  large: openai/gpt-4o
  small: openai/gpt-4o-mini
```

You can use other OpenAI-compatible APIs by configuring a custom base URL:
```yaml
providers:
  openai:
    api_key: your_api_key_here
    base_url: http://localhost:8000/v1  # Example for LocalAI

model_categories:
  large: openai/local-model-name
  small: openai/smaller-local-model
```

This works with services like LocalAI, LM Studio, or any other API that implements the OpenAI interface.
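If you want to sanity-check such an endpoint before pointing the proxy at it, you can query the standard OpenAI-style `/models` route. This sketch uses Python's standard library and the example URL above; add an API key header if your server requires one.

```python
import json
from urllib.request import urlopen

base_url = "http://localhost:8000/v1"  # example endpoint from the config above

# Any OpenAI-compatible server should answer GET {base_url}/models
# with the list of models it can serve.
with urlopen(f"{base_url}/models") as resp:
    models = json.load(resp)

print([m["id"] for m in models.get("data", [])])
```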
To use Azure OpenAI Service:
```yaml
providers:
  azure:
    api_key: your_azure_api_key_here
    endpoint: your-instance.openai.azure.com
    api_version: 2023-05-15

model_categories:
  large: azure/my-gpt4-deployment-name
  small: azure/my-gpt35-deployment-name
```

To use Databricks:
```yaml
providers:
  databricks:
    token: your_databricks_token_here
    host: adb-12345678901234.12.azuredatabricks.net

model_categories:
  large: databricks/databricks-claude-3-sonnet
  small: databricks/databricks-claude-3-haiku
```

You can also mix providers if needed, for example using OpenAI for one category and Databricks for another.
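For instance, a split configuration might look like this (reusing the placeholder keys and hosts from the examples above):

```yaml
providers:
  openai:
    api_key: your_openai_api_key_here
  databricks:
    token: your_databricks_token_here
    host: adb-12345678901234.12.azuredatabricks.net

model_categories:
  large: openai/gpt-4o                         # "large" traffic goes to OpenAI
  small: databricks/databricks-claude-3-haiku  # "small" traffic goes to Databricks
```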
This proxy works by:
1. Loading configuration directly from `config.yaml` with strict validation (no defaults)
2. Mapping model names according to explicit rules in the configuration
3. Routing requests to the appropriate provider based on model prefix (see the sketch below)
4. Converting responses back to Anthropic format
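As a rough illustration of steps 2 and 3 (hypothetical names, not the project's internals):

```python
def resolve_model(claude_model: str, model_categories: dict[str, str]) -> tuple[str, str]:
    """Map an incoming Claude model name to a (provider, model) pair (sketch)."""
    # Step 2: map the Claude model name onto a configured category.
    if "sonnet" in claude_model:
        category = "large"
    elif "haiku" in claude_model:
        category = "small"
    else:
        raise ValueError(f"no mapping rule for model {claude_model!r}")

    # Step 3: the category value is an explicit provider/model reference;
    # the prefix before the slash selects the provider to route to.
    provider, model = model_categories[category].split("/", 1)
    return provider, model

# resolve_model("claude-3-sonnet-20240229",
#               {"large": "openai/gpt-4o", "small": "openai/gpt-4o-mini"})
# -> ("openai", "gpt-4o")
```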
The proxy maintains compatibility with Claude Code while enforcing strict configuration control: everything lives in a single configuration file, with no environment variables.
Contributions are welcome! Please feel free to submit a Pull Request. 🎁