This plugin enables opencode to use OpenAI's Codex backend via ChatGPT Plus/Pro OAuth authentication, allowing you to use your ChatGPT subscription instead of OpenAI Platform API credits.
Maintained by Open Hax. Follow project updates at github.com/open-hax/codex and report issues or ideas there.
- Prerequisites: ChatGPT Plus or Pro subscription; OpenCode installed (opencode.ai); Node.js 18+.
Quick start (minimal provider config with one model):
{
"$schema": "https://opencode.ai/config.json",
"plugin": ["@openhax/codex"],
"model": "openai/gpt-5.1-codex-max",
"provider": {
"openai": {
"options": {
"reasoningEffort": "medium",
"reasoningSummary": "auto",
"textVerbosity": "medium",
"include": ["reasoning.encrypted_content"],
"store": false
},
"models": {
"gpt-5.1-codex-max": {
"name": "GPT 5.1 Codex Max (OAuth)"
}
}
}
}
}
- Save that to `~/.config/opencode/opencode.json` (or a project-specific `.opencode.json`).
- Restart OpenCode (it installs plugins automatically). If prompted, run `opencode auth login` and finish the OAuth flow with your ChatGPT account.
- In the TUI, choose "GPT 5.1 Codex Max (OAuth)" and start chatting.
Need a full walkthrough or update/cleanup steps? See docs/getting-started.md and docs/index.md.
Prefer every preset? Copy config/full-opencode.json instead; it registers all GPT-5.1/GPT-5 Codex variants with recommended settings.
Need live stats? A local dashboard now starts automatically (binds to 127.0.0.1 on a random port) and shows cache/request metrics plus the last few transformed requests; check logs for the URL.
Want to customize? Jump to Configuration reference.
Set these in ~/.opencode/openhax-codex-config.json (applies to all models). Related env vars control runtime tweaks (e.g., request logging, env tail):
- `codexMode` (default `true`): enable the Codex-OpenCode bridge prompt and tool remapping
- `enablePromptCaching` (default `true`): keep a stable `prompt_cache_key` so Codex can reuse cached prompts
- `logging` (optional): override log defaults and related env vars (`ENABLE_PLUGIN_REQUEST_LOGGING`, `DEBUG_CODEX_PLUGIN`, `CODEX_LOG_MAX_BYTES`, `CODEX_LOG_MAX_FILES`, `CODEX_LOG_QUEUE_MAX`, `CODEX_SHOW_WARNING_TOASTS`, `CODEX_LOG_WARNINGS_TO_CONSOLE`). Fields:
  - `enableRequestLogging`: force request log persistence even without `ENABLE_PLUGIN_REQUEST_LOGGING=1`
  - `debug`: force debug logging regardless of env
  - `showWarningToasts`: show warning-level toasts in the OpenCode UI
  - `logWarningsToConsole`: mirror warnings to the console when toasts are off
  - `logMaxBytes` (default `5242880` bytes): rotate the rolling log after this size
  - `logMaxFiles` (default `5`): rotated log files to retain (plus the active log)
  - `logQueueMax` (default `1000`): max buffered log entries before the oldest entries drop
- Env tail (optional): set `CODEX_APPEND_ENV_CONTEXT=1` to reattach env/files context as a trailing developer message (stripped from system prompts to keep the prefix stable). Default is unset/`0` (env/files removed for maximum cache stability).
- Log inspection helper: `node scripts/inspect-codex-logs.mjs [--dir <path>] [--limit N] [--id X] [--stage after-transform]` summarizes cached request logs (shows model, `prompt_cache_key`, roles, etc.).
Example:
{
"codexMode": true,
"enablePromptCaching": true,
"logging": {
"enableRequestLogging": true,
"logMaxBytes": 5242880,
"logMaxFiles": 5,
"logQueueMax": 1000
}
}
- ✅ ChatGPT Plus/Pro OAuth authentication - Use your existing subscription
- ✅ 20 pre-configured model variants - Adds GPT-5.1 Codex (low/med/high), GPT-5.1 Codex Mini, and GPT-5.1 general presets (none/low/medium/high) alongside the legacy gpt-5 lineup
- ✅ Minimal dependencies - Lightweight, with only @openauthjs/openauth
- ✅ Auto-refreshing tokens - Handles token expiration automatically
- ✅ Prompt caching - Reuses responses across turns via a stable `prompt_cache_key`
- ✅ Smart auto-updating Codex instructions - Tracks the latest stable release with ETag caching
- ✅ Full tool support - write, edit, bash, grep, glob, and more
- ✅ CODEX_MODE - Codex-OpenCode bridge prompt with Task tool & MCP awareness (enabled by default)
- ✅ Automatic tool remapping - Codex tools → opencode tools
- ✅ Configurable reasoning - Control effort, summary verbosity, and text output
- ✅ Usage-aware errors - Shows clear guidance when ChatGPT subscription limits are reached
- ✅ Type-safe & tested - Strict TypeScript with 160+ unit tests + 14 integration tests
- ✅ Modular architecture - Easy to maintain and extend
Prompt caching is enabled by default to optimize your token usage and reduce costs.
Optional: `CODEX_APPEND_ENV_CONTEXT=1` keeps env/files context by reattaching it as a trailing developer message while preserving a stable prefix. Leave unset to maximize cache stability.
- Enabled by default: `enablePromptCaching: true`
- GPT-5.1 models leverage OpenAI's extended 24-hour prompt cache retention window for cheaper follow-ups
- Maintains conversation context across multiple turns
- Reduces token consumption by reusing cached prompts
- Lowers costs significantly for multi-turn conversations
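The stable-key behavior can be pictured with a small sketch (a hypothetical helper, not the plugin's actual code): one opaque key is derived per session and reused on every request, so the backend keeps matching the same cached prefix across turns.

```typescript
// Hypothetical sketch of per-session prompt_cache_key stability.
// The helper name and storage strategy are assumptions for illustration.
import { createHash } from "node:crypto";

const sessionKeys = new Map<string, string>();

function getPromptCacheKey(sessionId: string): string {
  let key = sessionKeys.get(sessionId);
  if (!key) {
    // Hash the session id so the key is opaque but deterministic.
    key = createHash("sha256").update(sessionId).digest("hex").slice(0, 32);
    sessionKeys.set(sessionId, key);
  }
  return key;
}
```

Because the same key is sent on every turn of a session, cached prompt prefixes stay valid for follow-up requests; a new session gets a fresh key and a fresh cache entry.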
For the complete experience with all reasoning variants matching the official Codex CLI:
- Copy the full configuration from `config/full-opencode.json` to your opencode config file:
{
"$schema": "https://opencode.ai/config.json",
"plugin": ["@openhax/codex"],
"provider": {
"openai": {
"options": {
"reasoningEffort": "medium",
"reasoningSummary": "auto",
"textVerbosity": "medium",
"include": ["reasoning.encrypted_content"],
"store": false
},
"models": {
"gpt-5.1-codex-max": {
"name": "GPT 5.1 Codex Max (OAuth)",
"limit": {
"context": 400000,
"output": 128000
},
"options": {
"reasoningEffort": "medium",
"reasoningSummary": "auto",
"textVerbosity": "medium",
"include": ["reasoning.encrypted_content"],
"store": false
}
},
"gpt-5.1-codex-low": {
"name": "GPT 5.1 Codex Low (OAuth)",
"limit": {
"context": 400000,
"output": 128000
},
"options": {
"reasoningEffort": "low",
"reasoningSummary": "auto",
"textVerbosity": "medium",
"include": ["reasoning.encrypted_content"],
"store": false
}
},
"gpt-5.1-codex-medium": {
"name": "GPT 5.1 Codex Medium (OAuth)",
"limit": {
"context": 400000,
"output": 128000
},
"options": {
"reasoningEffort": "medium",
"reasoningSummary": "auto",
"textVerbosity": "medium",
"include": ["reasoning.encrypted_content"],
"store": false
}
},
"gpt-5.1-codex-high": {
"name": "GPT 5.1 Codex High (OAuth)",
"limit": {
"context": 400000,
"output": 128000
},
"options": {
"reasoningEffort": "high",
"reasoningSummary": "detailed",
"textVerbosity": "medium",
"include": ["reasoning.encrypted_content"],
"store": false
}
},
"gpt-5.1-codex-mini-medium": {
"name": "GPT 5.1 Codex Mini Medium (OAuth)",
"limit": {
"context": 200000,
"output": 100000
},
"options": {
"reasoningEffort": "medium",
"reasoningSummary": "auto",
"textVerbosity": "medium",
"include": ["reasoning.encrypted_content"],
"store": false
}
},
"gpt-5.1-codex-mini-high": {
"name": "GPT 5.1 Codex Mini High (OAuth)",
"limit": {
"context": 200000,
"output": 100000
},
"options": {
"reasoningEffort": "high",
"reasoningSummary": "detailed",
"textVerbosity": "medium",
"include": ["reasoning.encrypted_content"],
"store": false
}
},
"gpt-5.1-none": {
"name": "GPT 5.1 None (OAuth)",
"limit": {
"context": 400000,
"output": 128000
},
"options": {
"reasoningEffort": "none",
"reasoningSummary": "auto",
"textVerbosity": "medium",
"include": ["reasoning.encrypted_content"],
"store": false
}
},
"gpt-5.1-low": {
"name": "GPT 5.1 Low (OAuth)",
"limit": {
"context": 400000,
"output": 128000
},
"options": {
"reasoningEffort": "low",
"reasoningSummary": "auto",
"textVerbosity": "low",
"include": ["reasoning.encrypted_content"],
"store": false
}
},
"gpt-5.1-medium": {
"name": "GPT 5.1 Medium (OAuth)",
"limit": {
"context": 400000,
"output": 128000
},
"options": {
"reasoningEffort": "medium",
"reasoningSummary": "auto",
"textVerbosity": "medium",
"include": ["reasoning.encrypted_content"],
"store": false
}
},
"gpt-5.1-high": {
"name": "GPT 5.1 High (OAuth)",
"limit": {
"context": 400000,
"output": 128000
},
"options": {
"reasoningEffort": "high",
"reasoningSummary": "detailed",
"textVerbosity": "high",
"include": ["reasoning.encrypted_content"],
"store": false
}
},
"gpt-5-codex-low": {
"name": "GPT 5 Codex Low (OAuth)",
"limit": {
"context": 400000,
"output": 128000
},
"options": {
"reasoningEffort": "low",
"reasoningSummary": "auto",
"textVerbosity": "medium",
"include": ["reasoning.encrypted_content"],
"store": false
}
},
"gpt-5-codex-medium": {
"name": "GPT 5 Codex Medium (OAuth)",
"limit": {
"context": 400000,
"output": 128000
},
"options": {
"reasoningEffort": "medium",
"reasoningSummary": "auto",
"textVerbosity": "medium",
"include": ["reasoning.encrypted_content"],
"store": false
}
},
"gpt-5-codex-high": {
"name": "GPT 5 Codex High (OAuth)",
"limit": {
"context": 400000,
"output": 128000
},
"options": {
"reasoningEffort": "high",
"reasoningSummary": "detailed",
"textVerbosity": "medium",
"include": ["reasoning.encrypted_content"],
"store": false
}
},
"gpt-5-codex-mini-medium": {
"name": "GPT 5 Codex Mini Medium (OAuth)",
"limit": {
"context": 200000,
"output": 100000
},
"options": {
"reasoningEffort": "medium",
"reasoningSummary": "auto",
"textVerbosity": "medium",
"include": ["reasoning.encrypted_content"],
"store": false
}
},
"gpt-5-codex-mini-high": {
"name": "GPT 5 Codex Mini High (OAuth)",
"limit": {
"context": 200000,
"output": 100000
},
"options": {
"reasoningEffort": "high",
"reasoningSummary": "detailed",
"textVerbosity": "medium",
"include": ["reasoning.encrypted_content"],
"store": false
}
},
"gpt-5-minimal": {
"name": "GPT 5 Minimal (OAuth)",
"limit": {
"context": 400000,
"output": 128000
},
"options": {
"reasoningEffort": "minimal",
"reasoningSummary": "auto",
"textVerbosity": "low",
"include": ["reasoning.encrypted_content"],
"store": false
}
},
"gpt-5-low": {
"name": "GPT 5 Low (OAuth)",
"limit": {
"context": 400000,
"output": 128000
},
"options": {
"reasoningEffort": "low",
"reasoningSummary": "auto",
"textVerbosity": "low",
"include": ["reasoning.encrypted_content"],
"store": false
}
},
"gpt-5-medium": {
"name": "GPT 5 Medium (OAuth)",
"limit": {
"context": 400000,
"output": 128000
},
"options": {
"reasoningEffort": "medium",
"reasoningSummary": "auto",
"textVerbosity": "medium",
"include": ["reasoning.encrypted_content"],
"store": false
}
},
"gpt-5-high": {
"name": "GPT 5 High (OAuth)",
"limit": {
"context": 400000,
"output": 128000
},
"options": {
"reasoningEffort": "high",
"reasoningSummary": "detailed",
"textVerbosity": "high",
"include": ["reasoning.encrypted_content"],
"store": false
}
},
"gpt-5-mini": {
"name": "GPT 5 Mini (OAuth)",
"limit": {
"context": 400000,
"output": 128000
},
"options": {
"reasoningEffort": "low",
"reasoningSummary": "auto",
"textVerbosity": "low",
"include": ["reasoning.encrypted_content"],
"store": false
}
},
"gpt-5-nano": {
"name": "GPT 5 Nano (OAuth)",
"limit": {
"context": 400000,
"output": 128000
},
"options": {
"reasoningEffort": "minimal",
"reasoningSummary": "auto",
"textVerbosity": "low",
"include": ["reasoning.encrypted_content"],
"store": false
}
}
}
}
}
}
Global config: `~/.config/opencode/opencode.json`
Project config: `<project>/.opencode.json`
This now gives you 22 model variants: the refreshed GPT-5.2 frontier preset, the GPT-5.1 lineup (with Codex Max as the default), plus every legacy gpt-5 preset for backwards compatibility.
All appear in the opencode model selector as "GPT 5.1 Codex Low (OAuth)", "GPT 5 High (OAuth)", etc.
When using config/full-opencode.json, you get these GPT-5.1 presets plus the original gpt-5 variants:
| CLI Model ID | TUI Display Name | Reasoning Effort | Best For |
|---|---|---|---|
| gpt-5.2 | GPT 5.2 (OAuth) | Low/Medium/High/Extra High | Latest frontier model with improved reasoning + general-purpose coding |

| CLI Model ID | TUI Display Name | Reasoning Effort | Best For |
|---|---|---|---|
| gpt-5.1-codex-max | GPT 5.1 Codex Max (OAuth) | Low/Medium/High/Extra High | Default flagship tier with xhigh reasoning for complex, multi-step problems |
| gpt-5.1-codex-low | GPT 5.1 Codex Low (OAuth) | Low | Fast code generation on the newest Codex tier |
| gpt-5.1-codex-medium | GPT 5.1 Codex Medium (OAuth) | Medium | Balanced code + tooling workflows |
| gpt-5.1-codex-high | GPT 5.1 Codex High (OAuth) | High | Multi-step coding tasks with deep tool use |
| gpt-5.1-codex-mini-medium | GPT 5.1 Codex Mini Medium (OAuth) | Medium | Budget-friendly Codex runs (200k/100k tokens) |
| gpt-5.1-codex-mini-high | GPT 5.1 Codex Mini High (OAuth) | High | Cheaper Codex tier with maximum reasoning |
| gpt-5.1-none | GPT 5.1 None (OAuth) | None | Latency-sensitive chat/tasks using the "no reasoning" mode |
| gpt-5.1-low | GPT 5.1 Low (OAuth) | Low | Fast general-purpose chat with light reasoning |
| gpt-5.1-medium | GPT 5.1 Medium (OAuth) | Medium | Default adaptive reasoning for everyday work |
| gpt-5.1-high | GPT 5.1 High (OAuth) | High | Deep analysis when reliability matters most |
Extra High reasoning: `reasoningEffort: "xhigh"` provides maximum computational effort for complex, multi-step problems and is honored on `gpt-5.1-codex-max` and `gpt-5.2`. Other models automatically map that option to `high` so their API calls remain valid.
| CLI Model ID | TUI Display Name | Reasoning Effort | Best For |
|---|---|---|---|
| gpt-5-codex-low | GPT 5 Codex Low (OAuth) | Low | Fast code generation |
| gpt-5-codex-medium | GPT 5 Codex Medium (OAuth) | Medium | Balanced code tasks |
| gpt-5-codex-high | GPT 5 Codex High (OAuth) | High | Complex code & tools |
| gpt-5-codex-mini-medium | GPT 5 Codex Mini Medium (OAuth) | Medium | Cheaper Codex tier (200k/100k) |
| gpt-5-codex-mini-high | GPT 5 Codex Mini High (OAuth) | High | Codex Mini with maximum reasoning |
| gpt-5-minimal | GPT 5 Minimal (OAuth) | Minimal | Quick answers, simple tasks |
| gpt-5-low | GPT 5 Low (OAuth) | Low | Faster responses with light reasoning |
| gpt-5-medium | GPT 5 Medium (OAuth) | Medium | Balanced general-purpose tasks |
| gpt-5-high | GPT 5 High (OAuth) | High | Deep reasoning, complex problems |
| gpt-5-mini | GPT 5 Mini (OAuth) | Low | Lightweight tasks |
| gpt-5-nano | GPT 5 Nano (OAuth) | Minimal | Maximum speed |
Usage: `--model=openai/<CLI Model ID>` (e.g., `--model=openai/gpt-5-codex-low`)
Display: the TUI shows the friendly name (e.g., "GPT 5 Codex Low (OAuth)")
Note: All `gpt-5.1-codex-mini*` and legacy `gpt-5-codex-mini*` presets normalize to the ChatGPT slug `gpt-5.1-codex-mini` (200k input / 100k output tokens).
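A minimal sketch of that normalization (the function name is illustrative, not the plugin's actual export):

```typescript
// Collapse every Codex Mini preset id onto the single ChatGPT backend slug.
// Covers both gpt-5.1-codex-mini-* and legacy gpt-5-codex-mini-* preset keys.
function normalizeMiniSlug(modelId: string): string {
  if (/^gpt-5(\.1)?-codex-mini/.test(modelId)) {
    return "gpt-5.1-codex-mini";
  }
  return modelId; // non-mini presets pass through unchanged
}
```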
All accessed via your ChatGPT Plus/Pro subscription.
Important: Always include the openai/ prefix:
# ✅ Correct
model: openai/gpt-5-codex-low

# ❌ Wrong - will fail
model: gpt-5-codex-low

See Configuration Guide for advanced usage.
When no configuration is specified, the plugin uses these defaults for all GPT-5 models:
{
"reasoningEffort": "medium",
"reasoningSummary": "auto",
"textVerbosity": "medium"
}
- `reasoningEffort: "medium"` - Balanced computational effort for reasoning
- `reasoningSummary: "auto"` - Automatically adapts summary verbosity
- `textVerbosity: "medium"` - Balanced output length
These defaults match the official Codex CLI behavior and can be customized (see Configuration below). GPT-5.1 requests automatically start at `reasoningEffort: "none"`; Codex/Codex Mini presets continue to clamp to their supported levels; and GPT-5.2 keeps `reasoningEffort: "medium"` but accepts `xhigh` while mapping `none`/`minimal` to `low`.
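The clamping rules above can be sketched as a single normalization step (the helper name `normalizeEffort` and the exact precedence are assumptions; the plugin's internals may differ):

```typescript
// Sketch of the effort-normalization rules described in this README.
type Effort = "none" | "minimal" | "low" | "medium" | "high" | "xhigh";

function normalizeEffort(modelId: string, effort: Effort): Effort {
  // xhigh is honored only on gpt-5.1-codex-max and gpt-5.2; elsewhere map to high.
  if (effort === "xhigh" && modelId !== "gpt-5.1-codex-max" && modelId !== "gpt-5.2") {
    return "high";
  }
  // gpt-5.2 bumps none/minimal up to low.
  if (modelId === "gpt-5.2" && (effort === "none" || effort === "minimal")) {
    return "low";
  }
  // Codex presets don't support minimal/none; clamp to their lowest level.
  if (modelId.includes("codex") && (effort === "minimal" || effort === "none")) {
    return "low";
  }
  // Legacy gpt-5 (not 5.1/5.2) maps none to minimal.
  if (/^gpt-5(-|$)/.test(modelId) && effort === "none") {
    return "minimal";
  }
  return effort;
}
```

Normalizing before the request is sent keeps every API call valid even when a preset asks for an effort level its model does not support.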
Already set up from Installation? You're all set. Use this section when you want to tweak defaults or build custom presets.
Use the smallest working provider config if you only need one flagship model:
{
"$schema": "https://opencode.ai/config.json",
"plugin": ["@openhax/codex"],
"model": "openai/gpt-5.1-codex-max",
"provider": {
"openai": {
"options": {
"reasoningEffort": "medium",
"reasoningSummary": "auto",
"textVerbosity": "medium",
"include": ["reasoning.encrypted_content"],
"store": false
},
"models": {
"gpt-5.1-codex-max": {
"name": "GPT 5.1 Codex Max (OAuth)"
}
}
}
}
}

`gpt-5.1-codex-max` is the recommended default for balanced reasoning + tool use. Switch the `model` value if you prefer another preset.
The easiest way to get all presets is to use config/full-opencode.json, which provides:
- 22 pre-configured model variants matching the latest Codex CLI presets (GPT-5.2 + GPT-5.1 Codex lineup + GPT-5)
- Optimal settings for each reasoning level
- All variants visible in the opencode model selector
See Installation for setup instructions.
If you want to customize settings yourself, you can configure options at provider or model level.
| Setting | GPT-5 / GPT-5.1 Values | GPT-5-Codex / Codex Mini Values | Plugin Default |
|---|---|---|---|
| reasoningEffort | none, minimal, low, medium, high | low, medium, high, xhigh† | medium |
| reasoningSummary | auto, detailed | auto, detailed | auto |
| textVerbosity | low, medium, high | medium only | medium |
| include | Array of strings | Array of strings | ["reasoning.encrypted_content"] |
Note: `minimal` effort is auto-normalized to `low` for gpt-5-codex (not supported by the API). `none` is only supported on GPT-5.1 general models; when used with legacy gpt-5 it is normalized to `minimal`, and `gpt-5.2` automatically bumps `none`/`minimal` to `low`. `xhigh` is honored on `gpt-5.1-codex-max` and `gpt-5.2`; other presets automatically map it to `high`.

† Extra High reasoning: `reasoningEffort: "xhigh"` provides maximum computational effort for complex, multi-step problems and is only available on `gpt-5.1-codex-max` and `gpt-5.2`.
Apply settings to all models:
{
"$schema": "https://opencode.ai/config.json",
"plugin": ["@openhax/codex"],
"model": "openai/gpt-5-codex",
"provider": {
"openai": {
"options": {
"reasoningEffort": "high",
"reasoningSummary": "detailed"
}
}
}
}

Create your own named variants in the model selector:
{
"$schema": "https://opencode.ai/config.json",
"plugin": ["@openhax/codex"],
"provider": {
"openai": {
"models": {
"codex-fast": {
"name": "My Fast Codex",
"limit": {
"context": 400000,
"output": 128000
},
"options": {
"reasoningEffort": "low"
}
},
"gpt-5-smart": {
"name": "My Smart GPT-5",
"limit": {
"context": 400000,
"output": 128000
},
"options": {
"reasoningEffort": "high",
"textVerbosity": "high"
}
}
}
}
}
}

- The config key (e.g., `codex-fast`) is used in the CLI: `--model=openai/codex-fast`
- The `name` field (e.g., "My Fast Codex") appears in the model selector
- The model type is auto-detected from the key (contains "codex" → gpt-5-codex, else → gpt-5)
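That detection rule is simple enough to sketch (illustrative helper, not the plugin's actual export):

```typescript
// A custom model key containing "codex" is treated as a gpt-5-codex model;
// any other key falls back to the general gpt-5 family.
function detectModelFamily(key: string): "gpt-5-codex" | "gpt-5" {
  return key.toLowerCase().includes("codex") ? "gpt-5-codex" : "gpt-5";
}
```

So a custom preset named `codex-fast` gets Codex-style request handling, while `gpt-5-smart` is treated as a general GPT-5 model.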
For advanced options, custom presets, and troubleshooting:
π Configuration Guide - Complete reference with examples
This plugin respects the same rate limits enforced by OpenAI's official Codex CLI:
- Rate limits are determined by your ChatGPT subscription tier (Plus/Pro)
- Limits are enforced server-side through OAuth tokens
- The plugin does NOT and CANNOT bypass OpenAI's rate limits
- ✅ Use for individual coding tasks, not bulk processing
- ✅ Avoid rapid-fire automated requests
- ✅ Monitor your usage to stay within subscription limits
- ✅ Consider the OpenAI Platform API for higher-volume needs
- ❌ Do not use for commercial services without proper API access
- ❌ Do not share authentication tokens or credentials
Note: Excessive usage or violations of OpenAI's terms may result in temporary throttling or account review by OpenAI.
OpenCode caches plugins under ~/.cache/opencode and stores Codex-specific assets (prompt-warm files, instruction caches, logs) under ~/.opencode. When this plugin ships a new release, clear both locations so OpenCode reinstalls the latest bits and the warmed prompts align with the new version.
- Remove the cached plugin copy (required for every upgrade):

  (cd ~ && sed -i.bak '/"@openhax\/codex"/d' .cache/opencode/package.json && rm -rf .cache/opencode/node_modules/@openhax/codex)

  Then restart OpenCode (`opencode`) so it reinstalls `@openhax/codex`.
- Reset warmed prompts & analyzer caches (optional but recommended when behavior drifts):

  rm -rf ~/.opencode/cache/
  rm -rf ~/.opencode/logs/codex-plugin/

  Removing `~/.opencode/cache` forces the plugin to refetch Codex instructions and prompt metadata; clearing the log directory removes old request captures that might contain stale secrets. See docs/privacy.md for the full list of files stored under `~/.opencode`.
Common Issues:
- 401 Unauthorized: Run `opencode auth login` again
- Model not found: Add the `openai/` prefix (e.g., `--model=openai/gpt-5-codex-low`)
- "Item not found" errors: Update to the latest plugin version
Full troubleshooting guide: docs/troubleshooting.md
Enable detailed logging:
DEBUG_CODEX_PLUGIN=1 opencode run "your prompt"

For full request/response logs:

ENABLE_PLUGIN_REQUEST_LOGGING=1 opencode run "your prompt"

Logs are saved to: `~/.opencode/logs/codex-plugin/`
See Troubleshooting Guide for details.
This plugin uses OpenAI's official OAuth authentication (the same method as their official Codex CLI). It's designed for personal coding assistance with your own ChatGPT subscription.
However, users are responsible for ensuring their usage complies with OpenAI's Terms of Use. This means:
- Personal use for your own development
- Respecting rate limits
- Not reselling access or powering commercial services
- Following OpenAI's acceptable use policies
No. This plugin is intended for personal development only.
For commercial applications, production systems, or services serving multiple users, you must obtain proper API access through the OpenAI Platform API.
Using OAuth authentication for personal coding assistance aligns with OpenAI's official Codex CLI use case. However, violating OpenAI's terms could result in account action:
Safe use:
- Personal coding assistance
- Individual productivity
- Legitimate development work
- Respecting rate limits
Risky use:
- Commercial resale of access
- Powering multi-user services
- High-volume automated extraction
- Violating OpenAI's usage policies
Critical distinction:
- ✅ This plugin: Uses official OAuth authentication through OpenAI's authorization server
- ❌ Session scraping: Extracts cookies/tokens from browsers (clearly violates TOS)
OAuth is a proper, supported authentication method. Session token scraping and reverse-engineering private APIs are explicitly prohibited by OpenAI's terms.
This is not a "free API alternative."
This plugin allows you to use your existing ChatGPT subscription for terminal-based coding assistance (the same use case as OpenAI's official Codex CLI).
If you need API access for applications, automation, or commercial use, you should purchase proper API access from OpenAI Platform.
No. This is an independent open-source project. It uses OpenAI's publicly available OAuth authentication system but is not endorsed, sponsored by, or affiliated with OpenAI.
ChatGPT, GPT-5, and Codex are trademarks of OpenAI.
Prompt caching is enabled by default to save you money:
- Reduces token usage by reusing conversation context across turns
- Lowers costs significantly for multi-turn conversations
- Maintains context so the AI remembers previous parts of your conversation
You can disable it by creating ~/.opencode/openhax-codex-config.json with:
{
"enablePromptCaching": false
}

Warning: Disabling caching will dramatically increase your token usage and costs.
This plugin implements OAuth authentication for OpenAI's Codex backend, using the same authentication flow as:
- OpenAI's official Codex CLI
- OpenAI's OAuth authorization server (https://chatgpt.com/oauth)
Based on research and working implementations from:
- ben-vargas/ai-sdk-provider-chatgpt-oauth
- ben-vargas/ai-opencode-chatgpt-auth
- openai/codex OAuth flow
- sst/opencode
Not affiliated with OpenAI. ChatGPT, GPT-5, GPT-4, GPT-3, Codex, and OpenAI are trademarks of OpenAI, L.L.C. This is an independent open-source project and is not endorsed by, sponsored by, or affiliated with OpenAI.
π Documentation:
- Installation - Get started in 2 minutes
- Configuration reference - Customize your setup
- Troubleshooting - Common issues
- GitHub Pages Docs - Extended guides
- Developer Docs - Technical deep dive
Important: This plugin is designed for personal development use only with your own ChatGPT Plus/Pro subscription. By using this tool, you agree to:
- ✅ Use only for individual productivity and coding assistance
- ✅ Respect OpenAI's rate limits and usage policies
- ✅ Not use it to power commercial services or resell access
- ✅ Comply with OpenAI's Terms of Use and Usage Policies
This tool uses OpenAI's official OAuth authentication (the same method as OpenAI's official Codex CLI). However, users are responsible for ensuring their usage complies with OpenAI's terms.
- Commercial API resale or white-labeling
- High-volume automated extraction beyond personal use
- Applications serving multiple users with one subscription
- Any use that violates OpenAI's acceptable use policies
For production applications or commercial use, use the OpenAI Platform API with proper API keys.
GPL-3.0 β see LICENSE for details.
This section is auto-generated by scripts/package-doc-matrix.ts. Do not edit manually.
None (external-only).
None (external-only).
Last updated: 2025-11-16T11:25:38.889Z