A curated collection of configurations and custom prompts for OpenAI Codex CLI, designed to enhance your development workflow with various model providers and reusable prompt templates.
For Claude Code settings, agents, and custom commands, please refer to feiskyer/claude-code-settings.
This repository provides:
- Flexible Configuration: Support for multiple model providers (LiteLLM, GitHub Copilot, DeepSeek, Ollama)
- Custom Prompts: Reusable prompt templates for common development tasks
- Best Practices: Pre-configured settings optimized for development workflows
- Easy Setup: Simple installation and configuration process
```sh
# Backup existing Codex configuration (if any)
mv ~/.codex ~/.codex.bak

# Clone this repository to ~/.codex
git clone https://github.com/feiskyer/codex-settings.git ~/.codex

# Or symlink if you prefer to keep it elsewhere
ln -s /path/to/codex-settings ~/.codex
```

The default `config.toml` uses LiteLLM as a gateway. To use it:
1. Install LiteLLM and Codex CLI:

   ```sh
   pip install -U 'litellm[proxy]'
   npm install -g @openai/codex
   ```

2. Create a LiteLLM config file (full example: `litellm_config.yaml`):

   ```yaml
   general_settings:
     master_key: sk-dummy

   litellm_settings:
     drop_params: true

   model_list:
     - model_name: gpt-5
       litellm_params:
         model: github_copilot/gpt-5
         extra_headers:
           editor-version: "vscode/1.85.1"           # Editor version
           editor-plugin-version: "copilot/1.155.0"  # Plugin version
           Copilot-Integration-Id: "vscode-chat"     # Integration ID
           user-agent: "GithubCopilot/1.155.0"       # User agent
   ```

3. Start the LiteLLM proxy:

   ```sh
   litellm --config ~/.codex/litellm_config.yaml  # Runs on http://localhost:4000 by default
   ```

4. Run Codex:

   ```sh
   codex
   ```
- `config.toml`: Default configuration using the LiteLLM gateway (sketched in TOML below)
  - Model: `gpt-5`
  - Approval policy: `on-request`
  - Reasoning: high effort with detailed summaries
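Roughly, those defaults correspond to `config.toml` entries like the following. This is a minimal sketch, not the full file; the provider block's `base_url` and `env_key` are assumptions here, so check them against the bundled `config.toml`:

```toml
model = "gpt-5"
model_provider = "litellm"
approval_policy = "on-request"
model_reasoning_effort = "high"
model_reasoning_summary = "detailed"

[model_providers.litellm]
name = "LiteLLM"
base_url = "http://localhost:4000"  # where the LiteLLM proxy listens by default
env_key = "LITELLM_API_KEY"         # assumption: env var holding the proxy key (e.g. sk-dummy)
```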
Alternative configurations are located in the `configs/` directory:
- OpenAI ChatGPT: Use ChatGPT subscription provider
- Azure OpenAI: Use Azure OpenAI service provider
- GitHub Copilot: Use GitHub Copilot via the LiteLLM proxy
- OpenRouter: Use OpenRouter provider
- ModelScope: Use ModelScope provider
- Kimi: Use Moonshot Kimi provider
To use an alternative config:
```sh
# Take ChatGPT for example
cp ~/.codex/configs/chatgpt.toml ~/.codex/config.toml
codex
```

Custom prompts are stored in the `prompts/` directory. Access them via the `/prompts:` slash menu in Codex.
GitHub Spec Kit - Unified interface for Spec-Driven Development.
To use it, run the following command to initialize your project:
```sh
uvx --from git+https://github.com/github/spec-kit.git specify init .
```

Alternatively, you can copy the `.specify` directory to your project root.
Available commands:

- `/prompts:constitution` - Create or update governing principles and development guidelines.
- `/prompts:specify` - Define requirements and user stories for the desired outcome.
- `/prompts:clarify` - Resolve underspecified areas (run before `/prompts:plan` unless explicitly skipped).
- `/prompts:plan` - Generate a technical implementation plan for the chosen stack.
- `/prompts:tasks` - Produce actionable task lists for implementation.
- `/prompts:analyze` - Check consistency and coverage after `/prompts:tasks` and before `/prompts:implement`.
- `/prompts:implement` - Execute all tasks to build the feature according to the plan.
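A typical session chains the commands in the order above; the feature description and stack notes below are purely illustrative:

```text
/prompts:constitution
/prompts:specify A CLI tool that syncs dotfiles across machines
/prompts:clarify
/prompts:plan Use Go with a standard CLI framework
/prompts:tasks
/prompts:analyze
/prompts:implement
```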
Kiro Workflow - Complete feature development from spec to execution. The Kiro commands provide a structured workflow for feature development:

- `/prompts:kiro-spec-creator [feature]` - Create requirements and acceptance criteria
- `/prompts:kiro-feature-designer [feature]` - Develop architecture and component design
- `/prompts:kiro-task-planner [feature]` - Generate implementation task lists
- `/prompts:kiro-task-executor [feature] [task]` - Execute specific implementation tasks
- `/prompts:kiro-assistant [question]` - Quick development assistance
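For example, taking a hypothetical `user-auth` feature from spec to code (the feature name and task reference are made up for illustration):

```text
/prompts:kiro-spec-creator user-auth
/prompts:kiro-feature-designer user-auth
/prompts:kiro-task-planner user-auth
/prompts:kiro-task-executor user-auth 1
```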
- `/prompts:deep-reflector` - Capture session retrospectives, user preferences, and follow-up actions.
- `/prompts:insight-documenter` - Record technical breakthroughs and update the breakthroughs index.
- `/prompts:instruction-reflector` - Audit and refine the AGENTS.md instructions for Codex.
- `/prompts:github-issue-fixer 1234` - Run the full GitHub issue workflow from planning through PR creation.
- `/prompts:github-pr-reviewer 1234` - Conduct structured PR reviews focused on actionable findings.
- `/prompts:ui-engineer [requirements]` - Deliver or review frontend implementations with modern UI standards.
- `/prompts:prompt-creator [requirements]` - Scaffold new Codex prompts following best-practice structure.
To create your own prompts (see the example file after this list):

- Create a new `.md` file in `~/.codex/prompts/`
- Use argument placeholders:
  - `$1` to `$9`: Positional arguments
  - `$ARGUMENTS`: All arguments joined by spaces
  - `$$`: Literal dollar sign
- Restart Codex to load new prompts
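As a minimal illustration, here is a hypothetical prompt file; the filename and wording are made up, but the placeholders follow the rules above:

```markdown
<!-- Saved as ~/.codex/prompts/fix-issue.md (hypothetical example) -->
Investigate and fix GitHub issue #$1 in this repository.

1. Read the issue and reproduce the problem locally.
2. Implement a fix with tests.
3. Summarize the change and any follow-ups.

Additional context from the user: $ARGUMENTS
```

After restarting Codex, this would be invoked as, e.g., `/prompts:fix-issue 1234 focus on the parser`.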
The approval policy controls when Codex asks before running commands:

- `untrusted`: Prompt for untrusted commands (recommended)
- `on-failure`: Only prompt when sandbox commands fail
- `on-request`: Model decides when to ask
- `never`: Auto-approve all commands (use with caution)
The sandbox mode controls what Codex is allowed to touch:

- `read-only`: Can read files, no writes or network
- `workspace-write`: Can write to workspace, network configurable
- `danger-full-access`: Full system access (use in containers only)
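In `config.toml`, the two settings are typically chosen together; a cautious sketch (the values are one reasonable combination drawn from the lists above, not mandated defaults):

```toml
# Ask before running anything Codex doesn't recognize as trusted
approval_policy = "untrusted"

# Allow edits inside the workspace; block writes elsewhere
sandbox_mode = "workspace-write"
```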
For reasoning-capable models (o3, gpt-5):

- Effort: `minimal`, `low`, `medium`, `high`
- Summary: `auto`, `concise`, `detailed`, `none`
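These map to two `config.toml` keys; the values below mirror the defaults this repository describes (high effort, detailed summaries):

```toml
model_reasoning_effort = "high"
model_reasoning_summary = "detailed"
```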
Control which environment variables are passed to subprocesses:

```toml
[shell_environment_policy]
inherit = "all"                 # all, core, none
exclude = ["AWS_*", "AZURE_*"]  # Exclude patterns
set = { CI = "1" }              # Force-set values
```

Define multiple configuration profiles:
```toml
[profiles.fast]
model = "gpt-4o-mini"
approval_policy = "never"
model_reasoning_effort = "low"

[profiles.reasoning]
model = "o3"
approval_policy = "on-failure"
model_reasoning_effort = "high"
```

Use with: `codex --profile reasoning`
Extend Codex with Model Context Protocol (MCP) servers:

```toml
[mcp_servers.context7]
command = "npx"
args = ["-y", "@upstash/context7-mcp@latest"]

[mcp_servers.claude]
command = "claude"
args = ["mcp", "serve"]
```

Codex automatically reads AGENTS.md files in your project to understand context. Always create one in your project root by running the `/init` command during your first Codex session.
Contributions welcome! Feel free to:
- Add new custom prompts
- Share alternative configurations
- Improve documentation
- Report issues and suggest features
This project is released under the MIT License; see LICENSE for details.