Multi-agent AI development templates for opencode.
A ready-to-use template for setting up multi-agent AI development workflows with opencode. Instead of a single AI assistant doing everything, work is delegated to specialized agents—each optimized for their role.
| Agent | Role | Key Trait |
|---|---|---|
| Oscar | Orchestrator | Coordinates, delegates, synthesizes—never does the work himself |
| Scout | Researcher + Planner | Digs deep into codebases, creates actionable implementation plans |
| Ivan | Implementor | Writes code, runs tests, follows specs precisely |
| Jester | Truth-Teller (default) | Challenges assumptions, finds blind spots (called for risky changes) |
Jester comes in several variants, each backed by a different model:
| Agent | Model | Use Case |
|---|---|---|
| jester | Claude Opus | Default truth-teller |
| jester_opus | Claude Opus | Explicit Opus variant |
| jester_qwen | Qwen3 Coder | Code-focused analysis |
| jester_grok | Grok | Alternative perspective |
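Any variant can also be called directly when you only want a single perspective; for example (a hypothetical prompt):
```
@jester_qwen: Poke holes in this caching design before we build it.
```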
```
User Request
      │
      ▼
    Oscar ──────────────────────────────┐
      │                                 │
      ├──→ Scout (research + plan)      │
      │          │                      │
      │          ├──→ Jester (challenge)│  ← optional
      │          │                      │
      │          ▼                      │
      └──→ Ivan (implement) ──→ Done ◄──┘
```
Why this pattern?
- Context efficiency — Oscar stays lean, delegating heavy lifting to specialists
- Separation of concerns — Research, planning, and implementation are distinct phases
- Quality gates — Jester provides adversarial review for risky changes
- Parallel execution — Independent tasks can run simultaneously
For high-stakes decisions, run all three Jester variants in parallel and synthesize their feedback:
```
Oscar
  │
  ├──→ @jester_opus ──┐
  ├──→ @jester_qwen ──┼──→ Synthesize → Decision
  └──→ @jester_grok ──┘
```
When to use Jester Consensus:
- Major architectural decisions — Changing core abstractions, adding new patterns
- Risky refactors — Changes touching >5 files or critical paths
- Diverse perspectives needed — When you want multiple AI viewpoints on a problem
- Breaking ties — When the team is stuck or going in circles
How it works:
- Oscar dispatches the same question to all three Jesters in parallel
- Each Jester analyzes independently using their underlying model
- Oscar synthesizes the responses, looking for:
  - Agreement — All three flag the same issue = high confidence
  - Disagreement — Different concerns = explore each angle
  - Unique insights — One Jester sees something others miss = investigate
Most of what any single Jester says is noise, but consensus across models is signal.
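In practice you ask Oscar to run the round rather than invoking each variant yourself; for example (a hypothetical prompt):
```
@oscar: Get a Jester consensus on the plan to move session storage from cookies to JWTs, and summarize where the three agree and disagree.
```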
Install opencode:
```bash
curl -fsSL https://opencode.ai/install | bash
```
Or see the opencode installation docs.
```bash
# Clone this repo
git clone https://github.com/yourusername/opencode-agents.git
cd opencode-agents

# Run the installer script
./install.sh
```
The installer copies agent definitions to `~/.config/opencode/agent/`.
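After the installer finishes, the four agent definitions should be in place (assuming a default install location):
```bash
# Expected contents, assuming a default install:
ls ~/.config/opencode/agent/
# ivan.md  jester.md  oscar.md  scout.md
```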
```bash
# Copy the example configuration
cp opencode.json.example ~/.config/opencode/opencode.json

# Edit to customize models (optional)
nano ~/.config/opencode/opencode.json
```

```bash
# Copy and customize the template AGENTS.md
cp AGENTS.md /path/to/your/project/
```
Edit `AGENTS.md` in your project to add project-specific context.
```bash
# In your project directory
opencode
```
Then talk to Oscar:
```
@oscar: I need to add user authentication to the app
```
The `opencode.json.example` file contains the full agent configuration:
```json
{
  "model": "zen/claude-opus-4-5",
  "default_agent": "oscar",
  "agent": {
    "oscar": { ... },
    "scout": { ... },
    "ivan": { ... },
    "jester": { "model": "zen/claude-opus-4-5", ... },
    "jester_opus": { "model": "zen/claude-opus-4-5", ... },
    "jester_qwen": { "model": "zen/qwen3-coder-480b", ... },
    "jester_grok": { "model": "zen/grok-3", ... }
  }
}
```
Edit `~/.config/opencode/opencode.json` to:
- Change the default model — Update the top-level `"model"` field
- Use different Jester models — Swap model providers for each variant
- Add new variants — Create additional Jester entries with different models (example below)
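For instance, a new `jester_gemini` entry would sit alongside the existing variants. This is a sketch: the variant name and model ID are hypothetical, the 0.8 temperature mirrors Jester's documented setting, and any remaining fields would mirror the existing Jester entries.
```json
{
  "agent": {
    "jester_gemini": {
      "model": "zen/gemini-2.5-pro",
      "temperature": 0.8
    }
  }
}
```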
Different AI models have different strengths and blind spots:
- Claude Opus — Strong reasoning, good at finding logical flaws
- Qwen3 Coder — Code-focused, catches implementation issues
- Grok — Alternative perspective, different training data
Running all three in parallel for critical decisions gives you diverse viewpoints.
```
opencode-agents/
├── .opencode/
│   ├── agent/
│   │   ├── oscar.md              # Orchestrator
│   │   ├── scout.md              # Researcher + Planner
│   │   ├── ivan.md               # Implementor
│   │   └── jester.md             # Truth-Teller
│   └── skills/
│       ├── python-code-review/   # Python code review checklist
│       ├── python-testing/       # pytest patterns and best practices
│       ├── python-venv/          # Virtual environment management
│       ├── pr-review/            # Pull request review guidelines
│       ├── git-commit/           # Commit message conventions
│       ├── issue-triage/         # GitHub issue triage workflow
│       ├── prompt-engineering/   # LLM prompt design patterns
│       ├── data-pipeline/        # Data pipeline best practices
│       ├── ml-experiment/        # ML experiment tracking
│       └── agent-tuning/         # Agent prompt optimization
├── AGENTS.md                     # Template for project-specific context
├── README.md                     # This file
├── install.sh                    # Installer script
└── opencode.json.example         # Example configuration
```
Skills are reusable knowledge modules that agents can load on-demand using the Skill tool. Each skill contains domain-specific expertise in a SKILL.md file.
| Skill | Description |
|---|---|
| python-code-review | Comprehensive Python code review checklist covering style, types, error handling, and performance |
| python-testing | pytest patterns, fixtures, mocking strategies, and test organization |
| python-venv | Virtual environment setup, dependency management, and common pitfalls |
| pr-review | Pull request review guidelines for thorough, constructive feedback |
| git-commit | Conventional commit message format and best practices |
| issue-triage | GitHub issue triage workflow for prioritization and labeling |
| prompt-engineering | LLM prompt design patterns, few-shot examples, and optimization techniques |
| data-pipeline | Data pipeline architecture, validation, and monitoring patterns |
| ml-experiment | ML experiment tracking, reproducibility, and model versioning |
| agent-tuning | Agent prompt optimization and behavior refinement techniques |
Agents with `skill: true` in their frontmatter can load skills dynamically:
```yaml
---
tools: [Read, Write, Glob, Grep, Bash, Task]
skill: true
---
```
When an agent needs specialized knowledge, it calls the Skill tool:
```
Agent: I need to review this Python code thoroughly.
[Loads skill: python-code-review]
Agent: Now applying the checklist...
```
- Create a directory under `.opencode/skills/` with your skill name
- Add a `SKILL.md` file with the skill content (see the sketch below)
- Skills are automatically available to agents with `skill: true`
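A minimal SKILL.md might look like the following. This is a sketch: the frontmatter fields shown (`name`, `description`) are an assumption borrowed from common agent-skill formats, not confirmed opencode syntax.
```markdown
---
# NOTE: these frontmatter fields are an assumption; check the opencode docs
name: my-custom-skill
description: When to load this skill and what it covers
---

# My Custom Skill

Checklists, patterns, and examples for the agent to apply.
```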
To create it from the shell:
```bash
mkdir -p ~/.config/opencode/skills/my-custom-skill
printf '# My Custom Skill\n\nSkill content here...\n' > ~/.config/opencode/skills/my-custom-skill/SKILL.md
```
Core principles:
- Oscar delegates everything — He coordinates but never reads files or writes code
- Scout digs deep, plans lean — Research flows naturally into actionable tasks
- Ivan follows specs — No improvisation; if the plan is unclear, ask
- Jester challenges — Called for complex refactors (>5 files) or risky changes
Jester runs at high temperature (0.8) intentionally—he's a wildcard oracle. Call him when:
- Complex refactors touching >5 files
- Risky architectural changes
- The team is stuck or going in circles
- A plan feels "correct" but dead
- Everyone agrees too quickly (dangerous!)
Most of what Jester says is noise, but buried in there is golden insight. Pan for gold.
The agent files are designed to be project-agnostic. Customize them by:
- Adjusting tool permissions in the frontmatter (see the sketch below)
- Adding project-specific rules to `AGENTS.md`
- Modifying code standards in Ivan's file for your language/framework
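For example, to make Scout strictly read-only you might trim the write-capable tools from its frontmatter (a sketch; whether Scout ships with exactly the tool list shown earlier is an assumption):
```yaml
---
# Hypothetical read-only tool set for Scout
tools: [Read, Glob, Grep]
skill: true
---
```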
MIT