한국어 | English | ไทย | 日本語 | 中文 | हिन्दी
MoAI-ADK delivers a seamless development workflow that naturally connects SPEC → TEST (TDD) → CODE → DOCUMENTATION with AI.
MoAI-ADK transforms AI-powered development with three core principles. Use the navigation below to jump to the section that matches your needs.
If you're new to MoAI-ADK, start with "What is MoAI-ADK?". If you want to get started quickly, jump straight to "5-Minute Quick Start". If you've already installed it and want to understand the concepts, we recommend "5 Key Concepts".
| Question | Jump To |
|---|---|
| First time here: what is it? | What is MoAI-ADK? |
| How do I get started? | 5-Minute Quick Start |
| What's the basic flow? | Core Workflow (0 → 3) |
| What do Plan/Run/Sync commands do? | Command Cheat Sheet |
| What are SPEC, TDD, TAG? | 5 Key Concepts |
| Tell me about agents/Skills | Sub-agents & Skills Overview |
| Want to dive deeper? | Additional Resources |
Today, countless developers want help from Claude or ChatGPT, but can't shake one fundamental doubt: "Can I really trust the code this AI generates?"
The reality looks like this. Ask an AI to "build a login feature" and you'll get syntactically perfect code. But these problems keep repeating:
- Unclear Requirements: The basic question "What exactly should we build?" remains unanswered. Email/password login? OAuth? 2FA? Everything relies on guessing.
- Missing Tests: Most AIs only test the "happy path". Wrong password? Network error? Three months later, bugs explode in production.
- Documentation Drift: Code gets modified but docs stay the same. The question "Why is this code here?" keeps repeating.
- Context Loss: Even within the same project, you have to explain everything from scratch each time. Project structure, decision rationale, previous attempts: nothing gets recorded.
- Impact Tracking Impossible: When requirements change, you can't track which code is affected.
MoAI-ADK (MoAI Agentic Development Kit) is an open-source framework designed to systematically solve these problems.
The core principle is simple yet powerful:
"No tests without code, no SPEC without tests"
More precisely, it's the reverse order:
"SPEC comes first. No tests without SPEC. No complete documentation without tests and code."
When you follow this order, magical things happen:
1️⃣ Clear Requirements
Write SPECs first with the /alfred:1-plan command. A vague request like "login feature" transforms into clear requirements like "WHEN valid credentials are provided, the system SHALL issue a JWT token". Alfred's spec-builder uses EARS syntax to create professional SPECs in just 3 minutes.
2️⃣ Test Guarantee
/alfred:2-run automatically performs Test-Driven Development (TDD). It proceeds in RED (failing test) → GREEN (minimal implementation) → REFACTOR (cleanup) order, guaranteeing 85%+ test coverage. No more "testing later". Tests drive code creation.
3️⃣ Automatic Documentation Sync
A single /alfred:3-sync command synchronizes all code, tests, and documentation. README, CHANGELOG, API docs, and Living Documents all update automatically. Six months later, code and docs still match.
4️⃣ Tracking with the @TAG System
Every piece of code, test, and documentation gets a @TAG:ID. When requirements change later, one command (rg "@SPEC:AUTH-001") finds all related tests, implementations, and docs. You gain confidence during refactoring.
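In practice, the markers are plain comments at the top of each artifact. A minimal sketch, with hypothetical file paths, of how a tagged test and implementation might look (the comment style follows the examples later in this README):

```python
# tests/test_auth_service.py (hypothetical path)
# @TEST:AUTH-001 | SPEC: SPEC-AUTH-001.md

def test_issue_token_returns_jwt_for_valid_credentials():
    ...  # assertions against the AUTH-001 requirements


# src/auth/service.py (hypothetical path)
# @CODE:AUTH-001 | SPEC: SPEC-AUTH-001.md | TEST: tests/test_auth_service.py

def issue_token(credentials: dict) -> str:
    """@CODE:AUTH-001 - issue a JWT when credentials are valid."""
    ...
```

A search like `rg '@(SPEC|TEST|CODE|DOC):AUTH-001' -n` then surfaces every artifact in the chain at once.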
5️⃣ Alfred Remembers Context
A team of AI agents collaborates to remember your project's structure, decision rationale, and work history. No need to repeat the same questions.
For beginners to remember easily, MoAI-ADK's value simplifies to three things:
First, SPEC comes before code. Start by clearly defining what to build. Writing a SPEC helps you discover problems before implementation, and communication costs with teammates drop dramatically.
Second, tests drive code (TDD). Write tests before implementation (RED). Implement minimally to pass the tests (GREEN). Then clean up the code (REFACTOR). The result: fewer bugs, confidence in refactoring, and code anyone can understand.
Third, documentation and code always match
One /alfred:3-sync command auto-updates all documentation. README, CHANGELOG, API docs, and Living Documents always sync with code. No more despair when modifying six-month-old code.
Modern AI-powered development faces various challenges. MoAI-ADK systematically solves all these problems:
| Concern | Traditional Approach Problem | MoAI-ADK Solution |
|---|---|---|
| "Can't trust AI code" | Implementation without tests, unclear verification | Enforces SPEC β TEST β CODE order, guarantees 85%+ coverage |
| "Repeating same explanations" | Context loss, unrecorded project history | Alfred remembers everything, 19 AI team members collaborate |
| "Hard to write prompts" | Don't know how to write good prompts | /alfred commands provide standardized prompts automatically |
| "Documentation always outdated" | Forget to update docs after code changes | /alfred:3-sync auto-syncs with one command |
| "Don't know what changed where" | Hard to search code, unclear intent | @TAG chain connects SPEC β TEST β CODE β DOC |
| "Team onboarding takes forever" | New members can't grasp code context | Reading SPEC makes intent immediately clear |
From the moment you adopt MoAI-ADK, you'll feel:
- Faster Development: Clear SPEC reduces round-trip explanation time
- Fewer Bugs: SPEC-based tests catch issues early
- Better Code Understanding: @TAG and SPEC make intent immediately clear
- Lower Maintenance Costs: Code and docs always match
- Efficient Team Collaboration: Clear communication through SPEC and TAG
Now let's start your first project with MoAI-ADK. Follow the steps below, and within minutes you'll have a project with SPEC, TDD, and documentation all connected.
First, install uv. uv is an ultra-fast Python package manager written in Rust. It's 10+ times faster than traditional pip and works perfectly with MoAI-ADK.
# macOS/Linux
curl -LsSf https://astral.sh/uv/install.sh | sh
# Windows (PowerShell)
powershell -c "irm https://astral.sh/uv/install.ps1 | iex"
# Verify installation
uv --version
# Output: uv 0.x.x
Why uv? MoAI-ADK is optimized to leverage uv's fast installation speed and stability. Perfect project isolation means no impact on other Python environments.
Install MoAI-ADK as a global tool. This won't affect your project dependencies.
# Install in tool mode (recommended: runs in isolated environment)
uv tool install moai-adk
# Verify installation
moai-adk --version
# Output: MoAI-ADK v1.0.0
Once installed, you can use the moai-adk command anywhere.
To start a new project:
moai-adk init my-project
cd my-project
To add to an existing project:
cd your-existing-project
moai-adk init .
This one command automatically generates:
my-project/
├── .moai/                      # MoAI-ADK project configuration
│   ├── config.json             # Project settings (language, mode, owner)
│   ├── project/                # Project information
│   │   ├── product.md          # Product vision and goals
│   │   ├── structure.md        # Directory structure
│   │   └── tech.md             # Tech stack and architecture
│   ├── memory/                 # Alfred's knowledge base (8 files)
│   │   ├── CLAUDE-AGENTS-GUIDE.md        # Sub-agent collaboration guide
│   │   ├── CLAUDE-RULES.md               # Decision rules and standards
│   │   ├── CLAUDE-PRACTICES.md           # Workflow patterns and examples
│   │   ├── CONFIG-SCHEMA.md              # .moai/config.json schema
│   │   ├── DEVELOPMENT-GUIDE.md          # SPEC-First TDD workflow guide
│   │   ├── GITFLOW-PROTECTION-POLICY.md  # Git branch protection
│   │   ├── SKILLS-DESCRIPTION-POLICY.md  # Skills management policy
│   │   └── SPEC-METADATA.md              # SPEC YAML frontmatter standard
│   ├── specs/                  # SPEC files
│   │   └── SPEC-XXX-001/       # Each SPEC in its own folder
│   │       └── spec.md         # EARS-format specification
│   └── reports/                # Analysis reports
├── .claude/                    # Claude Code automation
│   ├── agents/                 # 12 Sub-agents
│   │   └── alfred/
│   │       ├── project-manager.md        # Project initialization
│   │       ├── spec-builder.md           # SPEC authoring (EARS)
│   │       ├── implementation-planner.md # Architecture & TAG design
│   │       ├── tdd-implementer.md        # RED-GREEN-REFACTOR loop
│   │       ├── doc-syncer.md             # Documentation sync
│   │       ├── quality-gate.md           # TRUST 5 verification
│   │       ├── tag-agent.md              # TAG chain validation
│   │       ├── trust-checker.md          # Code quality checks
│   │       ├── debug-helper.md           # Error analysis & fixes
│   │       ├── git-manager.md            # GitFlow & PR management
│   │       ├── cc-manager.md             # Claude Code optimization
│   │       └── skill-factory.md          # Skills creation & updates
│   ├── commands/               # 4 Alfred commands
│   │   └── alfred/
│   │       ├── 0-project.md    # Project initialization
│   │       ├── 1-plan.md       # SPEC authoring
│   │       ├── 2-run.md        # TDD implementation
│   │       └── 3-sync.md       # Documentation sync
│   ├── skills/                 # 58 Claude Skills
│   │   ├── moai-foundation-*   # 6 Foundation tier
│   │   ├── moai-essentials-*   # 4 Essentials tier
│   │   ├── moai-alfred-*       # 7 Alfred tier
│   │   ├── moai-domain-*       # 10 Domain tier
│   │   ├── moai-lang-*         # 18 Language tier
│   │   ├── moai-cc-*           # 8 Claude Code tier
│   │   ├── moai-skill-factory  # 1 Skill Factory
│   │   └── moai-spec-authoring # 1 SPEC authoring
│   ├── hooks/                  # Event-driven automation
│   │   └── alfred/
│   │       └── alfred_hooks.py # 5 hooks (Session, PreTool, etc.)
│   ├── output-styles/          # Response styles
│   │   └── alfred/
│   │       ├── agentic-coding.md     # Professional development mode
│   │       ├── moai-adk-learning.md  # Educational explanations mode
│   │       └── study-with-alfred.md  # Interactive learning mode
│   └── settings.json           # Claude Code settings
├── src/                        # Implementation code
├── tests/                      # Test code
├── docs/                       # Auto-generated documentation
├── CLAUDE.md                   # Alfred's core directives
└── README.md
Run Claude Code and invoke the Alfred SuperAgent:
# Run Claude Code
claude
Then enter this in Claude Code's command input:
/alfred:0-project
This command performs:
- Collect Project Info: "Project name?", "Goals?", "Main language?"
- Auto-detect Tech Stack: Automatically recognizes Python/JavaScript/Go, etc.
- Deploy Skill Packs: Prepares necessary Skills for your project
- Generate Initial Report: Project structure, suggested next steps
After project initialization completes, write your first feature as a SPEC:
/alfred:1-plan "User registration feature"
Automatically generated:
- `@SPEC:USER-001` – Unique ID assigned
- `.moai/specs/SPEC-USER-001/spec.md` – Professional SPEC in EARS format
- `feature/spec-user-001` – Git branch auto-created
Once SPEC is written, implement using TDD:
/alfred:2-run USER-001
This command handles:
- 🔴 RED: Automatically write a failing test (`@TEST:USER-001`)
- 🟢 GREEN: Minimal implementation to pass the test (`@CODE:USER-001`)
- ♻️ REFACTOR: Improve code quality
Finally, auto-sync all documentation:
/alfred:3-sync
Automatically generated/updated:
- Living Document (API documentation)
- README updates
- CHANGELOG generation
- @TAG chain validation
After these steps, everything is ready:
✅ Requirements specification (SPEC)
✅ Test code (85%+ coverage)
✅ Implementation code (tracked with @TAG)
✅ API documentation (auto-generated)
✅ Change history (CHANGELOG)
✅ Git commit history (RED/GREEN/REFACTOR)
Everything completes in minutes!
Check if the generated results were properly created:
# 1. Check TAG chain (SPEC β TEST β CODE β DOC)
rg '@(SPEC|TEST|CODE):USER-001' -n
# 2. Run tests
pytest tests/ -v
# 3. Check generated documentation
cat docs/api/user.md
cat README.md
💡 Verification Command: `moai-adk doctor`
✅ Checks whether Python/uv versions, the `.moai/` structure, and the agent/Skills configuration are all ready. All green checkmarks mean you're fully set up!
MoAI-ADK's AI coordination is powered by Alfred, the MoAI SuperAgent. Alfred's behavior and decision-making are guided by a set of internal configuration documents in the .claude/ directory.
When you run MoAI-ADK, Alfred loads configuration from 4 coordinated documents (stored in your .claude/ directory):
| Document | Size | Purpose | When Alfred Reads It |
|---|---|---|---|
| CLAUDE.md | ~7kb | Alfred's identity, core directives, project metadata | At session start (bootstrap) |
| CLAUDE-AGENTS-GUIDE.md | ~14kb | Sub-agent roster (19 members), Skills distribution (55 packs), team structure | When selecting which agent to invoke |
| CLAUDE-RULES.md | ~17kb | Decision-making rules (Skill invocation, Interactive Questions, TAG validation), commit templates, TRUST 5 gates | During each decision point (e.g., when to ask user questions) |
| CLAUDE-PRACTICES.md | ~8kb | Practical workflows, context engineering (JIT retrieval), on-demand agent patterns, real examples | During implementation phase |
For Developers: These documents define how Alfred interprets your requirements and orchestrates development. Understanding them helps you:
- Write clearer specifications that Alfred understands better
- Know which agent/Skill will be invoked for your request
- Understand decision points where Alfred might ask you questions
For AI: Progressive disclosure means:
- Session Start: Load only CLAUDE.md (~7 KB) → minimal overhead
- On-Demand: Load CLAUDE-AGENTS-GUIDE.md, CLAUDE-RULES.md, CLAUDE-PRACTICES.md only when needed
- Result: Faster session boot, cleaner context, clear decision logic

A typical decision flow:
- CLAUDE.md is already loaded → Alfred knows its role and project context
- Alfred checks CLAUDE-RULES.md → "Should I ask user questions? Which Skill applies here?"
- If implementing code: Alfred loads CLAUDE-AGENTS-GUIDE.md → "Which agent executes TDD?"
- During implementation: Alfred loads CLAUDE-PRACTICES.md → "How do I structure the RED → GREEN → REFACTOR workflow?"
Most developers never modify these files. MoAI-ADK ships with optimized defaults.
If you need to customize Alfred's behavior (rare), edit these documents in your project's .claude/ directory:
- Add new decision rules in CLAUDE-RULES.md
- Adjust agent selection logic in CLAUDE-AGENTS-GUIDE.md
- Document team-specific workflows in CLAUDE-PRACTICES.md
⚠️ Important: These are internal configuration files for Alfred, not user guides. Keep them concise and decision-focused. Most teams don't modify them.
Alfred's knowledge base consists of 14 memory files stored in .moai/memory/. These files define standards, rules, and guidelines that Alfred and Sub-agents reference during development.
Core Guides (3 files):
| File | Size | Purpose | Who Uses It |
|---|---|---|---|
| CLAUDE-AGENTS-GUIDE.md | ~15KB | Sub-agent selection & collaboration | Alfred, Developers |
| CLAUDE-PRACTICES.md | ~12KB | Real-world workflow examples & patterns | Alfred, All Sub-agents |
| CLAUDE-RULES.md | ~19KB | Skill/TAG/Git rules & decision standards | Alfred, All Sub-agents |
Standards (4 files):
| File | Size | Purpose | Who Uses It |
|---|---|---|---|
| CONFIG-SCHEMA.md | ~12KB | .moai/config.json schema definition | project-manager |
| DEVELOPMENT-GUIDE.md | ~14KB | SPEC-First TDD workflow guide | All Sub-agents, Developers |
| GITFLOW-PROTECTION-POLICY.md | ~6KB | Git branch protection policy | git-manager |
| SPEC-METADATA.md | ~9KB | SPEC YAML frontmatter standard (SSOT) | spec-builder, doc-syncer |
Implementation Analysis (7 files): Internal reports and policy documents for Skills management, workflow improvements, and team integration analysis.
Session Start (Always):
- `CLAUDE.md`
- `CLAUDE-AGENTS-GUIDE.md`
- `CLAUDE-RULES.md`

Just-In-Time (Command Execution):
- `/alfred:1-plan` → `SPEC-METADATA.md`, `DEVELOPMENT-GUIDE.md`
- `/alfred:2-run` → `DEVELOPMENT-GUIDE.md`
- `/alfred:3-sync` → `DEVELOPMENT-GUIDE.md`

Conditional (On-Demand):
- Config changes → `CONFIG-SCHEMA.md`
- Git operations → `GITFLOW-PROTECTION-POLICY.md`
- Skill creation → `SKILLS-DESCRIPTION-POLICY.md`
- Single Source of Truth (SSOT): Each standard is defined exactly once, eliminating conflicts
- Context Efficiency: JIT loading reduces initial session overhead (only 3 files at start)
- Consistent Decisions: All Sub-agents follow the same rules from `CLAUDE-RULES.md`
- Traceability: SPEC metadata, @TAG rules, and Git standards are all documented
| Priority | Files | Usage Pattern |
|---|---|---|
| Very High | CLAUDE-RULES.md | Every decision |
| High | DEVELOPMENT-GUIDE.md, SPEC-METADATA.md | All commands |
| Medium | CLAUDE-AGENTS-GUIDE.md, CLAUDE-PRACTICES.md | Agent coordination |
| Low | CONFIG-SCHEMA.md, GITFLOW-PROTECTION-POLICY.md | Specific operations |
📖 Complete Analysis: See `.moai/memory/MEMORY-FILES-USAGE.md` for comprehensive documentation on who uses each file, when they're loaded, where they're referenced, and why they're needed.
# Check currently installed version
moai-adk --version
# Check latest version on PyPI
uv tool list  # Check current version of moai-adk
MoAI-ADK's update command auto-detects your installer and runs an intelligent 3-stage workflow that finishes 70-80% faster when templates are already synchronized:
Basic 3-Stage Workflow (automatic tool detection):
# Stage 1: Package version check
# Shows version comparison, upgrades if needed
moai-adk update
# Stage 2: Config version comparison (NEW in v0.6.3)
# Compares package template version with project config
# If already synchronized, exits early (70-80% faster!)
# Stage 3: Template sync (only if needed)
# Creates backup → Syncs templates → Updates config
# Message: "✅ Templates synced!" or "Templates are up to date!"
Check for updates without applying them:
# Preview available updates (shows package & config versions)
moai-adk update --check
Templates-only mode (skip package upgrade, useful for manual upgrades):
# If you manually upgraded the package, sync templates only
# Still performs Stage 2 config comparison for accuracy
moai-adk update --templates-only
CI/CD mode (auto-confirm all prompts):
# Auto-confirms all prompts - useful in automated pipelines
# Runs all 3 stages automatically
moai-adk update --yes
Force mode (skip backup creation):
# Update without creating backup (use with caution)
# Still performs config version comparison
moai-adk update --force
How the 3-Stage Workflow Works (v0.6.3):
| Stage | Condition | Action | Performance |
|---|---|---|---|
| Stage 1 | Package: current < latest | Detects installer → Upgrades package | ~20-30s |
| Stage 2 | Config: compare versions | Reads template_version from config.json | ~1s ⚡ NEW! |
| Stage 3 | Config: package > project | Creates backup → Syncs templates (if needed) | ~10-15s |
Performance Improvement (v0.6.3):
- Same-version case: 12-18s → 3-4s (70-80% faster! ⚡)
  - Stage 1: ~1s (version check)
  - Stage 2: ~1s (config comparison)
  - Stage 3: skipped (already synchronized)
- CI/CD repeated runs: 30% cost reduction
  - First run: full sync
  - Subsequent runs: version checks only (~3-4s)
Why 3 stages? Python processes cannot upgrade themselves while running. The 3-stage workflow is necessary for safety AND performance:
- Stage 1: Package upgrade detection (compares with PyPI)
- Stage 2: Template sync necessity detection (compares config versions) - NEW v0.6.3
- Stage 3: Templates and configuration sync (only if necessary)
Key Improvement in v0.6.3: Previously, every update would sync templates even if nothing had changed. Now config version comparison (Stage 2) detects when templates are already current and skips Stage 3 entirely, saving 10-15 seconds.
Config Version Tracking:
{
"project": {
"template_version": "0.6.3" // Tracks last synchronized template version
}
}
This field lets MoAI-ADK determine accurately whether templates need synchronization, without re-syncing everything.
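To illustrate the Stage 2 check, here is a minimal sketch of how such a comparison could work. The config layout follows the excerpt above; the function name and logic are illustrative, not MoAI-ADK's actual internals.

```python
import json
from pathlib import Path

# Illustrative Stage 2 check: compare the package's template version
# against the project's last-synced template_version in .moai/config.json.
def needs_template_sync(project_root: Path, package_template_version: str) -> bool:
    config_path = project_root / ".moai" / "config.json"
    if not config_path.exists():
        return True  # no config yet: a full sync is required
    config = json.loads(config_path.read_text(encoding="utf-8"))
    synced = config.get("project", {}).get("template_version")
    # Versions match -> Stage 3 can be skipped entirely (the 70-80% saving).
    return synced != package_template_version

if __name__ == "__main__":
    print(needs_template_sync(Path.cwd(), "0.6.3"))
```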
Upgrade specific tool (recommended)
# Upgrade only moai-adk to latest version
uv tool upgrade moai-adk
Upgrade all installed tools
# Upgrade all uv tool installations to latest versions
uv tool upgrade --all
Install specific version
# Reinstall specific version (e.g., 0.4.2)
uv tool install moai-adk==0.4.2
# 1. Check installed version
moai-adk --version
# 2. Verify project works correctly
moai-adk doctor
# 3. Check updated features in Alfred
cd your-project
claude
/alfred:0-project  # Verify new features like language selection
💡 Update Workflow Tips:
- Stage 1: `moai-adk update` detects the installer (uv tool, pipx, or pip) and upgrades the package
- Stage 2: run `moai-adk update` again to sync templates, config, and agents/Skills
- Smart detection: automatically determines whether a package upgrade is needed based on version comparison
- CI/CD ready: use `moai-adk update --yes` for fully automated updates in pipelines
- Manual upgrade path: use `moai-adk update --templates-only` after manually upgrading the package
- Rollback safe: automatic backups are written to `.moai-backups/` before template sync
Alfred iteratively develops projects with four commands.
%%{init: {'theme':'neutral'}}%%
graph TD
Start([User Request]) --> Init[0. Init<br/>/alfred:0-project]
Init --> Plan[1. Plan & SPEC<br/>/alfred:1-plan]
Plan --> Run[2. Run & TDD<br/>/alfred:2-run]
Run --> Sync[3. Sync & Docs<br/>/alfred:3-sync]
Sync --> Plan
Sync -.-> End([Release])
`/alfred:0-project`:
- Questions about project introduction, target, language, mode (locale)
- Auto-generates `.moai/config.json` and the five `.moai/project/*` documents
- Language detection and recommended Skill Pack deployment (Foundation + Essentials + Domain/Language)
- Template cleanup, initial Git/backup checks
`/alfred:1-plan`:
- Write SPEC with the EARS template (includes `@SPEC:ID`)
- Organize the Plan Board, implementation ideas, risk factors
- Auto-create branch/initial Draft PR in Team mode
`/alfred:2-run`:
- Phase 1 `implementation-planner`: Design libraries, folders, TAG layout
- Phase 2 `tdd-implementer`: RED (failing test) → GREEN (minimal implementation) → REFACTOR (cleanup)
- `quality-gate` verifies TRUST 5 principles and coverage changes
`/alfred:3-sync`:
- Sync Living Document, README, CHANGELOG, etc.
- Validate the TAG chain and recover orphan TAGs
- Generate the Sync Report, transition Draft → Ready for Review, support the `--auto-merge` option
| Command | What it does | Key Outputs |
|---|---|---|
| /alfred:0-project | Collect project description, create config/docs, recommend Skills | .moai/config.json, .moai/project/*, initial report |
| /alfred:1-plan <description> | Analyze requirements, draft SPEC, write Plan Board | .moai/specs/SPEC-*/spec.md, plan/acceptance docs, feature branch |
| /alfred:2-run <SPEC-ID> | Execute TDD, test/implement/refactor, verify quality | tests/, src/ implementation, quality report, TAG connection |
| /alfred:3-sync | Sync docs/README/CHANGELOG, organize TAG/PR status | docs/, .moai/reports/sync-report.md, Ready PR |
| /alfred:9-feedback | Interactively create GitHub Issues (type → title → description → priority) | GitHub Issue with auto labels, priority, URL |
✅ All commands follow the Phase 0 (optional) → Phase 1 → Phase 2 → Phase 3 cycle. Alfred automatically reports execution status and suggests next steps.
💡 New in v0.7.0+: `/alfred:9-feedback` enables instant GitHub Issue creation during development, keeping your workflow uninterrupted while keeping issues tracked and visible to the team.
MoAI-ADK now provides automatic GitHub Issue synchronization from SPEC documents, seamlessly integrating requirements with issue tracking in team mode.
When you create a SPEC document using /alfred:1-plan and push it to a feature branch:
- GitHub Actions Workflow automatically triggers on PR events
- SPEC Metadata (ID, version, status, priority) is extracted from YAML frontmatter
- GitHub Issue is created with full SPEC content and metadata table
- PR Comment is added with a link to the created issue
- Labels are automatically applied based on priority (critical, high, medium, low)
From SPEC to GitHub Issue:
- SPEC ID: Unique identifier (e.g., AUTH-001, USER-001)
- Version: Semantic versioning (v0.1.0, v1.0.0)
- Status: draft, in-review, in-progress, completed, stable
- Priority: critical, high, medium, low (becomes GitHub label)
- Full Content: EARS requirements, acceptance criteria, dependencies
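As a rough sketch of that extraction step (assuming PyYAML is available; the function is hypothetical, not the workflow's actual code), parsing the frontmatter could look like:

```python
from pathlib import Path

import yaml  # assumes PyYAML is available

# Hypothetical sketch: pull SPEC metadata out of the YAML frontmatter
# block delimited by the leading "---" lines of a spec.md file.
def extract_spec_metadata(spec_path: Path) -> dict:
    text = spec_path.read_text(encoding="utf-8")
    if not text.startswith("---"):
        raise ValueError(f"{spec_path} has no YAML frontmatter")
    frontmatter = text.split("---", 2)[1]
    meta = yaml.safe_load(frontmatter)
    return {
        "id": meta["id"],             # e.g., "AUTH-001"
        "version": meta["version"],   # e.g., "1.0.0"
        "status": meta["status"],     # draft / in-review / in-progress / ...
        "priority": meta["priority"]  # becomes a GitHub label
    }

print(extract_spec_metadata(Path(".moai/specs/SPEC-AUTH-001/spec.md")))
```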
GitHub Issue Format:
# [SPEC-AUTH-001] User Authentication (v1.0.0)
## SPEC Metadata
| Field | Value |
|-------|-------|
| **ID** | AUTH-001 |
| **Version** | v1.0.0 |
| **Status** | in-progress |
| **Priority** | high |
## SPEC Document
[Full SPEC content from .moai/specs/SPEC-AUTH-001/spec.md]
---
📌 **Branch**: `feature/AUTH-001`
🔗 **PR**: #123
🔄 **Auto-synced**: This issue is automatically synchronized from the SPEC document

✅ Automatic Issue Creation: GitHub Issue created on every PR with SPEC file changes
✅ Metadata Extraction: ID, version, status, priority automatically parsed from YAML frontmatter
✅ PR Integration: Issue linked to PR via automatic comment
✅ Label Management: Priority-based labels (critical, high, medium, low) auto-applied
✅ CodeRabbit Review (local only): AI-powered SPEC quality validation in local development
GitHub Actions Workflow: .github/workflows/spec-issue-sync.yml
GitHub Issue Template: .github/ISSUE_TEMPLATE/spec.yml
GitHub Labels: spec, planning, critical, high, medium, low
All templates are automatically installed with MoAI-ADK and synced during moai-adk init.
When working in your local development environment, CodeRabbit provides automatic SPEC quality review:
What CodeRabbit Reviews:
- ✅ All 7 required metadata fields (id, version, status, created, updated, author, priority)
- ✅ HISTORY section formatting and chronological order
- ✅ EARS requirements structure (Ubiquitous, Event-driven, State-driven, Constraints, Optional)
- ✅ Acceptance criteria in Given-When-Then format
- ✅ @TAG system compliance for traceability
CodeRabbit Configuration: .coderabbit.yaml (local only, not distributed in packages)
Note: CodeRabbit integration is available only in local development environments. Package users receive core GitHub Issue automation without CodeRabbit review.
# 1. Create SPEC
/alfred:1-plan "User authentication feature"
# 2. SPEC file created at .moai/specs/SPEC-AUTH-001/spec.md
# 3. Feature branch created: feature/SPEC-AUTH-001
# 4. Draft PR created (team mode)
# 5. GitHub Actions automatically:
# - Parses SPEC metadata
# - Creates GitHub Issue #45
# - Adds PR comment: "β
SPEC GitHub Issue Created - Issue: #45"
# - Applies labels: spec, planning, high
# 6. CodeRabbit reviews SPEC (local only):
# - Validates metadata
# - Checks EARS requirements
# - Provides quality score
# 7. Continue with TDD implementation
/alfred:2-run AUTH-001
- Centralized Tracking: All SPEC requirements tracked as GitHub Issues
- Team Visibility: Non-technical stakeholders can follow progress via Issues
- Automated Workflow: No manual issue creation; fully automated from SPEC to Issue
- Traceability: Direct link between SPEC files, Issues, PRs, and implementation
- Quality Assurance: CodeRabbit validates SPEC quality before implementation (local only)
MoAI-ADK v0.7.0+ includes the Quick Issue Creation feature, allowing developers to instantly create GitHub Issues without interrupting their development workflow.
During development, you frequently encounter:
- 🐛 Bugs that need immediate reporting
- ✨ Feature ideas that come to mind
- ⚡ Performance improvements to suggest
- ❓ Architecture questions that need team discussion
The old way: stop coding, go to GitHub, manually fill in the issue form, then try to remember what you were working on.
The new way: type one command, the GitHub Issue is created instantly, and you continue coding.
When you run /alfred:9-feedback, Alfred guides you through an interactive multi-step dialog:
Step 1: Select Issue Type
Alfred: What type of issue do you want to create?
[ ] 🐛 Bug Report - Something isn't working
[ ] ✨ Feature Request - Suggest new functionality
[ ] ⚡ Improvement - Enhance existing features
[ ] ❓ Question/Discussion - Ask the team
Step 2: Enter Issue Title
Alfred: What's the issue title?
Your input: "Login button not responding to clicks"
Step 3: Enter Description (Optional)
Alfred: Provide a detailed description (optional; press Enter to skip)
Your input: "When I click the login button on iPhone 15, it freezes for 5 seconds then crashes"
Step 4: Select Priority Level
Alfred: What's the priority level?
[ ] 🔴 Critical - System down, data loss, security breach
[ ] 🟠 High - Major feature broken, significant impact
[✓] 🟡 Medium - Normal priority (default)
[ ] 🟢 Low - Minor issues, nice-to-have
Step 5: Automatic Issue Creation
Alfred automatically:
1. Determines appropriate labels based on issue type and priority
2. Formats the title with an emoji: "🐛 [BUG] Login button not responding..."
3. Creates GitHub Issue with all information
4. Returns the issue number and URL
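As a rough illustration of steps 1-2, a mapping along these lines could turn the chosen type and priority into the emoji-prefixed title and labels. This is a hypothetical sketch; the labels match the example session below, but Alfred's actual mapping may differ.

```python
# Hypothetical sketch of the type/priority -> title/labels mapping.
TYPE_META = {
    "bug":         ("🐛", "[BUG]", "bug"),
    "feature":     ("✨", "[FEATURE]", "enhancement"),
    "improvement": ("⚡", "[IMPROVEMENT]", "improvement"),
    "question":    ("❓", "[QUESTION]", "question"),
}

def build_issue(issue_type: str, priority: str, title: str) -> dict:
    emoji, prefix, type_label = TYPE_META[issue_type]
    return {
        "title": f"{emoji} {prefix} {title}",
        # e.g., labels: bug, reported, priority-high (as in the example below)
        "labels": [type_label, "reported", f"priority-{priority}"],
    }

print(build_issue("bug", "high", "Login button crash on mobile devices"))
```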
- ⚡ Instant Creation: Create GitHub Issues in seconds
- 🏷️ Automatic Labels: Issue type + priority automatically labeled
- 🎯 Priority Selection: Choose from Critical/High/Medium/Low
- 👥 Team Visibility: Issues immediately visible and discussable
- 📋 Standardized Format: All issues follow a consistent structure
# During code review, you notice a critical issue and want to report it instantly
$ /alfred:9-feedback
Alfred: What type of issue do you want to create?
> 🐛 Bug Report
Alfred: What's the issue title?
> Login button crash on mobile devices
Alfred: Provide a detailed description (optional; press Enter to skip)
> Tapping the login button on iPhone 15 causes the app to freeze for 5 seconds, then crash.
> Tested on iOS 17.2, Chrome 120 on macOS 14.2.
> Expected: Login modal should appear
> Actual: No response, then crash
Alfred: What's the priority level?
> 🟠 High
✅ GitHub Issue #234 created successfully!
📝 Title: 🐛 [BUG] Login button crash on mobile devices
🎯 Priority: High
🏷️ Labels: bug, reported, priority-high
🔗 URL: https://github.com/owner/repo/issues/234
💡 Next: Continue with your work; the issue is now tracked!

- During Development: Use `/alfred:9-feedback` to report bugs/ideas instantly
- In Code Review: Convert improvement suggestions into tracked issues
- When Planning: Reference created issues in SPEC documents
- During Sync: Link issues to SPEC requirements with `/alfred:3-sync`
- GitHub CLI (`gh`) installed and authenticated
- Repository initialized with Git
See .moai/docs/quick-issue-creation-guide.md for comprehensive documentation including:
- Detailed usage examples
- Best practices and tips
- Troubleshooting guide
- Integration with SPEC documents
MoAI-ADK consists of 5 key concepts. Each concept connects to the others, and together they create a powerful development system.
Metaphor: Like building a house without an architect, you shouldn't code without a blueprint.
Core Idea: Before implementation, clearly define "what to build". This isn't just documentationβit's an executable spec that both teams and AI can understand.
EARS Syntax: 5 Patterns
- Ubiquitous (always-on behavior): "The system SHALL provide JWT-based authentication"
- Event-driven (WHEN): "WHEN valid credentials are provided, the system SHALL issue a token"
- State-driven (WHILE): "WHILE the user is authenticated, the system SHALL allow access to protected resources"
- Optional (WHERE): "WHERE a refresh token exists, the system MAY issue a new token"
- Constraints (limits): "Token expiration time SHALL NOT exceed 15 minutes"
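For a feel of how these patterns can be checked mechanically, here is a small, hypothetical classifier keyed on the EARS keywords above; it is a sketch for illustration, not MoAI-ADK's actual validator.

```python
import re

# Hypothetical sketch: classify a requirement sentence into one of the five
# EARS patterns by its keywords. Order matters: SHALL NOT before plain SHALL.
EARS_PATTERNS = [
    ("Event-driven", re.compile(r"^WHEN\b", re.I)),
    ("State-driven", re.compile(r"^WHILE\b", re.I)),
    ("Optional",     re.compile(r"^WHERE\b", re.I)),
    ("Constraints",  re.compile(r"\bSHALL NOT\b", re.I)),
    ("Ubiquitous",   re.compile(r"\bSHALL\b", re.I)),
]

def classify_ears(requirement: str) -> str:
    for name, pattern in EARS_PATTERNS:
        if pattern.search(requirement):
            return name
    return "Unclassified"

print(classify_ears("WHEN valid credentials are provided, the system SHALL issue a token"))
# -> Event-driven
```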
How? The /alfred:1-plan command automatically creates professional SPECs in EARS format.
What You Get:
- ✅ Clear requirements everyone on the team understands
- ✅ SPEC-based test cases (what to test is already defined)
- ✅ When requirements change, track all affected code with the `@SPEC:ID` TAG
Metaphor: Like finding the route after setting a destination, you set goals with tests, then write code.
Core Idea: Write tests before implementation. Like checking ingredients before cooking, this clarifies requirements before implementation.
3-Step Cycle:
1. 🔴 RED: Write a failing test first
   - Each SPEC requirement becomes a test case
   - Must fail because the implementation doesn't exist yet
   - Git commit: `test(AUTH-001): add failing test`
2. 🟢 GREEN: Minimal implementation to pass the test
   - Make it pass using the simplest approach
   - Passing comes before perfection
   - Git commit: `feat(AUTH-001): implement minimal solution`
3. ♻️ REFACTOR: Clean up and improve code
   - Apply TRUST 5 principles
   - Remove duplication, improve readability
   - Tests must still pass
   - Git commit: `refactor(AUTH-001): improve code quality`
How? The /alfred:2-run command automatically executes these 3 steps.
What You Get:
- ✅ Guaranteed 85%+ coverage (no code without tests)
- ✅ Refactoring confidence (always verifiable with tests)
- ✅ Clear Git history (trace the RED → GREEN → REFACTOR process)
Metaphor: Like package tracking numbers, you should be able to trace code's journey.
Core Idea: Add @TAG:ID to all SPECs, tests, code, and documentation to create one-to-one correspondence.
TAG Chain:
@SPEC:AUTH-001 (requirements)
   ↓
@TEST:AUTH-001 (test)
   ↓
@CODE:AUTH-001 (implementation)
   ↓
@DOC:AUTH-001 (documentation)
TAG ID Rules: <Domain>-<3 digits>
- AUTH-001, AUTH-002, AUTH-003...
- USER-001, USER-002...
- Once assigned, never change
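The ID convention is easy to lint mechanically. A minimal sketch of a checker for the `<Domain>-<3 digits>` rule follows (MoAI-ADK's own validation lives in its tag-agent and may differ):

```python
import re

# Hypothetical checker for the <Domain>-<3 digits> TAG ID rule.
TAG_ID = re.compile(r"[A-Z]+-\d{3}")

def is_valid_tag_id(tag_id: str) -> bool:
    return TAG_ID.fullmatch(tag_id) is not None

assert is_valid_tag_id("AUTH-001")
assert is_valid_tag_id("USER-042")
assert not is_valid_tag_id("auth-1")  # lowercase domain and too few digits
```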
How to Use? When requirements change:
# Find everything related to AUTH-001
rg '@(SPEC|TEST|CODE|DOC):AUTH-001' -n
# Result: Shows all SPEC, TEST, CODE, DOC at once
# → Clear what needs modification
How? The /alfred:3-sync command validates TAG chains and detects orphan TAGs (TAGs with no counterpart).
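To illustrate what orphan detection means, a scan along these lines could flag IDs that appear in a @CODE marker but have no matching @SPEC. This is a hypothetical sketch, not the actual doc-syncer logic.

```python
import re
from pathlib import Path

# Hypothetical orphan-TAG scan: collect IDs per marker type across the
# repository, then report @CODE IDs that lack a matching @SPEC.
TAG = re.compile(r"@(SPEC|TEST|CODE|DOC):([A-Z]+-\d{3})")

def scan_tags(root: Path) -> dict:
    found = {"SPEC": set(), "TEST": set(), "CODE": set(), "DOC": set()}
    for path in root.rglob("*"):
        if path.is_file() and path.suffix in {".py", ".md"}:
            text = path.read_text(encoding="utf-8", errors="ignore")
            for kind, tag_id in TAG.findall(text):
                found[kind].add(tag_id)
    return found

tags = scan_tags(Path("."))
orphans = tags["CODE"] - tags["SPEC"]
print(f"Orphan @CODE TAGs (no @SPEC): {sorted(orphans)}")
```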
What You Get:
- ✅ Clear intent for all code (reading the SPEC explains why the code exists)
- ✅ Instantly identify all affected code during refactoring
- ✅ Code remains understandable 3 months later (trace TAG → SPEC)
Metaphor: Like a healthy body, good code must satisfy all 5 elements.
Core Idea: All code must follow these 5 principles. /alfred:3-sync automatically verifies them.
1. 🧪 Test First (tests come first)
   - Test coverage ≥ 85%
   - All code protected by tests
   - Adding a feature = adding a test
2. 📖 Readable (easy-to-read code)
   - Functions ≤ 50 lines, files ≤ 300 lines
   - Variable names reveal intent
   - Pass linters (ESLint/ruff/clippy)
3. 🎯 Unified (consistent structure)
   - Maintain SPEC-based architecture
   - Same patterns repeat (reduces the learning curve)
   - Type safety or runtime validation
4. 🔒 Secured (security)
   - Input validation (defend against XSS, SQL injection)
   - Password hashing (bcrypt, Argon2)
   - Protect sensitive information (environment variables)
5. 🔍 Trackable (traceability)
   - Use the @TAG system
   - Include TAGs in Git commits
   - Document all decisions
How? The /alfred:3-sync command automatically performs TRUST verification.
What You Get:
- ✅ Production-quality code guaranteed
- ✅ The entire team develops with the same standards
- ✅ Fewer bugs; security vulnerabilities prevented in advance
Metaphor: Like a personal assistant, Alfred handles all the complex work.
Core Idea: 19 AI agents collaborate to automate the entire development process:
Agent Composition:
- Alfred SuperAgent: Overall orchestration (1)
- Core Sub-agents: Specialized tasks like SPEC writing, TDD implementation, documentation sync (10)
- Zero-project Specialists: Project initialization, language detection, etc. (6)
- Built-in Agents: General questions, codebase exploration (2)
55 Claude Skills:
- Foundation (6): TRUST/TAG/SPEC/Git/EARS principles
- Essentials (4): Debugging, performance, refactoring, code review
- Alfred (7): Workflow automation
- Domain (10): Backend, frontend, security, etc.
- Language (18): Python, TypeScript, JavaScript, Go, Rust, Java, Kotlin, Swift, Dart, C, C++, C#, Scala, Ruby, PHP, SQL, Shell, R
- Ops (1): Claude Code session management
- Other (2): Skill factory, Spec authoring
How? /alfred:* commands automatically activate the right expert team.
What You Get:
- ✅ No prompt writing needed (use standardized commands)
- ✅ Project context remembered automatically (no repeating the same questions)
- ✅ Optimal expert team auto-assembled (appropriate Sub-agents activated)

Want to learn more? Check the detailed rules in `.moai/memory/DEVELOPMENT-GUIDE.md`.
Let's now experience MoAI-ADK's complete workflow firsthand. We'll build a simple "Todo Management API" and see how SPEC, TDD, and documentation connect.
/alfred:1-plan "Todo add, view, update, delete API"Execution Result:
Alfred's spec-builder automatically generates:
# .moai/specs/SPEC-TODO-001/spec.md
---
id: TODO-001
version: 0.0.1
status: draft
created: 2025-10-22
updated: 2025-10-22
author: @user
priority: high
---
# @SPEC:TODO-001: Todo Management API
## Ubiquitous Requirements
- The system SHALL be able to add todos
- The system SHALL be able to view all todos
- The system SHALL be able to update specific todos
- The system SHALL be able to delete todos
## Event-driven Requirements
- WHEN a new todo is requested via POST /todos, the system SHALL save the todo and return a 201 response
- WHEN GET /todos/{id} is requested with an existing todo ID, the system SHALL return that todo
- WHEN GET is requested with a non-existent todo ID, the system SHALL return a 404 error
## Constraints
- Todo title SHALL be minimum 1 character, maximum 200 characters
- Each todo SHALL automatically record its creation time
Also auto-generated:
- 📋 Plan Board: Implementation ideas, risk factors, solution strategies
- ✅ Acceptance Criteria: Verification standards
- 🌿 `feature/spec-todo-001` Git branch
/alfred:2-run TODO-001
Phase 1: Establish Implementation Strategy
The implementation-planner Sub-agent decides:
- 📦 Libraries: FastAPI + SQLAlchemy
- 📁 Folder structure: `src/todo/`, `tests/todo/`
- 🏷️ TAG design: `@CODE:TODO-001:API`, `@CODE:TODO-001:MODEL`, `@CODE:TODO-001:REPO`
Phase 2: RED → GREEN → REFACTOR
🔴 RED: Write Tests First
# tests/test_todo_api.py
# @TEST:TODO-001 | SPEC: SPEC-TODO-001.md
import pytest
from src.todo.api import create_todo, get_todo, get_todos
def test_create_todo_should_return_201_with_todo_id():
"""WHEN a new todo is requested via POST /todos,
the system SHALL save the todo and return a 201 response"""
response = create_todo({"title": "Buy groceries"})
assert response.status_code == 201
assert "id" in response.json()
assert response.json()["title"] == "Buy groceries"
def test_get_todos_should_return_all_todos():
"""The system SHALL be able to view all todos"""
create_todo({"title": "Task 1"})
create_todo({"title": "Task 2"})
response = get_todos()
assert response.status_code == 200
assert len(response.json()) >= 2
def test_get_todo_with_invalid_id_should_return_404():
"""WHEN GET is requested with a non-existent todo ID,
the system SHALL return a 404 error"""
response = get_todo(999)
assert response.status_code == 404
Execute: pytest tests/test_todo_api.py -v
Result: ❌ All fail (create_todo is not defined)
Git Commit:
git commit -m "π΄ test(TODO-001): add failing API tests"π’ GREEN: Minimal Implementation
# src/todo/api.py
# @CODE:TODO-001:API | SPEC: SPEC-TODO-001.md | TEST: tests/test_todo_api.py
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
import uuid
app = FastAPI()
todos_db = {}
class TodoRequest(BaseModel):
title: str
@app.post("/todos", status_code=201)
def create_todo(todo: TodoRequest):
"""@CODE:TODO-001:API - POST endpoint"""
todo_id = str(uuid.uuid4())
todos_db[todo_id] = {"id": todo_id, "title": todo.title}
return todos_db[todo_id]
@app.get("/todos")
def get_todos():
"""@CODE:TODO-001:API - GET all endpoint"""
return list(todos_db.values())
@app.get("/todos/{todo_id}")
def get_todo(todo_id: str):
"""@CODE:TODO-001:API - GET by ID endpoint"""
if todo_id not in todos_db:
raise HTTPException(status_code=404, detail="Todo not found")
return todos_db[todo_id]
Execute: pytest tests/test_todo_api.py -v
Result: ✅ All pass!
Git Commit:
git commit -m "🟢 feat(TODO-001): implement minimal Todo API"
♻️ REFACTOR: Code Improvement
# src/todo/models.py
# @CODE:TODO-001:MODEL | SPEC: SPEC-TODO-001.md
from datetime import datetime
from sqlalchemy import Column, String, DateTime
from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()
class Todo(Base):
"""@CODE:TODO-001:MODEL - Todo data model"""
__tablename__ = "todos"
id = Column(String, primary_key=True)
title = Column(String(200), nullable=False, index=True) # Constraint: max 200 chars
created_at = Column(DateTime, default=datetime.utcnow) # Auto creation time
def validate(self):
"""@CODE:TODO-001:MODEL - Validation"""
if not self.title or len(self.title) > 200:
raise ValueError("Title must be 1-200 characters")Execute: pytest tests/test_todo_api.py -v
Result: ✅ Still all pass!
Git Commit:
git commit -m "♻️ refactor(TODO-001): add database models and validation"
Phase 3: Quality Verification
# TRUST verification
✅ Test First: 87% coverage
✅ Readable: All functions < 50 lines
✅ Unified: Consistent API patterns
✅ Secured: Input validation complete
✅ Trackable: All code has @TAG:TODO-001
/alfred:3-sync
Automatically Performed:
1. TAG Chain Validation
   ✅ @SPEC:TODO-001 → .moai/specs/SPEC-TODO-001/spec.md
   ✅ @TEST:TODO-001 → tests/test_todo_api.py
   ✅ @CODE:TODO-001 → src/todo/ (3 files)
   ✅ @DOC:TODO-001 → docs/api/todo.md (auto-generated)
   TAG Chain Integrity: 100%
   Orphan TAGs: None
2. Living Document Generation
   # @DOC:TODO-001: Todo Management API
   ## Overview
   REST API for managing tasks with CRUD operations.
   ## Endpoints
   ### Create Todo
   - Method: POST
   - URL: /todos
   - Request: {"title": "string (1-200 chars)"}
   - Response: 201 Created with todo object
   - Implemented in: @CODE:TODO-001:API
   - Tested in: @TEST:TODO-001
   ### Get All Todos
   - Method: GET
   - URL: /todos
   - Response: 200 OK with array of todos
   [... etc ...]
3. README Update
   ## Features
   - ✅ Todo Management API (TODO-001)
4. CHANGELOG Generation
   # Changelog
   ## [0.1.0] - 2025-10-22
   ### Added
   - Todo Management API with CRUD operations (@SPEC:TODO-001)
     - Create new todos
     - List all todos
     - Update existing todos
     - Delete todos
   ### Implementation Details
   - SPEC: .moai/specs/SPEC-TODO-001/spec.md
   - Tests: tests/test_todo_api.py (87% coverage)
   - Code: src/todo/ with models, API, repository layers
Let's verify everything generated is properly connected:
# 1οΈβ£ Check TAG chain
rg '@(SPEC|TEST|CODE|DOC):TODO-001' -n
# Output:
# .moai/specs/SPEC-TODO-001/spec.md:1: # @SPEC:TODO-001: Todo Management API
# tests/test_todo_api.py:2: # @TEST:TODO-001 | SPEC: SPEC-TODO-001.md
# src/todo/api.py:5: # @CODE:TODO-001:API | SPEC: SPEC-TODO-001.md
# src/todo/models.py:5: # @CODE:TODO-001:MODEL | SPEC: SPEC-TODO-001.md
# docs/api/todo.md:1: # @DOC:TODO-001: Todo Management API
# 2οΈβ£ Run tests
pytest tests/test_todo_api.py -v
# ✅ test_create_todo_should_return_201_with_todo_id PASSED
# ✅ test_get_todos_should_return_all_todos PASSED
# ✅ test_get_todo_with_invalid_id_should_return_404 PASSED
# ✅ 3 passed in 0.05s
# 3οΈβ£ Check generated documentation
cat docs/api/todo.md # API documentation auto-generated
cat README.md # Todo API added
cat CHANGELOG.md # Change history recorded
# 4οΈβ£ Check Git history
git log --oneline | head -5
# a1b2c3d ✅ sync(TODO-001): update docs and changelog
# f4e5d6c ♻️ refactor(TODO-001): add database models
# 7g8h9i0 🟢 feat(TODO-001): implement minimal API
# 1j2k3l4 🔴 test(TODO-001): add failing tests
# 5m6n7o8 🌿 Create feature/spec-todo-001 branch
✅ SPEC written (3 minutes)
├─ @SPEC:TODO-001 TAG assigned
└─ Clear requirements in EARS format
✅ TDD implementation (5 minutes)
├─ 🔴 RED: Tests written first
├─ 🟢 GREEN: Minimal implementation
├─ ♻️ REFACTOR: Quality improvement
├─ @TEST:TODO-001, @CODE:TODO-001 TAGs assigned
└─ 87% coverage, TRUST 5 principles verified
✅ Documentation sync (1 minute)
├─ Living Document auto-generated
├─ README, CHANGELOG updated
├─ TAG chain validation complete
├─ @DOC:TODO-001 TAG assigned
└─ PR status: Draft → Ready for Review
Result:
- 📋 Clear SPEC (SPEC-TODO-001.md)
- 🧪 85%+ test coverage (test_todo_api.py)
- 💎 Production-quality code (src/todo/)
- 📖 Auto-generated API documentation (docs/api/todo.md)
- 📜 Change history tracking (CHANGELOG.md)
- 🔗 Everything connected with TAGs
This is MoAI-ADK's true power. Not just a simple API implementation, but a complete development artifact with everything from SPEC through tests, code, and documentation consistently connected!
Alfred works by combining multiple specialized agents with Claude Skills.
| Sub-agent | Model | Role |
|---|---|---|
| project-manager 📋 | Sonnet | Project initialization, metadata interviews |
| spec-builder 🏗️ | Sonnet | Plan board, EARS SPEC authoring |
| code-builder 🔨 | Sonnet | Performs complete TDD with implementation-planner + tdd-implementer |
| doc-syncer 📖 | Haiku | Living Doc, README, CHANGELOG sync |
| tag-agent 🏷️ | Haiku | TAG inventory, orphan detection |
| git-manager 🚀 | Haiku | GitFlow, Draft/Ready, Auto Merge |
| debug-helper 🔍 | Sonnet | Failure analysis, fix-forward strategy |
| trust-checker ✅ | Haiku | TRUST 5 quality gate |
| quality-gate 🛡️ | Haiku | Coverage change and release blocker review |
| cc-manager 🛠️ | Sonnet | Claude Code session optimization, Skill deployment |
Alfred organizes Claude Skills in a 4-tier architecture using Progressive Disclosure to load Just-In-Time only when needed. Each Skill is a production-grade guide stored in .claude/skills/ directory.
Core skills containing fundamental TRUST/TAG/SPEC/Git/EARS/Language principles
| Skill | Description |
|---|---|
| moai-foundation-trust | TRUST 5-principles (Test 85%+, Readable, Unified, Secured, Trackable) verification |
| moai-foundation-tags | @TAG marker scanning and inventory generation (CODE-FIRST principle) |
| moai-foundation-specs | SPEC YAML frontmatter validation and HISTORY section management |
| moai-foundation-ears | EARS (Easy Approach to Requirements Syntax) requirements writing guide |
| moai-foundation-git | Git workflow automation (branching, TDD commits, PR management) |
| moai-foundation-langs | Project language/framework auto-detection (package.json, pyproject.toml, etc.) |
Core tools needed for daily development work
| Skill | Description |
|---|---|
| moai-essentials-debug | Stack trace analysis, error pattern detection, quick diagnosis support |
| moai-essentials-perf | Performance profiling, bottleneck detection, tuning strategies |
| moai-essentials-refactor | Refactoring guide, design patterns, code improvement strategies |
| moai-essentials-review | Automated code review, SOLID principles, code smell detection |
MoAI-ADK internal workflow orchestration skills
| Skill | Description |
|---|---|
| moai-alfred-ears-authoring | EARS syntax validation and requirement pattern guidance |
| moai-alfred-git-workflow | MoAI-ADK conventions (feature branch, TDD commits, Draft PR) automation |
| moai-alfred-language-detection | Project language/runtime detection and test tool recommendations |
| moai-alfred-spec-metadata-validation | SPEC YAML frontmatter and HISTORY section consistency validation |
| moai-alfred-tag-scanning | Complete @TAG marker scan and inventory generation (CODE-FIRST principle) |
| moai-alfred-trust-validation | TRUST 5-principles compliance verification |
| moai-alfred-interactive-questions | Claude Code AskUserQuestion TUI menu standardization |
Specialized domain expertise
| Skill | Description |
|---|---|
| moai-domain-backend | Backend architecture, API design, scaling guide |
| moai-domain-cli-tool | CLI tool development, argument parsing, POSIX compliance, user-friendly help messages |
| moai-domain-data-science | Data analysis, visualization, statistical modeling, reproducible research workflows |
| moai-domain-database | Database design, schema optimization, indexing strategies, migration management |
| moai-domain-devops | CI/CD pipelines, Docker containerization, Kubernetes orchestration, IaC |
| moai-domain-frontend | React/Vue/Angular development, state management, performance optimization, accessibility |
| moai-domain-ml | Machine learning model training, evaluation, deployment, MLOps workflows |
| moai-domain-mobile-app | Flutter/React Native development, state management, native integration |
| moai-domain-security | OWASP Top 10, static analysis (SAST), dependency security, secrets management |
| moai-domain-web-api | REST API, GraphQL design patterns, authentication, versioning, OpenAPI documentation |
Programming language-specific best practices
| Skill | Description |
|---|---|
| moai-lang-python | pytest, mypy, ruff, black, uv package management |
| moai-lang-typescript | Vitest, Biome, strict typing, npm/pnpm |
| moai-lang-javascript | Jest, ESLint, Prettier, npm package management |
| moai-lang-go | go test, golint, gofmt, standard library |
| moai-lang-rust | cargo test, clippy, rustfmt, ownership/borrow checker |
| moai-lang-java | JUnit, Maven/Gradle, Checkstyle, Spring Boot patterns |
| moai-lang-kotlin | JUnit, Gradle, ktlint, coroutines, extension functions |
| moai-lang-swift | XCTest, SwiftLint, iOS/macOS development patterns |
| moai-lang-dart | flutter test, dart analyze, Flutter widget patterns |
| moai-lang-csharp | xUnit, .NET tooling, LINQ, async/await patterns |
| moai-lang-cpp | Google Test, clang-format, modern C++ (C++17/20) |
| moai-lang-c | Unity test framework, cppcheck, Make build system |
| moai-lang-scala | ScalaTest, sbt, functional programming patterns |
| moai-lang-ruby | RSpec, RuboCop, Bundler, Rails patterns |
| moai-lang-php | PHPUnit, Composer, PSR standards |
| moai-lang-sql | Test frameworks, query optimization, migration management |
| moai-lang-shell | bats, shellcheck, POSIX compliance |
| moai-lang-r | testthat, lintr, data analysis patterns |
Claude Code session management
| Skill | Description |
|---|---|
| moai-claude-code | Claude Code agents, commands, skills, plugins, settings scaffolding and monitoring |
v0.4.6 New Feature: Claude Skills organized in a 4-tier architecture (100% complete in v0.4.6). Each Skill loads via Progressive Disclosure only when needed to minimize context cost. Skills are organized into Foundation → Essentials → Alfred → Domain/Language/Ops tiers, and all include production-grade documentation and executable TDD examples.
| Scenario | Default Model | Why |
|---|---|---|
| Specifications, design, refactoring, problem solving | Claude 4.5 Sonnet | Strong in deep reasoning and structured writing |
| Document sync, TAG checks, Git automation | Claude 4.5 Haiku | Strong in rapid iteration, string processing |
- Start with Haiku for patterned tasks; switch to Sonnet when complex judgment is needed.
- If you manually change models, noting "why switched" in logs helps collaboration.
MoAI-ADK provides 4 main Claude Code Hooks that seamlessly integrate with your development workflow. These hooks enable automatic checkpoints, JIT context loading, and session monitoringβall happening transparently in the background.
Hooks are event-driven scripts that trigger automatically at specific points in your Claude Code session. Think of them as safety guardrails and productivity boosters that work behind the scenes without interrupting your flow.
Triggers: When you start a Claude Code session in your project
Purpose: Display project status at a glance
What You See:
🚀 MoAI-ADK Session Started
Language: Python
Branch: develop
Changes: 2 files
SPEC Progress: 12/25 (48%)
Why It Matters: Instantly understand your project's current state without running multiple commands.
Triggers: Before executing file edits, Bash commands, or MultiEdit operations
Purpose: Detect risky operations, automatically create safety checkpoints, and run TAG Guard
Protection Against:
- `rm -rf` (file deletion)
- `git merge`, `git reset --hard` (dangerous Git operations)
- Editing critical files (`CLAUDE.md`, `config.json`)
- Mass edits (10+ files at once via MultiEdit)
TAG Guard (New in v0.4.11): Automatically detects missing @TAG annotations in changed files:
- Scans staged, modified, and untracked files
- Warns when SPEC/TEST/CODE/DOC files lack required @TAG markers
- Configurable rules via `.moai/tag-rules.json`
- Non-blocking (a gentle reminder; it doesn't stop execution)
What You See:
🛡️ Checkpoint created: before-delete-20251023-143000
Operation: delete
Or when TAGs are missing:
⚠️ Missing TAG detected: some created/modified files have no @TAG.
- src/auth/service.py → expected tag: @CODE:
- tests/test_auth.py → expected tag: @TEST:
Recommended actions:
1) Add the @TAG matching the file type (SPEC/TEST/CODE/DOC) as a top-of-file comment or header
2) Verify with rg: rg '@(SPEC|TEST|CODE|DOC):' -n <path>
Why It Matters: Prevents data loss from mistakes and ensures @TAG traceability. You can always restore from the checkpoint if something goes wrong.
Triggers: When you submit a prompt to Claude
Purpose: JIT (Just-In-Time) context loading that automatically adds relevant files
How It Works:
- You type: "Fix AUTH bug"
- Hook scans for AUTH-related files
- Auto-loads: SPEC, tests, implementation, docs related to AUTH
- Claude receives full context without you manually specifying files
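A rough sketch of that scan (hypothetical; not the hook's actual code) might match the TAG domain mentioned in the prompt against marker comments across the repository:

```python
import re
from pathlib import Path

# Hypothetical sketch of JIT context loading: collect files whose @TAG
# markers mention the domain found in the user's prompt (e.g., "AUTH").
def related_files(prompt: str, root: Path) -> list:
    match = re.search(r"\b([A-Z]{2,})\b", prompt)
    if not match:
        return []
    tag = re.compile(rf"@(SPEC|TEST|CODE|DOC):{match.group(1)}-\d{{3}}")
    return [
        path for path in root.rglob("*")
        if path.is_file() and path.suffix in {".py", ".md"}
        and tag.search(path.read_text(encoding="utf-8", errors="ignore"))
    ]

print(related_files("Fix AUTH bug", Path(".")))
```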
Why It Matters: Saves time and ensures Claude has all the relevant context for your request.
Triggers: When you close your Claude Code session
Purpose: Cleanup tasks and state preservation
Why It Matters: Ensures clean session transitions and proper state management.
- Location: `.claude/hooks/alfred/`
- Environment Variable: `$CLAUDE_PROJECT_DIR` (dynamically references the project root)
- Performance: Each hook executes in <100 ms
- Logging: Errors go to stderr (stdout is reserved for JSON payloads)
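For intuition, a hook in this style is just a script that reads the event payload as JSON on stdin and answers on stdout, logging diagnostics to stderr. A simplified sketch of the pattern (not the contents of `alfred_hooks.py`):

```python
import json
import sys

# Simplified sketch of a hook entry point: read the event payload from
# stdin, log diagnostics to stderr, and reply with JSON on stdout.
def main() -> None:
    payload = json.load(sys.stdin)
    tool = payload.get("tool_name", "unknown")  # field name assumed
    print(f"[hook] PreToolUse fired for: {tool}", file=sys.stderr)
    # stdout is reserved for the JSON response consumed by Claude Code.
    json.dump({}, sys.stdout)  # empty response: allow the operation

if __name__ == "__main__":
    main()
```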
If you need to temporarily disable hooks, edit .claude/settings.json:
{
"hooks": {
"SessionStart": [], // Disabled
"PreToolUse": [...] // Still active
}
}
Problem: Hook doesn't execute
- ✅ Verify `.claude/settings.json` is properly configured
- ✅ Check that `uv` is installed: `which uv`
- ✅ Ensure the hook script has execute permissions: `chmod +x .claude/hooks/alfred/alfred_hooks.py`
Problem: Performance degradation
- ✅ Check whether any hook exceeds the 100 ms execution budget
- ✅ Disable unnecessary hooks
- ✅ Review error messages in the stderr output
Problem: Too many checkpoints created
- ✅ Review the PreToolUse trigger conditions
- ✅ Adjust detection thresholds in `core/checkpoint.py` if needed
| Hook | Status | Feature |
|---|---|---|
| SessionStart | ✅ Active | Project status summary (language, Git, SPEC progress, checkpoints) |
| PreToolUse | ✅ Active | Risk detection + auto checkpoint (critical-delete, delete, merge, script) + TAG Guard (missing @TAG detection) |
| UserPromptSubmit | ✅ Active | JIT context loading (auto-load related SPEC, tests, code, docs) |
| PostToolUse | ✅ Active | Auto-run tests after code changes (9 languages: Python, TS, JS, Go, Rust, Java, Kotlin, Swift, Dart) |
| SessionEnd | ✅ Active | Session cleanup and state saving |
- Notification: Important event alerts (logging, notifications)
- Stop/SubagentStop: Cleanup when agents terminate
- Advanced security: `dd` commands, supply-chain checks
- Comprehensive analysis: `.moai/reports/hooks-analysis-and-implementation.md`
- PostToolUse implementation: `.moai/reports/phase3-posttool-implementation-complete.md`
- Security enhancements: `.moai/reports/security-enhancement-critical-delete.md`
- Hook implementation: `.claude/hooks/alfred/`
- Hook tests: `tests/hooks/`
- Q. Can I install on an existing project?
  - A. Yes. Run `moai-adk init .` to add only the `.moai/` structure without touching existing code.
- Q. How do I run tests?
  - A. `/alfred:2-run` runs them first; rerun `pytest`, `pnpm test`, etc. per language as needed.
- Q. How do I ensure documentation stays current?
  - A. `/alfred:3-sync` generates a Sync Report. Check the report in Pull Requests.
- Q. Can I work manually?
  - A. Yes, but keep the SPEC → TEST → CODE → DOC order and always leave TAGs.
| Version | Key Features | Date |
|---|---|---|
| v0.5.7 | 🎯 SPEC → GitHub Issue automation + CodeRabbit integration + Auto PR comments | 2025-10-27 |
| v0.4.11 | ✨ TAG Guard system + CLAUDE.md formatting improvements + Code cleanup | 2025-10-23 |
| v0.4.10 | 🔧 Hook robustness improvements + Bilingual documentation + Template language config | 2025-10-23 |
| v0.4.9 | 🎯 Hook JSON schema validation fixes + Comprehensive tests (468/468 passing) | 2025-10-23 |
| v0.4.8 | 🚀 Release automation + PyPI deployment + Skills refinement | 2025-10-23 |
| v0.4.7 | 📝 Korean language optimization + SPEC-First principle documentation | 2025-10-22 |
| v0.4.6 | 📚 Complete Skills v2.0 (100% Production-Ready) + 85,000 lines of official docs + 300+ TDD examples | 2025-10-22 |
📦 Install Now: `uv tool install moai-adk==0.4.11` or `pip install moai-adk==0.4.11`
| Purpose | Resource |
|---|---|
| Skills detailed structure | .claude/skills/ directory (58 Skills) |
| Sub-agent details | .claude/agents/alfred/ directory (12 agents) |
| Workflow guide | .claude/commands/alfred/ (4 commands: 0-project ~ 3-sync) |
| Documentation | Coming soon (see .moai/, .claude/, docs/ in your project) |
| Release notes | GitHub Releases: https://github.com/modu-ai/moai-adk/releases |
| Channel | Link |
|---|---|
| GitHub Repository | https://github.com/modu-ai/moai-adk |
| Issues & Discussions | https://github.com/modu-ai/moai-adk/issues |
| PyPI Package | https://pypi.org/project/moai-adk/ (Latest: v0.4.11) |
| Latest Release | https://github.com/modu-ai/moai-adk/releases/tag/v0.4.11 |
| Documentation | See .moai/, .claude/, docs/ within project |
"No CODE without SPEC"
MoAI-ADK is not simply a code generation tool. Alfred SuperAgent with its 19-member team and 56 Claude Skills together guarantee:
- ✅ SPEC → TEST (TDD) → CODE → DOCS consistency
- ✅ Complete history tracking with the @TAG system
- ✅ Guaranteed 87.84%+ coverage
- ✅ Iterative development with the 4-stage workflow (0-project → 1-plan → 2-run → 3-sync)
- ✅ Transparent, traceable collaboration with AI
Start a new experience of trustworthy AI development with Alfred! 🤖
MoAI-ADK: SPEC-First TDD with AI SuperAgent & Complete Skills + TAG Guard
- 📦 PyPI: https://pypi.org/project/moai-adk/
- 🏠 GitHub: https://github.com/modu-ai/moai-adk
- 📄 License: MIT
- ✅ Skills: 55+ Production-Ready Guides
- ✅ Tests: 467/476 Passing (85.60% coverage)
- 🏷️ TAG Guard: Automatic @TAG validation in the PreToolUse Hook