A production-ready Claude Skill implementing the Plan-Do-Check-Act (PDCA) framework for AI-assisted code generation.
Based on Ken Judy's InfoQ article: a research-backed methodology that, in the article's experiment, reduced debugging time by 80% while maintaining code quality.
Getting Started Guide | Contributing | Changelog | Security
This skill helps you write better code with AI assistants (Claude Code, Cline, etc.) by providing:
- Structured workflow preventing common AI coding pitfalls
- Test-driven development discipline
- Quality metrics tracking
- Continuous improvement through retrospectives
- Prevention of code duplication and regressions
From the InfoQ article's experiment:
- 10% fewer tokens used
- 34% less production code (more maintainable)
- 30% more test coverage
- 80% less troubleshooting time
- Better developer experience
Option 1: Download Release
- Go to Releases
- Download `pdca-ai-coding.skill`
- Upload to Claude.ai:
  - Open Claude.ai
  - Click the Skills menu
  - Select "Upload Skill"
  - Choose the downloaded `.skill` file
- Ready to use!
Option 2: Build from Source
```bash
git clone https://github.com/YOUR_USERNAME/pdca-ai-coding-skill.git
cd pdca-ai-coding-skill
# Upload pdca-ai-coding.skill to Claude.ai
```

Then start a session by telling Claude:

> I need to implement [your feature]. Let's use the PDCA framework.
Claude will guide you through:
- Analysis - Search existing patterns, propose approaches
- Planning - Break into TDD steps
- Implementation - Red-Green-Refactor with human oversight
- Check - Verify quality and completeness
- Retrospective - Learn and improve
```
.
├── pdca-ai-coding.skill       # Upload this to Claude.ai
├── SKILL.md                   # Main skill documentation
├── references/                # Prompt templates
│   ├── working-agreements.md
│   ├── analysis-prompt.md
│   ├── planning-prompt.md
│   ├── implementation-prompt.md
│   ├── completion-prompt.md
│   └── retrospective-prompt.md
├── scripts/                   # Automation tools
│   ├── track_metrics.py       # Quality metrics tracking
│   └── init_session.py        # Session initialization
├── assets/
│   └── session-template.md    # Session logging template
└── docs/                      # Additional documentation
    ├── README.md
    ├── GETTING-STARTED.md
    ├── PROJECT-CONFIGURATION.md
    └── REFINEMENTS-V1.1.md
```
- GETTING-STARTED.md - Your 5-minute quick start guide
- PROJECT-CONFIGURATION.md - Guide for project-specific setup
- README.md - Complete package overview
- REFINEMENTS-V1.1.md - Technical refinements and validation
- Test-driven development discipline
- Small, atomic commits (<100 lines, <5 files)
- Respect for existing architecture
- Human accountability for all AI-generated code
- Plan (Analysis) - 2-10 min: Search patterns, propose approaches
- Plan (Breakdown) - 2 min: Create TDD execution plan
- Do - <3 hours: Implement with red-green-refactor
- Check - 5 min: Verify quality and process adherence
- Act - 2-10 min: Retrospective and continuous improvement
Track your progress:
- Large commits (>100 lines): <20% of commits
- Sprawling commits (>5 files): <10% of commits
- Test-first discipline: >50%
- Avg files per commit: <5
- Avg lines per commit: <100
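As an illustration of how the commit-size targets above could be computed from git history, here is a minimal standalone sketch. It is not the bundled `track_metrics.py` (whose internals may differ); the function and threshold names are our own, and it assumes output from `git log --numstat --format=commit:%h`.

```python
"""Sketch: compute commit-size metrics from `git log --numstat` output."""

from dataclasses import dataclass

LARGE_COMMIT_LINES = 100    # target: <20% of commits exceed this
SPRAWLING_COMMIT_FILES = 5  # target: <10% of commits exceed this


@dataclass
class CommitStats:
    files: int
    lines: int  # insertions + deletions


def parse_numstat(log_text: str) -> list[CommitStats]:
    """Parse `git log --numstat --format=commit:%h` output into per-commit stats."""
    commits: list[CommitStats] = []
    for line in log_text.splitlines():
        if line.startswith("commit:"):
            commits.append(CommitStats(files=0, lines=0))
        elif line.strip() and commits:
            added, deleted, _path = line.split("\t", 2)
            if added != "-":  # binary files report "-"; skip them
                commits[-1].files += 1
                commits[-1].lines += int(added) + int(deleted)
    return commits


def large_commit_ratio(commits: list[CommitStats]) -> float:
    """Fraction of commits whose total churn exceeds the large-commit threshold."""
    return sum(c.lines > LARGE_COMMIT_LINES for c in commits) / len(commits)
```

To try it against a real repository, pipe `git log --numstat --format=commit:%h` into `parse_numstat` and compare the ratios against the targets listed above.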
```bash
# Track quality metrics
python scripts/track_metrics.py --repo /path/to/repo --since "7 days ago"

# Initialize session with logging
python scripts/init_session.py "Feature name" --objective "What you're building"
```

Use PDCA for:
- Implementing new features (1-3 hour tasks)
- Refactoring existing code
- Adding integrations
- Any task requiring quality and maintainability
Skip it for:
- Quick prototypes or experiments
- Trivial changes
- Simple bug fixes (use lightweight version)
From the article's research:
The Problem:
- AI code generation increases output but decreases delivery stability
- 10x increase in duplicated code
- Quality issues and integration problems
The Solution:
- Structured prompting outperforms ad-hoc by 1-74%
- PDCA reduces software defects by 61%
- Human-in-the-loop with clear intervention points
After 1 Week:
- Comfortable with workflow
- Catching AI errors early
- Smaller, better commits
After 1 Month:
- Metrics trending positive
- Fewer regressions
- Faster code reviews
- Less debugging time
After 3 Months:
- Significantly better code quality
- Faster delivery
- Team wants to adopt it
The PDCA skill works globally across all projects. For project-specific tech stack and conventions, you can optionally create a .claude/instructions.md file in your project root. This tells Claude about your specific tech choices without modifying the skill itself.
See docs/PROJECT-CONFIGURATION.md for complete guide on when and how to use project-specific configuration.
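A minimal example of what such a file might contain (the stack, paths, and commands below are placeholders, not part of the skill):

```markdown
# Project Instructions

## Tech Stack
- Python 3.12, FastAPI, PostgreSQL
- pytest for tests; ruff for linting

## Conventions
- Data access goes through the repository layer in `app/repositories/`
- Keep commits under 100 lines / 5 files (PDCA working agreement)

## Commands
- Run tests: `pytest -q`
- Lint: `ruff check .`
```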
The skill is designed to be customized:
```bash
# Extract and modify
unzip pdca-ai-coding.skill -d custom-pdca/

# Edit prompts in references/
# Update working agreements
# Adjust quality targets

# Repackage (requires skill-creator tools)
python package_skill.py custom-pdca/
```

Does this work with Claude Code and Cline?
Yes! The skill works with any Claude-based coding assistant including Claude Code, Cline, and the Claude.ai web interface.
How long does a PDCA session take?
Typical sessions are 1-3 hours. The framework helps you break larger tasks into these manageable chunks. You can also use the lightweight version for 15-30 minute tasks.
Can I customize the prompts?
Absolutely! The prompts are designed as starting points. Extract the skill, modify the references/ files, and repackage. The retrospective process will help you refine them based on your needs.
Do I need to follow all 5 phases every time?
For best results, yes. However, the skill includes lightweight versions for simple tasks. At minimum, always do TDD implementation and retrospectives.
What if my team doesn't use TDD?
The framework still provides value through analysis, planning, and retrospectives. However, TDD is core to preventing AI-generated regressions. Consider adopting TDD at least for AI-assisted coding.
How do I track metrics without GitHub Actions?
Use the included `track_metrics.py` script locally:

```bash
python scripts/track_metrics.py --repo . --since "7 days ago"
```

Do I need a .claude/instructions.md for every project?
No! Only create it for projects with specific conventions or when you find yourself repeating the same context. See PROJECT-CONFIGURATION.md for guidance.
If you find this useful, please star the repository! It helps others discover the framework.
- Discussions: GitHub Discussions - Share experiences, ask questions
- Issues: Bug reports and feature requests
- Twitter: Share your results with #PDCACoding
We welcome contributions! See CONTRIBUTING.md for detailed guidelines.
Quick ways to contribute:
- Star the repo - Help others discover it
- Report bugs - Use our issue templates
- Suggest improvements - Based on your retrospectives
- Improve docs - Fix typos, add examples
- Submit PRs - Share your prompt refinements
Areas we need help:
- Framework-specific adaptations (React, Django, etc.)
- Language-specific variations (Python, TypeScript, Go, etc.)
- Real-world case studies
- Video tutorials
- Translations
See our roadmap in CHANGELOG.md for planned features.
MIT License - feel free to adapt for your needs
- Framework: Ken Judy's PDCA methodology from InfoQ article
- Implementation: Skill created by Claude (Anthropic) based on the article
- Validation: Multiple iterations with article cross-referencing
Issues? Ideas? Open an issue or discussion in this repository.
Want to share your experience? We'd love to hear how PDCA is working for you!
Version: 1.1
Last Updated: 2025
Status: Production Ready