A CLI tool that fetches GitHub Pull Request reviews, analyzes them using AI, and generates actionable tasks for developers to address feedback systematically.
- Cursor CLI - Cursor's AI with automatic model selection (recommended)
- Claude Code - Anthropic's Claude via command-line interface
- Auto-detection - Automatically finds and uses available providers
- Standard GitHub Reviews - Direct comment processing
- CodeRabbit (`coderabbitai[bot]`) - With nitpick comment detection
- Codex (`chatgpt-codex-connector`) - Parses embedded comments with P1/P2/P3 priority badges

All review sources are automatically detected and processed without configuration!
- PR Review Fetching: Automatically retrieves reviews from GitHub API with nested comment structure
- AI Analysis: Supports multiple AI providers (Claude Code, Cursor CLI) for generating structured, actionable tasks
- Local Storage: Stores data in structured JSON format under the `.pr-review/` directory
- Task Management: Full lifecycle management with status tracking (todo/doing/done/pending/cancel)
- Task Cancellation: Cancel tasks with GitHub comment posting and proper error propagation for CI/CD
- Thread Resolution: Manually or automatically resolve GitHub review threads when tasks complete
- Task Verification: Automated verification checks before task completion with configurable commands
- Done Command Automation: Complete workflow automation with verification → commit → thread resolution → next task suggestion
- AI Impact Assessment: Automatically assigns TODO/PENDING status based on implementation complexity
  - TODO: Small changes (<30min: typos, renaming, simple fixes)
  - PENDING: Large changes (design decisions, architecture, major refactoring)
- Comprehensive Comment Analysis: Analyzes ALL review comments (including nitpicks, questions, suggestions)
- Priority-based Analysis: Customizable priority rules for task generation
- Smart Deduplication: AI-powered task deduplication with similarity threshold control
- Task Validation: AI-powered validation with configurable quality thresholds and retry logic
- Worker Pool Pattern: Fixed-size worker pool for predictable resource usage (89-90% goroutine reduction)
- Pagination Support: Complete data fetching for PRs with 100+ comments or threads (prevents 59% data loss)
- API Optimization: Batch GraphQL fetching reduces API calls by 96% (N+1 → 3-4 calls per PR)
- Process Cleanup: Robust defer-based cleanup prevents child process leaks and CPU exhaustion
- Smart Performance: Automatic optimization based on PR size with no configuration needed
- Auto-Resume: Seamlessly continues from where it left off if interrupted
- JSON Recovery: Automatic recovery from incomplete Claude API responses with partial task extraction
- Intelligent Retry: Smart retry strategies with pattern detection and prompt size adjustment
- Response Monitoring: Performance analytics and optimization recommendations for API usage
- Modern UI: Clean, intuitive interface with visual progress indicators
- Unresolved Comment Detection: Automatically identifies and categorizes review comments by resolution status:
  - Unanalyzed comments: Exist on GitHub but tasks not yet generated
  - In-progress comments: Tasks generated but not completed
  - Resolved comments: All tasks completed and GitHub threads resolved
  - Completion status: Integrated task and comment status for accurate completion detection
- Thread Resolution Guidance: Intelligent reminders after task cancellation to resolve review threads
- Enhanced Status Display: Rich task status visualization with color-coded priorities
- Interactive Guidance: Context-aware next steps and workflow recommendations
- Verbose Mode: Detailed logging and debugging output for development and troubleshooting
- Silent Mode: Respects VerboseMode setting for clean, quiet CLI operation
- Extensible AI Provider Support: Architecture designed for easy integration of multiple AI providers
- AI Provider Transparency: Displays current AI provider and model at the start of every command
- Authentication: Multi-source token detection with interactive setup
- Debug Commands: Test specific phases independently for troubleshooting
- Prompt Size Optimization: Automatic chunking for large comments (>20KB) and pre-validation size checks
- Task State Preservation: Maintains existing task statuses during subsequent runs
- UUID-based Task IDs: Unique task identification to eliminate duplication issues
- AI Prompt Preservation: Preserves "🤖 Prompt for AI Agents" blocks from CodeRabbit while removing verbose metadata
- File Size Optimization: Achieves up to 66% reduction in reviews.json size (200KB → 67KB) while maintaining essential content
- HTML Entity Processing: Properly handles Unicode HTML entities and GitHub API response variations
- Multi-Source Review Support: Automatically detects and processes CodeRabbit and Codex (chatgpt-codex-connector) reviews
- Priority Badge Detection: Parses P1/P2/P3 priority badges from Codex embedded comments
- GitHub Thread Auto-Resolution: Automatically resolves review threads when tasks are marked as done (opt-in)
Unix/Linux/macOS:

```bash
curl -fsSL https://raw.githubusercontent.com/biwakonbu/reviewtask/main/scripts/install/install.sh | bash
```

Windows (PowerShell):

```powershell
iwr -useb https://raw.githubusercontent.com/biwakonbu/reviewtask/main/scripts/install/install.ps1 | iex
```

- Unix/Linux/macOS: `~/.local/bin` (user's local directory, no sudo required)
- Windows: `%USERPROFILE%\bin` (e.g., `C:\Users\username\bin`)
The installation script will automatically detect your shell and provide specific instructions. If `~/.local/bin` is not in your PATH, you'll see instructions like:
For Bash users:

```bash
# Add to ~/.bashrc
export PATH="$HOME/.local/bin:$PATH"
# Reload configuration
source ~/.bashrc
```

For Zsh users:

```bash
# Add to ~/.zshrc
export PATH="$HOME/.local/bin:$PATH"
# Reload configuration
source ~/.zshrc
```

For Fish users:

```fish
# Add to ~/.config/fish/config.fish
set -gx PATH $HOME/.local/bin $PATH
# Reload configuration
source ~/.config/fish/config.fish
```

For system-wide installation (requires sudo):

```bash
curl -fsSL https://raw.githubusercontent.com/biwakonbu/reviewtask/main/scripts/install/install.sh | sudo bash -s -- --bin-dir /usr/local/bin
```

For detailed installation information including PATH configuration and troubleshooting, see the Installation Guide.
Install specific version:

```bash
# Unix/Linux/macOS
curl -fsSL https://raw.githubusercontent.com/biwakonbu/reviewtask/main/scripts/install/install.sh | bash -s -- --version v1.2.3
```

```powershell
# Windows
iwr -useb https://raw.githubusercontent.com/biwakonbu/reviewtask/main/scripts/install/install.ps1 | iex -ArgumentList "-Version", "v1.2.3"
```

Install to custom directory:

```bash
# Unix/Linux/macOS
curl -fsSL https://raw.githubusercontent.com/biwakonbu/reviewtask/main/scripts/install/install.sh | bash -s -- --bin-dir ~/bin
```

```powershell
# Windows
iwr -useb https://raw.githubusercontent.com/biwakonbu/reviewtask/main/scripts/install/install.ps1 | iex -ArgumentList "-BinDir", "C:\tools"
```

Force overwrite existing installation:

```bash
# Unix/Linux/macOS
curl -fsSL https://raw.githubusercontent.com/biwakonbu/reviewtask/main/scripts/install/install.sh | bash -s -- --force
```

```powershell
# Windows
iwr -useb https://raw.githubusercontent.com/biwakonbu/reviewtask/main/scripts/install/install.ps1 | iex -ArgumentList "-Force"
```

Download Release Binary:
Download the latest release for your platform:
```bash
# Download latest release (Linux/macOS/Windows)
curl -L https://github.com/biwakonbu/reviewtask/releases/latest/download/reviewtask-<version>-<os>-<arch>.tar.gz | tar xz

# Make executable and move to PATH
chmod +x reviewtask-<version>-<os>-<arch>
sudo mv reviewtask-<version>-<os>-<arch> /usr/local/bin/reviewtask
```

Install with Go:

```bash
go install github.com/biwakonbu/reviewtask@latest
```

Build from source:

- Clone the repository:

```bash
git clone https://github.com/biwakonbu/reviewtask.git
cd reviewtask
```

- Build the binary:

```bash
go build -o reviewtask
```

- Install AI Provider CLI (required for AI analysis):

```bash
# For Claude Code (default)
# Follow Claude Code installation instructions
# https://docs.anthropic.com/en/docs/claude-code

# For other providers (future support)
# Install the respective provider's CLI tool
```

Check version and build information:

```bash
reviewtask version
```

Run the interactive setup wizard:

```bash
./reviewtask init
```

The wizard will:
- Ask for your preferred language (English/Japanese)
- Auto-detect available AI providers (Cursor CLI, Claude Code)
- Create a minimal 2-line configuration
- Set up `.gitignore` entries
- Check GitHub authentication
Example session:

```text
Welcome to reviewtask setup!
What language do you prefer? [English/Japanese]: English
Detecting AI providers...
Found: Cursor CLI
Use Cursor CLI as AI provider? [Y/n]: Y
✓ Minimal configuration created at .pr-review/config.json
```
```bash
# Login with GitHub token
./reviewtask auth login

# Check authentication status
./reviewtask auth status

# Logout
./reviewtask auth logout
```

Authentication sources (in order of preference):

1. `GITHUB_TOKEN` environment variable
2. Local config file (`.pr-review/auth.json`)
3. GitHub CLI (`gh auth token`)
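The precedence above can be sketched as a first-match-wins chain. The `resolveToken` helper and its stubbed sources below are hypothetical, for illustration only (the tool's real lookup code may differ):

```go
package main

import (
	"fmt"
	"os"
)

// resolveToken returns the first non-empty token produced by the given
// sources, mirroring the documented precedence: GITHUB_TOKEN first, then
// the local auth file, then the gh CLI. Helper and source names here are
// hypothetical illustrations, not the tool's API.
func resolveToken(sources ...func() string) string {
	for _, source := range sources {
		if token := source(); token != "" {
			return token
		}
	}
	return ""
}

func main() {
	token := resolveToken(
		func() string { return os.Getenv("GITHUB_TOKEN") }, // 1. environment variable
		func() string { return "" },                        // 2. .pr-review/auth.json (stubbed out here)
		func() string { return "" },                        // 3. gh auth token (stubbed out here)
	)
	fmt.Println(token)
}
```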
```bash
# Analyze current branch's PR
./reviewtask

# Analyze specific PR
./reviewtask 123

# The tool will:
# - Fetch PR reviews and comments from multiple sources:
#   • Standard GitHub reviews
#   • CodeRabbit reviews (automatic nitpick detection)
#   • Codex reviews (embedded comment parsing with P1/P2/P3 priorities)
# - Analyze ALL comments (including nitpicks, questions, suggestions)
# - AI-powered impact assessment:
#   • TODO: Small changes (<30min fixes, typos, renaming)
#   • PENDING: Large changes (design decisions, architecture)
# - Automatically optimize performance based on PR size
# - Process comments in parallel batches
# - Detect unresolved comment threads requiring action
# - Deduplicate reviews (especially useful for Codex double-submissions)
# - Cache API responses to reduce redundant calls
# - Support automatic resume if interrupted
# - Generate actionable tasks with priorities and initial status
# - Save results to .pr-review/PR-{number}/
```

```bash
# View all task status
./reviewtask status
./reviewtask status 123

# Show current/next task details
./reviewtask show

# Show specific task details
./reviewtask show <task-id>

# Update specific task status
./reviewtask update <task-id> <status>
# Valid statuses: todo, doing, done, pending, cancel

# Start working on a task (intuitive alternative to update)
./reviewtask start <task-id>

# The start command provides a more intuitive way to begin work:
# - Changes status from todo → doing
# - Provides visual feedback
# - Shows helpful next-step guidance
# - Equivalent to: reviewtask update <task-id> doing
```

The status command now includes comprehensive unresolved comment detection:
```bash
./reviewtask status
```

```text
ReviewTask Status - 75.0% Complete (3/4) - PR #123

Completion Status
─────────────────
Status: 75.0% Complete
Summary: Incomplete: 1 pending tasks, 1 unresolved comments
Unresolved items: 1 tasks, 1 comments

Review Status
─────────────
Unresolved Comments: 1
  1 comments not yet analyzed

Tasks
─────
TODO: 1
DOING: 0
DONE: 3
```

Comment Categories:
- Unanalyzed comments: Exist on GitHub but tasks not yet generated
- In-progress comments: Tasks generated but not completed
- Resolved comments: All tasks completed and GitHub threads resolved
Completion Detection:
- Integrates task completion status with comment resolution status
- Provides accurate completion percentage and detailed summary
- Shows remaining work items clearly categorized
```bash
./reviewtask hold <task-id> [--reason "explanation"]
```
### 5. Task Lifecycle Management
```bash
# Complete task with full automation workflow (RECOMMENDED)
./reviewtask done <task-id>
# The done command provides automated workflow:
# 1. Verification (build/test/lint)
# 2. Auto-commit with structured message
# 3. Thread resolution (when all comment tasks complete)
# 4. Next task suggestion
#
# Skip specific phases if needed:
./reviewtask done <task-id> --skip-verification
./reviewtask done <task-id> --skip-commit
./reviewtask done <task-id> --skip-resolve
./reviewtask done <task-id> --skip-suggestion
# Cancel a task with explanation (posts comment to GitHub review thread)
./reviewtask cancel <task-id> --reason "Already addressed in commit abc1234"
# Cancel all pending tasks at once
./reviewtask cancel --all-pending --reason "Deferring to follow-up PR #125"
# Verify task implementation quality (before completing)
./reviewtask verify <task-id>
```

Cancel Command Features:

- Posts cancellation reason as a comment on the GitHub review thread
- Returns a non-zero exit code on failure (safe for CI/CD scripts)
- Supports batch cancellation with the `--all-pending` flag
- Provides clear feedback to reviewers on why tasks weren't addressed
- Thread Resolution Guidance: After cancellation, displays clear instructions for resolving review threads when appropriate:

```text
Thread Resolution Guidance:
If this cancellation fully addresses the reviewer's feedback (e.g., by
referencing a follow-up Issue or PR), consider resolving the review thread:

  reviewtask resolve <task-id>

Or resolve all done/cancelled tasks at once:

  reviewtask resolve --all
```
```bash
# Manually resolve GitHub review thread for completed task
./reviewtask resolve <task-id>

# Resolve all completed tasks at once
./reviewtask resolve --all

# Force resolve regardless of task status
./reviewtask resolve --all --force
```

| Command | Description |
|---|---|
| `reviewtask [PR_NUMBER]` | Fetch reviews and analyze with AI (integrated workflow) |
| `reviewtask status [PR_NUMBER]` | Show task status, completion progress, and unresolved comment detection for current branch or specific PR |
| `reviewtask show [task-id]` | Show current/next task or specific task details |
| `reviewtask update <id> <status>` | Update task status (todo/doing/done/pending/cancel) |
| Command | Description |
|---|---|
| `reviewtask done <task-id>` | [RECOMMENDED] Complete task with full automation workflow (verification + commit + thread resolution + next task) |
| `reviewtask start <task-id>` | [INTUITIVE] Start working on a task (equivalent to `update <id> doing`) |
| `reviewtask hold <task-id> [--reason "..."]` | [INTUITIVE] Put task on hold (equivalent to `update <id> pending`) |
| `reviewtask cancel <task-id> --reason "..."` | Cancel task and post reason to GitHub review thread |
| `reviewtask cancel --all-pending --reason "..."` | Cancel all pending tasks with same reason |
| `reviewtask verify <task-id>` | Run verification checks before task completion |
| Command | Description |
|---|---|
| `reviewtask resolve <task-id>` | Manually resolve GitHub review thread for completed task |
| `reviewtask resolve --all` | Resolve threads for all done tasks |
| `reviewtask resolve --all --force` | Force resolve all tasks regardless of status |
| Command | Description |
|---|---|
| `reviewtask stats [PR_NUMBER] [options]` | Show detailed task statistics with comment breakdown |
| `reviewtask config show` | Display current verification configuration |
| `reviewtask config set-verifier <task-type> <cmd>` | Configure custom verification commands |
| Command | Description |
|---|---|
| `reviewtask version [VERSION]` | Show version information or switch to specific version |
| `reviewtask versions` | List available versions from GitHub releases |
| `reviewtask prompt <provider> <target>` | Generate AI provider command templates |
| `reviewtask debug fetch <phase> [PR]` | Test specific phases independently |
| `reviewtask init` | Initialize repository with interactive wizard |
| `reviewtask auth <cmd>` | Authentication management |
| `reviewtask --refresh-cache` | Clear cache and reprocess all comments |
Global flags:

- `--refresh-cache` - Clear cache and reprocess all comments (available with main command)

Status options:

- `--all` - Show information for all PRs
- `--pr <number>` - Show information for specific PR
- `--branch <name>` - Show information for specific branch

Authentication commands:

- `reviewtask auth login` - Interactive GitHub token setup
- `reviewtask auth status` - Show current authentication source and user
- `reviewtask auth logout` - Remove local authentication
- `reviewtask auth check` - Comprehensive validation of token and permissions

Version commands:

- `reviewtask version` - Show current version with update check
- `reviewtask version <VERSION>` - Switch to specific version (e.g., `v1.2.3`, `latest`)
- `reviewtask version --check` - Check for available updates
- `reviewtask versions` - List recent 5 versions with release information

Template commands:

- `reviewtask prompt claude pr-review` - Generate PR review workflow template for Claude Code
- `reviewtask cursor [TARGET]` - Generate Cursor IDE integration templates
- `reviewtask cursor pr-review` - Generate PR review workflow template
- `reviewtask cursor issue-to-pr` - Generate issue-to-PR workflow template
- `reviewtask cursor label-issues` - Generate label issues workflow template
- `reviewtask cursor --all` - Generate all available templates
- `reviewtask cursor [TARGET] --stdout` - Output to stdout for CI/CD integration
- `reviewtask prompt stdout <target>` - Output prompts to stdout for redirection or piping
- `reviewtask prompt <provider> <target>` - Generate templates for various AI providers (extensible)

Debug commands:

- `reviewtask debug fetch review <PR>` - Fetch and save PR reviews only (no task generation)
- `reviewtask debug fetch task <PR>` - Generate tasks from previously saved reviews only
- Debug commands automatically enable verbose mode for detailed logging
Start with just 2 lines of configuration:
```json
{
  "language": "English",
  "ai_provider": "auto"
}
```

That's it! The tool will automatically:
- Detect your project type (Go, Node.js, Rust, Python, etc.)
- Configure appropriate build/test/lint commands
- Find and use available AI providers (Cursor CLI or Claude Code)
- Apply sensible defaults for all other settings
```bash
# Interactive setup wizard
reviewtask init

# Validate your configuration
reviewtask config validate

# Migrate existing config to simplified format
reviewtask config migrate

# Show current configuration
reviewtask config show
```

Minimal configuration:

```json
{
  "language": "English",
  "ai_provider": "auto"
}
```

Extended configuration:

```json
{
  "language": "English",
  "ai_provider": "cursor",
  "model": "grok",
  "priorities": {
    "project_specific": {
      "critical": "Authentication vulnerabilities",
      "high": "Payment processing errors"
    }
  }
}
```

See Configuration Reference for all available parameters.
Configure automation behavior for the `reviewtask done` command:

```json
{
  "done_workflow": {
    "enable_auto_resolve": "complete",
    "enable_verification": true,
    "enable_auto_commit": true,
    "enable_next_task_suggestion": true,
    "verifiers": {
      "build": "go build ./...",
      "test": "go test ./...",
      "lint": "golangci-lint run",
      "format": "gofmt -l ."
    }
  }
}
```

Settings:

- `enable_auto_resolve`: Thread resolution mode
  - `"immediate"`: Resolve thread immediately after task completion
  - `"complete"`: Resolve only when all tasks from the same comment are done
  - `"disabled"`: No automatic resolution
- `enable_verification`: Run build/test/lint checks before completion
- `enable_auto_commit`: Automatically commit changes with a structured message
- `enable_next_task_suggestion`: Show next recommended task after completion
- `verifiers`: Custom commands for verification checks (by verification type)
Control the prompt style used for task generation. The default is `v2`.

```jsonc
{
  "ai_settings": {
    "prompt_profile": "v2" // one of: v2 (alias: rich), compact, minimal, legacy
  }
}
```

Render the exact prompt (offline, no AI) from saved reviews for inspection or A/B comparison:

```bash
reviewtask debug fetch review 123              # Save .pr-review/PR-123/reviews.json
reviewtask debug prompt 123 --profile v2       # Print v2 prompt to stdout
reviewtask debug prompt 123 --profile legacy
```

Edit `.pr-review/config.json` to customize priority rules:
```json
{
  "priority_rules": {
    "critical": "Security vulnerabilities, authentication bypasses, data exposure risks",
    "high": "Performance bottlenecks, memory leaks, database optimization issues",
    "medium": "Functional bugs, logic improvements, error handling",
    "low": "Code style, naming conventions, comment improvements"
  },
  "task_settings": {
    "default_status": "todo",
    "auto_prioritize": true,
    "low_priority_patterns": ["nit:", "nits:", "minor:", "suggestion:", "consider:", "optional:", "style:"],
    "low_priority_status": "pending"
  },
  "ai_settings": {
    "user_language": "English",
    "validation_enabled": false,
    "verbose_mode": true
  }
}
```

The tool can automatically detect and handle low-priority comments (such as "nits" from code review tools):

- `low_priority_patterns`: List of patterns that identify low-priority comments (case-insensitive)
  - Default patterns: `["nit:", "nits:", "minor:", "suggestion:", "consider:", "optional:", "style:"]`
  - Matches comments starting with these patterns or containing them after a newline
- `low_priority_status`: Status to assign to tasks from matching comments (default: `"pending"`)
  - This allows developers to focus on critical issues first
  - Low-priority tasks can be addressed later or promoted to active status

Example: A comment like "nit: Consider using const instead of let" will create a task with "pending" status instead of "todo".
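As an illustration, the documented matching rule (match at the start of a comment or immediately after a newline, case-insensitively) could be sketched in Go. This is a hypothetical reimplementation for clarity, not the tool's actual source:

```go
package main

import (
	"fmt"
	"strings"
)

// isLowPriority reports whether a review comment matches one of the
// configured low-priority patterns: the pattern must appear at the very
// start of the comment or directly after a newline, compared
// case-insensitively. Sketch of the documented rule, not the real code.
func isLowPriority(comment string, patterns []string) bool {
	lower := strings.ToLower(comment)
	for _, p := range patterns {
		p = strings.ToLower(p)
		if strings.HasPrefix(lower, p) || strings.Contains(lower, "\n"+p) {
			return true
		}
	}
	return false
}

func main() {
	patterns := []string{"nit:", "nits:", "minor:", "suggestion:"}
	fmt.Println(isLowPriority("nit: Consider using const instead of let", patterns))
	fmt.Println(isLowPriority("This breaks authentication", patterns))
}
```

Note that a phrase like "a minor: detail" mid-sentence would not match, since the pattern is only checked at line starts.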
Configure AI provider and model settings:

```jsonc
{
  "ai_settings": {
    "ai_provider": "auto", // Options: "claude", "cursor", "auto" (tries cursor then claude)
    "model": "auto",       // Model selection (auto lets provider choose best model)
    "cursor_path": "",     // Optional custom path to cursor-agent CLI
    "claude_path": ""      // Optional custom path to Claude CLI
  }
}
```

Supported AI Providers:

- Claude Code CLI: The original Claude AI provider (`npm install -g @anthropic-ai/claude-code`)
- Cursor CLI: Cursor's AI with automatic model selection (`npm install -g cursor-agent`)
- Auto: Automatically tries Cursor first, falls back to Claude if unavailable
Configure advanced processing features in `.pr-review/config.json`:

```jsonc
{
  "ai_settings": {
    "verbose_mode": false,              // Enable detailed debug logging
    "validation_enabled": true,         // Enable AI task validation
    "max_retries": 5,                   // Validation retry attempts
    "quality_threshold": 0.8,           // Minimum validation score (0.0-1.0)
    "deduplication_enabled": true,      // AI-powered task deduplication
    "similarity_threshold": 0.8,        // Task similarity detection threshold
    "process_nitpick_comments": true,   // Process ALL comments (default: true)
    "nitpick_priority": "low",          // Priority for nitpick-generated tasks
    "enable_json_recovery": true,       // Enable JSON recovery for incomplete responses
    "max_recovery_attempts": 3,         // Maximum JSON recovery attempts
    "partial_response_threshold": 0.7,  // Minimum threshold for partial responses
    "log_truncated_responses": true,    // Log truncated responses for debugging
    "preserve_ai_prompts": true,        // Preserve "🤖 Prompt for AI Agents" blocks from CodeRabbit
    "optimize_file_size": true,         // Enable file size optimization (removes verbose metadata)
    "html_entity_processing": true,     // Process HTML entities in GitHub API responses
    "process_self_reviews": false       // Process self-review comments from PR author
  }
}
```

The tool uses AI to automatically assess the implementation complexity of each review comment and assign an appropriate initial status:
Task Status Assignment:

- TODO: Small/medium changes that can be completed quickly
  - Typo fixes, variable renaming, adding comments
  - Simple logic fixes, adding error handling, validation
  - Changes requiring <30 minutes without design decisions
- PENDING: Large changes requiring design decisions
  - Architecture modifications, API changes
  - Adding significant new functionality
  - Major refactoring, breaking changes
  - Changes needing team discussion or alignment

Impact Assessment Criteria:

- Implementation time: TODO for <30min tasks, PENDING for longer
- Design decisions required: PENDING if architectural discussion is needed
- Code impact scope: TODO for localized changes, PENDING for broad changes
- Risk level: PENDING for changes affecting core functionality

Note: Impact assessment is independent of priority level. A critical bug can be TODO if the fix is straightforward, while a low-priority improvement might be PENDING if it requires design discussion.
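The criteria above amount to a simple decision rule, sketched here for illustration. The real assessment is made by the AI provider; the helper below and its parameters are hypothetical:

```go
package main

import "fmt"

// assignStatus sketches the documented impact-assessment rule: small,
// self-contained changes become TODO, while anything needing design
// decisions, touching broad scope or core functionality, or estimated at
// 30 minutes or more becomes PENDING. Illustrative only; the tool's real
// assessment is performed by the AI provider, not a fixed rule like this.
func assignStatus(estimatedMinutes int, needsDesign, broadScope, touchesCore bool) string {
	if needsDesign || broadScope || touchesCore || estimatedMinutes >= 30 {
		return "PENDING"
	}
	return "TODO"
}

func main() {
	fmt.Println(assignStatus(10, false, false, false)) // typo fix → TODO
	fmt.Println(assignStatus(15, true, false, false))  // needs design discussion → PENDING
}
```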
The tool intelligently optimizes review data storage while preserving essential AI guidance:
- Preserves CodeRabbit AI Prompts: Keeps "🤖 Prompt for AI Agents" blocks intact for enhanced task generation
- Smart Content Filtering: Removes verbose GitHub suggestion blocks and metadata while maintaining review essence
- HTML Entity Support: Properly processes both HTML-escaped (`\u003c`, `\u003e`) and normal HTML content
- Significant Size Reduction: Achieves up to 66% reduction in `reviews.json` file size (e.g., 200KB → 67KB)
- Intelligent Metadata Removal: Strips GitHub suggestion blocks, committable suggestions, and fingerprinting comments
- Content Structure Preservation: Maintains markdown formatting, code references, and essential feedback
```jsonc
{
  "ai_settings": {
    "preserve_ai_prompts": true,     // Keep AI prompt blocks from CodeRabbit
    "optimize_file_size": true,      // Enable comprehensive size optimization
    "html_entity_processing": true   // Handle HTML entity variations
  }
}
```

Benefits:

- Reduced Storage: Smaller JSON files for faster processing and reduced disk usage
- Enhanced AI Analysis: Preserved AI prompts provide better context for task generation
- Improved Performance: Smaller data files lead to faster processing and analysis
- Better Compatibility: Handles various GitHub API response formats consistently
The tool can process self-reviews (comments made by the PR author on their own PR):
- `process_self_reviews`: Enable processing of the PR author's own comments (default: `false`)
- When enabled, fetches both issue comments and PR review comments from the author
- Self-review comments are processed through the same AI task generation pipeline
- Useful for capturing TODO comments, known issues, and self-documentation
Example use cases:
- Authors documenting known issues or technical debt
- TODO comments for follow-up work
- Self-review before requesting external reviews
- Design decisions and trade-offs documentation
To enable self-review processing:
```json
{
  "ai_settings": {
    "process_self_reviews": true
  }
}
```

The tool now includes advanced recovery mechanisms for handling incomplete Claude API responses:
- JSON Recovery: Automatically recovers valid tasks from truncated or malformed JSON responses
  - Extracts complete task objects from partial arrays
  - Cleans up malformed JSON syntax
  - Validates recovered data before processing
  - Configurable recovery attempts and thresholds
- Intelligent Retry: Smart retry strategies based on error patterns
  - Automatic prompt size reduction for token limit errors
  - Exponential backoff for rate limiting
  - Pattern detection for common truncation issues
  - Configurable retry attempts and delays
- Response Monitoring: Tracks API performance and provides optimization insights
  - Response size and truncation pattern analysis
  - Success rate tracking and error distribution
  - Optimal prompt size recommendations
  - Performance analytics and reporting
- Parallel Mode (`validation_enabled: false`): Fast processing with individual comment analysis
- Validation Mode (`validation_enabled: true`): Two-stage validation with retry logic and quality scoring
- Verbose Mode (`verbose_mode: true`): Detailed logging for debugging and development
- Automatic Chunking: Large comments (>20KB) are automatically split for optimal processing
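The automatic chunking step can be sketched as follows. This is a hypothetical helper for illustration, not the tool's implementation; the naive byte split shown here could cut a multi-byte character, and a real splitter would respect rune or line boundaries:

```go
package main

import "fmt"

// chunkComment splits an oversized comment body into pieces of at most
// maxBytes bytes, illustrating the documented >20KB chunking behavior.
// Illustrative sketch only: this splits on raw byte offsets, whereas a
// production splitter would avoid cutting UTF-8 sequences mid-character.
func chunkComment(body string, maxBytes int) []string {
	var chunks []string
	for len(body) > maxBytes {
		chunks = append(chunks, body[:maxBytes])
		body = body[maxBytes:]
	}
	return append(chunks, body)
}

func main() {
	chunks := chunkComment("a long review comment body", 10)
	fmt.Println(len(chunks)) // the 26-byte body yields 3 pieces
}
```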
```text
.pr-review/
├── config.json          # Priority rules and project settings
├── auth.json            # Local authentication (gitignored)
└── PR-<number>/
    ├── info.json        # PR metadata
    ├── reviews.json     # Review data with nested comments
    └── tasks.json       # AI-generated tasks
```
- Generation: AI analyzes ALL review comments (including nitpicks, questions, suggestions)
- Impact Assessment: AI assigns initial status based on implementation complexity
  - TODO: Small changes (<30min: typos, simple fixes, error handling)
  - PENDING: Large changes (design decisions, architecture, major refactoring)
- Assignment: Tasks get UUID-based IDs with AI-assigned initial status
- Execution: Developers work on TODO tasks first (todo → doing → done)
- Decision: After TODO tasks, review PENDING tasks and decide:
  - Start implementing: update status to "doing"
  - Defer or decline: cancel with reason
- Preservation: Subsequent runs preserve existing task statuses
- Verification: Optional automated checks ensure implementation quality
- Completion: Tasks marked as done with automatic or manual verification
- Thread Resolution: GitHub review threads resolved manually or automatically
- Cancellation: Tasks can be cancelled with explanatory comments posted to GitHub
The tool provides context-aware guidance based on your current task state:
- TODO tasks available: Shows next recommended task and commands to start
- All TODO complete, PENDING tasks exist: Prompts to review and decide on PENDING tasks
- Unresolved comments detected: Suggests running analysis to generate new tasks
- All tasks complete: Recommends pushing changes and checking for new reviews
The cancel command includes robust error propagation for safe use in CI/CD environments:
```bash
# Returns non-zero exit code on failure
reviewtask cancel <task-id> --reason "Already implemented" || echo "Cancellation failed"

# Batch cancellation with proper error handling
reviewtask cancel --all-pending --reason "Deferred to next PR"
# Exit code 0: All tasks successfully cancelled
# Exit code 1: One or more cancellations failed (first error wrapped and returned)

# Safe for CI/CD scripts
if ! reviewtask cancel --all-pending --reason "Sprint ended"; then
    echo "Failed to cancel pending tasks" >&2
    exit 1
fi
```

Error Handling Features:

- Wraps the first encountered error with detailed context using Go's error wrapping (`%w`)
- Provides the total failure count in the error message
- Returns immediately on single-task cancellation failures
- Continues processing remaining tasks in batch mode before returning an error
- Preserves error chains for better debugging and troubleshooting
Generate Cursor-specific integration files for an enhanced development experience:

```bash
# Generate specific template
reviewtask cursor pr-review

# Generate all templates at once
reviewtask cursor --all

# Output to stdout for custom integration
reviewtask cursor pr-review --stdout > my-workflow.md
```

This creates organized templates in:

- `.cursor/commands/pr-review/`: PR review workflow automation
- `.cursor/commands/issue-to-pr/`: Issue-to-PR development workflow
- `.cursor/commands/label-issues/`: Automatic issue labeling workflow
After running this command, Cursor IDE will:
- Understand reviewtask commands and suggest appropriate usage
- Provide context-aware assistance for PR review workflows
- Support custom commands through the command palette
Generate Claude Code command templates:
```bash
# Generate Claude Code workflow template
reviewtask prompt claude pr-review
```

This creates workflow templates in `.claude/commands/` for streamlined PR review management.
- Existing task statuses are preserved during subsequent review fetches
- Comment content changes trigger automatic task cancellation
- New tasks are added without overwriting existing work progress
- Fixed worker pool for predictable resource usage and system stability
- 89-90% goroutine reduction compared to per-comment goroutine pattern (e.g., 28 comments: 28 → 3 goroutines)
- Eliminates context switching overhead and prevents system freezes during heavy AI processing
- Configurable concurrency via the `max_concurrent_requests` setting (default: 5 workers)
- Job queue distributes work efficiently across the fixed pool of workers
- Comprehensive testing for concurrency control, job distribution, and error handling
- Each comment is processed independently using worker pool
- Complete data fetching with pagination support for PRs with 100+ comments or threads
- Prevents data loss: Fixes critical 59% comment loss issue on large PRs
- Batch GraphQL fetching reduces API calls by 96% (N+1 → 3-4 calls per PR)
- Thread resolution state tracking with accurate GitHub sync
- Reduced prompt sizes (3,000-6,000 characters vs 57,760)
- Better performance and AI provider reliability
- Automatically detects significant changes in comment content
- Cancels outdated tasks and creates new ones as needed
- Preserves completed work and prevents duplicate tasks
Use the `reviewtask stats` command to get detailed task analytics:

```bash
# Current branch statistics
reviewtask stats

# Statistics for specific PR
reviewtask stats 123
reviewtask stats --pr 123

# Statistics for all PRs
reviewtask stats --all

# Statistics for specific branch
reviewtask stats --branch feature/new-feature
```

- Comment-level breakdown: Task counts per review comment
- Priority distribution: Critical/high/medium/low task counts
- Status distribution: Todo/doing/done/pending/cancel counts
- Completion metrics: Task completion rates and progress tracking
- File-level summary: Tasks grouped by affected files
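The status distribution and completion metrics above amount to a simple tally over stored tasks. An illustrative sketch (the real implementation reads tasks from `.pr-review/`; these helper names are assumptions):

```go
package main

import "fmt"

// countByStatus builds a status distribution from task statuses.
func countByStatus(statuses []string) map[string]int {
	dist := make(map[string]int)
	for _, s := range statuses {
		dist[s]++
	}
	return dist
}

// completionRate is the fraction of tasks marked done.
func completionRate(dist map[string]int) float64 {
	total := 0
	for _, n := range dist {
		total += n
	}
	if total == 0 {
		return 0
	}
	return float64(dist["done"]) / float64(total)
}

func main() {
	dist := countByStatus([]string{"done", "todo", "done", "pending"})
	fmt.Println(dist["done"], completionRate(dist)) // prints 2 0.5
}
```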
The tool includes built-in version management capabilities:

```bash
# Show current version and check for updates
reviewtask version

# List available versions from GitHub releases
reviewtask versions

# Switch to a specific version
reviewtask version v1.2.3
reviewtask version latest

# Check for updates only
reviewtask version --check
```

- Automatic update detection: Checks for newer versions on startup
- GitHub releases integration: Downloads binaries directly from GitHub
- Version switching: Easy switching between versions
- Rollback capability: Return to previous versions if needed
Improve performance and handle data consistency with cache controls:

```bash
# Force a cache refresh (reprocess all comments)
reviewtask --refresh-cache

# When to use --refresh-cache:
# - After significant PR changes
# - When comment content has been updated
# - To regenerate tasks with updated priority rules
# - To troubleshoot inconsistent task generation
```

- Performance optimization: Avoids re-processing unchanged comments
- Consistency preservation: Maintains task state across runs
- Selective refresh: Only processes changed or new content
- Manual override: `--refresh-cache` bypasses all caching
Streamline your AI workflows with generated templates for various providers:

```bash
# Generate the PR review workflow template for Claude Code (writes to .claude/commands/)
reviewtask prompt claude pr-review

# Output prompts to stdout for redirection or piping
reviewtask prompt stdout pr-review                   # Display on terminal
reviewtask prompt stdout pr-review > my-workflow.md  # Save to a custom file
reviewtask prompt stdout pr-review | pbcopy          # Copy to clipboard (macOS)
reviewtask prompt stdout pr-review | xclip           # Copy to clipboard (Linux)

# Extensible architecture for future AI providers
# reviewtask prompt <provider> <target>
```

This provides flexible options for AI integration:
- Claude provider: Creates optimized command templates in the `.claude/commands/` directory
- Stdout provider: Outputs prompts to standard output for maximum flexibility
- Structured PR review analysis workflows
- Task generation and management integration
- Consistent review quality and format
- Integration with existing reviewtask data structures
Note: The `reviewtask claude` command is deprecated. Please use `reviewtask prompt claude` for future compatibility.
The tool maintains synchronized workflow prompts across multiple AI providers and locations:
Synchronized Locations:
- `.claude/commands/pr-review/review-task-workflow.md` - Claude Code integration
- `.cursor/commands/pr-review/review-task-workflow.md` - Cursor IDE integration
- `cmd/prompt_stdout.go` - Programmatic template generation
All workflow prompts include comprehensive command documentation:
- 19 commands organized in 4 categories (Core/Lifecycle/Thread/Statistics)
- `cancel` command with GitHub comment posting and error propagation
- `resolve` command for manual thread management
- `stats` command for task analytics and progress tracking
- 8 detailed output examples showing actual command behavior
- Task classification guidelines (when to cancel/pending/process tasks)
Verification:

```bash
# Generate prompts from all sources and verify synchronization
reviewtask prompt claude pr-review   # Writes to .claude/commands/
reviewtask cursor pr-review          # Writes to .cursor/commands/
reviewtask prompt stdout pr-review   # Outputs to stdout

# All three methods produce identical content
diff .claude/commands/pr-review/review-task-workflow.md \
     .cursor/commands/pr-review/review-task-workflow.md
# Should show no differences
```

```bash
# Check token permissions and repository access
reviewtask auth check

# View current authentication status
reviewtask auth status

# Re-authenticate if needed
reviewtask auth logout
reviewtask auth login

# Common solutions:
export GITHUB_TOKEN="your_token_here"
# or
gh auth login
```

```bash
# Check the current version and available updates
reviewtask version

# View available versions
reviewtask versions

# Switch to the latest stable version if experiencing issues
reviewtask version latest

# Manually check GitHub releases:
# https://github.com/biwakonbu/reviewtask/releases
```

```bash
# Clear the cache and reprocess all data
reviewtask --refresh-cache

# Check statistics for diagnostic information
reviewtask stats --all

# Symptoms requiring a cache refresh:
# - Inconsistent task generation
# - Missing tasks for recent comments
# - Outdated task content
```

Ensure your AI provider CLI is properly installed and accessible:

```bash
# Test Claude Code availability (for the Claude provider)
claude --version

# Generate integration templates if missing
reviewtask prompt claude pr-review

# Common issues:
# - AI provider CLI not in PATH
# - Authentication required
# - Network connectivity
```

Handle incomplete or truncated Claude API responses:
Enable verbose mode to see recovery attempts by editing `.pr-review/config.json`:

```json
{
  "ai_settings": {
    "verbose_mode": true,
    "enable_json_recovery": true
  }
}
```

Common recovery scenarios:
- "unexpected end of JSON input" errors
- Truncated responses at token limits
- Malformed JSON from API timeouts
- Partial task arrays
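The core idea behind recovering a truncated task array is to trim the response back to the last complete element and re-close the array. This is an illustrative sketch of that technique, not reviewtask's actual recovery code; `recoverTaskArray` is a hypothetical helper:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// recoverTaskArray trims a truncated JSON array back to the last
// complete element and re-closes it, so partial AI output still parses.
func recoverTaskArray(raw string) (string, bool) {
	depth := 0
	inString := false
	escaped := false
	lastComplete := -1 // index just past the last complete top-level element
	for i, c := range raw {
		if inString {
			switch {
			case escaped:
				escaped = false
			case c == '\\':
				escaped = true
			case c == '"':
				inString = false
			}
			continue
		}
		switch c {
		case '"':
			inString = true
		case '{', '[':
			depth++
		case '}', ']':
			depth--
			if depth == 1 { // closed an element directly inside the array
				lastComplete = i + 1
			}
		}
	}
	if lastComplete < 0 {
		return "", false
	}
	return raw[:lastComplete] + "]", true
}

func main() {
	truncated := `[{"description":"fix typo"},{"description":"rename var"},{"descr`
	if fixed, ok := recoverTaskArray(truncated); ok {
		var tasks []map[string]string
		json.Unmarshal([]byte(fixed), &tasks)
		fmt.Println(len(tasks)) // prints 2: two complete tasks recovered
	}
}
```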
To monitor API performance, check `.pr-review/response_analytics.json` for patterns.

Required GitHub API permissions:
- `repo` (for private repositories)
- `public_repo` (for public repositories)
- `read:org` (for organization repositories)

Use `reviewtask auth check` for comprehensive permission validation.
Start here if you want to use reviewtask to manage PR reviews:
- Installation Guide
- Quick Start Tutorial
- Authentication Setup
- Command Reference
- Configuration Guide
- Workflow Guide
- Troubleshooting
Start here if you want to contribute or extend reviewtask:
- Architecture Overview
- Development Setup
- Project Structure
- Testing Strategy
- Contributing Guidelines
- Versioning & Releases
Please see our Contributing Guide for detailed information on:
- Development setup and guidelines
- Pull request process
- Release labeling system
- Code style and testing
- Fork the repository
- Create a feature branch
- Make your changes
- Add the appropriate release label (`release:major`, `release:minor`, or `release:patch`)
- Submit a pull request
- Contributing Guide - Detailed contribution guidelines
- Versioning Guide - Semantic versioning rules and release process
- Project Requirements - Project vision and development guidelines
MIT License - see LICENSE file for details.