AI-powered code review tool that analyzes git branch differences and generates comprehensive review reports with structured output.
- GitHub Action: Automated PR reviews with inline comments and summary
- Comprehensive Reviews: Detailed analysis of logic, security, performance, and code quality
- Structured Output: Issues organized by severity with summary table
- Multi-Provider: AWS Bedrock, Anthropic API, Ollama, or Moonshot
- Smart Analysis: Context provided upfront with prompt caching
- Git Integration: Works with any repository, supports commit hashes
- Verification Mode: Experimental Chain-of-Verification to reduce false positives
Run with Docker (recommended):

```shell
docker run --rm -it -v $(pwd):/repo \
  -e MODEL_PROVIDER=anthropic \
  -e ANTHROPIC_API_KEY=sk-ant-your-api-key \
  kirill89/reviewcerberus:latest \
  --repo-path /repo --output /repo/review.md
```

That's it! The review will be saved to review.md in your current directory.
See Configuration for AWS Bedrock setup and other options.
For automated PR reviews, add to .github/workflows/review.yml:

```yaml
name: Code Review

on:
  pull_request:
    types: [opened, synchronize]

jobs:
  review:
    runs-on: ubuntu-latest
    permissions:
      contents: write
      pull-requests: write
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - uses: Kirill89/reviewcerberus/action@v1
        with:
          model_provider: anthropic
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
```

The action posts review comments directly on your PR. See GitHub Action for all options.
```shell
# Run code review
poetry run reviewcerberus

# Custom target branch
poetry run reviewcerberus --target-branch develop

# Custom output location
poetry run reviewcerberus --output /path/to/review.md
poetry run reviewcerberus --output /path/to/dir/  # Auto-generates filename

# Output as JSON instead of markdown
poetry run reviewcerberus --json

# Different repository
poetry run reviewcerberus --repo-path /path/to/repo

# Add custom review guidelines
poetry run reviewcerberus --instructions guidelines.md

# Enable verification mode (experimental)
poetry run reviewcerberus --verify

# Enable SAST pre-scan (experimental)
poetry run reviewcerberus --sast

# Full review with custom guidelines
poetry run reviewcerberus --target-branch main \
  --output review.md --instructions guidelines.md

# Review a different repo
poetry run reviewcerberus --repo-path /other/repo
```

Detailed analysis covering:
- Logic & Correctness: Bugs, edge cases, error handling
- Security: OWASP issues, access control, input validation
- Performance: N+1 queries, bottlenecks, scalability
- Code Quality: Duplication, complexity, maintainability
- Side Effects: Impact on other system parts
- Testing: Coverage gaps, missing test cases
- Documentation: Missing or outdated docs, unclear comments
Every review includes:
- Summary: High-level overview of changes and risky areas
- Issues Table: All issues at a glance with severity indicators (🔴 CRITICAL, 🟠 HIGH, 🟡 MEDIUM, 🟢 LOW)
- Detailed Issues: Each issue with explanation, location, and suggested fix
Enable with the --verify flag to reduce false positives using Chain-of-Verification (CoVe):
- Generate Questions: Creates falsification questions for each issue
- Answer Questions: Answers questions using code context
- Score Confidence: Assigns 1-10 confidence score based on evidence
Each issue in the output includes a confidence score and rationale.
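The three verification steps can be sketched as a small pipeline. This is a minimal illustration only: the dataclass and function names here are hypothetical, not the tool's actual API (the real data models live in src/schema.py and the pipeline in src/verification/).

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical, simplified stand-in for the real issue schema.
@dataclass
class Issue:
    title: str
    questions: list[str] = field(default_factory=list)
    answers: list[str] = field(default_factory=list)
    confidence: int = 0  # 1-10, assigned during verification

def verify(issue: Issue, ask: Callable[[str], str]) -> Issue:
    """Chain-of-Verification over one reported issue, in three steps."""
    # 1. Generate falsification questions (one per line in the reply).
    reply = ask(f"List questions that could falsify this finding: {issue.title}")
    issue.questions = [q for q in reply.splitlines() if q.strip()]
    # 2. Answer each question using the code context.
    issue.answers = [ask(q) for q in issue.questions]
    # 3. Score confidence 1-10 based on the gathered evidence.
    score = ask(f"Given these answers {issue.answers}, rate confidence 1-10")
    issue.confidence = max(1, min(10, int(score.strip())))
    return issue
```

The key design point is that each issue is verified independently: a finding that survives its own falsification questions earns a higher confidence score.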
Enable with the --sast flag to run an OpenGrep (Semgrep fork) pre-scan before the AI review:
- Scans only new findings introduced by the current branch
- Findings are provided to the AI agent as supplementary context
- The agent independently verifies each finding and dismisses false positives
- Combines static analysis precision with AI contextual understanding
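"Only new findings" amounts to a baseline diff: scan the target branch, scan the current branch, and keep the findings absent from the baseline. A rough sketch, where the dict shape and fingerprinting scheme are assumptions for illustration, not OpenGrep's actual output format:

```python
def new_findings(baseline: list[dict], current: list[dict]) -> list[dict]:
    """Keep only findings introduced by the current branch.

    Each finding is assumed to be a dict with 'check_id', 'path' and
    'line' keys (a simplified stand-in for real SAST JSON output).
    """
    def fingerprint(f: dict) -> tuple:
        # Deliberately ignore the line number: a pre-existing finding
        # often shifts lines when unrelated code is added above it.
        return (f["check_id"], f["path"])

    seen = {fingerprint(f) for f in baseline}
    return [f for f in current if fingerprint(f) not in seen]
```

Only the surviving findings are handed to the AI agent, which then verifies each one in context before reporting it.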
- Detects current git branch and repository
- Collects all context upfront: changed files, commit messages, and diffs
- Analyzes using AI agent with access to:
  - Full diff context (truncated at 10k chars per file)
  - File reading with line ranges
  - Pattern search across codebase
  - Directory listing
- Generates structured review output rendered as markdown
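The upfront context collection can be approximated with plain git commands. This is a sketch under stated assumptions; the real implementation lives in src/git_utils/ and may differ:

```python
import subprocess

DIFF_LIMIT = 10_000  # per-file truncation, matching the 10k chars noted above

def collect_context(repo: str, target: str = "main") -> dict[str, str]:
    """Gather per-file diffs between the target branch and HEAD."""
    def git(*args: str) -> str:
        return subprocess.run(
            ["git", "-C", repo, *args],
            capture_output=True, text=True, check=True,
        ).stdout

    # Files changed since the branch diverged from the target.
    files = git("diff", "--name-only", f"{target}...HEAD").split()
    # Truncate each file's diff so one huge file cannot eat the context window.
    return {f: git("diff", f"{target}...HEAD", "--", f)[:DIFF_LIMIT] for f in files}
```

Collecting everything upfront (rather than letting the agent fetch diffs lazily) is what makes the prompt-caching strategy effective: the large, stable context goes in first and is reused across turns.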
Progress Display:

```
Repository: /path/to/repo
Current branch: feature-branch
Target branch: main

Found 3 changed files:
  - src/main.py (modified)
  - src/utils.py (modified)
  - tests/test_main.py (added)

Starting code review...
🤔 Thinking... ⏱️ 3.0s
🔧 read_file_part: src/main.py
✅ Review completed: review_feature-branch.md

Token Usage:
  Input tokens: 6,856
  Output tokens: 1,989
  Total tokens: 8,845
```
All configuration via environment variables (.env file):

```shell
MODEL_PROVIDER=bedrock  # or "anthropic", "ollama", "moonshot" (default: bedrock)
```

AWS Bedrock:

```shell
AWS_ACCESS_KEY_ID=your_key
AWS_SECRET_ACCESS_KEY=your_secret
AWS_REGION_NAME=us-east-1
MODEL_NAME=us.anthropic.claude-opus-4-5-20251101-v1:0  # optional
```

Docker example with Bedrock:

```shell
docker run --rm -it -v $(pwd):/repo \
  -e AWS_ACCESS_KEY_ID=your_key \
  -e AWS_SECRET_ACCESS_KEY=your_secret \
  -e AWS_REGION_NAME=us-east-1 \
  kirill89/reviewcerberus:latest \
  --repo-path /repo --output /repo/review.md
```

Anthropic API:

```shell
ANTHROPIC_API_KEY=sk-ant-your-api-key-here
MODEL_NAME=claude-opus-4-5-20251101  # optional
```

Ollama:

```shell
MODEL_PROVIDER=ollama
OLLAMA_BASE_URL=http://localhost:11434  # optional, default
MODEL_NAME=deepseek-v3.1:671b-cloud  # optional
```

Docker example with Ollama:

```shell
# Assumes Ollama running on host machine
docker run --rm -it -v $(pwd):/repo \
  -e MODEL_PROVIDER=ollama \
  -e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
  kirill89/reviewcerberus:latest \
  --repo-path /repo --output /repo/review.md
```

Moonshot:

```shell
MODEL_PROVIDER=moonshot
MOONSHOT_API_KEY=sk-your-api-key-here
MOONSHOT_API_BASE=https://api.moonshot.ai/v1  # optional, default
MODEL_NAME=kimi-k2.5  # optional
```

Advanced settings:

```shell
MAX_OUTPUT_TOKENS=10000  # Maximum tokens in response
TOOL_CALL_LIMIT=100      # Maximum tool calls before forcing output
VERIFY_MODEL_NAME=...    # Model for verification (defaults to MODEL_NAME)
```

Customize prompts in src/agent/prompts/:

- full_review.md - Main review prompt
- context_summary.md - Context compaction for large PRs
Use ReviewCerberus as a GitHub Action for automated PR reviews.
| Input | Description | Default |
|---|---|---|
| `model_provider` | Provider: `bedrock`, `anthropic`, `ollama`, or `moonshot` | `bedrock` |
| `anthropic_api_key` | Anthropic API key | - |
| `aws_access_key_id` | AWS Access Key ID (Bedrock) | - |
| `aws_secret_access_key` | AWS Secret Access Key (Bedrock) | - |
| `aws_region_name` | AWS Region (Bedrock) | `us-east-1` |
| `model_name` | Model name (provider-specific) | - |
| `verify` | Enable Chain-of-Verification | `false` |
| `sast` | Enable OpenGrep SAST pre-scan | `false` |
| `min_confidence` | Min confidence score 1-10 (requires `verify`) | - |
| `instructions` | Path to custom review guidelines | - |
With verification enabled:

```yaml
- uses: Kirill89/reviewcerberus/action@v1
  with:
    model_provider: anthropic
    anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
    verify: "true"
    min_confidence: "7"
```

With the SAST pre-scan:

```yaml
- uses: Kirill89/reviewcerberus/action@v1
  with:
    model_provider: anthropic
    anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
    sast: "true"
```

With AWS Bedrock:

```yaml
- uses: Kirill89/reviewcerberus/action@v1
  with:
    model_provider: bedrock
    aws_access_key_id: ${{ secrets.AWS_ACCESS_KEY_ID }}
    aws_secret_access_key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    aws_region_name: us-east-1
```

The action:

- Runs the review using the Docker image
- Resolves any existing review threads from previous runs
- Posts a summary comment with all issues
- Creates inline review comments on specific lines
For local development (not required for Docker usage):
```shell
# Clone and install
git clone <repo-url>
poetry install

# Configure credentials
cp .env.example .env
# Edit .env with your provider credentials
```

See Configuration for credential setup.

Run tests:

```shell
make test
# or
poetry run pytest -v
```

Lint and format:

```shell
make lint    # Check with mypy, isort, black, mdformat
make format  # Auto-format with isort and black
```

Docker:

```shell
make docker-build       # Build locally
make docker-build-push  # Build and push (multi-platform)
```

Version is auto-read from pyproject.toml. See DOCKER.md for details.
```
├── src/                           # Python CLI
│   ├── config.py                  # Configuration
│   ├── main.py                    # CLI entry point
│   ├── agent/
│   │   ├── agent.py               # Agent setup
│   │   ├── model.py               # Model initialization
│   │   ├── runner.py              # Review execution
│   │   └── prompts/               # Review prompts
│   ├── schema.py                  # Data models (structured output)
│   ├── git_utils/                 # Git operations
│   ├── formatting/                # Context and output rendering
│   ├── verification/              # Chain-of-Verification pipeline
│   ├── progress_callback_handler.py
│   └── tools/                     # 3 review tools
│
└── action/                        # GitHub Action (TypeScript)
    ├── action.yml                 # Action definition
    ├── src/                       # Action source code
    └── dist/                      # Bundled action
```
- Strict type checking: All functions require type annotations
- Return types: Must be explicit (warn_return_any = true)
- Formatting: Black + isort with black profile
- Testing: Integration tests with real git operations
- Python 3.11+
- Git
- One of:
  - AWS Bedrock access with Claude models
  - Anthropic API key
  - Ollama setup
  - Moonshot API key
- Poetry (for development)
MIT