Debug, Detect, Destroy Bad Code
A comprehensive static code analyzer for legacy multi-language codebases. Supports Python, JavaScript/TypeScript, and Java with extensible architecture.
- Multi-language scanning: Python, JavaScript, TypeScript, Java
- Metrics calculation: LOC, cyclomatic complexity, function length, nesting depth
- Code smell detection: long functions, large files, unused variables/imports, code duplication
- Dependency graph: visual representation of module dependencies
- Multiple report formats: JSON and HTML outputs
- Optional AI integration: code explanations, refactoring suggestions, issue prioritization
- Extensible architecture: easy to add new languages and rules
```bash
git clone https://github.com/basteez/d3bg
cd d3bg
pip install -r requirements.txt
pip install -e .
```

```bash
# Build the image
docker build -t d3bg .
```
```bash
# Analyze a project (reports saved in the project directory)
docker run -v $(pwd)/my-project:/code d3bg analyze . --output-json report.json --output-html report.html
# The reports will be saved in $(pwd)/my-project/

# With AI integration
docker run -v $(pwd)/my-project:/code \
  -e LLM_API_KEY=your-api-key \
  -e LLM_PROVIDER=openai \
  d3bg analyze . --output-json report.json --ai

# Analyze the examples included in the project
docker run -v $(pwd)/examples:/code d3bg analyze . --output-html report.html
```

Note: the working directory inside the container is `/code`, which is your mounted project directory. All reports will be saved there by default.
```bash
ai-code-inspector analyze ./my-project --output-json report.json
ai-code-inspector analyze ./my-project --output-html report.html
```

```bash
export LLM_API_KEY=your-api-key
export LLM_PROVIDER=openai  # or anthropic

ai-code-inspector analyze ./my-project --output-json report.json --ai
ai-code-inspector summarize report.json
```

Set environment variables:

- `LLM_API_KEY`: your AI provider API key (optional)
- `LLM_PROVIDER`: `openai` or `anthropic` (default: `openai`)
With the `--ai` flag, the tool generates comprehensive, granular insights:
```bash
export LLM_API_KEY="your-api-key"
ai-code-inspector analyze path/to/code --ai --output-html report.html
```

AI Analysis includes:
- Executive Summary: Overall code quality assessment with main concerns and next steps
- Prioritized Issues: Issues ranked by impact and effort with specific guidance
- Detailed Recommendations: For each code smell, you get:
- Root cause explanation (1 sentence)
- Specific fix with code examples (2-3 sentences)
- Expected benefits
- Problematic Files: Top files with most issues, with LOC and function count
- File-level Analysis: Granular insights for each problematic module
For a comprehensive analysis based on Robert C. Martin's "Clean Code" principles:
```bash
export LLM_API_KEY="your-api-key"
ai-code-inspector analyze path/to/code --ai --clean-code --output-html report.html
```

Clean Code Review analyzes:
- Meaningful Names: Intention-revealing names, avoiding mental mapping, proper conventions
- Functions: Size, single responsibility, abstraction levels, argument count
- Comments: Good vs bad comments, when code should replace comments
- Formatting: Vertical and horizontal formatting, file organization
- Objects & Data Structures: Law of Demeter, proper abstraction, avoiding hybrids
- Error Handling: Exceptions vs return codes, proper context
- SOLID Principles: Single Responsibility, Open/Closed, Dependency Inversion
- Code Smells: Dead code, duplication, feature envy, inappropriate intimacy
Clean Code Output includes:
- Clean Code Score (0-10)
- Detailed violation reports with before/after examples
- Impact assessment (readability, maintainability, testability)
- Prioritized improvement list
- Reading recommendations from Clean Code book
Example Output:

```
🤖 Generating detailed AI insights...

✅ AI Summary:
The codebase has 8 quality issues across 3 files. Main concerns are long
functions, high complexity, and deep nesting in complex_legacy.py.

Top AI Recommendations:
1. long_function (warning) - examples/complex_legacy.py
   Function: process_data
   Root Cause: Function exceeds 30 lines.
   Specific Fix: Break down into smaller functions (clean_data, transform_data,
   analyze_data) following single responsibility principle...
   Expected Benefit: Improved maintainability, testability, and readability.

⚠️ Most Problematic Files:
• complex_legacy.py: 7 issues (124 LOC)

🧹 Performing Clean Code Review on most problematic files...
Reviewing: complex_legacy.py...
✅ Clean Code Review completed
✅ Generated 1 Clean Code reviews
```
The HTML report includes both AI Insights and Clean Code Reviews sections with detailed, actionable recommendations.
```json
{
  "summary": {
    "total_files": 42,
    "total_loc": 5432,
    "languages": {"python": 30, "javascript": 12}
  },
  "files": [...],
  "functions": [...],
  "smells": [...],
  "graph": "path/to/dependencies.dot"
}
```

Interactive HTML report with:
- File overview table
- Code smells list with severity
- Dependency graph visualization
- AI suggestions (if enabled)
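The JSON report is easy to consume programmatically. A minimal sketch of a report consumer (the `summarize_report` helper is hypothetical, not part of the tool; the field names follow the JSON schema shown above):

```python
import json

def summarize_report(path: str) -> str:
    """Load a d3bg JSON report and return a one-line overview."""
    with open(path) as fh:
        report = json.load(fh)
    s = report["summary"]
    # Format the per-language file counts, e.g. "python: 30, javascript: 12"
    langs = ", ".join(f"{k}: {v}" for k, v in s["languages"].items())
    return (f"{s['total_files']} files, {s['total_loc']} LOC "
            f"({langs}); {len(report['smells'])} smells")
```

This kind of one-liner is handy in CI, where a script can fail the build when the smell count crosses a threshold.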
- Long functions: > 30 LOC (warning), > 200 LOC (severe)
- High complexity: Cyclomatic complexity > 10
- Large files: > 1000 LOC
- Unused variables: Simple static analysis
- Unused imports: Detect unreferenced imports
- Code duplication: Hash-based snippet detection
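As an illustration of how threshold-based detection like the rules above can work, here is a simplified sketch of a long-function check using Python's `ast` module (not the tool's actual implementation; only the 30/200 LOC thresholds are taken from the list above):

```python
import ast

LONG_FUNCTION_WARNING = 30   # LOC threshold for a warning (from the rules above)
LONG_FUNCTION_SEVERE = 200   # LOC threshold for a severe finding

def find_long_functions(source: str) -> list[tuple[str, int, str]]:
    """Return (function name, length in lines, severity) for long functions."""
    smells = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            # end_lineno is available on AST nodes since Python 3.8
            length = node.end_lineno - node.lineno + 1
            if length > LONG_FUNCTION_SEVERE:
                smells.append((node.name, length, "severe"))
            elif length > LONG_FUNCTION_WARNING:
                smells.append((node.name, length, "warning"))
    return smells
```

A real multi-language analyzer would do the equivalent walk over tree-sitter parse trees instead of Python's own AST.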
To add support for a new language:

- Add the tree-sitter grammar to dependencies
- Update `parser_manager.py` with a new language handler
- Implement metric extractors in `metrics.py`
- Add smell rules in `smells.py`

To add a new smell, edit `smells.py` and add new detection functions following the existing pattern.
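Since the exact pattern used in `smells.py` is not shown here, the following is a hypothetical sketch of the shape such a detection function might take, using a deep-nesting check as the example (the function name and return format are assumptions):

```python
import ast

def detect_deep_nesting(source: str, max_depth: int = 4) -> list[dict]:
    """Hypothetical smell rule: flag functions whose block nesting
    exceeds max_depth. The real signature in smells.py may differ."""
    block_types = (ast.If, ast.For, ast.While, ast.With, ast.Try)

    def depth(node: ast.AST, current: int = 0) -> int:
        # Recursively find the deepest chain of nested block statements.
        deepest = current
        for child in ast.iter_child_nodes(node):
            nxt = current + 1 if isinstance(child, block_types) else current
            deepest = max(deepest, depth(child, nxt))
        return deepest

    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            d = depth(node)
            if d > max_depth:
                findings.append({"smell": "deep_nesting",
                                 "function": node.name, "depth": d})
    return findings
```

Keeping each rule as a standalone function that takes source (or a parse tree) and returns a list of findings makes rules easy to register, test, and toggle independently.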
Run the test suite and linter:

```bash
pytest
ruff check .
```

License: MIT
- Python 3.11+
- tree-sitter (with language bindings)
- graphviz (for dependency graphs)
Built with ❤️ by Tiziano Basile