DevCompanion is a modular, explainable AI-assisted developer toolkit designed to help engineers diagnose and resolve complex technical issues. Unlike generic chatbots, DevCompanion uses a structured routing engine and specialized analysis modules to provide deterministic, high-confidence insights.
DevCompanion is built on a "Deterministic First, AI Second" philosophy. It prioritizes rule-based analysis for reliability and uses local Large Language Models (LLMs) via Ollama for optional enhancement.
- Modular Architecture: Easily extendable with new analysis modules.
- Smart Problem Router: Automatically detects input types (stack traces, application logs, or configuration files) and routes them to the correct module.
- Deterministic Analysis: Rule-based scoring for identifying failure layers and detecting anomalies or security risks.
- Buddy Mode (Interactive Chat): A specialized AI partner that understands the deterministic analysis and helps you resolve the issue step-by-step.
- Explainable Execution Trace: Full transparency into how the tool reached its conclusions.
- Local AI Support: Optional integration with Ollama for root cause hypothesis generation.
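The routing step described above can be sketched as a simple signal-scoring function. This is an illustrative sketch only: the type names, regex heuristics, and confidence formula here are assumptions, not DevCompanion's actual implementation.

```typescript
// Illustrative input-type detection: score each candidate type by how many
// of its regex signals match, then pick the highest-scoring type.
type InputType = "stacktrace" | "log" | "config";

interface RouteResult {
  type: InputType;
  confidence: number; // 0..1, fraction of signals that matched
}

// Hypothetical signal sets; a real router would use a richer catalogue.
const SIGNALS: Record<InputType, RegExp[]> = {
  stacktrace: [/^\s+at\s+.+\(.+:\d+:\d+\)/m, /\b\w*(Error|Exception)\b/],
  log: [/^\[?\d{4}-\d{2}-\d{2}[T ]\d{2}:\d{2}:\d{2}/m, /\b(INFO|WARN|ERROR|DEBUG)\b/],
  config: [/^\s*[\w.-]+\s*[:=]\s*\S+/m, /^\s*\[[\w.-]+\]\s*$/m],
};

function detectInputType(text: string): RouteResult {
  let best: RouteResult = { type: "log", confidence: 0 };
  for (const [type, patterns] of Object.entries(SIGNALS) as [InputType, RegExp[]][]) {
    const hits = patterns.filter((p) => p.test(text)).length;
    const confidence = hits / patterns.length;
    if (confidence > best.confidence) best = { type, confidence };
  }
  return best;
}
```

Because the scoring is pure regex matching, the same input always routes to the same module, which is what makes the routing step deterministic and traceable.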
devcompanion/
│
├── main.ts              # CLI Entry Point
├── router.ts            # Input Detection & Routing
├── state.ts             # Central Execution State
├── modules/             # Specialized Analysis Modules
│   ├── stacktrace.ts
│   ├── log.ts           # NEW: Log Analysis
│   ├── config.ts        # NEW: Configuration Analysis
│   └── base_module.ts
├── ai/                  # AI Enhancement Layer
│   └── ai_engine.ts
└── utils/               # Supporting Utilities
    ├── parser.ts
    ├── scoring.ts
    └── formatter.ts
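The `base_module.ts` file suggests a shared contract that each analysis module implements. A hypothetical sketch of what that contract could look like (interface and field names here are assumptions, not DevCompanion's real API):

```typescript
// Assumed result shape shared by all analysis modules.
interface AnalysisResult {
  predictedLayer: string;
  confidence: number;                  // 0..1
  layerScores: Record<string, number>; // per-layer percentage breakdown
  trace: string[];                     // explainable execution trace entries
}

abstract class BaseModule {
  constructor(protected readonly name: string) {}

  // Each specialized module supplies its own deterministic analysis.
  abstract analyze(input: string): AnalysisResult;

  // Shared helper: append a namespaced entry to the execution trace,
  // e.g. "[StackTraceModule] Starting deterministic analysis...".
  protected log(trace: string[], message: string): void {
    trace.push(`[${this.name}] ${message}`);
  }
}
```

A shared base class like this keeps the execution trace format uniform across modules, so the `--verbose` output reads consistently regardless of which module handled the input.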
npx tsx src/devcompanion/main.ts analyze path/to/error.txt
- --ai: Enable local LLM reasoning via Ollama (requires Ollama running locally).
- --verbose: Show the full execution trace, including routing and scoring details.
- --json: Output the entire state in JSON format for integration with other tools.
npx tsx src/devcompanion/main.ts chat path/to/error.txt "How do I fix this database connection issue?"
DevCompanion Interface
=== DevCompanion Analysis ===
Predicted Layer: Backend
Confidence: 62.0%
Layer Breakdown:
Frontend: 20%
Backend: 60%
Database: 5%
Network: 10%
Auth: 5%
Reproduction Plan:
1. Check server-side logs for detailed error messages.
2. Verify API endpoint availability using cURL or Postman.
3. Restart the backend service to clear transient states.
Debug Plan:
1. Trace the request through middleware and controllers.
2. Check for resource leaks (memory, file handles, connections).
3. Verify environment variables and configuration files.
=== Execution Trace ===
[Main] Loaded input from path/to/error.txt
[Router] Detected stacktrace (confidence 0.82)
[StackTraceModule] Starting deterministic analysis...
[StackTraceModule] Matched signals: HTTP 5xx, controller.js
[Scoring] Backend score increased to 60%
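The trace above can be reproduced by a small rule-based scorer: each matched signal adds a weight to its layer, and the totals are normalized to percentages. The specific signals, weights, and smoothing prior below are illustrative assumptions, not the actual contents of `scoring.ts`.

```typescript
type Layer = "Frontend" | "Backend" | "Database" | "Network" | "Auth";

interface Signal {
  pattern: RegExp;
  layer: Layer;
  weight: number; // hypothetical weight, chosen for illustration
}

const SIGNALS: Signal[] = [
  { pattern: /HTTP\/?\s?5\d\d/, layer: "Backend", weight: 3 },       // HTTP 5xx
  { pattern: /controller\.(js|ts)/, layer: "Backend", weight: 2 },
  { pattern: /ECONNREFUSED|ETIMEDOUT/, layer: "Network", weight: 3 },
  { pattern: /SQL|deadlock|ORA-\d+/, layer: "Database", weight: 3 },
];

function scoreLayers(input: string): Record<Layer, number> {
  // Start every layer at 1 (a smoothing prior) so no layer reads as 0%
  // on weak evidence; matched signals then add their weights.
  const raw: Record<Layer, number> = {
    Frontend: 1, Backend: 1, Database: 1, Network: 1, Auth: 1,
  };
  for (const s of SIGNALS) {
    if (s.pattern.test(input)) raw[s.layer] += s.weight;
  }
  const total = Object.values(raw).reduce((a, b) => a + b, 0);
  const pct = {} as Record<Layer, number>;
  for (const [layer, v] of Object.entries(raw) as [Layer, number][]) {
    pct[layer] = Math.round((v / total) * 100);
  }
  return pct;
}
```

With these example weights, an input matching both "HTTP 5xx" and "controller.js" puts Backend at 60%, mirroring the trace entry above. Because the scoring is pure pattern matching, the result is fully reproducible from the input.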
- Prompt Linter Module: Help engineers write better prompts for LLMs.
- Repo Analyzer Module: Scan repositories for common configuration issues.
- Network Traffic Analyzer: Analyze PCAP or HAR files for network issues.
- Clone the repository.
- Install dependencies:
npm install
- Install Ollama from ollama.com.
- Pull the Llama3 model:
ollama pull llama3
- Ensure Ollama is running at http://localhost:11434.
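Once Ollama is running, the AI enhancement layer can reach it over its local REST API. A minimal sketch using Ollama's `/api/generate` endpoint; the function name and prompt wording are illustrative, not DevCompanion's actual `ai_engine.ts`:

```typescript
// Ask a local Ollama instance for a root cause hypothesis, given a summary
// of the deterministic analysis. Uses Ollama's /api/generate endpoint with
// stream:false to get a single JSON response instead of a token stream.
async function generateHypothesis(analysisSummary: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3",
      prompt: `Given this deterministic analysis, suggest a likely root cause:\n${analysisSummary}`,
      stream: false,
    }),
  });
  if (!res.ok) throw new Error(`Ollama request failed: ${res.status}`);
  const data = (await res.json()) as { response: string };
  return data.response.trim();
}
```

Keeping the LLM call behind a small function like this is what makes the AI layer optional: if the request fails (Ollama not running), the deterministic analysis is still complete on its own.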
Developed by Aphator