An intelligent, privacy-first Instagram account analyzer powered by cutting-edge Generative AI and Local LLM technology. Uses sophisticated AI reasoning to detect spam, inappropriate content, and bot accounts while maintaining complete privacy through local processing.
- 🤖 Local AI Analysis - Ollama + LLaMA 3.2 with chain-of-thought reasoning
- 🔒 Privacy First - 100% local processing, zero cloud dependencies
- 🎯 Multi-Dimensional Scoring - Spam, content, bot, quality detection
- ⚡ Smart Caching - 90%+ cache hits, sub-second responses
- 📊 Batch Processing - Analyze hundreds of accounts efficiently
- 🔍 Transparent AI - See exactly how AI reaches each decision
- 📥 Export Results - Download analysis as CSV
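As a rough illustration of the smart-caching idea (MD5-derived keys with a 24-hour TTL), here is a minimal standard-library sketch. The real project uses DiskCache and Redis; the function names and dict-based store here are hypothetical stand-ins:

```python
import hashlib
import time

CACHE_TTL = 24 * 60 * 60  # 24 hours, matching the documented TTL
_cache = {}  # stand-in for DiskCache/Redis

def cache_key(username: str) -> str:
    # Hypothetical: derive a stable MD5 key from the account username
    return hashlib.md5(username.lower().encode("utf-8")).hexdigest()

def get_cached(username: str):
    entry = _cache.get(cache_key(username))
    if entry is None:
        return None
    value, stored_at = entry
    if time.time() - stored_at > CACHE_TTL:
        return None  # expired entry is treated as a miss
    return value

def set_cached(username: str, analysis: dict) -> None:
    _cache[cache_key(username)] = (analysis, time.time())
```

Because the key is derived from the lowercased username, repeated lookups for the same account hit the cache regardless of capitalization, which is how hit rates above 90% become plausible on repeat batch runs.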
```bash
# Setup and run
git clone <your-repo-url>
cd instasanity
make setup
make run
# Access at http://localhost:8501
```

Requirements: Docker, 4GB+ RAM, optional GPU.
```
┌─────────────────────────────────────────────────────────────┐
│                      🧠 LOCAL LLM LAYER                      │
├─────────────────┬──────────────────┬────────────────────────┤
│  Streamlit UI   │  Analysis Core   │   🦙 Ollama Engine      │
│  • Real-time    │  • Multi-criteria│   • Chain-of-Thought   │
│  • Interactive  │  • Batch Process │   • Content Moderation │
└─────────────────┴──────────────────┴────────────────────────┘
         │                  │                    │
         ▼                  ▼                    ▼
┌─────────────────┐  ┌──────────────┐  ┌─────────────────┐
│  Instagram API  │  │ Smart Cache  │  │  Redis Session  │
│  (Data Source)  │  │  (24h TTL)   │  │  (State Mgmt)   │
└─────────────────┘  └──────────────┘  └─────────────────┘
```
```python
# Multi-step reasoning for transparent decisions
analysis_prompt = """
Think step by step:
1. SPAM: promotional language, follow-for-follow patterns
2. CONTENT: inappropriate material, violence, hate speech
3. BOTS: username patterns, behavioral signals
4. QUALITY: authenticity, completeness
Return JSON with reasoning, scores, confidence...
"""
```

- Smart Caching: MD5 cache keys, 24h TTL, 90%+ hit rates
- Batch Processing: 200+ accounts in <2 minutes with caching
- Fallback Intelligence: Rule-based backup when LLM unavailable
- Real-time Metrics: Cache efficiency, confidence scoring
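A hedged sketch of how a chain-of-thought prompt like the one above could be sent to a local Ollama server and its JSON answer extracted. The `/api/generate` endpoint, `stream` flag, and `response` field are standard Ollama API defaults, but the parsing helper and returned field names are assumptions, not the project's actual code:

```python
import json
import re
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def extract_json(text: str) -> dict:
    """Pull the first JSON object out of a (possibly chatty) LLM reply."""
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if match is None:
        raise ValueError("no JSON object found in LLM output")
    return json.loads(match.group(0))

def analyze(prompt: str, model: str = "llama3.2") -> dict:
    """Send the analysis prompt to a local Ollama instance and parse the result."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(OLLAMA_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Ollama returns the completion text in the "response" field
    return extract_json(body["response"])
```

The regex-based extraction is what makes a rule-based fallback practical: if `extract_json` raises, the caller can drop back to heuristic scoring instead of failing the whole batch.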
- Local Processing: No external API calls, Ollama runs locally
- Data Isolation: All analysis happens on your machine
- Session Management: Secure Instagram API integration
- Zero Telemetry: No usage data sent anywhere
| Component | Technology | Purpose |
|---|---|---|
| AI/ML | Ollama + LLaMA 3.2 | Local LLM analysis engine |
| Frontend | Streamlit | Interactive web interface |
| Backend | Python 3.11+ | Core logic & Instagram API |
| Caching | DiskCache + Redis | Performance optimization |
| Deployment | Docker Compose | Service orchestration |
- 0-3: Keep following (good accounts)
- 4-6: Review manually (questionable accounts)
- 7-10: Consider unfollowing (problematic accounts)
Each analysis includes an overall score, detailed reasoning, a confidence level, content flags, and account metadata.
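The score bands above map directly to a recommended action; a minimal sketch (the function name is illustrative, not from the project):

```python
def recommend(score: float) -> str:
    """Map a 0-10 analysis score to the documented action bands."""
    if score <= 3:
        return "keep"      # 0-3: good account, keep following
    if score <= 6:
        return "review"    # 4-6: questionable, review manually
    return "unfollow"      # 7-10: problematic, consider unfollowing
```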
```bash
make setup   # Install AI models
make run     # Start application
make stop    # Stop services
make logs    # View logs
make clean   # Clean containers
```

- InstaSanity (8501) - Main application
- Ollama (11434) - AI model server
- Redis (6379) - Caching layer
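The three services and ports above suggest a Compose layout along these lines; service names, image tags, and the volume name are assumptions for illustration, not the project's actual docker-compose.yml:

```yaml
services:
  app:                      # InstaSanity (Streamlit UI) on 8501
    build: .
    ports: ["8501:8501"]
    depends_on: [ollama, redis]
  ollama:                   # local LLM model server on 11434
    image: ollama/ollama
    ports: ["11434:11434"]
    volumes: [ollama_models:/root/.ollama]
  redis:                    # session/caching layer on 6379
    image: redis:7-alpine
    ports: ["6379:6379"]
volumes:
  ollama_models:            # persist downloaded models between runs
```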
```bash
make dev-build && make dev-run   # Development mode
make logs-app                    # Application logs
make logs-ollama                 # AI model logs
```

Key settings in config.py:
- Ollama model and host configuration
- Cache TTL and directory settings
- Instagram API rate limits
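The settings listed above might look like this in config.py; every name and default below is an assumption for illustration, not the file's actual contents:

```python
# Hypothetical config.py layout; names and defaults are illustrative only
OLLAMA_HOST = "http://localhost:11434"   # Ollama model server
OLLAMA_MODEL = "llama3.2"                # local LLM used for analysis

CACHE_DIR = ".cache"                     # DiskCache directory
CACHE_TTL_SECONDS = 24 * 60 * 60         # 24h TTL, per the docs

INSTAGRAM_REQUESTS_PER_MINUTE = 30       # conservative API rate limit
```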
- Fork the repository
- Create a feature branch
- Test with `make dev-run`
- Submit a pull request
MIT License. For educational and personal use only. Always review AI results manually before acting on them, and respect Instagram's Terms of Service.