🤖 NPS Analytics RAG System

AI-Powered Multi-Dimensional NPS Analysis and Insight Generation Platform

Python 3.8+ Streamlit LangChain License: MIT

📋 Table of Contents

  • 🎯 Overview
  • 🚀 Key Features
  • 🏗️ System Architecture
  • ⚙️ Installation
  • 📖 Usage
  • 🔧 Configuration
  • 📚 API Documentation
  • 🛠️ Troubleshooting
  • 🤝 Contributing
  • 📄 License
  • 📞 Support
  • 🙏 Acknowledgments

🎯 Overview

The NPS Analytics RAG System is an AI-powered analysis platform with a Perplexity.ai-style real-time streaming interface for comprehensive Net Promoter Score (NPS) analysis. It leverages a multi-agent architecture to process natural language queries and generate actionable insights from multi-dimensional customer experience data.

Core Features

  • 🤖 4 Specialized AI Agents (SQL, Explanation, Visualization, Validation)
  • Real-time Streaming of analysis results
  • 🔍 Natural Language Query processing
  • 📊 Interactive Visualizations with Plotly
  • Automatic Validation and confidence scoring

🚀 Key Features

1. Intelligent Query Processing

Natural language queries are seamlessly translated to SQL and executed:

✅ "Show me NPS scores for the US market"
✅ "What's the product-wise NPS breakdown?"
✅ "Compare R-NPS competitive scores year-over-year"
✅ "Show customer journey NPS for TV products in 2024"

2. Multi-Dimensional Analysis

  • Year-over-Year Gap Analysis: Track performance trends across time periods
  • Competitive Gap Analysis: Benchmark against competitors
  • Improvement Analysis: Identify trends and improvement opportunities
  • Segmentation Analysis: Deep dive by customer demographics, products, and channels
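
As a rough illustration of the first two analyses, the sketch below computes year-over-year and competitive gaps with pandas. It assumes a DataFrame loaded from the nps_scorecard table described under Configuration (columns year, company, nps_score); the nps_gaps function and its own_company parameter are illustrative, not part of the repository.

import pandas as pd

# Illustrative sketch: year-over-year and competitive NPS gaps from a
# DataFrame with nps_scorecard columns (year, company, nps_score).
def nps_gaps(df: pd.DataFrame, own_company: str) -> pd.DataFrame:
    yearly = df.groupby(["year", "company"], as_index=False)["nps_score"].mean()
    own = yearly[yearly["company"] == own_company].sort_values("year").copy()
    own["yoy_gap"] = own["nps_score"].diff()          # gap vs. previous year
    rivals = yearly[yearly["company"] != own_company]
    best_rival = rivals.groupby("year")["nps_score"].max()
    own["competitive_gap"] = own["nps_score"] - own["year"].map(best_rival)
    return own

Swapping company for market, product_category, or channel in the same grouping gives the segmentation views listed above.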

3. Real-time Streaming UI

🔍 SQL Agent      ████████████ 100%  ✅ Complete (2.1s)
🤖 Explanation    ████████████  85%  🔄 Analyzing gaps...
📈 Chart          ████░░░░░░░░  30%  🎨 Generating chart...
✅ Validation     ░░░░░░░░░░░░   0%  ⏳ Waiting

🏗️ System Architecture

Multi-Agent Architecture

graph TD
    A[User Query] --> B[SQL Agent]
    B --> C[Explanation Agent]
    C --> D[Chart Agent]
    D --> E[Validation Agent]
    E --> F[Results Display]

    G[LangGraph Orchestrator] --> B
    G --> C
    G --> D
    G --> E

    H[Streamlit UI] --> A
    F --> H

Core Components

Component | Role | Tech Stack
SQL Agent | Natural language → SQL conversion, data retrieval | LangChain, SQLite
Explanation Agent | Data interpretation, insight generation | OpenAI GPT-4, statistical analysis
Chart Agent | Visualization generation | Plotly, interactive charts
Validation Agent | Result verification, confidence scoring | Data quality algorithms
Orchestrator | Agent coordination, workflow management | LangGraph
Streaming UI | Real-time interface | Streamlit, async processing
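
Since the orchestration layer is LangGraph, the table above maps naturally onto a small state graph. The sketch below shows one way such a pipeline could be wired; the AnalysisState shape, node names, and build_workflow helper are assumptions made for illustration, not the project's actual code.

from typing import Any, TypedDict

from langgraph.graph import END, StateGraph

class AnalysisState(TypedDict, total=False):
    query: str
    sql_result: Any
    explanation: Any
    chart: Any
    validation: Any

def build_workflow(sql_node, explanation_node, chart_node, validation_node):
    # Each *_node is a callable that takes and returns an AnalysisState.
    graph = StateGraph(AnalysisState)
    graph.add_node("sql", sql_node)
    graph.add_node("explanation", explanation_node)
    graph.add_node("chart", chart_node)
    graph.add_node("validation", validation_node)
    graph.set_entry_point("sql")
    graph.add_edge("sql", "explanation")
    graph.add_edge("explanation", "chart")
    graph.add_edge("chart", "validation")
    graph.add_edge("validation", END)
    return graph.compile()

The compiled graph can then be driven with invoke() or streamed with astream(), which is the kind of hook a streaming UI consumes.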

Agent Execution Flow

Each agent extends the BaseAgent class, which provides the following (a condensed sketch appears after this list):

  • Standardized execution lifecycle with timing and error handling
  • Streaming update capabilities via StreamingUpdate class
  • Common result structure (AgentResult) with confidence scoring
  • Async execution pattern with AgentStatus enum
  • Built-in timeout management and error recovery
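
A condensed sketch of what such a base class can look like, using the names listed above (AgentStatus, StreamingUpdate, AgentResult, emit_progress); field names and signatures here are illustrative assumptions rather than the repository's exact definitions.

import asyncio
import time
from dataclasses import dataclass
from enum import Enum
from typing import Any, Optional

class AgentStatus(Enum):
    WAITING = "waiting"
    RUNNING = "running"
    COMPLETE = "complete"
    FAILED = "failed"

@dataclass
class StreamingUpdate:
    agent_name: str
    status: AgentStatus
    message: str = ""
    progress: float = 0.0

@dataclass
class AgentResult:
    agent_name: str
    data: Any = None
    confidence_score: float = 0.0
    execution_time: float = 0.0
    error: Optional[str] = None

class BaseAgent:
    name = "base"

    async def emit_progress(self, message: str, progress: float) -> None:
        # Hook for pushing StreamingUpdate objects to the UI (wiring omitted).
        pass

    async def execute(self, query: str, timeout: float = 30.0) -> AgentResult:
        # Standardized lifecycle: timing, timeout handling, and error recovery.
        start = time.time()
        try:
            result = await asyncio.wait_for(self._execute_impl(query), timeout)
            result.execution_time = time.time() - start
            return result
        except Exception as exc:  # asyncio.TimeoutError is caught here as well
            return AgentResult(agent_name=self.name, error=str(exc),
                               execution_time=time.time() - start)

    async def _execute_impl(self, query: str) -> AgentResult:
        raise NotImplementedError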

⚙️ Installation

1. Prerequisites

  • Python 3.8 or higher
  • OpenAI API key
  • 4GB RAM minimum
  • 2GB disk space

2. Clone Repository

git clone <repository-url>
cd nps-analytics-system

3. Setup Virtual Environment

# Create virtual environment
python -m venv venv

# Activate (Windows)
venv\Scripts\activate

# Activate (Linux/Mac)
source venv/bin/activate

4. Install Dependencies

pip install -r requirements.txt

5. Environment Configuration

Create a .env file in the project root:

# Required
OPENAI_API_KEY=your_openai_api_key_here

# Optional
ANTHROPIC_API_KEY=your_anthropic_api_key_here
DATABASE_PATH=./data/database.sqlite
CSV_FILE_PATH=./data/nps_scorecard.csv
STREAMLIT_PORT=8501
DEBUG_MODE=false

6. Initialize Database

# Prepare your CSV file in data/ folder
# Then initialize database
python data_manager.py
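
data_manager.py owns this step; for orientation, the initialization amounts to roughly the following. This is a sketch only, using the default paths from the .env example and the nps_scorecard table from the Configuration section.

import sqlite3
import pandas as pd

# Rough sketch of the CSV-to-SQLite load; data_manager.py is the
# authoritative implementation. Paths match the .env defaults above.
df = pd.read_csv("./data/nps_scorecard.csv")
with sqlite3.connect("./data/database.sqlite") as conn:
    df.to_sql("nps_scorecard", conn, if_exists="replace", index=False)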

7. Launch Application

streamlit run streamlit_app.py

Access the application at http://localhost:8501

📖 Usage

Query Input Methods

1. Direct Input

Type natural language questions:

"What's the NPS for the US market?"
"Compare TV segment NPS across brands"
"Show 2024 customer journey NPS trends"

2. Example Queries

Choose from 6 categories:

  • Geographic Analysis: Regional market analysis
  • Product Analysis: Product category breakdown
  • Competitive Analysis: Brand comparison and gaps
  • Trend Analysis: Time series analysis
  • Customer Journey: CEJ stage analysis
  • Segmentation: Demographics and channel analysis

3. Query History

Reuse or modify previous queries for quick analysis

Result Interpretation

Real-time Progress

  • Agent-by-agent execution status with progress bars
  • Streaming intermediate results for immediate feedback

Analysis Result Tabs

  1. 📊 Data: SQL query results and raw data tables
  2. 💡 Insights: AI-generated analysis and key findings
  3. 📈 Charts: Interactive Plotly visualizations
  4. ✅ Validation: Confidence scores and data quality metrics

🔧 Configuration

Environment Variables (.env)

# API Keys (Required)
OPENAI_API_KEY=your_key_here
ANTHROPIC_API_KEY=optional_key_here

# Database Configuration
DATABASE_PATH=./data/database.sqlite
CSV_FILE_PATH=./data/nps_scorecard.csv

# Server Configuration
STREAMLIT_PORT=8501
WEBSOCKET_PORT=8502

# Feature Flags
ENABLE_STREAMING=true
DEBUG_MODE=false
LOG_LEVEL=INFO

# Performance Tuning
MAX_CONCURRENT_AGENTS=4
AGENT_TIMEOUT=30
STREAM_BUFFER_SIZE=1024
MAX_CHART_POINTS=1000

# Model Configuration
DEFAULT_LLM_MODEL=gpt-4o
TEMPERATURE=0.1

# UI Configuration
CHART_THEME=plotly_white

Database Schema

CREATE TABLE nps_scorecard (
    id INTEGER PRIMARY KEY,
    year INTEGER,
    quarter TEXT,
    market VARCHAR(50),           -- Geographic market (US, Korea, Global)
    company VARCHAR(50),          -- Company/Brand name
    product_category VARCHAR(50), -- TV, Refrigerator, Washing Machine, etc.
    product_type VARCHAR(100),    -- Front Loader, OLED, etc.
    price_segment VARCHAR(20),    -- Premium, Mid-range, Budget
    cej_stage VARCHAR(50),        -- Customer journey stage
    channel VARCHAR(50),          -- Brand store, Online, etc.
    customer_age_group VARCHAR(20), -- Age demographics
    customer_gender VARCHAR(10),   -- Gender demographics
    nps_score DECIMAL(5,2),       -- NPS score
    r_nps_score DECIMAL(5,2),     -- Relationship NPS score
    sample_size INTEGER           -- Sample size
);
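
As a quick sanity check, the "US market" example from the Key Features section reduces to a simple aggregation over this table. The snippet below runs such a query directly against the default DATABASE_PATH; the exact SQL is illustrative and not necessarily what the SQL Agent generates.

import sqlite3

# Illustrative aggregation over the nps_scorecard schema above,
# run against the default DATABASE_PATH from the .env example.
SQL = """
    SELECT year, AVG(nps_score) AS avg_nps, SUM(sample_size) AS n
    FROM nps_scorecard
    WHERE market = 'US'
    GROUP BY year
    ORDER BY year
"""

with sqlite3.connect("./data/database.sqlite") as conn:
    for year, avg_nps, n in conn.execute(SQL):
        print(f"{year}: NPS {avg_nps:.1f} (n={n})")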

📚 API Documentation

StreamingOrchestrator

import asyncio

from orchestrator import StreamingOrchestrator

async def main():
    # Initialize orchestrator
    orchestrator = StreamingOrchestrator()
    query = "Show US market NPS"

    # Execute with streaming
    async for update in orchestrator.execute_workflow_stream(query):
        print(f"Agent: {update.agent_name}, Status: {update.status}")

    # Execute without streaming
    result = await orchestrator.execute_workflow(query)
    return result

asyncio.run(main())

Individual Agent Usage

import asyncio

from agents.sql_agent import SqlAgent
from agents.explanation_agent import ExplanationAgent

async def main():
    # SQL Agent
    sql_agent = SqlAgent()
    sql_result = await sql_agent.execute("Show US market NPS")

    # Explanation Agent
    explanation_agent = ExplanationAgent()
    explanation_result = await explanation_agent.execute(sql_result)
    return explanation_result

asyncio.run(main())

Custom Agent Development

import time

from agents.base_agent import BaseAgent, AgentResult

class CustomAgent(BaseAgent):
    async def _execute_impl(self, query: str) -> AgentResult:
        start_time = time.time()

        # Emit progress updates
        await self.emit_progress("Processing...", 0.5)

        # Your logic here (process_query is a placeholder for your own function)
        result = process_query(query)

        return AgentResult(
            agent_name=self.name,
            data=result,
            confidence_score=0.95,
            execution_time=time.time() - start_time
        )

🛠️ Troubleshooting

Common Issues

API Key Errors

❌ Error: OPENAI_API_KEY not set
✅ Solution: Configure API key in .env file

CSV File Not Found

❌ Error: CSV file not found
✅ Solution: Place your CSV file in data/ folder or use sample data

Port Conflicts

# Use different port
streamlit run streamlit_app.py --server.port 8502

Memory Issues

# Reduce resource usage in .env
MAX_CONCURRENT_AGENTS=2
STREAM_BUFFER_SIZE=512
MAX_CHART_POINTS=500

Debug Mode

DEBUG_MODE=true
LOG_LEVEL=DEBUG

Check logs in logs/ directory for detailed error information.

Performance Optimization

# Limit concurrent agents
MAX_CONCURRENT_AGENTS=2

# Adjust streaming buffer
STREAM_BUFFER_SIZE=1024

# Limit chart data points
MAX_CHART_POINTS=500

🤝 Contributing

Development Setup

# Install development dependencies
pip install -r requirements-dev.txt

# Code quality checks
black .
flake8 .
mypy .

Running Tests

# Unit tests
python -m pytest tests/

# Integration tests
python -m pytest tests/integration/

# Coverage report
pytest --cov=agents --cov-report=html

Code Standards

  • Follow PEP 8 style guidelines
  • Add docstrings to all public methods
  • Write unit tests for new features
  • Update documentation for API changes

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

📞 Support

🙏 Acknowledgments


NPS Analytics RAG System v1.0 🚀 Next-generation AI-powered customer experience analytics
