MERIT: Monitoring, Evaluation, Reporting, Inspection, Testing

Python 3.8+ | License: MIT

A comprehensive framework for evaluating, monitoring, and testing AI systems, particularly those powered by Large Language Models (LLMs). MERIT provides tools for performance monitoring, evaluation metrics, RAG system testing, and reporting.

🚀 Features

📊 Monitoring & Observability

  • Real-time LLM monitoring with customizable metrics
  • Performance tracking (latency, throughput, error rates)
  • Cost monitoring and estimation (see the sketch after this list)
  • Usage analytics and token volume tracking
  • Multi-backend storage (SQLite, MongoDB, file-based)
  • Live dashboard with interactive metrics
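
For cost and usage tracking, the same log_simple_interaction call shown in the Quick Start can carry token and cost information. This is a minimal sketch; the extra field names below are assumptions for illustration, not a documented schema.

from merit.monitoring.service import MonitoringService

monitor = MonitoringService()

# 'prompt_tokens', 'completion_tokens', and 'cost_usd' are assumed field
# names used for illustration; check the monitoring docs for the exact schema.
monitor.log_simple_interaction({
    'user_message': 'Summarize this document.',
    'llm_response': 'Here is a summary...',
    'latency': 1.2,
    'model': 'gpt-3.5-turbo',
    'prompt_tokens': 850,
    'completion_tokens': 120,
    'cost_usd': 0.0015
})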

🧪 Evaluation & Testing

  • RAG system evaluation with comprehensive metrics
  • LLM performance testing with custom test sets
  • Automated evaluation using LLM-based evaluators
  • Test set generation for systematic testing (see the sketch after this list)
  • Multi-model evaluation support
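
For systematic testing, a test set can be as simple as a list of recorded query/response/context triples run through the evaluator shown in the Quick Start. This is a minimal sketch that reuses only the RAGEvaluator.evaluate call demonstrated below; in practice the cases could come from the testset_generation tools.

from merit.evaluation.evaluators.rag import RAGEvaluator

evaluator = RAGEvaluator()

# Recorded outputs from your system (placeholders here)
test_cases = [
    {"query": "What is machine learning?",
     "response": "Machine learning is a subset of AI...",
     "context": ["Document 1 content..."]},
    {"query": "What is retrieval-augmented generation?",
     "response": "RAG combines document retrieval with generation...",
     "context": ["Document 2 content..."]},
]

for case in test_cases:
    scores = evaluator.evaluate(
        query=case["query"],
        response=case["response"],
        context=case["context"],
    )
    print(case["query"], scores)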

📈 Metrics & Analytics

  • Correctness, Faithfulness, Relevance for RAG systems
  • Coherence and Fluency metrics
  • Context Precision evaluation
  • Custom metric development framework (sketched after this list)
  • Performance benchmarking
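
The custom metric framework lives under merit/metrics, but its base-class API is not shown in this README, so the sketch below is purely illustrative of the shape a custom metric might take rather than MERIT's actual interface.

# Illustrative only: not MERIT's real metric base class
class ResponseLengthMetric:
    """Scores how close a response stays to a target word budget."""

    name = "response_length"

    def __call__(self, response: str, target_words: int = 100) -> float:
        words = len(response.split())
        # 1.0 at or under the budget, decaying toward 0 as the response grows
        return min(1.0, target_words / max(words, 1))


metric = ResponseLengthMetric()
print(metric("Machine learning is a subset of AI..."))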

🔧 Integration & APIs

  • Simple 3-line integration for existing applications
  • REST API for remote monitoring
  • CLI tools for configuration and execution
  • Multiple AI provider support (OpenAI, Google, custom)

📦 Installation

Basic Installation

pip install merit-ai

Full Installation with All Dependencies

pip install merit-ai[all]

Development Installation

git clone https://github.com/adithyakpb/merit.git
cd merit
pip install -e .[dev]

🚀 Quick Start

1. Simple Integration (3 Lines!)

from merit.monitoring.service import MonitoringService

# Initialize monitoring
monitor = MonitoringService()

# Log an interaction
monitor.log_simple_interaction({
    'user_message': 'Hello, how are you?',
    'llm_response': 'I am doing well, thank you!',
    'latency': 0.5,
    'model': 'gpt-3.5-turbo'
})

2. RAG System Evaluation

from merit.evaluation.evaluators.rag import RAGEvaluator

# Initialize evaluator
evaluator = RAGEvaluator()

# Evaluate RAG response
results = evaluator.evaluate(
    query="What is machine learning?",
    response="Machine learning is a subset of AI...",
    context=["Document 1 content...", "Document 2 content..."]
)

print(f"Relevance: {results['relevance']}")
print(f"Faithfulness: {results['faithfulness']}")

3. CLI Usage

# Start evaluation with config file
merit start --config my_config.py

# Monitor your application
merit monitor --config monitoring_config.py

📚 Examples

Basic Chat Application Integration

from merit.monitoring.service import MonitoringService
from datetime import datetime

class ChatApp:
    def __init__(self):
        # Initialize MERIT monitoring
        self.monitor = MonitoringService()
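        # self.llm_client is assumed to be your existing LLM client,
        # configured elsewhere; it is used in process_message below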
    
    def process_message(self, user_message: str) -> str:
        start_time = datetime.now()
        
        # Your existing chat logic here
        response = self.llm_client.chat(user_message)
        
        end_time = datetime.now()
        
        # Log interaction with MERIT
        self.monitor.log_simple_interaction({
            'user_message': user_message,
            'llm_response': response,
            'latency': (end_time - start_time).total_seconds(),
            'model': 'gpt-3.5-turbo',
            'timestamp': end_time.isoformat()
        })
        
        return response

Advanced RAG System with MERIT

from merit.evaluation.evaluators.rag import RAGEvaluator
from merit.monitoring.service import MonitoringService

class RAGSystem:
    def __init__(self):
        self.evaluator = RAGEvaluator()
        self.monitor = MonitoringService()
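        # self.retriever and self.llm are assumed to be your existing
        # retriever and LLM client, configured elsewhere; used in query below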
    
    def query(self, user_question: str):
        # Retrieve relevant documents
        documents = self.retriever.search(user_question)
        
        # Generate response
        response = self.llm.generate(user_question, documents)
        
        # Evaluate with MERIT
        evaluation = self.evaluator.evaluate(
            query=user_question,
            response=response,
            context=[doc.content for doc in documents]
        )
        
        # Monitor performance
        self.monitor.log_simple_interaction({
            'query': user_question,
            'response': response,
            'evaluation_scores': evaluation,
            'num_documents': len(documents)
        })
        
        return response, evaluation

πŸ—οΈ Project Structure

merit/
├── api/                    # API clients (OpenAI, Google, etc.)
├── core/                   # Core models and utilities
├── evaluation/             # Evaluation framework
│   ├── evaluators/         # LLM and RAG evaluators
│   └── templates/          # Evaluation templates
├── knowledge/              # Knowledge base management
├── metrics/                # Metrics framework
│   ├── rag.py              # RAG-specific metrics
│   ├── llm_measured.py     # LLM-based metrics
│   └── monitoring.py       # Monitoring metrics
├── monitoring/             # Monitoring service
│   └── collectors/         # Data collectors
├── storage/                # Storage backends
├── templates/              # Dashboard and report templates
└── testset_generation/     # Test set generation tools

📊 Available Metrics

RAG Metrics

  • Correctness: Accuracy of generated responses
  • Faithfulness: Adherence to source documents
  • Relevance: Response relevance to query
  • Coherence: Logical flow and consistency
  • Fluency: Natural language quality
  • Context Precision: Quality of retrieved context

Monitoring Metrics

  • Latency: Response time tracking
  • Throughput: Requests per second
  • Error Rate: Failure percentage
  • Cost: Token usage and cost estimation
  • Usage: Model and feature usage patterns

🔧 Configuration

Basic Configuration File

# merit_config.py
from merit.config.models import MeritMainConfig

config = MeritMainConfig(
    evaluation={
        "evaluator": "rag",
        "metrics": ["relevance", "faithfulness", "correctness"]
    },
    monitoring={
        "storage_type": "sqlite",
        "collection_interval": 60,
        "retention_days": 30
    }
)
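
This file can then be passed to the CLI commands shown in the Quick Start:

merit start --config merit_config.py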

Advanced Configuration

# advanced_config.py
from merit.config.models import MeritMainConfig

config = MeritMainConfig(
    evaluation={
        "evaluator": "rag",
        "metrics": ["relevance", "faithfulness", "correctness"],
        "test_set": {
            "path": "test_questions.json",
            "size": 100
        }
    },
    monitoring={
        "storage_type": "mongodb",
        "storage_config": {
            "uri": "mongodb://localhost:27017",
            "database": "merit_metrics"
        },
        "metrics": ["latency", "cost", "error_rate"],
        "collection_interval": 30,
        "retention_days": 90
    },
    knowledge_base={
        "type": "vector_store",
        "path": "./knowledge_base"
    }
)

🎯 Use Cases

1. Production LLM Monitoring

Monitor your deployed LLM applications in real-time with performance metrics, cost tracking, and error monitoring.

2. RAG System Development

Evaluate and improve your RAG systems with comprehensive metrics and automated testing.

3. Model Comparison

Compare different models and configurations using standardized evaluation metrics.
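
A minimal sketch of such a comparison, reusing only the RAGEvaluator.evaluate call from the Quick Start; the candidate responses below stand in for outputs captured from each model.

from merit.evaluation.evaluators.rag import RAGEvaluator

evaluator = RAGEvaluator()

query = "What is machine learning?"
context = ["Document 1 content..."]

# Outputs captured from two different models (placeholders here)
candidates = {
    "gpt-3.5-turbo": "Machine learning is a subset of AI...",
    "my-fine-tuned-model": "Machine learning lets systems learn from data...",
}

for model_name, response in candidates.items():
    scores = evaluator.evaluate(query=query, response=response, context=context)
    print(model_name, scores)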

4. Quality Assurance

Implement automated testing for LLM applications with custom test sets and evaluation criteria.
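
For example, evaluation scores can be asserted against thresholds in an ordinary pytest test. This sketch assumes the evaluator returns numeric scores, as the dictionary access in the Quick Start suggests; the 0.7 thresholds and the test data are placeholders.

from merit.evaluation.evaluators.rag import RAGEvaluator

def test_rag_response_quality():
    evaluator = RAGEvaluator()
    results = evaluator.evaluate(
        query="What is machine learning?",
        response="Machine learning is a subset of AI...",
        context=["Document 1 content..."],
    )
    # Thresholds are placeholders; tune them for your application
    assert results["relevance"] >= 0.7
    assert results["faithfulness"] >= 0.7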

🤝 Contributing

We welcome contributions! Please see our Contributing Guidelines for details.

Development Setup

git clone https://github.com/your-username/merit.git
cd merit
pip install -e .[dev]
pytest tests/

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

πŸ™ Acknowledgments

  • Built with modern Python practices and Pydantic for type safety
  • Inspired by the need for comprehensive AI system evaluation
  • Designed for simplicity and ease of integration

📞 Support


MERIT: Making AI systems more reliable, one evaluation at a time. 🚀
