
🧠 MetaAgent - AI Assistant with Hybrid Memory System

🚧 Work in Progress - Experimental AI system exploring meta-learning and adaptive prompting concepts

Python 3.8+ · License: MIT · Status: Experimental

🎯 What It Does

MetaAgent is an experimental AI assistant that learns from conversations using a hybrid memory architecture inspired by cognitive neuroscience. Unlike traditional chatbots, it:

  • Learns from every interaction using a dual-memory system
  • Adapts prompting strategies based on successful past conversations
  • Incorporates human feedback to improve response quality over time
  • Persists knowledge across sessions for continuous improvement

๐Ÿ—๏ธ Architecture Overview

┌─────────────────────────────────────────────────────────┐
│                    METAAGENT                            │
├─────────────────────────────────────────────────────────┤
│  USER INPUT → MEMORY SEARCH → LLM CALL → EVALUATION     │
│     ↓              ↓             ↓           ↓          │
│  RESPONSE ← MEMORY UPDATE ← SCORING ← FEEDBACK          │
└─────────────────────────────────────────────────────────┘

┌─────────────────┐    ┌─────────────────┐    ┌──────────────┐
│ Working Memory  │    │  Long-term      │    │   Scoring    │
│   (FAISS)       │◄──►│   Memory        │    │   System     │
│                 │    │  (Qdrant)       │    │              │
│ • RAM-based     │    │ • Persistent    │    │ • Auto: 40%  │
│ • 0.1ms speed   │    │ • 10ms speed    │    │ • Human: 60% │
│ • Last 100      │    │ • All history   │    │ • Threshold  │
└─────────────────┘    └─────────────────┘    └──────────────┘
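The pipeline in the first diagram can be written as straight-line Python. This is an illustrative sketch only: every helper name here (`run_turn`, `agent.adapt`, `agent.auto_score`, etc.) is invented for this README, and the real control flow lives in `agent.py` and `interactive_chat.py`.

```python
def run_turn(agent, user_input, get_feedback):
    """One pass through the METAAGENT loop sketched above (illustrative)."""
    # USER INPUT -> MEMORY SEARCH: look for similar past interactions
    matches = agent.memory.search(agent.embed(user_input))
    # MEMORY SEARCH -> LLM CALL: adapt the best past strategy, then generate
    strategy = agent.adapt(matches, user_input)
    response = agent.llm.generate(strategy)
    # EVALUATION: automatic metrics plus a human rating (1-5 stars)
    auto = agent.auto_score(response)
    stars = get_feedback(response)
    score = 0.4 * auto + 0.6 * (stars - 1) / 4
    # SCORING -> MEMORY UPDATE: store the strategy with its final score
    agent.memory.add(agent.embed(user_input), strategy, score)
    return response
```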

🚀 Quick Start

Prerequisites

  • Python 3.8+
  • Docker (for Qdrant vector database)
  • OpenAI API key (optional - works with mock LLM for testing)

1. Clone and Setup

# "tu-usuario" below is a placeholder (Spanish for "your-username")
git clone https://github.com/tu-usuario/Metaprompting-mrtalearning.git
cd Metaprompting-mrtalearning

# Create virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

2. Start Vector Database

# Start Qdrant container
docker run -p 6333:6333 -p 6334:6334 \
  -v $(pwd)/qdrant_data:/qdrant/storage:z \
  qdrant/qdrant

3. Configure OpenAI (Optional)

# Create .env file
echo "OPENAI_API_KEY=your-api-key-here" > .env

# Or export directly
export OPENAI_API_KEY="your-api-key-here"

4. Run Interactive Chat

python interactive_chat.py

💡 How It Works

Hybrid Memory System

  • Working Memory (FAISS): Stores recent strategies in RAM for ultra-fast retrieval (~0.1ms)
  • Long-term Memory (Qdrant): Persists successful strategies to disk for permanent learning (~10ms)
  • Automatic Promotion: High-scoring strategies automatically move from working to long-term memory
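The two-tier layout and the promotion rule can be sketched in a few lines. This is a toy, stdlib-only illustration — plain lists and a hand-rolled cosine similarity stand in for the FAISS and Qdrant indexes, and `HybridMemorySketch` with its threshold is invented here, not the repository's actual `hybrid_memory.py`:

```python
import math
from collections import deque

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

class HybridMemorySketch:
    """Toy two-tier memory: a bounded in-RAM tier (the FAISS role)
    and an unbounded persistent tier (the Qdrant role)."""

    def __init__(self, capacity=100, promote_threshold=0.8):
        self.working = deque(maxlen=capacity)  # last N strategies, RAM only
        self.long_term = []                    # promoted strategies ("disk")
        self.promote_threshold = promote_threshold

    def add(self, embedding, strategy, score):
        self.working.append((embedding, strategy, score))
        if score >= self.promote_threshold:    # automatic promotion
            self.long_term.append((embedding, strategy, score))

    def search(self, query, k=3):
        # Brute-force similarity over both tiers; the real system queries
        # the fast FAISS index first and falls through to Qdrant.
        pool = list(self.working) + self.long_term
        ranked = sorted(((cosine(query, e), s) for e, s, _ in pool), reverse=True)
        return ranked[:k]
```

A strategy scored above the threshold lands in both tiers, so it survives eviction when working memory rolls past its capacity.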

Learning Process

  1. User asks question → System searches both memory layers for similar past interactions
  2. Strategy retrieval → Adapts successful prompts from memory to current context
  3. LLM generation → Uses adapted strategy to generate response
  4. Human feedback → User rates response quality (1-5 stars)
  5. Hybrid scoring → Combines automatic metrics (40%) with human rating (60%)
  6. Memory update → Stores successful strategies for future use
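The weighted blend in step 5 is simple to state precisely. A minimal sketch — the mapping of 1-5 stars onto a 0-1 scale is an assumption made here; the repository may normalize ratings differently:

```python
def hybrid_score(auto_score: float, human_stars: int) -> float:
    """Blend an automatic quality metric in [0, 1] with a 1-5 star
    human rating, weighted 40% automatic / 60% human."""
    if not 1 <= human_stars <= 5:
        raise ValueError("human rating must be 1-5 stars")
    # Assumed mapping: 1 star -> 0.0, 5 stars -> 1.0
    human_score = (human_stars - 1) / 4
    return 0.4 * auto_score + 0.6 * human_score

# A response with a decent automatic score and a 5-star rating:
print(round(hybrid_score(0.7, 5), 2))  # 0.4*0.7 + 0.6*1.0 = 0.88
```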

Smart Features

  • Contextual adaptation: Reuses successful strategies for similar questions
  • Frequency tracking: Commonly used strategies get priority
  • Session persistence: Conversations and learned strategies persist across sessions
  • Fallback modes: Works with mock LLM when OpenAI is unavailable
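The fallback behavior can be illustrated with a small factory. This is a hedged sketch, not the repository's actual wiring: `MockLLM` and `make_llm` are names invented here, and a real OpenAI client is only constructed when both an API key and the `openai` package are present.

```python
import os

class MockLLM:
    """Deterministic stand-in used when no OpenAI access is available."""
    def generate(self, prompt: str) -> str:
        return f"[mock] canned response for: {prompt[:40]}"

def make_llm():
    """Return a real client when possible, otherwise fall back to the mock."""
    if os.environ.get("OPENAI_API_KEY"):
        try:
            from openai import OpenAI  # modern SDK client
            return OpenAI()
        except ImportError:
            pass                       # SDK not installed -> use the mock
    return MockLLM()
```

With no key set, `make_llm()` returns the mock, so the chat loop keeps working offline for testing.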

🎮 Interactive Commands

/help          # Show all available commands
/stats         # Display system statistics and performance metrics
/memory        # Show memory system status (working vs long-term)
/history       # View recent conversation history
/backup        # Force backup of working memory to long-term storage
/save          # Manually save current session
/load          # Load previous session
/clear         # Clear conversation history
/exit          # Exit and auto-save

📊 What Makes This Interesting

Novel Concepts Explored

  • Hybrid memory architecture combining speed (FAISS) with persistence (Qdrant)
  • Human-in-the-loop learning with weighted scoring system
  • Strategy promotion based on success metrics and usage frequency
  • Cross-session learning that genuinely improves over time

Technical Achievements

  • End-to-end integration of multiple vector databases
  • Real-time strategy adaptation using semantic similarity
  • Robust error handling and fallback mechanisms
  • Docker-based deployment with persistent storage

📈 Current Status & Limitations

✅ What Works

  • Complete hybrid memory system functional
  • OpenAI integration with automatic fallback to mock LLM
  • Interactive chat with human feedback loop
  • Session persistence and strategy learning
  • Comprehensive documentation and workflow analysis

🚧 Current Limitations

  • Memory system complexity may not provide proportional utility gains
  • Human feedback loop needs validation with larger user base
  • Performance improvement over standard RAG systems is marginal
  • Use cases still being validated and refined

🎯 This is NOT

  • A production-ready AI assistant
  • A replacement for ChatGPT or similar services
  • A proven method for AI improvement
  • Ready for commercial deployment

🛠️ Project Structure

metaprompting-mrtalearning/
├── meta_agent/
│   ├── agent.py              # Core MetaAgent class
│   ├── hybrid_memory.py      # Hybrid memory system implementation
│   └── __init__.py
├── interactive_chat.py       # Main chat interface
├── example.py                # Simple demo with mock LLM
├── WORKFLOW_SISTEMA.md       # Detailed system workflow documentation
├── ROADMAP_PROYECTO.md       # Development roadmap
├── requirements.txt          # Python dependencies
├── .env.example              # Environment variables template
└── README.md                 # This file

🔬 What I Learned Building This

Technical Insights

  • Vector database integration is more complex than expected
  • Memory hierarchy design requires careful balance between speed and persistence
  • Human feedback collection needs thoughtful UX design
  • Embedding dimension compatibility is crucial for system reliability

AI/ML Concepts

  • Meta-learning in practice has subtle but important challenges
  • Retrieval-augmented generation can be enhanced with learning loops
  • Scoring systems need careful calibration between automatic and human metrics
  • System architecture matters as much as algorithms for AI applications

Development Lessons

  • Documentation is essential for complex AI systems
  • Modular design enables experimentation and iteration
  • Error handling becomes critical with multiple external dependencies
  • Docker containerization simplifies deployment significantly

🚀 Future Directions

Short-term Improvements

  • Simplify architecture based on real usage patterns
  • Add more comprehensive evaluation metrics
  • Implement A/B testing for strategy effectiveness
  • Create specialized versions for specific domains

Research Opportunities

  • Investigate domain-specific specialization (legal, medical, educational)
  • Explore automatic prompt optimization techniques
  • Study long-term learning patterns and knowledge retention
  • Develop better metrics for measuring genuine AI improvement

🤝 Contributing

This is an experimental learning project, and contributions are welcome! Areas where help would be valuable:

  • Use case validation: Testing with real-world scenarios
  • Performance optimization: Improving memory system efficiency
  • Evaluation metrics: Better ways to measure learning effectiveness
  • Documentation: Examples, tutorials, and use case studies

Getting Started with Contributing

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Make your changes and test thoroughly
  4. Update documentation as needed
  5. Submit a pull request with clear description

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

🙏 Acknowledgments

  • Inspired by cognitive neuroscience research on memory systems
  • Built using amazing open-source tools: LangChain, Qdrant, FAISS, OpenAI
  • Thanks to the AI/ML community for continuous inspiration and learning

📞 Contact & Discussion

  • Issues: Use GitHub Issues for bugs and feature requests
  • Discussions: Use GitHub Discussions for questions and ideas

⚠️ Disclaimer: This is an experimental project for learning and research purposes. The effectiveness of the meta-learning approach is still being validated. Use at your own discretion and always verify outputs for important applications.
