🚧 Work in Progress: an experimental AI system exploring meta-learning and adaptive prompting concepts.
MetaAgent is an experimental AI assistant that learns from conversations using a hybrid memory architecture inspired by cognitive neuroscience. Unlike traditional chatbots, it:
- Learns from every interaction using a dual-memory system
- Adapts prompting strategies based on successful past conversations
- Incorporates human feedback to improve response quality over time
- Persists knowledge across sessions for continuous improvement
```
┌───────────────────────────────────────────────────────────┐
│                         METAAGENT                         │
├───────────────────────────────────────────────────────────┤
│  USER INPUT → MEMORY SEARCH → LLM CALL → EVALUATION       │
│                                               ↓           │
│  RESPONSE ← MEMORY UPDATE ← SCORING ← FEEDBACK            │
└───────────────────────────────────────────────────────────┘

┌──────────────────┐    ┌──────────────────┐    ┌────────────────┐
│  Working Memory  │    │    Long-term     │    │    Scoring     │
│     (FAISS)      │───▶│      Memory      │    │     System     │
│                  │    │     (Qdrant)     │    │                │
│  • RAM-based     │    │  • Persistent    │    │  • Auto: 40%   │
│  • 0.1ms speed   │    │  • 10ms speed    │    │  • Human: 60%  │
│  • Last 100      │    │  • All history   │    │  • Threshold   │
└──────────────────┘    └──────────────────┘    └────────────────┘
```
Prerequisites:

- Python 3.8+
- Docker (for the Qdrant vector database)
- OpenAI API key (optional: the system falls back to a mock LLM for testing)
Installation:

```bash
# Clone the repository
git clone https://github.com/tu-usuario/Metaprompting-mrtalearning.git
cd Metaprompting-mrtalearning

# Create virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt
```

Start the Qdrant container:

```bash
docker run -p 6333:6333 -p 6334:6334 \
    -v $(pwd)/qdrant_data:/qdrant/storage:z \
    qdrant/qdrant
```

Configure the OpenAI API key:

```bash
# Create .env file
echo "OPENAI_API_KEY=your-api-key-here" > .env

# Or export directly
export OPENAI_API_KEY="your-api-key-here"
```

Then start the chat:

```bash
python interactive_chat.py
```

The memory system has two layers plus an automatic promotion rule (a minimal sketch follows this list):

- Working Memory (FAISS): Stores recent strategies in RAM for ultra-fast retrieval (~0.1ms)
- Long-term Memory (Qdrant): Persists successful strategies to disk for permanent learning (~10ms)
- Automatic Promotion: High-scoring strategies automatically move from working to long-term memory
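To make the promotion idea concrete, below is a minimal two-layer memory sketch. It is not the project's actual `hybrid_memory.py`: the class name, method names, and the 0.7 promotion threshold are assumptions, and the persistent Qdrant layer is stubbed with a plain list.

```python
import numpy as np
import faiss  # pip install faiss-cpu

class HybridMemory:
    """Fast in-RAM FAISS layer plus a persistent layer (stubbed here)."""

    def __init__(self, dim: int = 384, promote_threshold: float = 0.7):
        self.index = faiss.IndexFlatIP(dim)  # inner-product index over embeddings
        self.strategies = []                 # strategy text, parallel to the index
        self.promote_threshold = promote_threshold
        self.long_term = []                  # stand-in for the Qdrant collection

    def add(self, embedding: np.ndarray, strategy: str, score: float) -> None:
        vec = embedding.astype("float32").reshape(1, -1)
        faiss.normalize_L2(vec)              # cosine similarity via normalized IP
        self.index.add(vec)
        self.strategies.append(strategy)
        if score >= self.promote_threshold:
            # Automatic promotion: winners also go to the persistent layer.
            self.long_term.append((strategy, score))

    def search(self, embedding: np.ndarray, k: int = 3):
        vec = embedding.astype("float32").reshape(1, -1)
        faiss.normalize_L2(vec)
        scores, ids = self.index.search(vec, k)
        return [(self.strategies[i], float(s))
                for i, s in zip(ids[0], scores[0]) if i != -1]
```

In the real system the promoted entries would be upserted into Qdrant rather than appended to a list, which is what lets them survive restarts.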
Each conversation moves through the following loop:

- User asks question → the system searches both memory layers for similar past interactions
- Strategy retrieval → adapts successful prompts from memory to the current context
- LLM generation → uses the adapted strategy to generate a response
- Human feedback → the user rates response quality (1-5 stars)
- Hybrid scoring → combines automatic metrics (40%) with the human rating (60%), as sketched after this list
- Memory update → stores successful strategies for future use
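The 40/60 weighting is a straightforward weighted average. A minimal sketch, assuming the automatic metrics are already normalized to [0, 1]; the function name `hybrid_score` is illustrative, not the project's API:

```python
def hybrid_score(auto_metrics: float, human_rating: int) -> float:
    """Combine automatic metrics (40%) with a 1-5 star human rating (60%).

    `auto_metrics` is assumed to be normalized to [0, 1]; the star
    rating is rescaled so that 1 star -> 0.0 and 5 stars -> 1.0.
    """
    human = (human_rating - 1) / 4.0
    return 0.4 * auto_metrics + 0.6 * human

# Example: solid automatic metrics plus a 5-star rating
print(hybrid_score(0.65, 5))  # 0.4 * 0.65 + 0.6 * 1.0 = 0.86
```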
Beyond the core loop, the agent has a few additional learning behaviors:

- Contextual adaptation: Reuses successful strategies for similar questions
- Frequency tracking: Commonly used strategies get priority
- Session persistence: Conversations and learned strategies persist across sessions
- Fallback modes: Works with a mock LLM when OpenAI is unavailable (a sketch of the fallback follows this list)
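The fallback mode can be sketched as a small factory that returns either a real client or a deterministic mock. This is illustrative only: the actual fallback lives in `meta_agent/agent.py` and may work differently, and the model name here is an assumption.

```python
import os

def get_llm():
    """Return a callable LLM, falling back to a mock when no API key is set."""
    api_key = os.getenv("OPENAI_API_KEY")
    if api_key:
        from openai import OpenAI  # pip install openai

        client = OpenAI(api_key=api_key)

        def real_llm(prompt: str) -> str:
            resp = client.chat.completions.create(
                model="gpt-4o-mini",  # model choice is an assumption
                messages=[{"role": "user", "content": prompt}],
            )
            return resp.choices[0].message.content

        return real_llm

    def mock_llm(prompt: str) -> str:
        # Deterministic canned reply keeps the rest of the pipeline testable.
        return f"[mock] Received {len(prompt)} characters; no model available."

    return mock_llm
```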
Available commands:

```
/help     # Show all available commands
/stats    # Display system statistics and performance metrics
/memory   # Show memory system status (working vs. long-term)
/history  # View recent conversation history
/backup   # Force a backup of working memory to long-term storage
/save     # Manually save the current session
/load     # Load a previous session
/clear    # Clear conversation history
/exit     # Exit and auto-save
```

Technical highlights:

- Hybrid memory architecture combining speed (FAISS) with persistence (Qdrant)
- Human-in-the-loop learning with weighted scoring system
- Strategy promotion based on success metrics and usage frequency
- Cross-session learning that genuinely improves over time
- End-to-end integration of multiple vector databases
- Real-time strategy adaptation using semantic similarity (sketched after this list)
- Robust error handling and fallback mechanisms
- Docker-based deployment with persistent storage
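To illustrate the semantic-similarity retrieval named above, here is a minimal sketch using sentence-transformers. The embedding model (`all-MiniLM-L6-v2`) and the 0.75 reuse cutoff are assumptions for the example, not the project's settings.

```python
from sentence_transformers import SentenceTransformer, util  # pip install sentence-transformers

model = SentenceTransformer("all-MiniLM-L6-v2")  # 384-dimensional embeddings

past_questions = [
    "How do I read a CSV file in Python?",
    "What's the difference between a list and a tuple?",
]
new_question = "How can I load a CSV with pandas?"

emb_past = model.encode(past_questions, convert_to_tensor=True)
emb_new = model.encode(new_question, convert_to_tensor=True)

scores = util.cos_sim(emb_new, emb_past)[0]  # cosine similarity to each past question
best = int(scores.argmax())

if float(scores[best]) >= 0.75:
    print(f"Reuse the strategy that worked for: {past_questions[best]!r}")
else:
    print("No sufficiently similar past interaction; use the default prompt.")
```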
What currently works:

- Complete, functional hybrid memory system
- OpenAI integration with automatic fallback to mock LLM
- Interactive chat with human feedback loop
- Session persistence and strategy learning
- Comprehensive documentation and workflow analysis
Current concerns:

- Memory system complexity may not provide proportional utility gains
- Human feedback loop needs validation with larger user base
- Performance improvement over standard RAG systems is marginal
- Use cases still being validated and refined
This project is not:

- A production-ready AI assistant
- A replacement for ChatGPT or similar services
- A proven method for AI improvement
- Ready for commercial deployment
Project structure:

```
metaprompting-mrtalearning/
├── meta_agent/
│   ├── agent.py              # Core MetaAgent class
│   ├── hybrid_memory.py      # Hybrid memory system implementation
│   └── __init__.py
├── interactive_chat.py       # Main chat interface
├── example.py                # Simple demo with mock LLM
├── WORKFLOW_SISTEMA.md       # Detailed system workflow documentation
├── ROADMAP_PROYECTO.md       # Development roadmap
├── requirements.txt          # Python dependencies
├── .env.example              # Environment variables template
└── README.md                 # This file
```
Lessons learned:

- Vector database integration is more complex than expected
- Memory hierarchy design requires careful balance between speed and persistence
- Human feedback collection needs thoughtful UX design
- Embedding dimension compatibility is crucial for system reliability (a small guard for this is sketched after this list)
- Meta-learning in practice has subtle but important challenges
- Retrieval-augmented generation can be enhanced with learning loops
- Scoring systems need careful calibration between automatic and human metrics
- System architecture matters as much as algorithms for AI applications
- Documentation is essential for complex AI systems
- Modular design enables experimentation and iteration
- Error handling becomes critical with multiple external dependencies
- Docker containerization simplifies deployment significantly
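The embedding-dimension point is easy to enforce mechanically. A minimal guard, with the function name assumed for illustration:

```python
import numpy as np
import faiss  # pip install faiss-cpu

def safe_add(index: faiss.Index, embedding: np.ndarray) -> None:
    """Refuse to insert a vector whose dimension doesn't match the index.

    Mixing, say, 384-dim MiniLM vectors into an index built for 1536-dim
    OpenAI embeddings fails in confusing ways downstream; fail loudly here.
    """
    vec = np.asarray(embedding, dtype="float32").reshape(1, -1)
    if vec.shape[1] != index.d:
        raise ValueError(f"embedding dim {vec.shape[1]} != index dim {index.d}")
    index.add(vec)
```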
Planned next steps:

- Simplify the architecture based on real usage patterns
- Add more comprehensive evaluation metrics
- Implement A/B testing for strategy effectiveness
- Create specialized versions for specific domains
Research directions:

- Investigate domain-specific specialization (legal, medical, educational)
- Explore automatic prompt optimization techniques
- Study long-term learning patterns and knowledge retention
- Develop better metrics for measuring genuine AI improvement
This is an experimental learning project, and contributions are welcome! Areas where help would be valuable:
- Use case validation: Testing with real-world scenarios
- Performance optimization: Improving memory system efficiency
- Evaluation metrics: Better ways to measure learning effectiveness
- Documentation: Examples, tutorials, and use case studies
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Make your changes and test thoroughly
- Update documentation as needed
- Submit a pull request with a clear description
This project is licensed under the MIT License - see the LICENSE file for details.
- Inspired by cognitive neuroscience research on memory systems
- Built using amazing open-source tools: LangChain, Qdrant, FAISS, OpenAI
- Thanks to the AI/ML community for continuous inspiration and learning
- Issues: Use GitHub Issues for bugs and feature requests
- Discussions: Use GitHub Discussions for questions and ideas