Memorall is an AI-powered browser extension that transforms how you manage digital knowledge. It seamlessly captures, organizes, and recalls information from your browsing experience while maintaining complete privacy through local AI processing.
- ✨ Key Features
- 🚀 Quick Start
- 📸 Demo
- 🎯 Use Cases
- 🛠️ Technical Architecture
- 🕸️ Knowledge Graph Engine
- 🏗️ Architecture Details
- 🔀 Extension Flow
- 📋 Installation & Development
- 🎮 Usage
- 🤝 Contributing
- 📚 Documentation
- 📄 License
See Memorall in action! The extension seamlessly integrates with your browsing experience to build a personal knowledge base.
- 📚 Research & Learning: Summarize articles, papers, and documentation while browsing
- 📝 Note Taking: Ask AI to remember key points from meetings, videos, or conversations
- 🏗️ Knowledge Building: Build a personal knowledge base that grows with your browsing
- 🔄 Context Switching: Quickly recall what you were working on across different projects
- 🔗 Information Synthesis: Let AI help connect related memories and insights
- ⚛️ Frontend: React with TypeScript
- 🧠 AI Engine: WebAssembly-based language models (Wllama) + HuggingFace Transformers
- 🗄️ Database: PGlite (PostgreSQL in the browser) with vector embeddings
- 🔧 Extension Framework: Extension.js
- 🎨 Styling: Tailwind CSS with Radix UI components
The Knowledge Graph Flow is Memorall's core intelligence module that transforms unstructured content into interconnected knowledge:
- 📄 Content Processing: Analyzes web pages, documents, and conversations
- 🔍 Entity Extraction: Identifies people, organizations, concepts, and locations
- 💡 Relationship Discovery: Finds connections between entities
- ⏰ Temporal Understanding: Captures when relationships were established
- 🕸️ Knowledge Building: Creates a persistent, searchable knowledge graph
- 🎯 Smart Deduplication: Prevents duplicate entities (e.g., "Dr. Smith" = "John Smith")
- 🔄 Incremental Learning: Continuously builds knowledge from new content
- 🕐 Temporal Awareness: Tracks how relationships change over time
- 🔍 Hybrid Search System: Three-tier search using SQL, trigram matching, and vector similarity
- 🎯 Intelligent Fallback: Automatic failover to vector search when needed for optimal recall
- 📈 Context Building: Connects new information to existing knowledge
- Research: "Alice published a paper on AI safety in 2023" → Creates entities for Alice (Person), AI Safety (Concept), and their relationship with publication date
- Professional: "Google acquired DeepMind" → Links companies and captures acquisition relationship
- Personal: "Met Sarah at the conference last week" → Records social connection with temporal context
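The examples above could be represented with a minimal data model like the following sketch. The type and field names (`EntityNode`, `RelationEdge`, `resolveEntity`) are illustrative assumptions, not Memorall's actual schema:

```typescript
// Minimal sketch of a temporal knowledge-graph triple, matching the
// examples above. Names and shapes are illustrative, not Memorall's
// actual schema.
interface EntityNode {
  id: string;
  name: string;
  type: "Person" | "Organization" | "Concept" | "Location";
  aliases: string[]; // enables deduplication: "Dr. Smith" = "John Smith"
}

interface RelationEdge {
  sourceId: string;
  targetId: string;
  relation: string;
  validFrom?: string; // temporal context: when the fact was established
}

// Resolve a mention to an existing entity by name or alias (case-insensitive),
// so "Dr. Smith" and "John Smith" map to one node instead of two.
function resolveEntity(
  nodes: EntityNode[],
  mention: string,
): EntityNode | undefined {
  const needle = mention.toLowerCase();
  return nodes.find(
    (n) =>
      n.name.toLowerCase() === needle ||
      n.aliases.some((a) => a.toLowerCase() === needle),
  );
}

// "Alice published a paper on AI safety in 2023" becomes two nodes and one edge:
const alice: EntityNode = { id: "n1", name: "Alice", type: "Person", aliases: [] };
const aiSafety: EntityNode = { id: "n2", name: "AI Safety", type: "Concept", aliases: [] };
const published: RelationEdge = {
  sourceId: alice.id,
  targetId: aiSafety.id,
  relation: "published_paper_on",
  validFrom: "2023",
};
```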
Memorall uses a three-tier hybrid search system that combines the speed of exact matching with the recall of semantic search:
- SQL Search (60%): Lightning-fast exact pattern matching using database indexes
- Trigram Search (40%): Fuzzy text matching with PostgreSQL's `pg_trgm` extension for typo tolerance
- Vector Fallback: Intelligent semantic similarity using embeddings when primary methods yield insufficient results
This approach ensures both high performance and comprehensive recall, making knowledge discovery both fast and thorough.
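The tier ordering above can be sketched in a few lines. The weighting, the Jaccard trigram similarity (modeled loosely on `pg_trgm`'s `similarity()`), and the fallback threshold are all illustrative assumptions, not Memorall's actual tuning:

```typescript
// Sketch of three-tier hybrid search: exact matching first, then fuzzy
// trigram overlap, then a vector-similarity fallback when the first two
// tiers return too few results. Weights and thresholds are illustrative.
type Scored = { text: string; score: number };

function trigrams(s: string): Set<string> {
  const padded = `  ${s.toLowerCase()} `; // pad like pg_trgm does
  const grams = new Set<string>();
  for (let i = 0; i + 3 <= padded.length; i++) grams.add(padded.slice(i, i + 3));
  return grams;
}

// Jaccard overlap of trigram sets, roughly like pg_trgm's similarity().
function trigramSimilarity(a: string, b: string): number {
  const ta = trigrams(a), tb = trigrams(b);
  let shared = 0;
  for (const g of ta) if (tb.has(g)) shared++;
  return shared / (ta.size + tb.size - shared);
}

function hybridSearch(
  query: string,
  docs: string[],
  vectorFallback: (q: string) => Scored[],
  minResults = 3,
): Scored[] {
  // Tier 1 (60%): exact substring match, standing in for indexed SQL search.
  const exact: Scored[] = docs
    .filter((d) => d.toLowerCase().includes(query.toLowerCase()))
    .map((d) => ({ text: d, score: 0.6 }));
  // Tier 2 (40%): fuzzy trigram matching for typo tolerance.
  const fuzzy: Scored[] = docs
    .map((d) => ({ text: d, score: 0.4 * trigramSimilarity(query, d) }))
    .filter((r) => r.score > 0.04);
  // Merge scores per document across tiers.
  const merged = new Map<string, number>();
  for (const r of [...exact, ...fuzzy]) {
    merged.set(r.text, (merged.get(r.text) ?? 0) + r.score);
  }
  const results = [...merged.entries()]
    .map(([text, score]) => ({ text, score }))
    .sort((a, b) => b.score - a.score);
  // Tier 3: fall back to semantic vector search when recall is insufficient.
  return results.length >= minResults
    ? results
    : [...results, ...vectorFallback(query)];
}
```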
The Knowledge Graph enables Memorall to provide contextual, intelligent responses by understanding not just what you've encountered, but how everything connects together.
📚 Detailed Documentation - Learn more about the architecture and implementation
- 🤖 Language Model: Wllama (WebAssembly-based LLM)
- 📊 Embeddings: HuggingFace Transformers for text embeddings
- 🔍 Hybrid Search Engine:
- SQL Search (60%): Fast exact pattern matching using database indexes
- Trigram Search (40%): Fuzzy text matching with PostgreSQL's `pg_trgm` extension
- Vector Fallback: Semantic similarity using embeddings when primary methods yield insufficient results
- 💬 Conversations: Chat history and context
- 🕸️ Knowledge Graph: Nodes, edges, and relationships between concepts
- 📊 Embeddings: Vector representations for semantic search
- 📜 Sources: Webpage content and metadata
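The four stores above might hold records shaped roughly like this sketch, with cosine similarity as the comparison behind semantic search. Every field name here is an assumption for illustration, not Memorall's actual PGlite schema:

```typescript
// Illustrative record shapes for the four stores listed above.
// Field names are assumptions, not Memorall's actual PGlite schema.
interface ConversationMessage {
  id: string;
  role: "user" | "assistant";
  content: string;
  createdAt: string;
}

interface GraphNode { id: string; label: string; type: string }
interface GraphEdge { sourceId: string; targetId: string; relation: string }

interface EmbeddingRow {
  ownerId: string;  // the node, message, or source this vector describes
  vector: number[]; // e.g. the output of a sentence-embedding model
}

interface SourceRecord {
  url: string;
  title: string;
  html: string;     // captured page content
  capturedAt: string;
}

// Cosine similarity: how vector search would rank EmbeddingRow.vector values.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}
```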
- Services Overview: `docs/services.md`
- LLM Service: `docs/llm-service.md`
- Embedding Service: `docs/embedding-service.md`
- Database Service: `docs/database-service.md`
- Flows Service: `docs/flows-service.md`
- Shared Storage Service: `docs/shared-storage.md`
- Background Jobs: `docs/background-jobs.md`
- Remember Service: `docs/remember-service.md`
- Knowledge Graph Service: `docs/knowledge-graph-service.md` - Service for building knowledge graphs
- Knowledge Pipeline: `docs/knowledge-pipeline.md` - Complete pipeline architecture and flow
- Knowledge RAG System: `docs/knowledge-rag-service.md` - Retrieval-Augmented Generation for Q&A
High-level flow showing clear separation between UI, background coordination, content script injection, and offscreen processing.
```mermaid
graph TD
    %% UI Layer - Direct to Offscreen via Background Jobs
    UI[UI Surfaces] -.->|background-jobs| JQ[Job Queue]

    %% Background Script - Context Menus & Content Script Communication Only
    BG[Background Script] -->|inject & extract| CS[Content Scripts]
    CS -->|page data| BG
    BG -->|enqueue extracted data| JQ

    %% Offscreen Processing
    JQ --> OFF[Offscreen Document]
    OFF --> SVC[Core Services\nLLM/Embedding/DB/Remember/KG]
    OFF -->|claim & process| JQ

    %% Content Scripts - Page Injection Only
    CS -.->|page context| WEB[Web Pages]
```
- UI Surfaces → Offscreen Document: Direct communication via Job Queue (no background script involvement)
- Background Script → Content Scripts: Context menu actions trigger content script data extraction
- Content Scripts → Background Script: Return extracted page data (HTML, selections, metadata)
- Background Script → Offscreen Document: Enqueue extracted data for processing via Job Queue
- Offscreen Document ↔ Job Queue: Claim jobs, process data, update status
- Background Script: Context menu registration, content script communication, and job enqueueing only
- Job Queue: Cross-context job queue with progress tracking and offscreen processing
- UI Surfaces: Directly use job queue service for user actions
- Content Scripts: Page data extraction and injection (communicate only with background script)
- Offscreen Document: Heavy processing, AI operations, and database work to keep UI responsive
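The enqueue/claim/process cycle these responsibilities describe can be sketched as below. The `Job` shape, status names, and methods are illustrative assumptions, not the actual background-jobs API:

```typescript
// Minimal sketch of the enqueue/claim/process cycle between extension
// contexts. Job shapes and statuses are illustrative, not Memorall's
// actual background-jobs API.
type JobStatus = "queued" | "processing" | "done" | "failed";

interface Job {
  id: number;
  kind: string;     // e.g. a hypothetical "remember-page" job type
  payload: unknown;
  status: JobStatus;
  progress: number; // 0..1, surfaced back to the UI
}

class JobQueue {
  private jobs: Job[] = [];
  private nextId = 1;

  // Called from UI surfaces or the background script.
  enqueue(kind: string, payload: unknown): number {
    const id = this.nextId++;
    this.jobs.push({ id, kind, payload, status: "queued", progress: 0 });
    return id;
  }

  // Called from the offscreen document: claim the oldest queued job so
  // no other context processes it concurrently.
  claim(): Job | undefined {
    const job = this.jobs.find((j) => j.status === "queued");
    if (job) job.status = "processing";
    return job;
  }

  // Progress updates let the UI track long-running AI/database work.
  report(id: number, progress: number): void {
    const job = this.jobs.find((j) => j.id === id);
    if (job) job.progress = progress;
  }

  complete(id: number, ok: boolean): void {
    const job = this.jobs.find((j) => j.id === id);
    if (job) {
      job.status = ok ? "done" : "failed";
      job.progress = 1;
    }
  }

  status(id: number): JobStatus | undefined {
    return this.jobs.find((j) => j.id === id)?.status;
  }
}
```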
- UI Surfaces: `popup.html` & `standalone.html` - Extension popup and full-page interfaces
- Background Script: `src/background.ts` - Chrome extension service worker
- Job Queue: `src/services/background-jobs/*` - Cross-context job management system
- Content Scripts: Injected scripts in web pages for data extraction
- Offscreen Document: `public/offscreen.html` - Dedicated processing environment
- Core Services: Database, LLM, Embedding, Remember, and Knowledge Graph services
```bash
git clone <repository-url>
cd memorall
npm install
npm run build
# Load the 'dist' folder in your browser's extension manager
```
| Command | Description |
|---|---|
| `npm run dev` | 🚀 Development mode with hot reloading |
| `npm run build` | 📦 Production build |
| `npm run preview` | 👀 Preview built extension |
| `npm run type-check` | 🔍 TypeScript type checking |
- Open your browser's extension management page
- Enable "Developer mode"
- Click "Load unpacked" and select the `dist` folder
- First launch will download and initialize AI models (one-time setup)
- Use GitHub Issues to report bugs
- Include steps to reproduce and expected behavior
- Provide browser and extension version information
- Open a GitHub Issue with "enhancement" label
- Describe the feature and its benefits
- Consider submitting a pull request
- Fork the repository
- Create feature branch: `git checkout -b feature/amazing-feature`
- Make changes following existing code style
- Test thoroughly
- Submit pull request with clear description
- Improve documentation and examples
- Fix typos and clarify instructions
- Add tutorials or guides
- 📖 Documentation: Check out our comprehensive docs
- 🐛 Issues: Report bugs or request features on GitHub Issues
- 💡 Discussions: Join conversations in GitHub Discussions
This project is licensed under the MIT License - see the LICENSE file for details.
Built with ❤️ using Extension.js • Made for privacy-conscious knowledge workers