Your AI-powered file launcher and search assistant. Think Spotlight or Alfred, but with the intelligence to understand what you're looking for. Press Option (⌥) + Space anywhere to start searching!
- Download the latest release for your architecture:
  - For Apple Silicon (M1/M2): `albert-launcher-{version}-mac-arm64.dmg`
  - For Intel: `albert-launcher-{version}-mac-x64.dmg`
- Open the DMG file and drag alBERT to your Applications folder
- Since the app is not signed with an Apple Developer certificate, you'll need to:
- Right-click (or Control-click) on alBERT in Applications
- Select "Open" from the context menu
- Click "Open" in the security dialog
- This is only required for the first launch
- Press Option (⌥) + Space anywhere to open alBERT!
- 🚀 Launch: Press Option (⌥) + Space anywhere to open alBERT
- 🔍 Search: Just start typing - alBERT understands natural language
- 💡 Smart Results: Results are ranked by relevance to your query
- ⌨️ Navigate: Use arrow keys to move, Enter to open
- ⚡️ Quick Exit: Press Esc to close
Unlike traditional file search tools that rely on filename matching or basic content indexing, alBERT-launcher uses advanced semantic search and AI capabilities to understand the meaning behind your queries. It maintains a dedicated folder (`~/alBERT`) where it indexes and searches through your important documents, providing:
- Semantic Search: Find documents based on meaning, not just keywords
- AI-Powered Answers: Get direct answers to questions about your documents
- Context-Aware Results: Results are ranked based on relevance to your query context
- Instant Access: Global shortcut (Option (⌥) + Space) to access from anywhere
```mermaid
graph TD
    A[User Query] --> B[Query Processor]
    B --> C{Query Type}
    C -->|File Search| D[Local Search Engine]
    C -->|Web Search| E[Brave Search API]
    C -->|AI Question| F[Perplexity AI]
    D --> G[Document Embeddings]
    D --> H[File Index]
    G --> I[Search Results]
    H --> I
    E --> I
    F --> I
    I --> J[Result Ranker]
    J --> K[UI Display]

    subgraph "Local Index"
        H
        G
    end

    subgraph "External Services"
        E
        F
    end
```
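As a rough illustration of the routing stage above, here is a minimal TypeScript sketch. The `QueryType` labels, classifier heuristics, and handler functions are hypothetical stand-ins, not the actual implementation:

```typescript
// Illustrative only: the real query processor lives in the main process.
type QueryType = 'file-search' | 'web-search' | 'ai-question'
interface SearchResult { title: string; snippet: string; source: string }

// Stand-in back ends for the three branches in the diagram:
// local index, Brave Search API, and Perplexity.
declare function searchLocalIndex(q: string): Promise<SearchResult[]>
declare function searchBrave(q: string): Promise<SearchResult[]>
declare function askPerplexity(q: string): Promise<SearchResult[]>

// A deliberately naive classifier; the real query processor is smarter.
function classify(query: string): QueryType {
  if (query.trim().endsWith('?')) return 'ai-question'
  if (query.startsWith('web:')) return 'web-search'
  return 'file-search'
}

async function route(query: string): Promise<SearchResult[]> {
  switch (classify(query)) {
    case 'file-search': return searchLocalIndex(query)
    case 'web-search': return searchBrave(query.slice('web:'.length))
    case 'ai-question': return askPerplexity(query)
  }
}
```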
```mermaid
graph LR
    A[Electron Main Process] --> B[IPC Bridge]
    B --> C[Renderer Process]

    subgraph "Main Process"
        A --> D[File Watcher]
        A --> E[Search DB]
        A --> F[Embeddings Service]
    end

    subgraph "Renderer Process"
        C --> G[React UI]
        G --> H[Search Bar]
        G --> I[Results View]
        G --> J[Settings Panel]
    end
```
The Electron main process is now organised into small lifecycle utilities so contributors can refactor or extend behaviour without digging through a monolithic `index.ts` file:
| Concern | Module | Responsibilities |
| --- | --- | --- |
| Window orchestration | `src/main/lifecycle/window-manager.ts` | Builds the translucent shell window, wires blur/close hooks, and loads the correct renderer URL depending on environment. |
| IPC routing | `src/main/lifecycle/ipc.ts` | Registers the tRPC bridge against the active window so renderer modules can call into the main process. |
| Tray & shortcuts | `src/main/lifecycle/tray.ts`, `src/main/lifecycle/shortcuts.ts` | Sets up the status bar tray menu and keyboard toggle with graceful teardown on quit. |
| Search database lifecycle | `src/main/lifecycle/search-service.ts` | Lazily initialises the embedded vector index, streams progress updates to the renderer, and persists/shuts down cleanly. |
These modules keep the bootstrap logic declarative inside `src/main/index.ts` while still exposing focused hooks for new background services or observability.
To keep the liquid-glass UI responsive, the renderer has been decomposed into focused utilities instead of one monolithic `App.tsx`:
| Concern | Module | Responsibilities |
| --- | --- | --- |
| Smithery connectors | `src/renderer/src/hooks/useSmitheryContext.ts` | Debounced MCP fetching with cancellation, error surfacing, and manual refresh support. |
| Context scoring | `src/renderer/src/hooks/useContextScoring.ts` | Reranks merged local/MCP documents and tracks cosine similarity maps without blocking the UI thread. |
| Prompt assembly | `src/renderer/src/lib/context-builder.ts` | Normalises sticky notes, Smithery snippets, and search results into a bounded context string for the LLM middleware. |
| Shared search types | `src/renderer/src/types/search.ts` | Source of truth for search result, cache, and sticky note shapes reused across components. |
These modules let feature components subscribe to just the slices of data they need, improving readability while reducing redundant network requests and state churn.
Secrets stay local. Configuration lives in `.env` (ignored by git) with safe defaults documented in `.env.example`; never commit API keys to the repo.
```mermaid
sequenceDiagram
    participant U as User
    participant UI as UI Layer
    participant S as Search Engine
    participant DB as Search DB
    participant AI as AI Services

    U->>UI: Enter Query
    UI->>S: Process Query
    S->>DB: Search Local Index
    S->>AI: Get AI Answer
    par Local Results
        DB-->>S: Document Matches
    and AI Response
        AI-->>S: Generated Answer
    end
    S->>S: Rank & Merge Results
    S->>UI: Display Results
    UI->>U: Show Results
```
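The `par` block in the diagram maps naturally onto a concurrent fan-out. A minimal sketch, assuming hypothetical `searchLocalIndex` and `getAiAnswer` helpers:

```typescript
interface RankedResult { text: string; score: number }

declare function searchLocalIndex(q: string): Promise<RankedResult[]>
declare function getAiAnswer(q: string): Promise<RankedResult>

async function search(query: string): Promise<RankedResult[]> {
  // Local search and the AI answer run concurrently, as in the `par` block.
  const [local, ai] = await Promise.allSettled([
    searchLocalIndex(query),
    getAiAnswer(query)
  ])
  // Either branch may fail independently; surviving results are still shown.
  const results = local.status === 'fulfilled' ? [...local.value] : []
  if (ai.status === 'fulfilled') results.push(ai.value)
  // Rank & merge before handing off to the UI layer.
  return results.sort((a, b) => b.score - a.score)
}
```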
- 🚀 Lightning-fast local file search
- 🤖 AI-powered answers using Perplexity
- 🔍 Semantic search capabilities
- 🌐 Web search integration with Brave Search
- ⌨️ Global keyboard shortcuts (Option (⌥) + Space)
- 💾 Smart caching system
- 🎯 Context-aware search results
- 🫧 Liquid glass interface inspired by macOS Sonoma
- 📱 Modern, responsive UI
The renderer has been refreshed to echo Apple's latest "liquid glass" design language:
- Layered frosted panels with animated gradient orbs and a soft grid backdrop keep the workspace calm yet dynamic.
- A sculpted header surfaces key stats (pinned notes, surfaced results, conversations) so you can gauge context at a glance.
- Search, chat, and notes live inside glass panels with softened borders, luminous highlights, and adaptive blur so focus stays on your content.
- Sticky notes inherit the same frosted aesthetic and can be spawned instantly from the header or via ⌘/Ctrl + N.
To tweak the visuals, inspect `src/renderer/src/assets/index.css` for theme tokens and glass utilities, and adjust the panel layout inside `src/renderer/src/App.tsx`.
alBERT now speaks the Model Context Protocol (MCP) so you can stream structured knowledge packs straight from Smithery into every search and chat turn.
- Open Settings → Smithery MCP and (optionally) drop in your Smithery API key if you need private connectors.
- Paste a Smithery slug (for example `notion-notes`) or a manifest URL to link a connector. You can also pick from the built-in directory and click Link connector.
- Toggle connectors on/off, refresh their metadata, or remove them entirely without editing config files.
Once linked, context cards from each connector show up beneath search results. The retrieved snippets are automatically blended into the conversation context so the assistant can cite them alongside your local documents. Everything is stored in local preferences—no Smithery secrets are shipped with the repo.
For deeper integrations—hosting your own MCP servers, enabling OAuth, or wiring Smithery connectors into other runtimes—see `docs/smithery-mcp.md` for a protocol primer and official SDK references.
The `~/alBERT` folder is your personal knowledge base. Any files placed here are:
- Automatically indexed for semantic search
- Processed for quick retrieval
- Analyzed for contextual understanding
- Accessible through natural language queries
alBERT uses advanced embedding techniques to understand the meaning of your documents (a toy sketch follows this list):
- Documents are split into meaningful chunks
- Each chunk is converted into a high-dimensional vector
- Queries are matched against these vectors for semantic similarity
- Results are ranked based on relevance and context
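For intuition, semantic matching reduces to comparing vectors. A toy sketch; the real engine uses learned embedding models and an approximate index rather than this brute-force scan:

```typescript
// Cosine similarity between two equal-length vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0
  let normA = 0
  let normB = 0
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i]
    normA += a[i] * a[i]
    normB += b[i] * b[i]
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB))
}

// Rank chunk vectors against a query vector, most similar first.
function rankChunks(
  queryVector: number[],
  chunks: { text: string; vector: number[] }[]
): { text: string; score: number }[] {
  return chunks
    .map((c) => ({ text: c.text, score: cosineSimilarity(queryVector, c.vector) }))
    .sort((a, b) => b.score - a.score)
}
```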
- Query Understanding: Natural language processing to understand user intent
- Context Awareness: Maintains conversation context for follow-up queries
- Smart Answers: Generates answers by combining local knowledge with AI capabilities
alBERT-launcher uses OpenRouter to access powerful language models for enhanced search capabilities:
Note: The app does not include an OpenRouter API key. Provide your own key via the in-app Settings → Public AI section or by setting `OPENROUTER_API_KEY` in `.env` before enabling cloud responses.
```mermaid
graph TD
    A[User Query] --> B[Query Analyzer]
    B --> C{Query Type}
    C -->|Direct Question| D[OpenRouter API]
    C -->|Document Analysis| E[Local Processing]
    D --> F[Perplexity/LLaMA Model]
    F --> G[AI Response]
    E --> H[Document Vectors]
    H --> I[Semantic Search]
    G --> J[Result Merger]
    I --> J
    J --> K[Final Response]

    subgraph "OpenRouter Service"
        D
        F
    end

    subgraph "Local Processing"
        E
        H
        I
    end
```
- Model Selection: Uses Perplexity's LLaMA-3.1-Sonar-Small-128k model for optimal performance
- Context Integration: Combines AI responses with local document context
- Source Attribution: AI responses include relevant source URLs
- Streaming Responses: Real-time response streaming for better UX
- Fallback Handling: Graceful degradation when API is unavailable
Example OpenRouter configuration:
```typescript
const openRouterConfig = {
  model: "perplexity/llama-3.1-sonar-small-128k-online",
  temperature: 0.7,
  maxTokens: 500,
  systemPrompt:
    "You are a search engine api that provides answers to questions with as many links to sources as possible."
}
```
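For context, a request through OpenRouter's OpenAI-compatible chat completions endpoint might look like the sketch below. This is a minimal non-streaming example assuming `OPENROUTER_API_KEY` is set; the app's actual client streams responses and injects the system prompt above:

```typescript
// Minimal OpenRouter call; error handling elided for brevity.
async function askOpenRouter(question: string): Promise<string> {
  const res = await fetch('https://openrouter.ai/api/v1/chat/completions', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${process.env.OPENROUTER_API_KEY}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      model: 'perplexity/llama-3.1-sonar-small-128k-online',
      temperature: 0.7,
      max_tokens: 500,
      messages: [{ role: 'user', content: question }]
    })
  })
  const data = await res.json()
  return data.choices[0].message.content
}
```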
alBERT-launcher puts your privacy first by supporting local AI processing through Ollama integration. Switch between cloud and local AI with a single click:
```mermaid
graph TD
    A[Your Query] --> B{Privacy Mode}
    B -->|Private| C[Local AI]
    B -->|Public| D[Cloud AI]
    C --> E[Private Results]
    D --> F[Cloud Results]
```
- 🔒 Privacy Mode: Switch between local and cloud AI instantly
- 💻 Local Processing: Keep your data on your machine
- 🌐 Flexible Choice: Use cloud AI when you need more power
- ⚡ Fast Response: No internet latency in local mode
- 💰 Cost-Free: No API costs when using local models
- 🔌 Offline Support: Work without internet connection
- Install Ollama from ollama.ai
- Enable "Private Mode" in alBERT settings
- Start searching with complete privacy!
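Under the hood, Private Mode can talk to Ollama's local HTTP API. A minimal sketch against the default endpoint; the model name is illustrative and alBERT's actual integration may differ:

```typescript
// Query a locally running Ollama instance; nothing leaves your machine.
async function askLocalModel(prompt: string): Promise<string> {
  const res = await fetch('http://localhost:11434/api/generate', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ model: 'mistral', prompt, stream: false })
  })
  const data = await res.json()
  return data.response
}
```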
Choose from various powerful local models:
- Llama 2
- CodeLlama
- Mistral
- And more from Ollama's model library
- Complete Privacy: Your queries never leave your computer
- No API Costs: Use AI features without subscription fees
- Always Available: Work offline without interruption
- Full Control: Choose and customize your AI models
alBERT-launcher implements a sophisticated file system monitoring and indexing system:
```mermaid
graph TD
    A[File System Events] --> B[Event Watcher]
    B --> C{Event Type}
    C -->|Create| D[Index New File]
    C -->|Modify| E[Update Index]
    C -->|Delete| F[Remove from Index]
    D --> G[File Processor]
    E --> G
    G --> H[Content Extractor]
    H --> I[Text Chunker]
    I --> J[Vector Database]

    subgraph "File Processing Pipeline"
        G
        H
        I
    end

    subgraph "Search Index"
        J
    end
```
- **Automatic Monitoring** (see the watcher sketch after this list)
  - Real-time file change detection
  - Efficient delta updates
  - Handles file moves and renames
  - Supports symbolic links
- **Content Processing**

  ```typescript
  // Example content processing pipeline
  async function processFile(filePath: string): Promise<void> {
    const content = await readContent(filePath) // extract raw text
    const chunks = splitIntoChunks(content) // split into semantic chunks
    const vectors = await vectorizeChunks(chunks) // embed each chunk
    await updateSearchIndex(filePath, vectors) // persist to the index
  }
  ```
- **Supported File Types**
  - Text files (.txt, .md, .json)
  - Documents (.pdf, .doc, .docx)
  - Code files (.js, .py, .ts, etc.)
  - Configuration files (.yaml, .toml)
  - And more...
- **Smart Indexing**
  - Incremental updates
  - Content deduplication
  - Metadata extraction
  - File type detection
- **Search Capabilities**
  - Full-text search
  - Fuzzy matching
  - Regular expressions
  - Metadata filters
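As referenced under Automatic Monitoring above, here is a hedged sketch of what the watcher wiring could look like with the popular chokidar library; the actual implementation may use different options or a different watcher entirely:

```typescript
import os from 'os'
import path from 'path'
import chokidar from 'chokidar'

const watchDir = path.join(os.homedir(), 'alBERT')

const watcher = chokidar.watch(watchDir, {
  ignoreInitial: true, // skip events for files that are already indexed
  followSymlinks: true, // the watcher also tracks symbolic links
  awaitWriteFinish: {
    stabilityThreshold: 500, // debounce rapid writes to large files
    pollInterval: 100
  }
})

watcher
  .on('add', (file) => console.log('index new file:', file))
  .on('change', (file) => console.log('update index for:', file))
  .on('unlink', (file) => console.log('remove from index:', file))
```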
The `~/alBERT` directory structure:

```
~/alBERT/
├── documents/       # General documents
├── notes/           # Quick notes and thoughts
├── code/            # Code snippets and examples
├── configuration/   # Config files and settings
└── .alBERT/         # Internal index and metadata
    ├── index/       # Search indices
    ├── vectors/     # Document vectors
    ├── cache/       # Query cache
    └── metadata/    # File metadata
```
- **Indexing**
  - Batch processing for multiple files
  - Parallel processing when possible
  - Priority queue for important files
  - Delayed processing for large files
- **Search**

  ```mermaid
  graph LR
      A[Query] --> B[Vector]
      B --> C{Search Type}
      C -->|ANN| D[Approximate Search]
      C -->|KNN| E[Exact Search]
      D --> F[Results]
      E --> F
  ```
- **File Monitoring** (a coalescing sketch follows this list)
  - Debounced file system events
  - Coalescence of multiple events
  - Selective monitoring based on file size
  - Resource-aware processing
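A toy illustration of the debouncing and coalescing mentioned above: rapid events for the same path collapse into a single pending action that is flushed after a quiet period. Illustrative only:

```typescript
type FsEvent = 'add' | 'change' | 'unlink'

const pending = new Map<string, FsEvent>()
let timer: ReturnType<typeof setTimeout> | undefined

function enqueue(file: string, event: FsEvent): void {
  pending.set(file, event) // later events overwrite earlier ones per file
  clearTimeout(timer)
  timer = setTimeout(flush, 250) // 250 ms debounce window
}

function flush(): void {
  for (const [file, event] of pending) {
    console.log('process', event, file) // hand off to the indexing pipeline
  }
  pending.clear()
}
```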
alBERT-launcher uses Weaviate Embedded as its vector database engine, providing efficient storage and retrieval of document embeddings:
```mermaid
graph TD
    A[Document] --> B[Content Extractor]
    B --> C[Text Chunks]
    C --> D[Embedding Model]
    D --> E[Vector Embeddings]
    E --> F[Weaviate DB]
    G[Search Query] --> H[Query Vectorizer]
    H --> I[Query Vector]
    I --> J[Vector Search]
    F --> J
    J --> K[Ranked Results]

    subgraph "Embedding Pipeline"
        B
        C
        D
        E
    end

    subgraph "Vector Store"
        F
    end

    subgraph "Search Pipeline"
        H
        I
        J
    end
```
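To make the pipeline concrete, here is a hedged sketch using the standard `weaviate-ts-client` API against the `File` class defined in the schema below; the embedded setup in alBERT may wire the client differently:

```typescript
import weaviate from 'weaviate-ts-client'

const client = weaviate.client({ scheme: 'http', host: 'localhost:8080' })

// Store a chunk with a custom vector (the schema uses vectorizer: 'none').
async function storeChunk(filePath: string, content: string, vector: number[]) {
  await client.data
    .creator()
    .withClassName('File')
    .withProperties({ path: filePath, content })
    .withVector(vector)
    .do()
}

// Retrieve the chunks closest to a query vector.
async function nearestChunks(vector: number[]) {
  return client.graphql
    .get()
    .withClassName('File')
    .withFields('path content')
    .withNearVector({ vector })
    .withLimit(5)
    .do()
}
```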
- **Document Processing**

  ```typescript
  interface WeaviateDocument {
    content: string
    path: string
    lastModified: number
    extension: string
  }
  ```
- **Schema Definition**

  ```typescript
  const schema = {
    class: 'File',
    properties: [
      { name: 'path', dataType: ['string'] },
      { name: 'content', dataType: ['text'] },
      { name: 'filename', dataType: ['string'] },
      { name: 'extension', dataType: ['string'] },
      { name: 'lastModified', dataType: ['number'] },
      { name: 'hash', dataType: ['string'] }
    ],
    vectorizer: 'none' // Custom vectorization
  }
  ```
- **Worker-based Processing**
  - Dedicated worker threads for vectorization
  - Parallel processing of document batches
  - Automatic resource management
  - Error handling and recovery
- **Batch Processing**

  ```typescript
  // Example batch processing
  export const embed = async (
    text: string | string[],
    batch_size: number = 15
  ): Promise<number[] | number[][]> => {
    // Process in batches for optimal performance
  }
  ```
- **Reranking System**
  - Cross-encoder for accurate result ranking
  - Contextual similarity scoring
  - Optional document return with scores
- **Efficient Storage**
  - Incremental updates
  - Document hashing for change detection
  - Optimized vector storage
  - Automatic garbage collection
- **Fast Retrieval**

  ```mermaid
  graph LR
      A[Query] --> B[Vector]
      B --> C{Search Type}
      C -->|ANN| D[Approximate Search]
      C -->|KNN| E[Exact Search]
      D --> F[Results]
      E --> F
  ```
- **Optimization Techniques**
  - Approximate Nearest Neighbor (ANN) search
  - Vector quantization
  - Dimension reduction
  - Caching strategies
- **Hybrid Search** (a scoring sketch follows this list)
  - Combined keyword and semantic search
  - Weighted scoring system
  - Metadata filtering
  - Context-aware ranking
- **Vector Operations**

  ```typescript
  interface RankResult {
    corpus_id: number
    score: number
    text?: string
  }
  ```
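Building on the `RankResult` interface above, a toy sketch of the hybrid scoring and top-k selection referenced earlier; the real weighting and cross-encoder reranking live inside the search engine:

```typescript
// Blend a keyword score with a semantic score; alpha weights the two signals.
function hybridScore(keyword: number, semantic: number, alpha = 0.5): number {
  return alpha * keyword + (1 - alpha) * semantic
}

// Keep the k best results after reranking.
function topK(results: RankResult[], k: number): RankResult[] {
  return [...results].sort((a, b) => b.score - a.score).slice(0, k)
}
```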
This repository is set up for collaborative development:
- Environment variables are documented in `.env.example`. Provide your own API keys during local development; none are committed to source control.
- The renderer and main process read configuration through the validated helpers in `src/main/config.ts`, ensuring secrets never live in the UI bundle.
- Prefer `npm` for dependency management (`package-lock.json` is authoritative). After cloning, run:

  ```bash
  npm install
  npm run dev
  npm run lint
  npm run typecheck
  ```

- Secret scanning is encouraged for contributors (e.g., `git secrets --scan`) before submitting pull requests.
- UI components follow the frosted-glass design tokens declared in `src/renderer/src/assets/index.css`—please align new work with these utilities for consistency.
- **Quality Assurance**
  - Automated consistency checks
  - Vector space analysis
  - Performance monitoring
  - Error detection
```mermaid
sequenceDiagram
    participant App as Application
    participant VDB as Vector DB
    participant Worker as Worker Thread
    participant Storage as File Storage

    App->>VDB: Index Request
    VDB->>Worker: Vectorize Content
    Worker->>Worker: Process Batch
    Worker-->>VDB: Return Vectors
    VDB->>Storage: Store Vectors
    Storage-->>VDB: Confirm Storage
    VDB-->>App: Index Complete
    App->>VDB: Search Request
    VDB->>Worker: Vectorize Query
    Worker-->>VDB: Query Vector
    VDB->>Storage: Vector Search
    Storage-->>VDB: Search Results
    VDB-->>App: Ranked Results
```
- Node.js (v18 or higher recommended)
- npm 9+ (ships with Node.js and matches the repository lockfile)
- Brave Search API key (optional)
- OpenRouter API key (optional)
```bash
# Install dependencies
npm install

# Start the development server
npm run dev
```

```bash
# For macOS
npm run build:mac

# For Windows
npm run build:win

# For Linux
npm run build:linux
```
Copy `.env.example` to `.env` in the project root and populate any API keys you intend to use:

```bash
cp .env.example .env
```
The application validates its environment variables at startup and will surface an error if required values are malformed. Both `BRAVE_API_KEY` and `OPENROUTER_API_KEY` are optional—features that depend on them will simply be skipped when the keys are not provided.
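The startup validation could be expressed with a schema library such as zod. A minimal sketch; the real helpers live in `src/main/config.ts` and may differ in shape:

```typescript
import { z } from 'zod'

const envSchema = z.object({
  BRAVE_API_KEY: z.string().min(1).optional(), // web search skipped if absent
  OPENROUTER_API_KEY: z.string().min(1).optional() // cloud answers skipped if absent
})

// Throws with a descriptive error if a provided value is malformed.
export const config = envSchema.parse(process.env)
```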
```
alBERT-launcher/
├── src/
│   ├── main/                  # Electron main process
│   │   ├── api.ts             # tRPC API endpoints
│   │   ├── db.ts              # Search database management
│   │   ├── embeddings.ts      # Text embedding functionality
│   │   └── utils/             # Utility functions
│   ├── renderer/              # React frontend
│   │   ├── components/        # UI components
│   │   ├── lib/               # Utility functions
│   │   └── App.tsx            # Main application component
│   └── preload/               # Electron preload scripts
├── public/                    # Static assets
└── electron-builder.json5     # Build configuration
```
The search API supports various query types:
- Basic text search
- Semantic search
- Natural language questions
- File metadata queries
Example queries:
"find documents about react hooks"
"what are the key points from my meeting notes?"
"show me python files modified last week"
alBERT automatically monitors the `~/alBERT` folder for:
- New files
- File modifications
- File deletions
- File moves
Changes are automatically indexed and available for search immediately.
We welcome contributions! Please see our Contributing Guide for details.
This project is licensed under the MIT License - see the LICENSE file for details.
If you encounter any issues or have questions, please file an issue on our GitHub Issues page.