GNO

Your Local Second Brain: Index, search, and synthesize your entire digital life.

npm MIT License Website Twitter Discord

ClawdHub: GNO skills bundled for Clawdbot — clawdhub.com/gmickel/gno

GNO is a local knowledge engine that turns your documents into a searchable, connected knowledge graph. Index notes, code, PDFs, and Office docs. Get hybrid search, AI answers with citations, and wiki-style note linking—all 100% offline.



What's New in v0.13

  • Knowledge Graph: Interactive force-directed visualization of document connections
  • Graph with Similarity: See semantic similarity as golden edges (not just wiki/markdown links)
  • CLI: gno graph command with collection filtering and similarity options
  • Web UI: /graph page with zoom, pan, collection filter, similarity toggle
  • MCP: gno_graph tool for AI agents to explore document relationships
  • REST API: /api/graph endpoint with full query parameters

v0.12

  • Note Linking: Wiki-style [[links]], backlinks, and AI-powered related notes
  • Tag System: Filter searches by frontmatter tags with --tags-any/--tags-all
  • Web UI: Outgoing links panel, backlinks panel, related notes sidebar
  • CLI: gno links, gno backlinks, gno similar commands
  • MCP: gno_links, gno_backlinks, gno_similar tools

Quick Start

gno init ~/notes --name notes    # Point at your docs
gno index                        # Build search index
gno query "auth best practices"  # Hybrid search
gno ask "summarize the API" --answer  # AI answer with citations

GNO CLI


Installation

Install GNO

Requires Bun >= 1.0.0.

bun install -g @gmickel/gno

macOS: Vector search requires Homebrew SQLite:

brew install sqlite3

Verify everything works:

gno doctor

Connect to AI Agents

MCP Server (Claude Desktop, Cursor, Zed, etc.)

One command to add GNO to your AI assistant:

gno mcp install                      # Claude Desktop (default)
gno mcp install --target cursor      # Cursor
gno mcp install --target claude-code # Claude Code CLI
gno mcp install --target zed         # Zed
gno mcp install --target windsurf    # Windsurf
gno mcp install --target codex       # OpenAI Codex CLI
gno mcp install --target opencode    # OpenCode
gno mcp install --target amp         # Amp
gno mcp install --target lmstudio    # LM Studio
gno mcp install --target librechat   # LibreChat

Check status: gno mcp status

Skills (Claude Code, Codex, OpenCode)

Skills integrate via CLI with no MCP overhead:

gno skill install --scope user       # User-wide
gno skill install --target codex     # Codex
gno skill install --target all       # Both Claude + Codex

Full setup guide: MCP Integration · CLI Reference


Search Modes

| Command | Mode | Best For |
|---|---|---|
| gno search | Document-level BM25 | Exact phrases, code identifiers |
| gno vsearch | Contextual vector | Natural language, concepts |
| gno query | Hybrid | Best accuracy (BM25 + vector + reranking) |
| gno ask --answer | RAG | Direct answers with citations |

BM25 indexes full documents (not chunks) with Snowball stemming, so "running" matches "run". Vector search embeds chunks together with their document titles for context awareness.

gno search "handleAuth"              # Find exact matches
gno vsearch "error handling patterns" # Semantic similarity
gno query "database optimization"    # Full pipeline
gno ask "what did we decide" --answer # AI synthesis

Output formats: --json, --files, --csv, --md, --xml


Agent Integration

Give your local LLM agents a long-term memory. GNO integrates as a Claude Code skill or MCP server, allowing agents to search, read, and cite your local files.

Skills

Skills add GNO search to Claude Code/Codex without MCP protocol overhead:

gno skill install --scope user

GNO Skill in Claude Code

Then ask your agent: "Search my notes for the auth discussion"

Skill setup guide →

MCP Server

Connect GNO to Claude Desktop, Cursor, Raycast, and more:

GNO MCP

GNO exposes tools via Model Context Protocol:

| Tool | Description |
|---|---|
| gno_search | BM25 keyword search |
| gno_vsearch | Vector semantic search |
| gno_query | Hybrid search (recommended) |
| gno_get | Retrieve a document by ID |
| gno_multi_get | Batch document retrieval |
| gno_links | Get outgoing links from a document |
| gno_backlinks | Get documents linking to a document |
| gno_similar | Find semantically similar documents |
| gno_graph | Get the knowledge graph (nodes and edges) |
| gno_status | Index health check |

Design: MCP tools are retrieval-only. Your AI assistant (Claude, GPT-4) synthesizes answers from retrieved context. Best retrieval (GNO) + best reasoning (your LLM).

MCP setup guide →


Web UI

Visual dashboard for search, browsing, editing, and AI answers, right in your browser.

gno serve                    # Start on port 3000
gno serve --port 8080        # Custom port

GNO Web UI

Open http://localhost:3000 to:

  • Search: BM25, vector, or hybrid modes with visual results
  • Browse: Paginated document list, filter by collection
  • Edit: Create, edit, and delete documents with live preview
  • Ask: AI-powered Q&A with citations
  • Manage Collections: Add, remove, and re-index collections
  • Switch presets: Change models live without restart

Search

GNO Search

Three retrieval modes: BM25 (keyword), Vector (semantic), or Hybrid (best of both). Adjust search depth for speed vs thoroughness.

Document Editing

GNO Document Editor

Full-featured markdown editor with:

| Feature | Description |
|---|---|
| Split View | Side-by-side editor and live preview |
| Auto-save | 2-second debounced saves |
| Syntax Highlighting | CodeMirror 6 with markdown support |
| Keyboard Shortcuts | ⌘S save, ⌘B bold, ⌘I italic, ⌘K link |
| Quick Capture | ⌘N creates a new note from anywhere |
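The 2-second debounced save works by collapsing a burst of edits into a single write. A minimal sketch of the pattern (illustrative only, not GNO's actual implementation; the 50 ms delay stands in for the UI's 2000 ms):

```typescript
// Debounce sketch: repeated calls within the delay window collapse
// into one trailing invocation. Not GNO's actual code.
function debounce<T extends unknown[]>(fn: (...args: T) => void, ms: number) {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: T) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), ms);
  };
}

let saves = 0;
const save = debounce((_content: string) => { saves++; }, 50);

// Three rapid keystrokes trigger only one save once typing pauses.
save("a"); save("ab"); save("abc");
setTimeout(() => console.log("saves:", saves), 120);
```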

Document Viewer

GNO Document Viewer

View documents with full context: outgoing links, backlinks, and AI-powered related notes sidebar.

Knowledge Graph

GNO Knowledge Graph

Interactive visualization of document connections. Wiki links, markdown links, and optional similarity edges rendered as a navigable constellation.
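The wiki-link edges rendered here come from [[links]] in your notes. A hypothetical sketch of extracting them (the regex and function are illustrative, not GNO's parser):

```typescript
// Hypothetical sketch: pull wiki-style [[Target]] and [[Target|alias]]
// link targets out of a markdown note, the raw material for graph edges.
function extractWikiLinks(markdown: string): string[] {
  const links: string[] = [];
  for (const match of markdown.matchAll(/\[\[([^\]|]+)(?:\|[^\]]*)?\]\]/g)) {
    links.push(match[1].trim());
  }
  return links;
}

const note = "See [[Auth Notes]] and [[projects/api|the API doc]].";
console.log(extractWikiLinks(note)); // ["Auth Notes", "projects/api"]
```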

Collections Management

GNO Collections

  • Add collections with folder path input
  • View document count, chunk count, embedding status
  • Re-index individual collections
  • Remove collections (documents preserved)

AI Answers

GNO AI Answers

Ask questions in natural language. GNO searches your documents and synthesizes answers with inline citations linking to sources.

Everything runs locally. No cloud, no accounts, no data leaving your machine.

Detailed docs: Web UI Guide


REST API

Programmatic access to all GNO features via HTTP.

# Hybrid search
curl -X POST http://localhost:3000/api/query \
  -H "Content-Type: application/json" \
  -d '{"query": "authentication patterns", "limit": 10}'

# AI answer
curl -X POST http://localhost:3000/api/ask \
  -H "Content-Type: application/json" \
  -d '{"query": "What is our deployment process?"}'

# Index status
curl http://localhost:3000/api/status
Endpoint Method Description
/api/query POST Hybrid search (recommended)
/api/search POST BM25 keyword search
/api/ask POST AI-powered Q&A
/api/docs GET List documents
/api/docs POST Create document
/api/docs/:id PUT Update document content
/api/docs/:id/deactivate POST Remove from index
/api/doc GET Get document content
/api/collections POST Add collection
/api/collections/:name DELETE Remove collection
/api/sync POST Trigger re-index
/api/status GET Index statistics
/api/presets GET List model presets
/api/presets POST Switch preset
/api/models/pull POST Download models
/api/models/status GET Download progress
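From any language, the endpoints take plain JSON. A sketch of building the same request the curl example sends (the helper name is hypothetical; the endpoint and body shape come from the examples above):

```typescript
// Hypothetical helper mirroring the /api/query curl example.
// Only the URL and JSON body shape are from the docs; the rest is a sketch.
function buildQueryRequest(base: string, query: string, limit = 10) {
  return {
    url: `${base}/api/query`,
    init: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ query, limit }),
    },
  };
}

const req = buildQueryRequest("http://localhost:3000", "authentication patterns");
console.log(req.url, req.init.body);
// With `gno serve` running, pass it straight to fetch:
// const results = await fetch(req.url, req.init).then((r) => r.json());
```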

No authentication. No rate limits. Build custom tools, automate workflows, integrate with any language.

Full reference: API Documentation


How It Works

graph TD
    A[User Query] --> B(Query Expansion)
    B --> C{Lexical Variants}
    B --> D{Semantic Variants}
    B --> E{HyDE Passage}

    C --> G(BM25 Search)
    D --> H(Vector Search)
    E --> H
    A --> G
    A --> H

    G --> I(Ranked Results)
    H --> J(Ranked Results)
    I --> K{RRF Fusion}
    J --> K

    K --> L(Top 20 Candidates)
    L --> M(Cross-Encoder Rerank)
    M --> N[Final Results]
  1. Strong Signal Check: skip expansion when BM25 already has a confident match (saves 1-3s)
  2. Query Expansion: an LLM generates lexical variants, semantic rephrasings, and a HyDE passage
  3. Parallel Retrieval: document-level BM25 and chunk-level vector search run on all variants
  4. Fusion: RRF with 2× weight for the original query and a tiered bonus for top ranks
  5. Reranking: Qwen3-Reranker scores the best chunk per document (4K), blended with the fusion score
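The fusion step can be sketched as follows. This is a minimal Reciprocal Rank Fusion, simplified relative to GNO's pipeline: k = 60 is the common RRF default, not necessarily GNO's value, and the tiered top-rank bonus is omitted.

```typescript
// Minimal RRF sketch: each ranked list contributes weight / (k + rank)
// per document; scores are summed across lists, then sorted descending.
function rrfFuse(
  lists: { ids: string[]; weight: number }[],
  k = 60,
): [string, number][] {
  const scores = new Map<string, number>();
  for (const { ids, weight } of lists) {
    ids.forEach((id, rank) => {
      scores.set(id, (scores.get(id) ?? 0) + weight / (k + rank + 1));
    });
  }
  return [...scores.entries()].sort((a, b) => b[1] - a[1]);
}

// Original-query results get 2x weight, expansion variants 1x.
const fused = rrfFuse([
  { ids: ["doc1", "doc2", "doc3"], weight: 2 }, // BM25, original query
  { ids: ["doc2", "doc4"], weight: 1 },         // vector, HyDE variant
]);
console.log(fused[0][0]); // doc2 ranks first: it appears in both lists
```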

Deep dive: How Search Works


Features

| Feature | Description |
|---|---|
| Hybrid Search | BM25 + vector + RRF fusion + cross-encoder reranking |
| Document Editor | Create, edit, delete docs with live markdown preview |
| Web UI | Visual dashboard for search, browse, edit, and AI Q&A |
| REST API | HTTP API for custom tools and integrations |
| Multi-Format | Markdown, PDF, DOCX, XLSX, PPTX, plain text |
| Local LLM | AI answers via llama.cpp, no API keys |
| Privacy First | 100% offline, zero telemetry, your data stays yours |
| MCP Server | Works with Claude Desktop, Cursor, Zed, + 8 more |
| Collections | Organize sources with patterns, excludes, contexts |
| Tag Filtering | Frontmatter tags with hierarchical paths, filter via --tags-any/--tags-all |
| Note Linking | Wiki links, backlinks, related notes, cross-collection navigation |
| Multilingual | 30+ languages, auto-detection, cross-lingual search |
| Incremental | SHA-256 tracking, only changed files re-indexed |
| Keyboard First | ⌘N capture, ⌘K search, ⌘/ shortcuts, ⌘S save |
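Incremental indexing via SHA-256 means a file is re-processed only when its content hash changes. A sketch of the idea (the in-memory map is illustrative; GNO stores digests in its index, not like this):

```typescript
import { createHash } from "node:crypto";

// Content-hash change detection sketch: re-index a file only when its
// SHA-256 digest differs from the one recorded last run.
const sha256 = (content: string) =>
  createHash("sha256").update(content).digest("hex");

const lastIndexed = new Map<string, string>(); // path -> digest (illustrative)

function needsReindex(path: string, content: string): boolean {
  const digest = sha256(content);
  if (lastIndexed.get(path) === digest) return false; // unchanged: skip
  lastIndexed.set(path, digest);
  return true;
}

console.log(needsReindex("notes/a.md", "hello"));  // true  (first sight)
console.log(needsReindex("notes/a.md", "hello"));  // false (unchanged)
console.log(needsReindex("notes/a.md", "hello!")); // true  (content changed)
```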

Local Models

Models auto-download on first use to ~/.cache/gno/models/.

| Model | Purpose | Size |
|---|---|---|
| bge-m3 | Embeddings (1024-dim, multilingual) | ~500MB |
| Qwen3-Reranker-0.6B | Cross-encoder reranking (32K context) | ~700MB |
| Qwen/SmolLM | Query expansion + AI answers | ~600MB-1.2GB |

Model Presets

| Preset | Disk | Best For |
|---|---|---|
| slim | ~1GB | Fast, good quality (default) |
| balanced | ~2GB | Slightly larger model |
| quality | ~2.5GB | Best answers |

gno models use slim
gno models pull --all  # Optional: pre-download models (auto-downloads on first use)

Configuration: Model Setup


Architecture

┌─────────────────────────────────────────────────┐
│            GNO CLI / MCP / Web UI / API         │
├─────────────────────────────────────────────────┤
│  Ports: Converter, Store, Embedding, Rerank    │
├─────────────────────────────────────────────────┤
│  Adapters: SQLite, FTS5, sqlite-vec, llama-cpp │
├─────────────────────────────────────────────────┤
│  Core: Identity, Mirrors, Chunking, Retrieval  │
└─────────────────────────────────────────────────┘

Details: Architecture


Development

git clone https://github.com/gmickel/gno.git && cd gno
bun install
bun test
bun run lint && bun run typecheck

Contributing: CONTRIBUTING.md


License

MIT


made with ❤️ by @gmickel
