NornicDB: The Graph Database That Learns
Neo4j-compatible • GPU-accelerated • Memory that evolves
Quick Start • Features • Docker • Docs
# Apple Silicon (M1/M2/M3) with bge-m3 embedding model + heimdall
docker pull timothyswt/nornicdb-arm64-metal-bge-heimdall:latest
docker run -d -p 7474:7474 -p 7687:7687 -v nornicdb-data:/data \
timothyswt/nornicdb-arm64-metal-bge-heimdall
# Apple Silicon (M1/M2/M3) with bge-m3 embedding model
docker pull timothyswt/nornicdb-arm64-metal-bge:latest
docker run -d -p 7474:7474 -p 7687:7687 -v nornicdb-data:/data \
timothyswt/nornicdb-arm64-metal-bge
# Apple Silicon (M1/M2/M3) BYOM
docker pull timothyswt/nornicdb-arm64-metal:latest
docker run -d -p 7474:7474 -p 7687:7687 -v nornicdb-data:/data \
timothyswt/nornicdb-arm64-metal
# Apple Silicon (M1/M2/M3) BYOM + no UI
docker pull timothyswt/nornicdb-arm64-metal-headless:latest
docker run -d -p 7474:7474 -p 7687:7687 -v nornicdb-data:/data \
timothyswt/nornicdb-arm64-metal-headless
# NVIDIA GPU (Windows/Linux) with bge-m3 embedding model + heimdall
docker pull timothyswt/nornicdb-amd64-cuda-bge-heimdall:latest
docker run --gpus all -d -p 7474:7474 -p 7687:7687 -v nornicdb-data:/data \
timothyswt/nornicdb-amd64-cuda-bge-heimdall
# NVIDIA GPU (Windows/Linux) with bge-m3 embedding model
docker pull timothyswt/nornicdb-amd64-cuda-bge:latest
docker run --gpus all -d -p 7474:7474 -p 7687:7687 -v nornicdb-data:/data \
timothyswt/nornicdb-amd64-cuda-bge
# NVIDIA GPU (Windows/Linux) BYOM
docker pull timothyswt/nornicdb-amd64-cuda:latest
docker run --gpus all -d -p 7474:7474 -p 7687:7687 -v nornicdb-data:/data \
timothyswt/nornicdb-amd64-cuda
# CPU Only (Windows/Linux) BYOM
docker pull timothyswt/nornicdb-amd64-cpu:latest
docker run -d -p 7474:7474 -p 7687:7687 -v nornicdb-data:/data \
timothyswt/nornicdb-amd64-cpu
# CPU Only (Windows/Linux) BYOM + no UI
docker pull timothyswt/nornicdb-amd64-cpu-headless:latest
docker run -d -p 7474:7474 -p 7687:7687 -v nornicdb-data:/data \
timothyswt/nornicdb-amd64-cpu-headless
# Vulkan GPU (Linux, AMD/NVIDIA/Intel) BYOM
docker pull timothyswt/nornicdb-amd64-vulkan:latest
docker run --gpus all -d -p 7474:7474 -p 7687:7687 -v nornicdb-data:/data \
timothyswt/nornicdb-amd64-vulkan
# Vulkan GPU (Linux, AMD/NVIDIA/Intel) with bge-m3 embedding model
docker pull timothyswt/nornicdb-amd64-vulkan-bge:latest
docker run --gpus all -d -p 7474:7474 -p 7687:7687 -v nornicdb-data:/data \
timothyswt/nornicdb-amd64-vulkan-bge
# Vulkan GPU (Linux, AMD/NVIDIA/Intel) BYOM + no UI
docker pull timothyswt/nornicdb-amd64-vulkan-headless:latest
docker run --gpus all -d -p 7474:7474 -p 7687:7687 -v nornicdb-data:/data \
timothyswt/nornicdb-amd64-vulkan-headless

Open http://localhost:7474 — Admin UI with AI assistant ready to query your data.
Below are copy-pastable commands and prerequisites to build NornicDB for the supported targets (Docker images, local binaries, cross-compiles, and Raspberry Pi). These instructions reflect the Makefile targets in this repository.

Prerequisites:
- Go 1.23+ (for builds and cross-compilation)
- Docker (for image builds)
- curl (for model downloads)
- GNU make
- For localllm / BGE (local embeddings): a working `llama.cpp` build — see `scripts/build-llama.sh`
- For CUDA builds on Linux/Windows: NVIDIA drivers + CUDA Toolkit (12.x recommended)
- For Vulkan builds on Linux: Vulkan runtime & drivers for your GPU (AMD/NVIDIA/Intel)
- On macOS (Apple Silicon): Docker; `--platform linux/arm64` is used for arm64 images (Metal GPU acceleration is implemented in the image)
- Optional: `gh` CLI if you want to create GitHub releases
Model files:
- BGE: `models/bge-m3.gguf` (Makefile target `make download-bge` will download it)
- Qwen: `models/qwen2.5-0.5b-instruct.gguf` (Makefile target `make download-qwen` will download it)
Build options (Makefile variables):
- Force no cache: `NO_CACHE=1`
- Set registry (default `timothyswt`): `REGISTRY=yourdockerid`
- Set tag version (default `latest`): `VERSION=1.0.6`

For example: `make deploy-all REGISTRY=yourdockerid VERSION=1.0.6 NO_CACHE=1`
NornicDB is a high-performance graph database designed for AI agents and knowledge systems. It speaks Neo4j's language (Bolt protocol + Cypher) so you can switch with zero code changes, while adding intelligent features that traditional databases lack.
NornicDB automatically discovers and manages relationships in your data, weaving connections that let meaning emerge from your knowledge graph.
# Ready to go - includes embedding model
docker run -d --name nornicdb \
-p 7474:7474 -p 7687:7687 \
-v nornicdb-data:/data \
timothyswt/nornicdb-arm64-metal-bge:latest # Apple Silicon
# timothyswt/nornicdb-amd64-cuda-bge:latest # NVIDIA GPU

Or build from source:

git clone https://github.com/orneryd/NornicDB.git
cd NornicDB
go build -o nornicdb ./cmd/nornicdb
./nornicdb serve

Use any Neo4j driver — Python, JavaScript, Go, Java, .NET:
from neo4j import GraphDatabase
driver = GraphDatabase.driver("bolt://localhost:7687")
with driver.session() as session:
    session.run("CREATE (n:Memory {content: 'Hello NornicDB'})")

Drop-in replacement for Neo4j. Your existing code works unchanged.
- Bolt Protocol — Use official Neo4j drivers
- Cypher Queries — Full query language support
- Schema Management — Constraints, indexes, vector indexes (see the sketch below)
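Because the schema surface mirrors Neo4j's, standard DDL works as-is. A brief sketch (the `Memory` label, index, and property names are illustrative; the vector-index call follows the Neo4j `db.index.vector.createNodeIndex` signature that the vector-search examples below also reference):

```cypher
// Unique constraint - standard Cypher DDL
CREATE CONSTRAINT memory_id IF NOT EXISTS
FOR (m:Memory) REQUIRE m.id IS UNIQUE;

// Vector index over node embeddings
// (1024 dimensions matches the bundled BGE-M3 model)
CALL db.index.vector.createNodeIndex('embeddings', 'Memory', 'embedding', 1024, 'cosine');
```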
Memory that behaves like human cognition.
| Memory Tier | Half-Life | Use Case |
|---|---|---|
| Episodic | 7 days | Chat context, sessions |
| Semantic | 69 days | Facts, decisions |
| Procedural | 693 days | Skills, patterns |
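These half-lives suggest standard exponential decay. As a hedged sketch (the exact formula is not specified here; `score = 0.5 ^ (age / halfLife)` is an assumption), a 30-day-old Semantic memory would still score roughly 0.74:

```cypher
// Assumed half-life decay curve: decayScore = 0.5 ^ (ageDays / halfLifeDays)
// Example: Semantic tier (69-day half-life) at 30 days of age
RETURN 0.5 ^ (30.0 / 69.0) AS approxDecayScore  // ≈ 0.74
```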
// Find memories that are still strong
MATCH (m:Memory) WHERE m.decayScore > 0.5
RETURN m.title ORDER BY m.decayScore DESC

NornicDB weaves connections automatically (see the sketch after this list):
- Embedding Similarity — Related concepts link together
- Co-access Patterns — Frequently queried pairs connect
- Temporal Proximity — Same-session nodes associate
- Transitive Inference — A→B + B→C suggests A→C
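A rough Cypher illustration of the first and last rules, reusing the bundled `apoc.ml.cosineSimilarity` function and the default `similarity_threshold: 0.82` from the configuration below; the `RELATES_TO` relationship type is hypothetical, since the engine manages its own link types internally:

```cypher
// Embedding similarity: candidate pairs above the auto-link threshold
// (illustrative only - the engine does this incrementally, not via a full scan)
MATCH (a:Memory), (b:Memory) WHERE id(a) < id(b)
WITH a, b, apoc.ml.cosineSimilarity(a.embedding, b.embedding) AS sim
WHERE sim > 0.82
RETURN a.title, b.title, sim;

// Transitive inference: A→B + B→C suggests a candidate A→C
MATCH (a)-[:RELATES_TO]->(b)-[:RELATES_TO]->(c)
WHERE a <> c AND NOT (a)-[:RELATES_TO]->(c)
RETURN a, c LIMIT 10;
```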
LDBC Social Network Benchmark (M3 Max, 64GB):
| Query Type | NornicDB | Neo4j | Speedup |
|---|---|---|---|
| Message content lookup | 6,389 ops/sec | 518 ops/sec | 12x |
| Recent messages (friends) | 2,769 ops/sec | 108 ops/sec | 25x |
| Avg friends per city | 4,713 ops/sec | 91 ops/sec | 52x |
| Tag co-occurrence | 2,076 ops/sec | 65 ops/sec | 32x |
Northwind Benchmark (M3 Max vs Neo4j, same hardware):
| Operation | NornicDB | Neo4j | Speedup |
|---|---|---|---|
| Index lookup | 7,623 ops/sec | 2,143 ops/sec | 3.6x |
| Count nodes | 5,253 ops/sec | 798 ops/sec | 6.6x |
| Write: node | 5,578 ops/sec | 1,690 ops/sec | 3.3x |
| Write: edge | 6,626 ops/sec | 1,611 ops/sec | 4.1x |
Cross-Platform (CUDA on Windows i9-9900KF + RTX 2080 Ti):
| Operation | Throughput |
|---|---|
| Orders by customer | 4,252 ops/sec |
| Products out of stock | 4,174 ops/sec |
| Find category | 4,071 ops/sec |
Additional advantages:
- Memory footprint: 100-500 MB vs 1-4 GB for Neo4j
- Cold start: <1s vs 10-30s for Neo4j
See full benchmark results for detailed comparisons.
Native semantic search with GPU acceleration. NornicDB automatically indexes all node embeddings - no manual index creation required.
📖 Deep Dive: See Vector Search Guide for internal index architecture, user-defined indexes, and the embedding lookup order.
Option 1: Vector Array (Neo4j Compatible)
Provide your own embeddings - works identically to Neo4j:
// Query with a pre-computed embedding vector
CALL db.index.vector.queryNodes(
'embeddings', // Index name (created via createNodeIndex)
10, // Number of results
[0.1, 0.2, 0.3, ...] // Your query vector
) YIELD node, score
RETURN node.content, score

Option 2: String Query (NornicDB Enhanced)
Let NornicDB handle embedding generation automatically:
// Query with natural language - NornicDB generates the embedding
CALL db.index.vector.queryNodes(
'embeddings', // Index name
10, // Number of results
'machine learning guide' // Natural language query (auto-embedded)
) YIELD node, score
RETURN node.content, score

💡 Note: String queries require an embedder to be configured. When enabled, NornicDB automatically generates embeddings server-side using the configured model (Ollama, OpenAI, or local GGUF).
Option 3: REST API (Hybrid Search)
Use the REST endpoint for combined vector + BM25 search:
curl -X POST http://localhost:7474/nornicdb/search \
-H "Content-Type: application/json" \
-d '{"query": "machine learning", "limit": 10}'Built-in AI that understands your database.
# Enable Heimdall
NORNICDB_HEIMDALL_ENABLED=true ./nornicdb serve

Natural Language Queries:
- "Get the database status"
- "Show me system metrics"
- "Run health check"
Plugin System:
- Create custom actions the AI can execute
- Lifecycle hooks (PrePrompt, PreExecute, PostExecute)
- Database event monitoring for autonomous actions
- Inline notifications with proper ordering
See Heimdall AI Assistant Guide and Plugin Development.
950+ built-in functions for text, math, collections, and more. Plus a plugin system for custom extensions.
// Text processing
RETURN apoc.text.camelCase('hello world') // "helloWorld"
RETURN apoc.text.slugify('Hello World!') // "hello-world"
// Machine learning
RETURN apoc.ml.sigmoid(0) // 0.5
RETURN apoc.ml.cosineSimilarity([1,0], [0,1]) // 0.0
// Collections
RETURN apoc.coll.sum([1, 2, 3, 4, 5])  // 15

Drop custom .so plugins into /app/plugins/ for automatic loading. See the APOC Plugin Guide.
All images available at Docker Hub.
| Image | Size | Description |
|---|---|---|
| `timothyswt/nornicdb-arm64-metal-bge-heimdall` | 1.1 GB | Full - Embeddings + AI Assistant |
| `timothyswt/nornicdb-arm64-metal-bge` | 586 MB | Standard - With BGE-M3 embeddings |
| `timothyswt/nornicdb-arm64-metal` | 148 MB | Minimal - Core database, BYOM |
| `timothyswt/nornicdb-arm64-metal-headless` | 148 MB | Headless - API only, no UI |
| Image | Size | Description |
|---|---|---|
| `timothyswt/nornicdb-amd64-cuda-bge` | ~4.5 GB | GPU + Embeddings - CUDA + BGE-M3 |
| `timothyswt/nornicdb-amd64-cuda` | ~3 GB | GPU - CUDA acceleration, BYOM |
| `timothyswt/nornicdb-amd64-cuda-headless` | ~2.9 GB | GPU Headless - API only |
| `timothyswt/nornicdb-amd64-cpu` | ~500 MB | CPU - No GPU required |
| `timothyswt/nornicdb-amd64-cpu-headless` | ~500 MB | CPU Headless - API only |
BYOM = Bring Your Own Model (mount at /app/models)
# With your own model
docker run -d -p 7474:7474 -p 7687:7687 \
-v /path/to/models:/app/models \
timothyswt/nornicdb-arm64-metal:latest
# Headless mode (API only, no web UI)
docker run -d -p 7474:7474 -p 7687:7687 \
-v nornicdb-data:/data \
timothyswt/nornicdb-arm64-metal-headless:latest

For embedded deployments, microservices, or API-only use cases, NornicDB supports headless mode, which disables the web UI for a smaller binary and a reduced attack surface.
Runtime flag:
nornicdb serve --headless

Environment variable:
NORNICDB_HEADLESS=true nornicdb serve

Build without UI (smaller binary):
# Native build
make build-headless
# Docker build
docker build --build-arg HEADLESS=true -f docker/Dockerfile.arm64-metal .

Example configuration:

# nornicdb.yaml
server:
  bolt_port: 7687
  http_port: 7474
  data_dir: ./data

embeddings:
  provider: local # or ollama, openai
  model: bge-m3
  dimensions: 1024

decay:
  enabled: true
  recalculate_interval: 1h

auto_links:
  enabled: true
  similarity_threshold: 0.82

# === Async Write Settings ===
# These control the async write-behind cache for better throughput
async_writes:
  enabled: true                # Enable async writes (default: true)
  flush_interval: 50ms         # How often to flush pending writes
  max_node_cache_size: 50000   # Max nodes to buffer before forcing flush
  max_edge_cache_size: 100000  # Max edges to buffer before forcing flush

Use cases:

- AI Agent Memory — Persistent, queryable memory for LLM agents
- Knowledge Graphs — Auto-organizing knowledge bases
- RAG Systems — Vector + graph retrieval in one database (see the sketch after this list)
- Session Context — Decaying conversation history
- Research Tools — Connect papers, notes, and insights
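A minimal sketch of that RAG pattern, combining the string-based vector query from Option 2 above with a one-hop graph expansion; the query text and the `RELATES_TO` relationship type are illustrative, not part of a documented schema:

```cypher
// Vector search seeds the retrieval; graph edges supply surrounding context
CALL db.index.vector.queryNodes('embeddings', 5, 'quarterly revenue drivers')
YIELD node, score
OPTIONAL MATCH (node)-[:RELATES_TO]-(ctx:Memory)
RETURN node.content, collect(ctx.content)[..3] AS context, score
ORDER BY score DESC
```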
| Guide | Description |
|---|---|
| Getting Started | Installation & quick start |
| API Reference | Cypher functions & procedures |
| User Guides | Complete examples & patterns |
| Performance | Benchmarks vs Neo4j |
| Neo4j Migration | Compatibility & feature parity |
| Architecture | System design & internals |
| Docker Guide | Build & deployment |
| Development | Contributing & development |
| Feature | Neo4j | NornicDB |
|---|---|---|
| Protocol | Bolt ✓ | Bolt ✓ |
| Query Language | Cypher ✓ | Cypher ✓ |
| Memory Decay | Manual | Automatic |
| Auto-Relationships | No | Built-in |
| Vector Search | Plugin | Native |
| GPU Acceleration | No | Metal/CUDA |
| Embedded Mode | No | Yes |
| License | GPL | MIT |
# Basic build
make build
# Headless (no UI)
make build-headless
# With local LLM support
make build-localllm

# Download models for Heimdall builds (automatic if missing)
make download-models # BGE-M3 + Qwen2.5-0.5B (~750MB)
make check-models # Verify models present
# ARM64 (Apple Silicon)
make build-arm64-metal # Base (BYOM)
make build-arm64-metal-bge # With BGE embeddings
make build-arm64-metal-bge-heimdall # With BGE + Heimdall AI
make build-arm64-metal-headless # Headless (no UI)
# AMD64 CUDA (NVIDIA GPU)
make build-amd64-cuda # Base (BYOM)
make build-amd64-cuda-bge # With BGE embeddings
make build-amd64-cuda-bge-heimdall # With BGE + Heimdall AI
make build-amd64-cuda-headless # Headless (no UI)
# AMD64 CPU-only
make build-amd64-cpu # Minimal
make build-amd64-cpu-headless # Minimal headless
# Build all variants for your architecture
make build-all
# Deploy to registry
make deploy-all # Build + push all variants

# Build for other platforms from macOS
make cross-linux-amd64 # Linux x86_64
make cross-linux-arm64 # Linux ARM64
make cross-rpi # Raspberry Pi 4/5
make cross-windows # Windows (CPU-only)
make cross-all # All platforms

- Neo4j Bolt protocol
- Cypher query engine (52 functions)
- Memory decay system
- GPU acceleration (Metal, CUDA)
- Vector & full-text search
- Auto-relationship engine
- HNSW vector index
- Metadata/Property Indexing
- SIMD Implementation
- Clustering support
MIT License — Originally part of the Mimir project, now maintained as a standalone repository.
See NOTICES.md for third-party license information, including bundled AI models (BGE-M3, Qwen2.5) and dependencies.
Weaving your data's destiny