NEW STANDALONE REPOSITORY
NornicDB V1.0 Release Highlights
The Graph Database That Learns – Neo4j-compatible, GPU-accelerated, with memory that evolves.
What is NornicDB?
NornicDB is a high-performance graph database designed for AI agents and knowledge systems. It speaks Neo4j's language (Bolt protocol + Cypher) so you can switch with zero code changes, while adding intelligent features that traditional databases lack.
Get Started in 30 Seconds

```shell
# Apple Silicon (M1/M2/M3) with bge-m3 embedding model + heimdall
docker pull timothyswt/nornicdb-arm64-metal-bge-heimdall:latest
docker run -d -p 7474:7474 -p 7687:7687 -v nornicdb-data:/data \
  timothyswt/nornicdb-arm64-metal-bge-heimdall

# Apple Silicon (M1/M2/M3) with bge-m3 embedding model
docker pull timothyswt/nornicdb-arm64-metal-bge:latest
docker run -d -p 7474:7474 -p 7687:7687 -v nornicdb-data:/data \
  timothyswt/nornicdb-arm64-metal-bge

# Apple Silicon (M1/M2/M3) BYOM
docker pull timothyswt/nornicdb-arm64-metal:latest
docker run -d -p 7474:7474 -p 7687:7687 -v nornicdb-data:/data \
  timothyswt/nornicdb-arm64-metal

# Apple Silicon (M1/M2/M3) BYOM + no UI
docker pull timothyswt/nornicdb-arm64-metal-headless:latest
docker run -d -p 7474:7474 -p 7687:7687 -v nornicdb-data:/data \
  timothyswt/nornicdb-arm64-metal-headless

# NVIDIA GPU (Windows/Linux) with bge-m3 embedding model + heimdall
docker pull timothyswt/nornicdb-amd64-cuda-bge-heimdall:latest
docker run -d --gpus all -p 7474:7474 -p 7687:7687 -v nornicdb-data:/data \
  timothyswt/nornicdb-amd64-cuda-bge-heimdall

# NVIDIA GPU (Windows/Linux) with bge-m3 embedding model
docker pull timothyswt/nornicdb-amd64-cuda-bge:latest
docker run -d --gpus all -p 7474:7474 -p 7687:7687 -v nornicdb-data:/data \
  timothyswt/nornicdb-amd64-cuda-bge

# NVIDIA GPU (Windows/Linux) BYOM
docker pull timothyswt/nornicdb-amd64-cuda:latest
docker run -d --gpus all -p 7474:7474 -p 7687:7687 -v nornicdb-data:/data \
  timothyswt/nornicdb-amd64-cuda

# CPU Only (Windows/Linux) BYOM
docker pull timothyswt/nornicdb-amd64-cpu:latest
docker run -d -p 7474:7474 -p 7687:7687 -v nornicdb-data:/data \
  timothyswt/nornicdb-amd64-cpu

# CPU Only (Windows/Linux) BYOM + no UI
docker pull timothyswt/nornicdb-amd64-headless:latest
docker run -d -p 7474:7474 -p 7687:7687 -v nornicdb-data:/data \
  timothyswt/nornicdb-amd64-headless
```

Open http://localhost:7474 – the Admin UI, with AI assistant, is ready to query your data.
V1 Feature Highlights
Neo4j Drop-In Compatibility
- Bolt Protocol – Use any official Neo4j driver (Python, JavaScript, Go, Java, .NET)
- Cypher Query Language – Full support for MATCH, CREATE, MERGE, WITH, RETURN, etc.
- Schema Management – Constraints, indexes, and vector indexes
- Zero Code Changes – Your existing Neo4j applications just work
```python
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687")
# That's it. Your Neo4j code works unchanged.
```

Performance (3-52x Faster Than Neo4j)
LDBC Social Network Benchmark (M3 Max, 64GB):
| Query Type | NornicDB | Neo4j | Speedup |
|---|---|---|---|
| Message content lookup | 6,389 ops/sec | 518 ops/sec | 12x |
| Recent messages (friends) | 2,769 ops/sec | 108 ops/sec | 25x |
| Avg friends per city | 4,713 ops/sec | 91 ops/sec | 52x |
| Tag co-occurrence | 2,076 ops/sec | 65 ops/sec | 32x |
Resource Efficiency:
- Memory: 100-500 MB vs 1-4 GB for Neo4j
- Cold Start: <1s vs 10-30s for Neo4j
Intelligent Memory System
Memory that behaves like human cognition with automatic decay:
| Memory Tier | Half-Life | Use Case |
|---|---|---|
| Episodic | 7 days | Chat context, sessions |
| Semantic | 69 days | Facts, decisions |
| Procedural | 693 days | Skills, patterns |
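The half-lives in the table behave like ordinary exponential decay. As a rough mental model (the formula and tier names here are an illustrative sketch, not NornicDB's actual code), a decay score could be computed like this:

```python
import math

# Half-lives per memory tier, in days (from the table above).
HALF_LIFE_DAYS = {"episodic": 7, "semantic": 69, "procedural": 693}

def decay_score(age_days: float, tier: str) -> float:
    """Exponential decay: the score halves once per half-life."""
    half_life = HALF_LIFE_DAYS[tier]
    return math.pow(0.5, age_days / half_life)

# An episodic memory loses half its score after one week...
print(round(decay_score(7, "episodic"), 3))    # 0.5
# ...while a procedural memory is barely touched in the same time.
print(round(decay_score(7, "procedural"), 3))  # 0.993
```

This is why week-old chat context fades quickly while learned patterns persist for years.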
```cypher
MATCH (m:Memory) WHERE m.decayScore > 0.5
RETURN m.title ORDER BY m.decayScore DESC
```

Auto-TLP: Automatic Relationship Inference
NornicDB weaves connections automatically:
- Embedding Similarity – Related concepts link together
- Co-access Patterns – Frequently queried pairs connect
- Temporal Proximity – Same-session nodes associate
- Transitive Inference – A→B + B→C suggests A→C
- Edge Decay – Unused connections fade over time
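The transitive rule from the list above is easy to picture in a few lines. This toy sketch (not NornicDB's actual inference engine) suggests A→C wherever A→B and B→C already exist:

```python
def suggest_transitive(edges: set[tuple[str, str]]) -> set[tuple[str, str]]:
    """Suggest (a, c) wherever a->b and b->c exist but a->c does not."""
    suggestions = set()
    for a, b in edges:
        for b2, c in edges:
            if b == b2 and a != c and (a, c) not in edges:
                suggestions.add((a, c))
    return suggestions

# A->B and B->C exist, so A->C is suggested; likewise B->D via C.
edges = {("A", "B"), ("B", "C"), ("C", "D")}
print(sorted(suggest_transitive(edges)))  # [('A', 'C'), ('B', 'D')]
```

In the real system such suggestions would presumably be buffered as evidence (see Evidence Buffering below) rather than materialized immediately.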
```shell
# Enable automatic relationship inference
NORNICDB_AUTO_TLP_ENABLED=true
```

GPU-Accelerated Vector Search
Native semantic search with hardware acceleration:
- Apple Silicon – Metal Performance Shaders
- NVIDIA – CUDA acceleration
- Hybrid Search – RRF fusion of vector + BM25
```cypher
CALL db.index.vector.queryNodes('memory_embeddings', 10, $queryVector)
YIELD node, score
RETURN node.content, score
```

Other GPUs will be supported as soon as I find time.
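The hybrid-search bullet above mentions RRF fusion of vector and BM25 results. As a reference point, here is a minimal sketch of standard Reciprocal Rank Fusion (k = 60 is the conventional constant; this is not NornicDB's internal code, and the document IDs are made up):

```python
def rrf_fuse(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Fuse ranked result lists: each doc scores sum of 1/(k + rank)."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

vector_hits = ["doc3", "doc1", "doc2"]  # semantic (vector) ranking
bm25_hits = ["doc1", "doc4", "doc3"]    # keyword (BM25) ranking
print(rrf_fuse([vector_hits, bm25_hits]))  # ['doc1', 'doc3', 'doc4', 'doc2']
```

Documents that rank well in both lists float to the top without either scoring scale dominating, which is the point of RRF over raw score averaging.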
Heimdall AI Assistant
Built-in AI that understands your database:
- Natural Language Queries – "Show me system metrics", "Run health check"
- Bifrost Chat Interface – Real-time SSE streaming
- Plugin System – Custom actions with lifecycle hooks
- In-Memory llama.cpp – Direct SLM integration, no HTTP calls
```shell
NORNICDB_HEIMDALL_ENABLED=true ./nornicdb serve
# Access Bifrost at http://localhost:7474/bifrost
```

APOC Functions (60+ Built-In, 964 in the Plugin)
Neo4j-compatible utility functions:
```cypher
// Text processing
RETURN apoc.text.camelCase('hello world')     // "helloWorld"
RETURN apoc.text.slugify('Hello World!')      // "hello-world"

// Machine learning
RETURN apoc.ml.sigmoid(0)                     // 0.5
RETURN apoc.ml.cosineSimilarity([1,0], [0,1]) // 0.0

// Collections
RETURN apoc.coll.sum([1, 2, 3, 4, 5])         // 15
```

Plus a plugin system – drop .so files for custom extensions.
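The apoc.ml.cosineSimilarity result above can be cross-checked in plain Python. This is a sketch of the standard cosine formula, not NornicDB's implementation:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Standard cosine similarity: dot(a, b) / (|a| * |b|)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity([1, 0], [0, 1]))  # 0.0 – orthogonal vectors
print(cosine_similarity([1, 0], [1, 0]))  # 1.0 – identical direction
```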
Production-Ready Features
- WAL (Write-Ahead Log) – Durability with crash recovery
- Snapshots – Binary gob encoding for fast persistence
- Audit Logging – Track all operations
- RBAC – Role-based access control
- Edge Provenance – Track why edges were created
- Evidence Buffering – Aggregate signals before materializing
Docker Images Ready to Go
| Platform | Image | Description |
|---|---|---|
| Apple Silicon | nornicdb-arm64-metal-bge-heimdall | Full – Embeddings + AI |
| Apple Silicon | nornicdb-arm64-metal-bge | Standard – With BGE-M3 |
| Apple Silicon | nornicdb-arm64-metal | Minimal – BYOM |
| NVIDIA GPU | nornicdb-amd64-cuda-bge-heimdall | Full – CUDA + AI |
| NVIDIA GPU | nornicdb-amd64-cuda-bge | Standard – With BGE-M3 |
| CPU Only | nornicdb-amd64-cpu | No GPU required |

30-Second Start:
```shell
docker run -d -p 7474:7474 -p 7687:7687 \
  timothyswt/nornicdb-arm64-metal-bge-heimdall:latest
```

Feature Flags (Runtime Configuration)
All experimental features are opt-in:
| Feature | Default | Flag |
|---|---|---|
| Auto-TLP | Off | NORNICDB_AUTO_TLP_ENABLED |
| Edge Decay | On | NORNICDB_EDGE_DECAY_ENABLED (requires NORNICDB_AUTO_TLP_ENABLED=true) |
| GPU Clustering | Off | NORNICDB_GPU_CLUSTERING_ENABLED |
| Heimdall QC | Off | NORNICDB_AUTO_TLP_LLM_QC_ENABLED |
MCP Integration
Model Context Protocol tools for AI agents:
- store – Create nodes with embeddings
- recall – Retrieve by ID or filter
- discover – Semantic search
- link – Create relationships
- task/tasks – Task management
- index/unindex – Codebase indexing
V1 by the Numbers
- 3-52x faster than Neo4j across benchmarks
- 60+ APOC functions built-in
- 6 Docker images for different platforms
- 30+ feature flags for fine-tuning
- <1s cold start vs 10-30s for Neo4j
- 100-500MB RAM vs 1-4GB for Neo4j
Get Started
```shell
# Apple Silicon with everything
docker run -d -p 7474:7474 -p 7687:7687 \
  -v nornicdb-data:/data \
  timothyswt/nornicdb-arm64-metal-bge-heimdall:latest

# Open http://localhost:7474
```

Documentation
Feedback Welcome!
We'd love to hear from you:
- What features would you like to see?
- What's working well?
- What could be improved?
"The Norns weave your data's destiny" β NornicDB automatically discovers and creates the connections that give your data meaning.