THE Memory Layer That Actually Learns
Stop re-explaining yourself every conversation. Roampal remembers your context, learns what actually works for you, and gets smarter over time, all while keeping your data 100% private and local on your machine.
The Problem: You've explained your setup to AI 47 times. It never learns what worked. You're paying $20/month (or more!) to re-train it daily.
The Solution: Roampal implements a 5-tier memory system that:
- Remembers you: Your stack, preferences, projects stored permanently
- Learns what works: Tracks which solutions actually worked for YOU
- Gets smarter over time: Successful advice promotes to long-term memory, failures get deleted
Think of it as your personal AI that compounds in value the longer you use it.
Validated performance characteristics:
| Metric | Result |
|---|---|
| Search Latency (p95) | 34ms |
| Token Efficiency | 112 tokens/query |
| Learning Under Noise | 80% precision @ 4:1 semantic confusion |
| Routing Accuracy | 100% (cross-collection test) |
See benchmark methodology & results →
Roampal includes advanced memory features:
- Outcome-Based Learning: Memories adapt based on feedback (+0.2 worked, -0.3 failed)
- 5-Tier Architecture: Books, Working, History, Patterns, Memory Bank
- Dual Knowledge Graphs: Routing KG + Content KG for entity tracking
- Local-First: All processing on-device, no cloud dependencies
- Autonomous identity storage - AI automatically stores facts about YOU: identity, preferences, goals, projects
- Permanent memory - Never decays, always accessible (score fixed at 1.0)
- Full control - View, restore, or delete memories via Settings UI
- Smart categorization - Tags: identity, preference, goal, context, workflow
- Example: "I prefer TypeScript" → AI stores permanently → Never suggests JavaScript again
- Upload your docs - .txt, .md files become searchable permanent reference
- Semantic chunking - Smart document processing for accurate retrieval
- Source attribution - AI cites which document/page info came from
- Persistent library - Reference materials never expire or decay
- Example: Upload architecture docs → AI references YOUR conventions when answering
- Automatic outcome detection - Tracks when advice worked (+0.2) or failed (-0.3)
- Smart promotion - Score ≥0.7 + 2 uses → History. Score ≥0.9 + 3 uses → Patterns (permanent)
- Auto-cleanup - Bad advice (score <0.2) gets deleted automatically
- Organic recall - Proactively surfaces: "You tried this 3 times before, here's what worked..."
- Global search - Working memory searches across ALL conversations, not just current one
- Pattern recognition - Detects recurring issues across conversation boundaries
- True continuity - "You asked about this 3 weeks ago in a different chat..."
- YAML templates - Fully customize assistant tone, identity, behavior
- Persistent preferences - Your settings saved locally
- Role flexibility - Teacher, advisor, pair programmer, creative partner
- 100% local - All data on your machine, zero cloud dependencies
- Works offline - No internet after model download
- Full ownership - Export, backup, or delete data anytime
- No telemetry - Your data never leaves your computer
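The outcome-based lifecycle described above (+0.2 worked, -0.3 failed, promotion at ≥0.7/≥0.9, auto-cleanup below 0.2) can be sketched in a few lines of Python. This is an illustrative toy, not Roampal's actual implementation; the `Memory` class, `apply_outcome` function, and the 0.5 starting score are assumptions.

```python
from dataclasses import dataclass
from typing import Optional

WORKED, FAILED = +0.2, -0.3   # score deltas stated in the docs
DELETE_BELOW = 0.2            # auto-cleanup threshold

@dataclass
class Memory:
    text: str
    score: float = 0.5        # assumed starting score
    uses: int = 0
    tier: str = "working"

def apply_outcome(mem: Memory, worked: bool) -> Optional[Memory]:
    """Adjust the score, then promote or delete per the documented thresholds."""
    mem.score = max(0.0, min(1.0, mem.score + (WORKED if worked else FAILED)))
    mem.uses += 1
    if mem.score < DELETE_BELOW:
        return None                                   # bad advice is deleted
    if mem.tier == "working" and mem.score >= 0.7 and mem.uses >= 2:
        mem.tier = "history"                          # Working -> History
    elif mem.tier == "history" and mem.score >= 0.9 and mem.uses >= 3:
        mem.tier = "patterns"                         # History -> Patterns (permanent)
    return mem
```

Two consecutive successes promote a memory out of working memory; one more makes it a permanent pattern, while a memory that keeps failing simply disappears.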
For Developers:
- "Remembers my entire stack. Never suggests Python when I use Rust."
- Learns debugging patterns that work for YOUR codebase
- Recalls past solutions: "This approach worked 3 weeks ago"
For Students & Learners:
- "My personal tutor that remembers what I struggle with"
- Tracks what concepts you've mastered
- Adapts explanations to your learning style over time
For Writers & Creators:
- "Remembers my story world, characters, and tone"
- Stores worldbuilding details permanently
- Tracks character arcs across conversations
For Entrepreneurs & Founders:
- "My business advisor that knows my entire strategy"
- Remembers your business model and goals
- Tracks which marketing approaches actually worked
Roampal uses large language models (LLMs) which may:
- Generate incorrect, outdated, or misleading information
- Produce inconsistent responses to similar queries
- Hallucinate facts, sources, or code that don't exist
- Reflect biases present in training data
Always verify critical information from authoritative sources. Do not rely on AI-generated content for:
- Medical, legal, or financial advice
- Safety-critical systems or decisions
- Production code without thorough review and testing
Downloaded models have separate licenses:
- Ollama models: Llama (Meta license), Qwen (Alibaba), etc. - Check the Ollama Library
- LM Studio models: GGUF format from Hugging Face - Check individual model cards for licenses
- Models you download have their own terms of use - review before commercial use
Search Performance:
- p95 latency: 34ms
- Token efficiency: 112 tokens/query average
- Cross-collection routing: 100% accuracy (7/7 tests)
Learning Capabilities:
- Semantic confusion resistance: 80% precision under 4:1 noise ratio
- Outcome-based score adaptation: +0.2 (worked), -0.3 (failed)
- Smart promotion: Working → History (score ≥0.7, 2+ uses), History → Patterns (score ≥0.9, 3+ uses)
Memory System:
- 5-tier architecture: Books, Working, History, Patterns, Memory Bank
- Dual knowledge graphs: Routing KG + Content KG
- Quality-based ranking: importance × confidence scoring
See benchmarks/README.md for test methodology
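The "importance × confidence" ranking above can be pictured with a minimal sketch. The field names and the plain product are assumptions for illustration; Roampal's internals may weight these differently:

```python
def quality(importance: float, confidence: float) -> float:
    """Combined quality score: importance x confidence."""
    return importance * confidence

def rank(results: list) -> list:
    """Order search hits by combined quality, best first."""
    return sorted(results,
                  key=lambda r: quality(r["importance"], r["confidence"]),
                  reverse=True)

hits = [
    {"text": "old workaround", "importance": 0.9, "confidence": 0.4},  # 0.36
    {"text": "proven fix",     "importance": 0.8, "confidence": 0.9},  # 0.72
]
print(rank(hits)[0]["text"])  # proven fix
```

Multiplying the two factors means a memory must be both relevant and reliable to rank highly; a very important but unproven memory loses to a solid, confirmed one.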
Learning-Based Knowledge Graph Routing + Enhanced MCP Integration
Intelligent KG Routing: System learns which collections answer which queries
- Cold start (0-10 queries): Searches all collections
- Learning phase (10-20 queries): Focuses on top 2-3 successful collections
- Confident routing (20+ queries): Routes to single best collection with 80%+ success rate
- Progression: 60% precision → 80% precision → 100% precision achievable
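The three routing phases above can be sketched with a toy success counter. The real system uses a knowledge graph over query patterns; this hypothetical `Router` class only illustrates the cold-start → learning → confident progression:

```python
from collections import Counter

ALL_COLLECTIONS = ["books", "working", "history", "patterns", "memory_bank"]

class Router:
    """Toy stand-in for the Routing KG: counts which collections answer queries."""

    def __init__(self):
        self.queries = 0
        self.successes = Counter()   # collection name -> successful answers

    def choose(self):
        if self.queries < 10:                          # cold start: search everything
            return list(ALL_COLLECTIONS)
        top = [c for c, _ in self.successes.most_common(3)]
        if self.queries < 20:                          # learning: top 2-3 collections
            return top or list(ALL_COLLECTIONS)
        return top[:1] or list(ALL_COLLECTIONS)        # confident: single best

    def record(self, collection, worked):
        self.queries += 1
        if worked:
            self.successes[collection] += 1
```

Early queries fan out everywhere; as outcomes accumulate, searches narrow to the collections that actually answered, which is where the latency and precision gains come from.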
Enhanced MCP Integration: Semantic learning storage with outcome-based scoring
- External LLMs (Claude Desktop, Cursor) store summaries, not verbatim transcripts
- Explicit outcome scoring (worked/failed/partial/unknown)
- Scores CURRENT learning immediately (enables optional tool calling)
- Cross-tool memory sharing across all MCP clients
Dual Knowledge Graph System:
- Routing KG (blue nodes) - Learns query patterns → collection routing
- Content KG (green nodes) - Entity relationships extracted from memories
- Purple nodes - Concepts appearing in both graphs
Bundled Multilingual Embeddings: Works offline in 50+ languages
- Model: paraphrase-multilingual-mpnet-base-v2 - No internet required after initial setup
- Search latency: 34ms (p95)
- Token efficiency: 112 tokens/query
- Semantic confusion resistance: 80% precision @ 4:1 noise
- Routing accuracy: 100% (cross-collection KG test)
| Feature | Roampal Approach |
|---|---|
| Memory Type | Learns what works for you, not just what you say |
| Outcome Tracking | Scores every result (+0.2 worked, -0.3 failed) |
| Bad Advice | Auto-deleted when score drops below threshold |
| Context | Recalls from all past conversations globally |
| Privacy | 100% local, zero telemetry, full data ownership |
| Performance | 34ms search latency (p95) |
Quick start:
- Download from roampal.ai and extract
- Install an LLM provider:
- Ollama (ollama.com) - Recommended for beginners
- LM Studio (lmstudio.ai) - Advanced users with GUI preferences
- Right-click Roampal.exe → Run as administrator (Windows requires this to avoid permission issues)
- Download your first model in the UI (Roampal handles the rest!)
Your AI will start learning about you immediately.
To update to a new version:
- Download the latest release and extract it
- Close Roampal if it's running
- Replace your old Roampal folder with the new one
- Run Roampal.exe - all your data is preserved!
Your data is safe - All conversations, memories, settings, and downloaded models are stored in AppData and remain intact across updates. Simply overwrite the program files and you're good to go.
Roampal uses a memory-first architecture with five tiers:
- Working Memory (24h) - Current conversation context
- History (30 days) - Recent conversations and interactions
- Patterns (permanent) - Successful solutions and learned patterns
- Memory Bank (permanent) - User preferences, identity, and project context
- Books (permanent) - Uploaded reference documents
The LLM autonomously controls memory via tools (search_memory, create_memory, update_memory, archive_memory).
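A toy dispatch for those memory tools looks roughly like this. The signatures, the in-memory `store` dict, and the substring search are illustrative assumptions; the real tools run against the local ChromaDB store with semantic embeddings:

```python
store = {}   # doc_id -> memory record; stands in for the local ChromaDB

def create_memory(doc_id, text, tier="working"):
    store[doc_id] = {"text": text, "tier": tier, "archived": False}

def search_memory(query, tier=None):
    """Naive substring search; the real system uses semantic embeddings."""
    return [m for m in store.values()
            if query.lower() in m["text"].lower()
            and not m["archived"]
            and (tier is None or m["tier"] == tier)]

def update_memory(doc_id, text):
    store[doc_id]["text"] = text

def archive_memory(doc_id):
    store[doc_id]["archived"] = True

create_memory("m1", "User prefers TypeScript", tier="memory_bank")
print(search_memory("typescript")[0]["text"])  # User prefers TypeScript
```

The point of the design is that the LLM, not the application, decides when to call each tool, so memory writes happen mid-conversation without user intervention.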
Connect Roampal to Claude Desktop, Cursor, and other MCP-compatible tools for persistent memory across applications.
- Open Settings → Integrations in Roampal
- Click "Connect" next to Claude Desktop or Cursor
- Restart your tool - memory tools are available immediately
Roampal auto-discovers MCP clients and writes the config for you. No manual JSON editing required.
- search_memory - Search across all memory tiers with optional metadata filtering
- add_to_memory_bank - Store permanent facts about the user
- update_memory - Modify existing memories by doc_id
- archive_memory - Remove outdated information
- record_response - Store semantic learnings with explicit outcome scoring (worked/failed/partial/unknown)
Semantic Learning Storage: External LLMs store summaries, not verbatim transcripts. The record_response tool accepts:
- key_takeaway (required) - 1-2 sentence summary of what was learned
- outcome (optional) - Explicit scoring: "worked", "failed", "partial", or "unknown" (default)
Score CURRENT, not PREVIOUS: Unlike Roampal's internal system (which scores previous exchanges), MCP scores the learning being recorded immediately. This allows optional tool calling - external LLMs only call record_response when clear outcomes occur.
Scores retrieved memories too: When you call record_response, it also scores all memories from your last search with the same outcome. If advice worked, those memories get upvoted (+0.2). If it failed, they get downvoted (-0.3). This helps good memories promote faster and bad advice get deleted.
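A minimal sketch of that retro-scoring behavior, under stated assumptions: the function name matches the tool, but the 0.5 base score and the zero deltas for "partial"/"unknown" are guesses, since the docs only specify +0.2 for "worked" and -0.3 for "failed".

```python
OUTCOME_DELTA = {"worked": +0.2, "failed": -0.3, "partial": 0.0, "unknown": 0.0}
# NOTE: the "partial"/"unknown" deltas of 0.0 are assumptions.

last_search_hits = []   # memories returned by the most recent search_memory call
learnings = []          # semantic learnings stored via record_response

def record_response(key_takeaway, outcome="unknown"):
    delta = OUTCOME_DELTA[outcome]
    # 1. Store and score the CURRENT learning immediately.
    learnings.append({"text": key_takeaway, "score": 0.5 + delta})
    # 2. Retro-score every memory surfaced by the last search with the same outcome.
    for mem in last_search_hits:
        mem["score"] = max(0.0, min(1.0, mem["score"] + delta))
```

One "worked" call therefore upvotes both the new learning and the retrieved memories that led to it, tightening the feedback loop between search and promotion.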
Cross-tool memory sharing: Learnings recorded in Claude Desktop are searchable in Cursor, Roampal, and vice versa. All tools share the same local ChromaDB instance.
- ✅ Auto-discovery - Detects Claude Desktop, Cursor, and other MCP clients automatically
- ✅ Semantic learning - Stores concepts, not chat logs
- ✅ Outcome-based scoring - External LLM judges quality based on user feedback
- ✅ 50+ languages - Bundled multilingual embedding model (paraphrase-multilingual-mpnet-base-v2)
- ✅ 100% local - All data stays on your machine
Roampal is an experiment in building sustainable technology without artificial scarcity or surveillance capitalism.
Core principles:
- ✅ Open source from day one (MIT License)
- ✅ One-time payment, not a subscription trap
- ✅ Zero telemetry, zero tracking
- ✅ Your data stays on your machine
- ✅ Free to build from source forever
The $9.99 pre-built version includes:
- Tested, packaged executable with embedded Python
- Bundled dependencies (ChromaDB, FastAPI, multilingual embeddings)
- Ready-to-run on Windows with zero setup
Building from source is free forever - Technical users can clone the repo, install dependencies, and build for $0. The pre-built version exists to save you time, not lock you in.
Works with any tool-calling capable model via Ollama or LM Studio:
- Llama - Meta's models (3B - 70B parameters)
- Qwen - Alibaba models (3B - 72B parameters)
- GPT - OpenAI models (20B - 120B parameters)
- Mixtral - Mistral's mixture-of-experts (8x7B)
Install models via Settings → Model Management in the UI.
For issues or feedback:
- Discord: https://discord.gg/F87za86R3v
- Email: [email protected]
- GitHub Issues: https://github.com/roampal-ai/roampal/issues
Made with ❤️ for people who want AI that actually remembers