An open-source, SQL-native memory engine for AI
From SQLite to Postgres and MySQL, Memori plugs into the SQL databases you already use. Simple setup, and it scales without new infrastructure.
Memori uses structured entity extraction, relationship mapping, and SQL-based retrieval to create transparent, portable, and queryable AI memory. Memori uses multiple agents working together to intelligently promote essential long-term memories to short-term storage for faster context injection.
With a single line of code, memori.enable(), any LLM gains the ability to remember conversations, learn from interactions, and maintain context across sessions. The entire memory system is stored in a standard SQLite database (or PostgreSQL/MySQL for enterprise deployments), making it fully portable, auditable, and owned by the user.
- Radical Simplicity: One line to enable memory for any LLM framework (OpenAI, Anthropic, LiteLLM, LangChain)
- True Data Ownership: Memory stored in standard SQL databases that users fully control
- Complete Transparency: Every memory decision is queryable with SQL and fully explainable
- Zero Vendor Lock-in: Export your entire memory as a SQLite file and move anywhere
- Cost Efficiency: 80-90% cheaper than vector database solutions at scale
- Compliance Ready: SQL-based storage enables audit trails, data residency, and regulatory compliance
- Install Memori:
pip install memorisdk
- Install OpenAI:
pip install openai
- Set OpenAI API Key:
export OPENAI_API_KEY="sk-your-openai-key-here"
- Run this Python script:
from memori import Memori
from openai import OpenAI
# Initialize OpenAI client
openai_client = OpenAI()
# Initialize memory
memori = Memori(conscious_ingest=True)
memori.enable()
print("=== First Conversation - Establishing Context ===")
response1 = openai_client.chat.completions.create(
model="gpt-4o-mini",
messages=[{
"role": "user",
"content": "I'm working on a Python FastAPI project"
}]
)
print("Assistant:", response1.choices[0].message.content)
print("\n" + "="*50)
print("=== Second Conversation - Memory Provides Context ===")
response2 = openai_client.chat.completions.create(
model="gpt-4o-mini",
messages=[{
"role": "user",
"content": "Help me add user authentication"
}]
)
print("Assistant:", response2.choices[0].message.content)
print("\n💡 Notice: Memori automatically knows about your FastAPI Python project!")
By default, Memori uses an in-memory SQLite database. Get a free serverless database instance on the GibsonAI platform.
🚀 Ready to explore more?
- 📖 Examples - Basic usage patterns and code samples
- 🔌 Framework Integrations - LangChain, Agno & CrewAI examples
- 🎮 Interactive Demos - Live applications & tutorials
memori.enable() # Records ALL LLM conversations automatically
- Entity Extraction: Extracts people, technologies, projects
- Smart Categorization: Facts, preferences, skills, rules
- Pydantic Validation: Structured, type-safe memory storage
conscious_ingest=True # One-shot short-term memory injection
- At Startup: Conscious agent analyzes long-term memory patterns
- Memory Promotion: Moves essential conversations to short-term storage
- One-Shot Injection: Injects working memory once at conversation start
- Like Human Short-Term Memory: Names, current projects, preferences readily available
auto_ingest=True # Continuous intelligent memory retrieval
- Every LLM Call: Retrieval agent analyzes user query intelligently
- Full Database Search: Searches through entire memory database
- Context-Aware: Injects relevant memories based on current conversation
- Performance Optimized: Caching, async processing, background threads
# Mimics human conscious memory - essential info readily available
memori = Memori(
database_connect="sqlite:///my_memory.db",
conscious_ingest=True, # 🧠 Short-term working memory
openai_api_key="sk-..."
)
How Conscious Mode Works:
- At Startup: Conscious agent analyzes long-term memory patterns
- Essential Selection: Promotes 5-10 most important conversations to short-term
- One-Shot Injection: Injects this working memory once at conversation start
- No Repeats: Won't inject again during the same session
# Searches entire database dynamically based on user queries
memori = Memori(
database_connect="sqlite:///my_memory.db",
auto_ingest=True, # 🔍 Smart database search
openai_api_key="sk-..."
)
How Auto Mode Works:
- Every LLM Call: Retrieval agent analyzes user input
- Query Planning: Uses AI to understand what memories are needed
- Smart Search: Searches through entire database (short-term + long-term)
- Context Injection: Injects 3-5 most relevant memories per call
# Get both working memory AND dynamic search
memori = Memori(
conscious_ingest=True, # Working memory once
auto_ingest=True, # Dynamic search every call
openai_api_key="sk-..."
)
- Memory Agent - Processes every conversation with Pydantic structured outputs
- Conscious Agent - Analyzes patterns, promotes long-term → short-term memories
- Retrieval Agent - Intelligently searches and selects relevant context
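The Memory Agent's Pydantic-validated output can be pictured with a minimal sketch. The model below is illustrative only; the field names are assumptions, not Memori's actual schema:

```python
from typing import Literal

from pydantic import BaseModel


# Hypothetical structured output the Memory Agent might validate.
# Categories mirror the memory types documented below.
class ProcessedMemory(BaseModel):
    category: Literal["fact", "preference", "skill", "rule", "context"]
    content: str
    entities: list[str]   # extracted people, technologies, projects
    importance: float     # e.g. used when deciding promotion

memory = ProcessedMemory(
    category="skill",
    content="Experienced with FastAPI",
    entities=["FastAPI"],
    importance=0.8,
)
print(memory.category)
```

Because every memory passes through a typed model like this, a malformed agent response fails validation instead of silently corrupting the database.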
- 👤 Personal Identity: Your name, role, location, basic info
- ❤️ Preferences & Habits: What you like, work patterns, routines
- 🛠️ Skills & Tools: Technologies you use, expertise areas
- 📊 Current Projects: Ongoing work, learning goals
- 🤝 Relationships: Important people, colleagues, connections
- 🔄 Repeated References: Information you mention frequently
Type | Purpose | Example | Auto-Promoted |
---|---|---|---|
Facts | Objective information | "I use PostgreSQL for databases" | ✅ High frequency |
Preferences | User choices | "I prefer clean, readable code" | ✅ Personal identity |
Skills | Abilities & knowledge | "Experienced with FastAPI" | ✅ Expertise areas |
Rules | Constraints & guidelines | "Always write tests first" | ✅ Work patterns |
Context | Session information | "Working on e-commerce project" | ✅ Current projects |
from memori import Memori
# Conscious mode - Short-term working memory
memori = Memori(
database_connect="sqlite:///my_memory.db",
template="basic",
conscious_ingest=True, # One-shot context injection
openai_api_key="sk-..."
)
# Auto mode - Dynamic database search
memori = Memori(
database_connect="sqlite:///my_memory.db",
auto_ingest=True, # Continuous memory retrieval
openai_api_key="sk-..."
)
# Combined mode - Best of both worlds
memori = Memori(
conscious_ingest=True, # Working memory +
auto_ingest=True, # Dynamic search
openai_api_key="sk-..."
)
from memori import Memori, ConfigManager
# Load from memori.json or environment
config = ConfigManager()
config.auto_load()
memori = Memori()
memori.enable()
Create memori.json:
{
"database": {
"connection_string": "postgresql://user:pass@localhost/memori"
},
"agents": {
"openai_api_key": "sk-...",
"conscious_ingest": true,
"auto_ingest": false
},
"memory": {
"namespace": "my_project",
"retention_policy": "30_days"
}
}
Works with ANY LLM library:
memori.enable() # Enable universal recording
# OpenAI
from openai import OpenAI
client = OpenAI()
client.chat.completions.create(...)
# LiteLLM
from litellm import completion
completion(model="gpt-4", messages=[...])
# Anthropic
import anthropic
client = anthropic.Anthropic()
client.messages.create(...)
# All automatically recorded and contextualized!
# Automatic analysis every 6 hours (when conscious_ingest=True)
memori.enable() # Starts background conscious agent
# Manual analysis trigger
memori.trigger_conscious_analysis()
# Get essential conversations
essential = memori.get_essential_conversations(limit=5)
from memori.tools import create_memory_tool
# Create memory search tool for your LLM
memory_tool = create_memory_tool(memori)
# Use in function calling
tools = [memory_tool]
completion(model="gpt-4", messages=[...], tools=tools)
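When the model decides to call the memory tool, your app executes it and feeds the result back. A minimal, self-contained sketch of that dispatch step, using a fake search_memory function as a stand-in for the real tool returned by create_memory_tool:

```python
import json

# Stand-in for the memory search tool; the real tool's name and
# schema come from create_memory_tool(memori).
def search_memory(query: str) -> str:
    fake_db = {"python": "User works on a FastAPI project"}
    return fake_db.get(query.lower(), "no memory found")

# A typical function-calling response names a tool and passes
# JSON-encoded arguments; the app decodes and dispatches them.
tool_call = {"name": "search_memory", "arguments": json.dumps({"query": "python"})}
if tool_call["name"] == "search_memory":
    args = json.loads(tool_call["arguments"])
    result = search_memory(**args)
print(result)  # -> User works on a FastAPI project
```

The tool result is then appended to the message list as a tool-role message and the completion is called again, per the standard function-calling loop.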
# Get relevant context for a query
context = memori.retrieve_context("Python testing", limit=5)
# Returns: 3 essential + 2 specific memories
# Search by category
skills = memori.search_memories_by_category("skill", limit=10)
# Get memory statistics
stats = memori.get_memory_stats()
-- Core tables created automatically
chat_history # All conversations
short_term_memory # Recent context (expires)
long_term_memory # Permanent insights
rules_memory # User preferences
memory_entities # Extracted entities
memory_relationships # Entity connections
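Because memory lives in ordinary SQL tables, you can audit it with nothing but the stdlib sqlite3 module. A sketch against an in-memory stand-in (against a real deployment you would connect to your my_memory.db file instead of creating tables by hand):

```python
import sqlite3

# Stand-in database with two of the core tables listed above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE chat_history (id INTEGER PRIMARY KEY)")
conn.execute("CREATE TABLE long_term_memory (id INTEGER PRIMARY KEY)")

# List every table Memori created -- the whole memory is inspectable.
tables = [row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name"
)]
print(tables)  # ['chat_history', 'long_term_memory']
```

This is what "Complete Transparency" means in practice: any SQL client, ORM, or BI tool can read the same memory the agents do.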
memori/
├── core/ # Main Memori class, database manager
├── agents/ # Memory processing with Pydantic
├── database/ # SQLite/PostgreSQL/MySQL support
├── integrations/ # LiteLLM, OpenAI, Anthropic
├── config/ # Configuration management
├── utils/ # Helpers, validation, logging
└── tools/ # Memory search tools
- Basic Usage - Simple memory setup with conscious ingestion
- Personal Assistant - AI assistant with intelligent memory
- Memory Retrieval - Function calling with memory tools
- Advanced Config - Production configuration
- Interactive Demo - Live conscious ingestion showcase
- Simple Multi-User - Basic demonstration of user memory isolation with namespaces
- FastAPI Multi-User App - Full-featured REST API with Swagger UI for testing multi-user functionality
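Conceptually, namespace-based isolation amounts to tagging each memory row with its namespace and filtering on that tag at retrieval time. A minimal stand-in sketch (not Memori's actual schema):

```python
import sqlite3

# Toy illustration of per-user namespace isolation.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE memories (namespace TEXT, content TEXT)")
conn.executemany(
    "INSERT INTO memories VALUES (?, ?)",
    [("alice", "prefers FastAPI"), ("bob", "prefers Django")],
)

# Retrieval scoped to one namespace never sees another user's rows.
alice = conn.execute(
    "SELECT content FROM memories WHERE namespace = ?", ("alice",)
).fetchall()
print(alice)  # [('prefers FastAPI',)]
```

In Memori itself the namespace is set via configuration (see the "namespace" key in memori.json above), so each user or project gets its own isolated memory space in the same database.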
Memori works seamlessly with popular AI frameworks:
Framework | Description | Example |
---|---|---|
AgentOps | Track and monitor Memori memory operations with comprehensive observability | Memory operation tracking with AgentOps analytics |
Agno | Memory-enhanced agent framework integration with persistent conversations | Simple chat agent with memory search |
AWS Strands | Professional development coach with Strands SDK and persistent memory | Career coaching agent with goal tracking |
Azure AI Foundry | Azure AI Foundry agents with persistent memory across conversations | Enterprise AI agents with Azure integration |
AutoGen | Multi-agent group chat memory recording | Agent chats with memory integration |
CamelAI | Multi-agent communication framework with automatic memory recording and retrieval | Memory-enhanced chat agents with conversation continuity |
CrewAI | Multi-agent system with shared memory across agent interactions | Collaborative agents with memory |
Digital Ocean AI | Memory-enhanced customer support using Digital Ocean's AI platform | Customer support assistant with conversation history |
LangChain | Enterprise-grade agent framework with advanced memory integration | AI assistant with LangChain tools and memory |
OpenAI Agent | Memory-enhanced OpenAI Agent with function calling and user preference tracking | Interactive assistant with memory search and user info storage |
Swarms | Multi-agent system framework with persistent memory capabilities | Memory-enhanced Swarms agents with auto/conscious ingestion |
Explore Memori's capabilities through these interactive demonstrations:
Title | Description | Tools Used | Live Demo |
---|---|---|---|
🌟 Personal Diary Assistant | A comprehensive diary assistant with mood tracking, pattern analysis, and personalized recommendations. | Streamlit, LiteLLM, OpenAI, SQLite | Run Demo |
🌍 Travel Planner Agent | Intelligent travel planning with CrewAI agents, real-time web search, and memory-based personalization. Plans complete itineraries with budget analysis. | CrewAI, Streamlit, OpenAI, SQLite | |
🧑🔬 Researcher Agent | Advanced AI research assistant with persistent memory, real-time web search, and comprehensive report generation. Builds upon previous research sessions. | Agno, Streamlit, OpenAI, ExaAI, SQLite | Run Demo |
- See CONTRIBUTING.md for development setup and guidelines.
- Community: Discord
MIT License - see LICENSE for details.
Made for developers who want their AI agents to remember and learn