Releases: GibsonAI/memori

memori-v2.1.1

02 Oct 12:09

Release Notes – v2.1.1

Bug Fixes

Patch Release: Fixed hostname resolution issues with MongoDB Atlas connections using modern mongodb+srv:// format.

MongoDB Atlas Connection Fixes

  • Fixed DNS Resolution Warnings: Resolved hostname resolution warnings when connecting to MongoDB Atlas using mongodb+srv:// URIs
  • Improved SRV URI Parsing: Enhanced connection string parsing logic to properly handle DNS seedlist discovery
  • Better Error Handling: Added proper exception handling for server topology inspection
  • Type Safety: Fixed MyPy type checking errors for conditional MongoDB imports

Technical Improvements

  • Fixed hostname parsing logic in mongodb_connector.py and mongodb_manager.py
  • Added proper SRV URI detection to skip unnecessary DNS resolution attempts
  • Enhanced error handling for server descriptions without address attributes
  • Improved conditional import patterns for optional MongoDB dependencies
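The SRV URI detection described above can be sketched as follows; the function names are illustrative, not the actual identifiers in mongodb_connector.py:

```python
from urllib.parse import urlparse

def is_srv_uri(uri: str) -> bool:
    """True for DNS-seedlist (mongodb+srv://) connection strings."""
    return urlparse(uri).scheme == "mongodb+srv"

def hosts_to_resolve(uri: str) -> list:
    """Hostnames that need direct DNS resolution.

    SRV URIs delegate host discovery to DNS seedlist lookup, so
    resolving the hostname directly would only produce spurious
    warnings; the driver handles discovery itself.
    """
    if is_srv_uri(uri):
        return []
    netloc = urlparse(uri).netloc.split("@")[-1]  # drop credentials
    return [h.split(":")[0] for h in netloc.split(",") if h]
```

Skipping resolution for SRV URIs is what silences the Atlas warnings: the seedlist hostname (e.g. cluster0.xxxxx.mongodb.net) has SRV and TXT records, not a directly resolvable A record for each member.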

memori-v2.1.0

22 Sep 19:15

Release Notes – v2.1.0

This update adds MongoDB support to Memori, enabling seamless operation with both SQL and MongoDB backends.

Key Changes

  • Database Abstraction: Automatic detection of backend type (SQL or MongoDB) in ConsciousAgent, MemoryAgent, and MemorySearchEngine.
  • Refactored Logic: Unified ingestion, initialization, and promotion flows with backend-specific methods.
  • Unified Search: MemorySearchEngine.execute_search now works across both SQL and MongoDB with fallback to keyword and category search.
  • Error Handling & Logging: Improved tracebacks, error messages, and operational logs for debugging and monitoring.
  • Demo & Documentation: Added examples/databases/mongodb_demo.py and updated docstrings/comments to clarify backend behavior.
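The backend detection and dispatch can be sketched like this; the class and return values are a minimal stand-in, not Memori's actual implementation:

```python
def detect_backend(connection_string: str) -> str:
    """Classify a connection string as 'mongodb' or 'sql'."""
    if connection_string.startswith(("mongodb://", "mongodb+srv://")):
        return "mongodb"
    return "sql"

class SearchDispatcher:
    """Minimal stand-in for the dispatch inside execute_search."""

    def __init__(self, connection_string: str):
        self.backend = detect_backend(connection_string)

    def execute_search(self, query: str) -> str:
        # Each backend-specific method would fall back to keyword
        # and category search if the primary strategy finds nothing.
        if self.backend == "mongodb":
            return f"mongodb search: {query}"
        return f"sql search: {query}"
```

Keying the dispatch off the connection string lets the agents stay backend-agnostic: callers pass one URI and the right code path is chosen automatically.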

This makes Memori more flexible, extensible, and ready for production with either a SQL or MongoDB backend.

memori-v2.0.1

22 Sep 18:01
77bc05e

memory_tool v2.0.1 – Patch Release

This patch improves logging, error handling, and database search logic for memory retrieval.

Changes

  • Added detailed debug logs for memory searches, category filtering, and retrieval functions.
  • Improved error reporting with stack traces and contextual details for primary and fallback search strategies.
  • Refined category extraction logic with better fallback handling.
  • Enhanced database query construction for SQLite and MySQL (including COALESCE handling and proportional limits).
  • Added clear logging of which search strategy is used (SQLite FTS5, MySQL FULLTEXT, PostgreSQL FTS, LIKE fallback).
  • Strengthened fallback mechanisms with explicit logs and robust error handling when no results are found.
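The strategy selection can be sketched as a simple dialect lookup with a LIKE fallback; the function name is illustrative, but the strategy names mirror those in the release notes:

```python
def pick_search_strategy(dialect: str, fts_available: bool = True) -> str:
    """Choose the full-text search strategy for a SQL dialect,
    falling back to LIKE when no native FTS is available."""
    native = {
        "sqlite": "SQLite FTS5",
        "mysql": "MySQL FULLTEXT",
        "postgresql": "PostgreSQL FTS",
    }
    if fts_available and dialect in native:
        return native[dialect]
    return "LIKE fallback"
```

Logging the chosen strategy at this decision point is what makes search behavior transparent across backends.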

These updates make the system more transparent, debuggable, and resilient across different backends.

memori-v2.0.0

05 Sep 09:57
262f2a2

We are releasing Memori v2, a major upgrade focused on modularity, performance, and improved memory handling.

Highlights

  • Refactored into a more modular and maintainable codebase.
  • Simplified memory architecture for easier usage and extension.
  • Long-term memory now works without conscious_ingest=true, aligning with the original design.
  • Improved performance with fewer API calls and optimized search.

Changes

  • conscious_ingest behavior redesigned:

    • Originally intended to handle only conscious-essential information (short-term personalization data such as your name, workplace, or other user-defining details).
    • In v2, conscious_ingest is no longer tied to long-term memory ingestion. Instead, it injects short-term context into the system prompt for personalization.
    • This keeps short-term, identity-related memory separate from long-term memory, which continues to grow automatically through auto_ingest.
  • Removed conscious_ingest processing from the main pipeline — now fully handled by memory_processing.

  • Simplified Pydantic schemas and reduced redundant API calls for faster execution.

  • Refactored search engine and memory handling with standardized SQLAlchemy integration.

  • Database connections modularized with adapter–connector design, enabling clean extensibility.
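The redesigned conscious_ingest behavior can be illustrated with a minimal sketch; the function name and prompt format are hypothetical, not Memori's actual internals:

```python
def build_messages(user_message, short_term_context, conscious_ingest=True):
    """Inject short-term, identity-level context (name, workplace,
    preferences) into the system prompt when conscious_ingest is on.
    Long-term memories are retrieved separately (via auto_ingest)
    and are not part of this injection."""
    system = "You are a helpful assistant."
    if conscious_ingest and short_term_context:
        system += "\nKnown about the user: " + "; ".join(short_term_context)
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_message},
    ]
```

The key point of the v2 split: turning conscious_ingest off no longer disables long-term memory growth, it only removes the short-term personalization context from the prompt.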

Provider Support

  • Added support for local and Azure OpenAI providers via the official OpenAI library.
  • Extended integration with Ollama and LM Studio.
  • Improved interception of LiteLLM and OpenAI calls.
  • Added examples showcasing how to use Memori with multiple OpenAI-compatible providers.
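Local OpenAI-compatible providers are reached by pointing the official client at a different base URL. A small sketch of the idea, using the conventional default endpoints that Ollama and LM Studio ship with (the helper name is illustrative):

```python
# Default local endpoints for common OpenAI-compatible servers.
PROVIDER_BASE_URLS = {
    "ollama": "http://localhost:11434/v1",
    "lm_studio": "http://localhost:1234/v1",
}

def openai_client_kwargs(provider: str, api_key: str = "not-needed") -> dict:
    """Build kwargs for openai.OpenAI(...) so the official client
    talks to a local OpenAI-compatible endpoint. Local servers
    typically ignore the API key but the client requires one."""
    if provider not in PROVIDER_BASE_URLS:
        raise ValueError(f"unknown provider: {provider}")
    return {"base_url": PROVIDER_BASE_URLS[provider], "api_key": api_key}
```

With these kwargs, `openai.OpenAI(**openai_client_kwargs("ollama"))` would talk to a local Ollama server instead of api.openai.com.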

Database Support

  • Added support for MySQL and PostgreSQL using SQLAlchemy.
  • Support for remote databases via connection strings.
  • Modular database adapters (mysql_adapter, postgresql_adapter, sqlite_adapter) with corresponding connectors for clean extensibility.
  • Added examples demonstrating database integration and usage.
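The adapter selection implied by this design can be sketched as a scheme-to-module lookup; the parsing shown here is a simplification of what SQLAlchemy's URL handling does:

```python
def select_adapter(connection_string: str) -> str:
    """Map a SQLAlchemy-style URL to its adapter module name.

    The scheme may carry a driver suffix (e.g. postgresql+psycopg2),
    which is stripped before lookup.
    """
    scheme = connection_string.split("://", 1)[0].split("+", 1)[0]
    adapters = {
        "sqlite": "sqlite_adapter",
        "mysql": "mysql_adapter",
        "postgresql": "postgresql_adapter",
    }
    if scheme not in adapters:
        raise ValueError(f"unsupported database: {scheme}")
    return adapters[scheme]
```

Keeping the mapping in one place is what makes the adapter-connector design extensible: adding a new backend means registering one more entry plus its adapter module.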

Fixes

  • Assistant loses context across questions in basic_usage.py #12
  • Resolved environment variable issues for OpenAI provider. #17
  • personal_assistant.py – Memory not saved + Assistant seems to partially forget preferences #13
  • The background analysis of memory_agent, conscious_agent #20
  • memory_agent isn't working ! #23
  • Support for better Search methods in Memori for multi-database support #24
  • Add support & docs for connecting to local OpenAI-compatible endpoints #40
  • Improved memory schema for clarity, performance, and consistency.

Memori AI v1.0.1

04 Aug 08:53

Release v1.0.1

See [CHANGELOG.md](https://github.com/GibsonAI/memori/blob/main/CHANGELOG.md) for details.

Memori AI v1.0.0

04 Aug 08:03

[1.0.0] - 2025-08-03

🎉 Production-Ready Memory Layer for AI Agents

Complete professional-grade memory system with modular architecture, comprehensive error handling, and configuration management.

✨ Core Features

  • Universal LLM Integration: Works with ANY LLM library (LiteLLM, OpenAI, Anthropic)
  • Pydantic-based Intelligence: Structured memory processing with validation
  • Automatic Context Injection: Relevant memories automatically added to conversations
  • Multiple Memory Types: Short-term, long-term, rules, and entity relationships
  • Advanced Search: Full-text search with semantic ranking

🏗️ Architecture

  • Modular Design: Separated concerns with clear component boundaries
  • SQL Query Centralization: Dedicated query modules for maintainability
  • Configuration Management: Pydantic-based settings with auto-loading
  • Comprehensive Error Handling: Context-aware exceptions with sanitized logging
  • Production Logging: Structured logging with multiple output targets

🗄️ Database Support

  • Multi-Database: SQLite, PostgreSQL, MySQL connectors
  • Query Optimization: Indexed searches and connection pooling
  • Schema Management: Version-controlled migrations and templates
  • Full-Text Search: FTS5 support for advanced text search
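A minimal standalone illustration of FTS5 matching (this uses a toy table, not Memori's actual schema), assuming a SQLite build compiled with the FTS5 extension:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# FTS5 virtual tables index their columns for full-text MATCH queries.
conn.execute("CREATE VIRTUAL TABLE memories USING fts5(content)")
conn.executemany(
    "INSERT INTO memories(content) VALUES (?)",
    [("User prefers dark mode",), ("Project uses PostgreSQL",)],
)
# MATCH searches tokenized content, not substrings.
rows = conn.execute(
    "SELECT content FROM memories WHERE memories MATCH ?", ("dark",)
).fetchall()
```

Unlike a LIKE scan, the MATCH query hits the FTS index, which is what makes full-text search fast as the memory store grows.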

🔧 Developer Experience

  • Type Safety: Full Pydantic validation throughout
  • Simple API: One-line enablement with memori.enable()
  • Flexible Configuration: File, environment, or programmatic setup
  • Rich Examples: Basic usage, personal assistant, advanced config

📊 Memory Processing

  • Entity Extraction: People, technologies, projects, skills
  • Smart Categorization: Facts, preferences, skills, rules, context
  • Importance Scoring: Multi-dimensional relevance assessment
  • Relationship Mapping: Entity interconnections and memory graphs
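A multi-dimensional relevance assessment of this kind is often a weighted combination of normalized signals. A hypothetical sketch; the real scorer's dimensions and weights are not documented here:

```python
def importance_score(recency: float, frequency: float, relevance: float,
                     weights=(0.3, 0.3, 0.4)) -> float:
    """Hypothetical importance score: a weighted sum of signals,
    each normalized to [0, 1]. Weights are illustrative only."""
    return sum(w * s for w, s in zip(weights, (recency, frequency, relevance)))
```

With weights summing to 1, the score stays in [0, 1] and can be compared directly across memories when ranking retrieval candidates.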

🔌 Integrations

  • LiteLLM Native: Uses official callback system (recommended)
  • OpenAI/Anthropic: Clean wrapper classes for direct usage
  • Tool Support: Memory search tools for function calling
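A memory search tool for function calling would be declared roughly like this in the OpenAI tools format; the tool name and schema here are hypothetical, and Memori's actual definition may differ:

```python
# Hypothetical tool definition in OpenAI function-calling format.
memory_search_tool = {
    "type": "function",
    "function": {
        "name": "search_memory",
        "description": "Search stored memories for relevant context.",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {"type": "string", "description": "Search text."},
                "limit": {"type": "integer", "description": "Max results.", "default": 5},
            },
            "required": ["query"],
        },
    },
}
```

Passing such a definition in a chat completion's `tools` list lets the model decide when to call memory search instead of relying only on injected context.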

🛡️ Security & Reliability

  • Input Sanitization: Protection against injection attacks
  • Error Context: Detailed error information without exposing secrets
  • Graceful Degradation: Continues operation when components fail
  • Resource Management: Automatic cleanup and connection pooling

📁 Project Structure

memoriai/
├── core/              # Main memory interface and database
├── config/            # Configuration management system
├── agents/            # Pydantic-based memory processing
├── database/          # Multi-database support and queries
├── integrations/      # LLM provider integrations
├── utils/             # Helpers, validation, logging
└── tools/             # Memory search and retrieval tools

🎯 Philosophy Alignment

  • Second-memory for LLM work: Never repeat context again
  • Flexible database connections: Production-ready adapters
  • Simple, reliable architecture: Just works out of the box
  • Conscious context injection: Intelligent memory retrieval

⚡ Quick Start

from memoriai import Memori

memori = Memori(
    database_connect="sqlite:///my_memory.db",
    conscious_ingest=True,
    openai_api_key="sk-..."
)
memori.enable()  # Start recording all LLM conversations

# Use any LLM library - context automatically injected!
from litellm import completion
response = completion(model="gpt-4", messages=[...])

📚 Documentation

  • Clean, focused README aligned with project vision
  • Essential examples without complexity bloat
  • Configuration guides for development and production
  • Architecture documentation for contributors

🗂️ Archive Management

  • Moved outdated files to archive/ folder
  • Updated .gitignore to exclude archive from version control
  • Preserved development history while cleaning main structure

💡 Breaking Changes from Pre-1.0

  • Moved from enum-driven to Pydantic-based processing
  • Simplified API surface with focus on enable()/disable()
  • Restructured package layout for better modularity
  • Enhanced configuration system replaces simple parameters