SaintNick1214 commented on Nov 23, 2025

Background

The AI SDK community is growing rapidly, and developers need more options for persistent memory solutions. Currently, most available memory providers are cloud-only services that require API keys and create vendor lock-in.

Cortex Memory fills a critical gap by providing a self-hosted, TypeScript-native persistent memory solution that integrates seamlessly with the Vercel AI SDK. Built on Convex, it offers:

  • Complete data sovereignty (no third-party APIs)
  • True multi-tenancy with Memory Spaces
  • ACID guarantees for data integrity
  • Cross-application memory sharing (Hive Mode)
  • Edge runtime compatibility

This addition gives developers a production-ready alternative that prioritizes data ownership and architectural flexibility.

Summary

This PR adds comprehensive documentation for Cortex Memory (@cortexmemory/vercel-ai-provider) as a community provider.

What's Added:

  • Documentation file: content/providers/03-community-providers/69-cortex-memory.mdx
  • Package: @cortexmemory/vercel-ai-provider v0.1.2 (published on NPM)

Documentation Includes:

  • ✅ Installation and setup guide (3 steps with Convex deployment)
  • ✅ 10+ usage examples:
    • Basic chat with automatic memory (a sketch of this pattern follows the list)
    • Semantic search with embeddings
    • Multi-tenant SaaS patterns
    • Multi-provider support (OpenAI, Anthropic, Google)
    • Manual memory control APIs
    • Hive Mode for cross-application memory
    • Edge runtime examples
  • ✅ Complete configuration reference (12+ options)
  • ✅ Feature comparison table (Cortex vs cloud solutions)
  • ✅ Migration guide from mem0
  • ✅ "How It Works" visual flow
  • ✅ Troubleshooting section
  • ✅ Links to all resources (website, docs, GitHub, examples)
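
For orientation, here is a minimal sketch of the "basic chat with automatic memory" pattern listed above. The `generateText` and `openai` calls are standard Vercel AI SDK usage; the Cortex-specific names (`createCortexMemory`, `convexUrl`, `memorySpace`, `wrap`) are illustrative assumptions rather than the package's confirmed API, so treat the linked documentation as the source of truth.

```ts
// Hedged sketch: the Cortex-specific names (createCortexMemory, convexUrl,
// memorySpace, wrap) are illustrative assumptions, not the confirmed API of
// @cortexmemory/vercel-ai-provider. generateText/openai are standard AI SDK calls.
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';
import { createCortexMemory } from '@cortexmemory/vercel-ai-provider'; // export name assumed

const cortex = createCortexMemory({
  convexUrl: process.env.CONVEX_URL!, // self-hosted Convex deployment, no third-party API key
  memorySpace: 'user-123',            // isolates this user's memories
});

// Wrapping the underlying model lets prior context be recalled and new
// messages be persisted automatically on each call.
const { text } = await generateText({
  model: cortex.wrap(openai('gpt-4o')),
  prompt: 'Remind me which database we chose for the analytics service.',
});

console.log(text);
```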

Key Features Highlighted:

  • 🏠 Self-hosted on Convex (no API keys required)
  • 📦 TypeScript-native (not ported from Python)
  • 🎯 Memory Spaces for true multi-tenancy isolation (see the sketch after this list)
  • 🐝 Hive Mode for cross-application memory sharing
  • Edge compatible (Vercel Edge Functions, Cloudflare Workers)
  • 📊 ACID guarantees via Convex transactions
  • 🔄 Automatic versioning (10 versions per memory)
  • 🔍 Semantic search with embedding support
  • 🧬 Fact extraction for structured knowledge
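
To make the Memory Spaces and Hive Mode bullets concrete, here is a rough sketch. Only the concepts come from the feature list above; the factory and option names (`createCortexMemory`, `memorySpace`, `hiveMode`) are assumptions for illustration, not the published API.

```ts
// Conceptual sketch only: factory and option names are assumptions, not the real API.
import { createCortexMemory } from '@cortexmemory/vercel-ai-provider'; // export name assumed

// Multi-tenant isolation: one Memory Space per tenant, so tenants never
// see each other's stored conversations or facts.
function memoryForTenant(tenantId: string) {
  return createCortexMemory({
    convexUrl: process.env.CONVEX_URL!,
    memorySpace: `tenant-${tenantId}`,
  });
}

const acmeMemory = memoryForTenant('acme');

// Hive Mode: several applications (support bot, sales assistant, internal tools)
// share one space so they all benefit from the same customer history.
const sharedOrgMemory = createCortexMemory({
  convexUrl: process.env.CONVEX_URL!,
  memorySpace: 'org-acme-shared',
  hiveMode: true, // assumed flag name
});
```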

Package Information:

  • Name: @cortexmemory/vercel-ai-provider
  • Version: 0.1.2, published on NPM
  • License: Apache 2.0

Manual Verification

The package and documentation have been thoroughly tested:

Package Testing:

  • ✅ Published on NPM as @cortexmemory/vercel-ai-provider v0.1.2
  • ✅ Implements LanguageModelV2 specification correctly
  • ✅ Works with OpenAI, Anthropic, Google providers
  • ✅ Edge runtime tested in Vercel Edge Functions
  • ✅ Streaming support verified with automatic buffering
  • ✅ Manual memory control APIs tested (search, remember, clear); see the sketch after this list
  • ✅ Memory Spaces isolation verified
  • ✅ Hive Mode cross-application sharing tested

Documentation Verification:

  • ✅ All code examples tested and working
  • ✅ Installation steps verified end-to-end
  • ✅ All links verified (website, docs, GitHub, NPM)
  • ✅ MDX syntax validated
  • ✅ Follows existing community provider format

Example Applications:

Working example apps in the repo demonstrate:

  • Basic chat with persistent memory (Next.js App Router; a route handler sketch follows this list)
  • RAG pattern with documents + conversation memory
  • Multi-tenant SaaS patterns
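
As a rough illustration of the basic chat example in an App Router project: `streamText`, `openai`, and the `runtime = 'edge'` export are standard Next.js / AI SDK usage (assuming a recent AI SDK version), while the Cortex-specific pieces remain assumed names as in the earlier sketches.

```ts
// app/api/chat/route.ts (hypothetical path); Cortex-specific names are assumed.
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';
import { createCortexMemory } from '@cortexmemory/vercel-ai-provider'; // export name assumed

export const runtime = 'edge'; // viable because the provider has no Node.js dependencies

export async function POST(req: Request) {
  const { prompt, userId } = await req.json();

  const cortex = createCortexMemory({
    convexUrl: process.env.CONVEX_URL!,
    memorySpace: userId, // one space per signed-in user (assumed option name)
  });

  // Stream tokens to the client; per the note above, the provider buffers the
  // completion and persists it once the stream finishes.
  const result = streamText({
    model: cortex.wrap(openai('gpt-4o')),
    prompt,
  });

  return result.toTextStreamResponse();
}
```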

Checklist

  • Tests have been added / updated - N/A (documentation only, package has its own test suite)
  • Documentation has been added / updated - Complete documentation added for community provider
  • A patch changeset for relevant packages has been added - N/A (documentation only, no SDK changes)
  • I have reviewed this pull request - Self-reviewed for accuracy, formatting, and completeness

Future Work

Potential future enhancements (not required for this PR):

  • Video tutorial showing Cortex Memory setup and usage
  • Blog post comparing self-hosted vs cloud-only memory solutions
  • Additional example apps (multi-modal, advanced RAG patterns)

Related Issues

N/A - This is a new community provider addition.


Additional Context

Why Cortex Memory?

Cortex addresses key pain points developers face with existing memory solutions:

| Challenge | Cortex Solution |
| --- | --- |
| Vendor lock-in | Self-hosted on Convex (deploy anywhere) |
| API key management | No API keys needed |
| Multi-tenancy | Built-in Memory Spaces |
| Data sovereignty | Full control over infrastructure |
| Edge compatibility | Zero Node.js dependencies |
| Cost unpredictability | Fixed Convex pricing vs. per-API-call pricing |

Comparison with Alternatives

| Feature | Cortex | Cloud Solutions |
| --- | --- | --- |
| Self-hosted | ✅ | ❌ |
| API key required | ❌ | ✅ |
| Memory Spaces | ✅ | |
| ACID guarantees | ✅ | ⚠️ |
| Versioning | ✅ | |
| Edge compatible | ✅ | ⚠️ |
| Cross-app sharing | ✅ | |

Community Reception

  • Active development and maintenance
  • Production-ready (v0.1.2)
  • Apache 2.0 licensed (permissive)
  • Growing adoption for self-hosted AI applications

Resources for Review


Thank you for reviewing! I'm happy to address any feedback or make adjustments. 🙏
