feat(core): Replace Vercel KV adapter with Vercel Runtime Cache #2439

Draft · wants to merge 4 commits into base: canary

5 changes: 5 additions & 0 deletions .changeset/shy-poems-follow.md
@@ -0,0 +1,5 @@
---
"@bigcommerce/catalyst-core": patch
---

Simplify TTL handling for the middleware cache and remove the SWR-like behavior, preferring long TTLs instead. Introduce the `STORE_STATUS_CACHE_TTL` and `ROUTE_CACHE_TTL` environment variables.
5 changes: 5 additions & 0 deletions .changeset/violet-cups-carry.md
@@ -0,0 +1,5 @@
---
"@bigcommerce/catalyst-core": minor
---

Implement Vercel Runtime Cache as a replacement for KV store products for middleware caching.
291 changes: 291 additions & 0 deletions core/lib/fetch-cache/README.md
@@ -0,0 +1,291 @@
# Fetch Cache System

A 2-layer caching system designed to work around Next.js middleware limitations where the normal `fetch()` Data Cache is not available.

## Overview

This system provides a drop-in replacement for data fetching in Next.js middleware with automatic TTL-based caching. It uses a memory-first approach with configurable backend storage.

## Architecture

```
┌─────────────┐    ┌──────────────┐    ┌─────────────┐
│   Request   │───▶│ Memory Cache │───▶│   Backend   │
└─────────────┘    └──────────────┘    │   Storage   │
                          │            └─────────────┘
                   ┌──────▼───────┐
                   │ Fresh Fetch  │
                   └──────────────┘
```

**2-Layer Strategy** (see the read-path sketch after the list):

1. **Memory Cache** (L1): Fast, in-memory LRU cache with TTL support
2. **Backend Storage** (L2): Persistent storage (Vercel Runtime Cache, Redis, etc.)
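
Conceptually, a read checks L1 before L2 and promotes backend hits into memory. A minimal sketch of that read path (the `memory` and `backend` constants are stand-ins for the real adapters described below, not part of the actual implementation):

```typescript
import type { FetchCacheAdapter } from '~/lib/fetch-cache/types';

// Stand-ins: any two adapters implementing the FetchCacheAdapter interface.
declare const memory: FetchCacheAdapter;
declare const backend: FetchCacheAdapter;

async function layeredGet<T>(key: string): Promise<T | null> {
  // L1: sub-millisecond in-memory lookup
  const fromMemory = await memory.get<T>(key);

  if (fromMemory !== null) return fromMemory;

  // L2: persistent backend lookup
  const fromBackend = await backend.get<T>(key);

  if (fromBackend !== null) {
    // Promote to L1 without blocking the caller (fire-and-forget)
    void memory.set(key, fromBackend);
  }

  return fromBackend;
}
```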

## Quick Start

### Basic Usage

```typescript
import { fetchWithTTLCache } from '~/lib/fetch-cache';

// Cache a single data fetch
const userData = await fetchWithTTLCache(
  async () => {
    const response = await fetch('/api/user/123');
    return response.json();
  },
  'user:123',
  { ttl: 300 }, // 5 minutes
);
```

### Batch Fetching

```typescript
import { batchFetchWithTTLCache } from '~/lib/fetch-cache';

// Cache multiple related fetches efficiently
const [route, status] = await batchFetchWithTTLCache([
  {
    fetcher: () => getRoute(pathname, channelId),
    cacheKey: routeCacheKey(pathname, channelId),
    options: { ttl: 86400 }, // 24 hours
  },
  {
    fetcher: () => getStoreStatus(channelId),
    cacheKey: storeStatusCacheKey(channelId),
    options: { ttl: 3600 }, // 1 hour
  },
]);
```

## Cache Key Management

```typescript
import { cacheKey, routeCacheKey, storeStatusCacheKey } from '~/lib/fetch-cache/keys';

// Generic cache key with optional scope
const key1 = cacheKey('user-profile', 'channel-123'); // → "channel-123:user-profile"

// Pre-built helpers for common use cases
const routeKey = routeCacheKey('/products', 'channel-123');
const statusKey = storeStatusCacheKey('channel-123');
```

## Backend Adapters

The system automatically detects the best available backend, trying the following in order (a condensed sketch of the detection logic follows the list):

### 1. Cloudflare Workers (Future)

```typescript
// Automatically detected in Cloudflare Workers environment
// Uses native Cache API for optimal performance
```

### 2. Vercel Edge Runtime

```typescript
// Automatically detected when VERCEL=1
// Uses @vercel/functions getCache() API
```

### 3. Upstash Redis

```typescript
// Automatically detected when Redis env vars are present
// Requires: UPSTASH_REDIS_REST_URL and UPSTASH_REDIS_REST_TOKEN
```

### 4. Memory Only (Fallback)

```typescript
// Used when no other backend is available
// Memory cache only - no persistence
```
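
The detection order is exactly the order listed above; a condensed sketch of the selection logic (the full version is `createFetchCacheAdapter()` in `index.ts`; the function name here is illustrative):

```typescript
// Condensed from createFetchCacheAdapter() in core/lib/fetch-cache/index.ts.
function detectBackendName(): string {
  // 1. Cloudflare Workers expose caches.default globally
  // eslint-disable-next-line @typescript-eslint/no-unsafe-member-access
  if (typeof globalThis !== 'undefined' && (globalThis as any).caches?.default) {
    return 'Cloudflare Native';
  }

  // 2. Vercel sets VERCEL=1 in its runtime
  if (process.env.VERCEL === '1') {
    return 'Vercel Runtime Cache';
  }

  // 3. Upstash Redis requires both REST credentials
  if (process.env.UPSTASH_REDIS_REST_URL && process.env.UPSTASH_REDIS_REST_TOKEN) {
    return 'Upstash Redis';
  }

  // 4. Otherwise fall back to the in-memory LRU alone
  return 'Memory Only';
}
```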

## Configuration

### Environment Variables

```bash
# Enable detailed logging (default: enabled in development)
FETCH_CACHE_LOGGER=true

# TTL configuration (in seconds)
ROUTE_CACHE_TTL=86400 # 24 hours
STORE_STATUS_CACHE_TTL=3600 # 1 hour

# Backend-specific configuration
VERCEL=1 # Auto-detected
UPSTASH_REDIS_REST_URL=https://... # Redis backend
UPSTASH_REDIS_REST_TOKEN=... # Redis backend
```
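
The TTL variables are consumed in `with-routes.ts` (shown later in this diff); when unset, the middleware falls back to long defaults:

```typescript
// From core/middlewares/with-routes.ts: env-driven TTLs with long defaults.
const ROUTE_CACHE_TTL = parseInt(process.env.ROUTE_CACHE_TTL || '86400', 10); // 24 hours
const STORE_STATUS_CACHE_TTL = parseInt(process.env.STORE_STATUS_CACHE_TTL || '3600', 10); // 1 hour
```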

### Cache Options

```typescript
interface FetchCacheOptions {
  ttl?: number; // Time to live in seconds
  tags?: string[]; // Cache tags for invalidation (backend dependent)
  [key: string]: unknown; // Additional backend-specific options
}
```
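
For example, setting a value with both a TTL and tags (a sketch; `tags` are only honored by backends that support them, and the payload here is hypothetical):

```typescript
import { fetchCache } from '~/lib/fetch-cache';

// Hypothetical payload; any JSON-serializable value works.
const product = { id: 456, name: 'Example Product' };

await fetchCache.set('product:456', product, {
  ttl: 3600, // expire after one hour
  tags: ['products', 'channel-123'], // used by tag-aware backends, ignored elsewhere
});
```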

## Logging

When `FETCH_CACHE_LOGGER=true`, you'll see detailed operation logs:

```
[BigCommerce Fetch Cache] FETCH user:123 (Upstash Redis) - ✓ All from memory cache - Memory: 0.02ms, Total: 0.02ms
[BigCommerce Fetch Cache] BATCH_FETCH [route:/products, store-status] (Vercel Runtime Cache) - Memory: 1, Backend: 1 - Memory: 0.04ms, Backend: 1.23ms, Total: 1.27ms
[BigCommerce Fetch Cache] FETCH product:456 (Memory Only) - ✗ Fetch required: 1 - Backend: 45.67ms, Total: 45.71ms
```

**Log Format:**

- `✓` = Cache hit
- `✗` = Cache miss (fresh fetch required)
- Backend shows which storage system is being used
- Timing breakdown shows memory vs backend vs total time

## Examples

### Middleware Usage (Before/After)

**Before** (Complex manual cache management):

```typescript
// Complex cache logic spread across multiple functions
let [route, status] = await kv.mget(kvKey(pathname, channelId), kvKey(STORE_STATUS_KEY, channelId));

if (!status) {
  status = await fetchAndCacheStatus(channelId, event);
}

if (!route) {
  route = await fetchAndCacheRoute(pathname, channelId, event);
}
```

**After** (Clean, declarative):

```typescript
// Simple, declarative fetch with automatic caching
const [route, status] = await batchFetchWithTTLCache([
  {
    fetcher: () => getRoute(pathname, channelId),
    cacheKey: routeCacheKey(pathname, channelId),
    options: { ttl: ROUTE_CACHE_TTL },
  },
  {
    fetcher: () => getStoreStatus(channelId),
    cacheKey: storeStatusCacheKey(channelId),
    options: { ttl: STORE_STATUS_CACHE_TTL },
  },
]);
```

### Custom Cache Implementation

```typescript
import { fetchCache } from '~/lib/fetch-cache';

// Direct cache access (advanced usage)
const cachedData = await fetchCache.get<UserData>('user:123');

if (!cachedData) {
  const freshData = await fetchUserData('123');
  await fetchCache.set('user:123', freshData, { ttl: 300 });
}
```

## Performance Benefits

- **Memory First**: Sub-millisecond cache hits for frequently accessed data
- **Batch Operations**: Optimized multi-key fetching reduces round trips
- **Platform Native**: Uses the best caching available for each environment
- **Fire-and-Forget**: Cache updates don't block the response
- **TTL Management**: Automatic expiration handling

## Migration Guide

### From Direct KV Usage

```typescript
// Old KV approach
import { kv } from '~/lib/kv';
const data = await kv.get('key');
if (!data) {
  const fresh = await fetchData();
  await kv.set('key', fresh, { ttl: 300 });
}

// New fetch cache approach
import { fetchWithTTLCache } from '~/lib/fetch-cache';
const data = await fetchWithTTLCache(() => fetchData(), 'key', { ttl: 300 });
```

### From Manual Cache Management

The new system eliminates the need for manual cache checking, setting, and TTL management. Just wrap your data-fetching function with `fetchWithTTLCache()` and caching is handled automatically.

## Extending the System

### Adding New Backends

```typescript
// Example: Custom database cache adapter
export class DatabaseCacheAdapter implements FetchCacheAdapter {
  async get<T>(cacheKey: string): Promise<T | null> {
    // Implement database get logic
  }

  async set<T>(cacheKey: string, data: T, options?: FetchCacheOptions): Promise<T | null> {
    // Implement database set logic with TTL
  }

  async mget<T>(...cacheKeys: string[]): Promise<Array<T | null>> {
    // Implement batch get logic
  }
}
```

Then add detection logic to `createFetchCacheAdapter()` in `index.ts`.
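
A sketch of what that detection branch might look like (the `DATABASE_CACHE_URL` variable and `./adapters/database` module path are hypothetical):

```typescript
// Inside createFetchCacheAdapter(), before the memory-only fallback:
if (process.env.DATABASE_CACHE_URL) {
  const { DatabaseCacheAdapter } = await import('./adapters/database');

  return { adapter: new DatabaseCacheAdapter(), name: 'Database' };
}
```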

## Troubleshooting

### Cache Not Working

1. Check if backend is properly configured (env vars)
2. Enable logging with `FETCH_CACHE_LOGGER=true`
3. Verify cache keys are consistent between set/get operations (see the example below)
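
Using the helpers from `keys.ts` is the easiest way to keep keys consistent; for example:

```typescript
import { routeCacheKey } from '~/lib/fetch-cache/keys';

// The same inputs always yield the same key: "channel-123:route:/products"
const key = routeCacheKey('/products', 'channel-123');
```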

### Performance Issues

1. Use batch fetching for multiple related operations
2. Choose appropriate TTL values (too short = frequent fetches, too long = stale data)
3. Monitor memory usage if using memory-only mode

### Backend-Specific Issues

**Vercel Runtime Cache:**

- Only works in the Vercel Edge Runtime
- Limited to 1MB per key
- Automatic cleanup based on usage

**Upstash Redis:**

- Check network connectivity
- Verify authentication tokens
- Monitor Redis memory usage

**Memory Only:**

- Limited by available memory
- No persistence across restarts
- Consider LRU cache size (default: 500 items)
88 changes: 88 additions & 0 deletions core/lib/fetch-cache/adapters/cloudflare-native.ts
@@ -0,0 +1,88 @@
import { FetchCacheAdapter, FetchCacheOptions } from '../types';

/**
 * Cloudflare native cache adapter that uses the Cache API available in Cloudflare Workers.
 * This demonstrates how platform-native caching can be used when available.
 *
 * Note: This is a future implementation for when running in a Cloudflare Workers environment.
 * Cloudflare Workers provide a Cache API that can be used directly.
 */
export class CloudflareNativeFetchCacheAdapter implements FetchCacheAdapter {
  private cache?: Cache;

  private async getCache(): Promise<Cache> {
    if (!this.cache) {
      // In Cloudflare Workers, caches.default is available
      // eslint-disable-next-line @typescript-eslint/no-unsafe-member-access
      this.cache = (globalThis as any).caches?.default;

      if (!this.cache) {
        throw new Error('Cloudflare Cache API not available');
      }
    }

    return this.cache;
  }

  private createCacheKey(key: string): string {
    // Create a valid cache key for the Cache API
    return `https://cache.internal/${encodeURIComponent(key)}`;
  }

  async get<T>(cacheKey: string): Promise<T | null> {
    try {
      const cache = await this.getCache();
      const cacheUrl = this.createCacheKey(cacheKey);

      const response = await cache.match(cacheUrl);

      if (!response) {
        return null;
      }

      const data = await response.json();
      // eslint-disable-next-line @typescript-eslint/consistent-type-assertions
      return data as T;
    } catch (error) {
      // eslint-disable-next-line no-console
      console.warn(`Cloudflare cache get failed for key ${cacheKey}:`, error);
      return null;
    }
  }

  async mget<T>(...cacheKeys: string[]): Promise<Array<T | null>> {
    // For now, implement mget as parallel get operations.
    // A future optimization could use batch operations if available.
    const results = await Promise.all(cacheKeys.map((key) => this.get<T>(key)));

    return results;
  }

  async set<T>(cacheKey: string, data: T, options: FetchCacheOptions = {}): Promise<T | null> {
    try {
      const cache = await this.getCache();
      const cacheUrl = this.createCacheKey(cacheKey);

      // Create headers with TTL information
      const headers = new Headers({
        'Content-Type': 'application/json',
      });

      // Add cache control headers for TTL
      if (options.ttl) {
        headers.set('Cache-Control', `max-age=${options.ttl}`);
      }

      // Create a response to store in the cache
      const response = new Response(JSON.stringify(data), { headers });

      await cache.put(cacheUrl, response);

      return data;
    } catch (error) {
      // eslint-disable-next-line no-console
      console.warn(`Cloudflare cache set failed for key ${cacheKey}:`, error);
      return null;
    }
  }
}
62 changes: 62 additions & 0 deletions core/lib/fetch-cache/adapters/memory.ts
@@ -0,0 +1,62 @@
/* eslint-disable @typescript-eslint/require-await */
import { LRUCache } from 'lru-cache';

import { FetchCacheAdapter, FetchCacheOptions } from '../types';

interface CacheEntry {
  value: unknown;
  expiresAt: number;
}

export class MemoryFetchCacheAdapter implements FetchCacheAdapter {
  private cache = new LRUCache<string, CacheEntry>({
    max: 500,
  });

  async get<T>(cacheKey: string): Promise<T | null> {
    const entry = this.cache.get(cacheKey);

    if (!entry) {
      return null;
    }

    // Check if expired
    if (entry.expiresAt < Date.now()) {
      this.cache.delete(cacheKey); // Clean up expired entry
      return null;
    }

    // eslint-disable-next-line @typescript-eslint/consistent-type-assertions
    return entry.value as T;
  }

  async mget<T>(...cacheKeys: string[]): Promise<Array<T | null>> {
    const results = cacheKeys.map((key) => {
      const entry = this.cache.get(key);

      if (!entry) {
        return null;
      }

      // Check if expired
      if (entry.expiresAt < Date.now()) {
        this.cache.delete(key); // Clean up expired entry
        return null;
      }

      // eslint-disable-next-line @typescript-eslint/consistent-type-assertions
      return entry.value as T;
    });

    return results;
  }

  async set<T>(cacheKey: string, data: T, options: FetchCacheOptions = {}): Promise<T | null> {
    this.cache.set(cacheKey, {
      value: data,
      expiresAt: options.ttl ? Date.now() + options.ttl * 1_000 : Number.MAX_SAFE_INTEGER,
    });

    return data;
  }
}
57 changes: 57 additions & 0 deletions core/lib/fetch-cache/adapters/upstash-redis.ts
@@ -0,0 +1,57 @@
import { Redis } from '@upstash/redis';

import { FetchCacheAdapter, FetchCacheOptions } from '../types';

export class UpstashRedisFetchCacheAdapter implements FetchCacheAdapter {
  private redis = Redis.fromEnv();

  async get<T>(cacheKey: string): Promise<T | null> {
    try {
      const result = await this.redis.get<T>(cacheKey);
      return result;
    } catch (error) {
      // eslint-disable-next-line no-console
      console.warn(`Upstash Redis get failed for key ${cacheKey}:`, error);
      return null;
    }
  }

  async mget<T>(...cacheKeys: string[]): Promise<Array<T | null>> {
    try {
      const result = await this.redis.mget<T[]>(cacheKeys);

      // Redis mget returns an array, but we need to handle the case where some values might be null
      return Array.isArray(result) ? result : cacheKeys.map(() => null);
    } catch (error) {
      // eslint-disable-next-line no-console
      console.warn(`Upstash Redis mget failed for keys [${cacheKeys.join(', ')}]:`, error);
      return cacheKeys.map(() => null);
    }
  }

  async set<T>(cacheKey: string, data: T, options: FetchCacheOptions = {}): Promise<T | null> {
    try {
      // Build Redis options - support TTL but ignore tags (not supported by Redis)
      const { ttl, tags, ...redisOpts } = options;
      const redisOptions: Record<string, unknown> = { ...redisOpts };

      // Add TTL if provided (Redis EX parameter for seconds)
      if (ttl) {
        redisOptions.ex = ttl;
      }

      const response = await this.redis.set(
        cacheKey,
        data,
        Object.keys(redisOptions).length > 0 ? redisOptions : undefined,
      );

      // Redis SET returns 'OK' on success, null on failure
      return response === 'OK' ? data : null;
    } catch (error) {
      // eslint-disable-next-line no-console
      console.warn(`Upstash Redis set failed for key ${cacheKey}:`, error);
      return null;
    }
  }
}
70 changes: 70 additions & 0 deletions core/lib/fetch-cache/adapters/vercel-runtime-cache.ts
@@ -0,0 +1,70 @@
import { FetchCacheAdapter, FetchCacheOptions } from '../types';

export class VercelRuntimeCacheAdapter implements FetchCacheAdapter {
  async get<T>(cacheKey: string): Promise<T | null> {
    try {
      const { getCache } = await import('@vercel/functions');
      const cache = getCache();
      const result = await cache.get(cacheKey);

      // eslint-disable-next-line @typescript-eslint/consistent-type-assertions
      return result as T | null;
    } catch (error) {
      // eslint-disable-next-line no-console
      console.warn(`Vercel runtime cache get failed for key ${cacheKey}:`, error);
      return null;
    }
  }

  async mget<T>(...cacheKeys: string[]): Promise<Array<T | null>> {
    const { getCache } = await import('@vercel/functions');
    const cache = getCache();

    const values = await Promise.all(
      cacheKeys.map(async (key) => {
        try {
          const result = await cache.get(key);
          // eslint-disable-next-line @typescript-eslint/consistent-type-assertions
          return result as T | null;
        } catch (error) {
          // eslint-disable-next-line no-console
          console.warn(`Vercel runtime cache get failed for key ${key}:`, error);
          return null;
        }
      }),
    );

    return values;
  }

  async set<T>(cacheKey: string, data: T, options: FetchCacheOptions = {}): Promise<T | null> {
    try {
      const { getCache } = await import('@vercel/functions');
      const cache = getCache();

      // Build runtime cache options
      const runtimeCacheOptions: Record<string, unknown> = {};

      if (options.ttl) {
        runtimeCacheOptions.ttl = options.ttl;
      }

      if (options.tags && Array.isArray(options.tags)) {
        runtimeCacheOptions.tags = options.tags;
      }

      // Call cache.set with options if provided, otherwise call without options
      if (Object.keys(runtimeCacheOptions).length > 0) {
        await cache.set(cacheKey, data, runtimeCacheOptions);
      } else {
        await cache.set(cacheKey, data);
      }

      return data;
    } catch (error) {
      // eslint-disable-next-line no-console
      console.warn(`Vercel runtime cache set failed for key ${cacheKey}:`, error);
      return null;
    }
  }
}
362 changes: 362 additions & 0 deletions core/lib/fetch-cache/index.ts
@@ -0,0 +1,362 @@
import { FetchCacheLogger, timer } from './lib/cache-logger';
import { MemoryFetchCacheAdapter } from './adapters/memory';
import { FetchCacheAdapter, FetchCacheOptions } from './types';

interface FetchCacheConfig {
  logger?: boolean;
  loggerPrefix?: string;
}

class TwoLayerFetchCache {
  private memoryCache = new MemoryFetchCacheAdapter();
  private backendAdapter?: FetchCacheAdapter;
  private logger: FetchCacheLogger;
  private backendName: string;

  constructor(
    private createBackendAdapter: () => Promise<FetchCacheAdapter>,
    private config: FetchCacheConfig = {},
    backendName = 'Backend',
  ) {
    this.backendName = backendName;
    this.logger = new FetchCacheLogger({
      enabled: config.logger ?? false,
      prefix: config.loggerPrefix ?? '[Fetch Cache]',
    });
  }

  async get<T>(cacheKey: string): Promise<T | null> {
    const [value] = await this.mget<T>(cacheKey);
    return value ?? null;
  }

  async mget<T>(...cacheKeys: string[]): Promise<Array<T | null>> {
    const startTime = timer();

    // Step 1: Check memory cache
    const memoryStartTime = timer();
    const memoryValues = await this.memoryCache.mget<T>(...cacheKeys);
    const memoryTime = timer() - memoryStartTime;

    // Analyze memory hits
    const memoryHits = memoryValues.filter((value) => value !== null).length;

    // If all values found in memory, return early
    if (memoryHits === cacheKeys.length) {
      const totalTime = timer() - startTime;

      this.logger.logOperation({
        operation: 'BATCH_FETCH',
        cacheKeys,
        memoryHits,
        backendHits: 0,
        totalMisses: 0,
        memoryTime,
        totalTime,
        backend: this.backendName,
      });

      return memoryValues;
    }

    // Step 2: Get missing keys from backend
    const backendStartTime = timer();
    const backend = await this.getBackendAdapter();

    // Identify keys that need to be fetched from backend
    const keysToFetch = cacheKeys.filter((_, index) => memoryValues[index] === null);
    const backendValues = await backend.mget<T>(...keysToFetch);
    const backendTime = timer() - backendStartTime;

    // Step 3: Merge results and update memory cache
    const finalValues: Array<T | null> = [];
    let backendIndex = 0;

    const backendValuesToCache: Array<{ key: string; value: T }> = [];

    for (let i = 0; i < cacheKeys.length; i++) {
      const memoryValue = memoryValues[i];
      const currentKey = cacheKeys[i];

      if (memoryValue !== null && memoryValue !== undefined) {
        // Use value from memory
        finalValues[i] = memoryValue;
      } else {
        // Use value from backend
        const backendValue = backendValues[backendIndex];
        finalValues[i] = backendValue ?? null;

        // Queue for memory cache if not null and key exists
        if (backendValue !== null && backendValue !== undefined && currentKey) {
          backendValuesToCache.push({ key: currentKey, value: backendValue });
        }

        backendIndex++;
      }
    }

    // Update memory cache with backend values (don't await - fire and forget)
    if (backendValuesToCache.length > 0) {
      Promise.all(
        backendValuesToCache.map(({ key, value }) => this.memoryCache.set(key, value)),
      ).catch((error) => {
        // eslint-disable-next-line no-console
        console.warn('Failed to update memory cache:', error);
      });
    }

    // Step 4: Calculate final statistics and log
    const backendHits = backendValues.filter((value) => value !== null).length;
    const totalMisses = finalValues.filter((value) => value === null).length;
    const totalTime = timer() - startTime;

    this.logger.logOperation({
      operation: 'BATCH_FETCH',
      cacheKeys,
      memoryHits,
      backendHits,
      totalMisses,
      memoryTime,
      backendTime,
      totalTime,
      backend: this.backendName,
    });

    return finalValues;
  }

  async set<T>(cacheKey: string, data: T, options: FetchCacheOptions = {}): Promise<T | null> {
    const startTime = timer();

    // Step 1: Set in memory cache
    const memoryStartTime = timer();
    await this.memoryCache.set(cacheKey, data, options);
    const memoryTime = timer() - memoryStartTime;

    // Step 2: Set in backend
    const backendStartTime = timer();
    const backend = await this.getBackendAdapter();
    const result = await backend.set(cacheKey, data, options);
    const backendTime = timer() - backendStartTime;

    const totalTime = timer() - startTime;

    this.logger.logOperation({
      operation: 'CACHE_SET',
      cacheKeys: [cacheKey],
      memoryTime,
      backendTime,
      totalTime,
      options,
      backend: this.backendName,
    });

    return result;
  }

  private async getBackendAdapter(): Promise<FetchCacheAdapter> {
    if (!this.backendAdapter) {
      this.backendAdapter = await this.createBackendAdapter();
    }
    return this.backendAdapter;
  }
}

async function createFetchCacheAdapter(): Promise<{ adapter: FetchCacheAdapter; name: string }> {
  // Feature detection for Cloudflare Workers
  // eslint-disable-next-line @typescript-eslint/no-unsafe-member-access
  if (typeof globalThis !== 'undefined' && (globalThis as any).caches?.default) {
    const { CloudflareNativeFetchCacheAdapter } = await import('./adapters/cloudflare-native');
    return { adapter: new CloudflareNativeFetchCacheAdapter(), name: 'Cloudflare Native' };
  }

  // Vercel Edge Runtime
  if (process.env.VERCEL === '1') {
    const { VercelRuntimeCacheAdapter } = await import('./adapters/vercel-runtime-cache');
    return { adapter: new VercelRuntimeCacheAdapter(), name: 'Vercel Runtime Cache' };
  }

  // Upstash Redis
  if (process.env.UPSTASH_REDIS_REST_URL && process.env.UPSTASH_REDIS_REST_TOKEN) {
    const { UpstashRedisFetchCacheAdapter } = await import('./adapters/upstash-redis');
    return { adapter: new UpstashRedisFetchCacheAdapter(), name: 'Upstash Redis' };
  }

  // Fallback to memory-only
  return { adapter: new MemoryFetchCacheAdapter(), name: 'Memory Only' };
}

// Create the global fetch cache instance
const createFetchCacheInstance = async () => {
  const { adapter, name } = await createFetchCacheAdapter();

  return new TwoLayerFetchCache(
    async () => adapter,
    {
      logger:
        (process.env.NODE_ENV !== 'production' && process.env.FETCH_CACHE_LOGGER !== 'false') ||
        process.env.FETCH_CACHE_LOGGER === 'true',
      loggerPrefix: '[BigCommerce Fetch Cache]',
    },
    name,
  );
};

const fetchCacheInstance = await createFetchCacheInstance();

/**
 * Fetch data with TTL caching using a 2-layer cache strategy (memory + backend).
 *
 * This function provides a drop-in replacement for data fetching in Next.js middleware
 * where the normal fetch cache is not available.
 *
 * @param fetcher - Function that fetches the data (e.g., API call)
 * @param cacheKey - Unique key for caching this data
 * @param options - Cache options including TTL and tags
 *
 * @example
 * ```typescript
 * const userData = await fetchWithTTLCache(
 *   async () => {
 *     const response = await fetch('/api/user');
 *     return response.json();
 *   },
 *   'user:123',
 *   { ttl: 300 } // 5 minutes
 * );
 * ```
 */
export async function fetchWithTTLCache<T>(
  fetcher: () => Promise<T>,
  cacheKey: string,
  options: FetchCacheOptions = {},
): Promise<T> {
  const startTime = timer();

  // Try to get from cache first
  const cachedData = await fetchCacheInstance.get<T>(cacheKey);

  if (cachedData !== null) {
    const totalTime = timer() - startTime;

    fetchCacheInstance['logger'].logOperation({
      operation: 'FETCH',
      cacheKeys: [cacheKey],
      memoryHits: 1, // We don't know the source, but we got a hit
      backendHits: 0,
      totalMisses: 0,
      totalTime,
      backend: fetchCacheInstance['backendName'],
    });

    return cachedData;
  }

  // Cache miss - fetch fresh data
  const fetchStartTime = timer();
  const freshData = await fetcher();
  const fetchTime = timer() - fetchStartTime;

  // Store in cache (fire and forget)
  fetchCacheInstance.set(cacheKey, freshData, options).catch((error) => {
    // eslint-disable-next-line no-console
    console.warn('Failed to cache data:', error);
  });

  const totalTime = timer() - startTime;

  fetchCacheInstance['logger'].logOperation({
    operation: 'FETCH',
    cacheKeys: [cacheKey],
    memoryHits: 0,
    backendHits: 0,
    totalMisses: 1,
    backendTime: fetchTime, // This is the actual fetch time
    totalTime,
    backend: fetchCacheInstance['backendName'],
  });

  return freshData;
}

/**
 * Batch fetch multiple pieces of data with TTL caching.
 *
 * This is useful when you need to fetch multiple related pieces of data
 * and want to optimize cache hits.
 *
 * @param requests - Array of fetch requests with cache keys
 * @param defaultOptions - Default cache options (can be overridden per request)
 *
 * @example
 * ```typescript
 * const results = await batchFetchWithTTLCache([
 *   {
 *     fetcher: () => getRoute(pathname, channelId),
 *     cacheKey: routeCacheKey(pathname, channelId),
 *     options: { ttl: 86400 }
 *   },
 *   {
 *     fetcher: () => getStoreStatus(channelId),
 *     cacheKey: storeStatusCacheKey(channelId),
 *     options: { ttl: 3600 }
 *   }
 * ]);
 * ```
 */
export async function batchFetchWithTTLCache<T>(
  requests: Array<{
    fetcher: () => Promise<T>;
    cacheKey: string;
    options?: FetchCacheOptions;
  }>,
  defaultOptions: FetchCacheOptions = {},
): Promise<Array<T | null>> {
  const cacheKeys = requests.map((req) => req.cacheKey);

  // Try to get all from cache first
  const cachedValues = await fetchCacheInstance.mget<T>(...cacheKeys);

  // Identify which ones need to be fetched
  const toFetch: Array<{ index: number; request: (typeof requests)[0] }> = [];

  cachedValues.forEach((value, index) => {
    if (value === null) {
      const request = requests[index];

      if (request) {
        toFetch.push({ index, request });
      }
    }
  });

  // Fetch missing data
  if (toFetch.length > 0) {
    const fetchPromises = toFetch.map(async ({ index, request }) => {
      const freshData = await request.fetcher();
      const options = { ...defaultOptions, ...request.options };

      // Store in cache (fire and forget)
      fetchCacheInstance.set(request.cacheKey, freshData, options).catch((error) => {
        // eslint-disable-next-line no-console
        console.warn('Failed to cache batch data:', error);
      });

      return { index, data: freshData };
    });

    const fetchResults = await Promise.all(fetchPromises);

    // Merge cached and fresh data
    const finalResults = [...cachedValues];

    fetchResults.forEach(({ index, data }) => {
      finalResults[index] = data;
    });

    return finalResults;
  }

  return cachedValues;
}

// Expose the cache instance for direct access if needed
export { fetchCacheInstance as fetchCache };
39 changes: 39 additions & 0 deletions core/lib/fetch-cache/keys.ts
@@ -0,0 +1,39 @@
/**
 * Generate a cache key for the fetch cache system.
 * This creates a consistent, scoped key for caching fetched data.
 *
 * @param key - The base key (e.g., pathname, API endpoint identifier)
 * @param scope - Optional scope (e.g., channelId, userId) to namespace the key
 * @returns A formatted cache key
 */
export function cacheKey(key: string, scope?: string): string {
  if (scope) {
    return `${scope}:${key}`;
  }
  return key;
}

// Common cache keys used throughout the application
export const STORE_STATUS_KEY = 'store-status';
export const ROUTE_KEY_PREFIX = 'route';

/**
 * Generate a route cache key.
 *
 * @param pathname - The route pathname
 * @param channelId - The channel ID for scoping
 * @returns A formatted route cache key
 */
export function routeCacheKey(pathname: string, channelId: string): string {
  return cacheKey(`${ROUTE_KEY_PREFIX}:${pathname}`, channelId);
}

/**
 * Generate a store status cache key.
 *
 * @param channelId - The channel ID for scoping
 * @returns A formatted store status cache key
 */
export function storeStatusCacheKey(channelId: string): string {
  return cacheKey(STORE_STATUS_KEY, channelId);
}
136 changes: 136 additions & 0 deletions core/lib/fetch-cache/lib/cache-logger.ts
@@ -0,0 +1,136 @@
interface FetchCacheOperation {
  operation: 'FETCH' | 'BATCH_FETCH' | 'CACHE_SET';
  cacheKeys: string[];
  memoryHits?: number;
  backendHits?: number;
  totalMisses?: number;
  memoryTime?: number;
  backendTime?: number;
  totalTime: number;
  options?: Record<string, unknown>;
  backend?: string;
}

interface FetchCacheLoggerConfig {
  enabled: boolean;
  prefix?: string;
}

export class FetchCacheLogger {
  private config: FetchCacheLoggerConfig;

  constructor(config: FetchCacheLoggerConfig) {
    this.config = config;
  }

  logOperation(operation: FetchCacheOperation): void {
    if (!this.config.enabled) return;

    const prefix = this.config.prefix || '[Fetch Cache]';
    const { operation: op, cacheKeys, backend } = operation;

    // Build the main message
    const keyStr = cacheKeys.length === 1 ? cacheKeys[0] : `[${cacheKeys.join(', ')}]`;
    let message = `${prefix} ${op} ${keyStr}`;

    // Add backend info if available
    if (backend) {
      message += ` (${backend})`;
    }

    // Add hit/miss analysis for fetch operations
    if (op === 'FETCH' || op === 'BATCH_FETCH') {
      const analysis = this.buildHitMissAnalysis(operation);

      if (analysis) {
        message += ` - ${analysis}`;
      }
    }

    // Add timing breakdown
    const timing = this.buildTimingBreakdown(operation);

    if (timing) {
      message += ` - ${timing}`;
    }

    // Add options if present (for CACHE_SET operations)
    if (operation.options && Object.keys(operation.options).length > 0) {
      const opts = this.formatOptions(operation.options);

      message += ` - ${opts}`;
    }

    // eslint-disable-next-line no-console
    console.log(message);
  }

  private buildHitMissAnalysis(operation: FetchCacheOperation): string {
    const { cacheKeys, memoryHits = 0, backendHits = 0, totalMisses = 0 } = operation;
    const total = cacheKeys.length;

    if (memoryHits === total) {
      return '✓ All from memory cache';
    }

    if (memoryHits + backendHits === total) {
      if (memoryHits > 0) {
        return `✓ Memory: ${memoryHits}, Backend: ${backendHits}`;
      }

      return `✓ All from backend cache`;
    }

    // Some misses - need to fetch fresh data
    const parts = [];

    if (memoryHits > 0) parts.push(`Memory: ${memoryHits}`);
    if (backendHits > 0) parts.push(`Backend: ${backendHits}`);
    if (totalMisses > 0) parts.push(`✗ Fetch required: ${totalMisses}`);

    return parts.join(', ');
  }

  private buildTimingBreakdown(operation: FetchCacheOperation): string {
    const { memoryTime, backendTime, totalTime } = operation;
    const parts = [];

    if (memoryTime !== undefined) {
      parts.push(`Memory: ${memoryTime.toFixed(2)}ms`);
    }

    if (backendTime !== undefined) {
      parts.push(`Backend: ${backendTime.toFixed(2)}ms`);
    }

    parts.push(`Total: ${totalTime.toFixed(2)}ms`);

    return parts.join(', ');
  }

  private formatOptions(options: Record<string, unknown>): string {
    const parts = [];

    if (options.ttl) {
      parts.push(`TTL: ${options.ttl}s`);
    }

    if (Array.isArray(options.tags) && options.tags.length > 0) {
      parts.push(`Tags: [${options.tags.join(', ')}]`);
    }

    // Add other relevant options
    Object.entries(options).forEach(([key, value]) => {
      if (key !== 'ttl' && key !== 'tags' && value !== undefined) {
        parts.push(`${key}: ${String(value)}`);
      }
    });

    return parts.length > 0 ? `Options: ${parts.join(', ')}` : '';
  }
}

// Performance timing utility with feature detection
export const getPerformanceTimer = (): (() => number) => {
  if (typeof performance !== 'undefined' && typeof performance.now === 'function') {
    return () => performance.now();
  }
  return () => Date.now();
};

export const timer = getPerformanceTimer();
20 changes: 20 additions & 0 deletions core/lib/fetch-cache/types.ts
@@ -0,0 +1,20 @@
export interface FetchCacheOptions {
  /** Time to live in seconds */
  ttl?: number;
  /** Cache tags for invalidation (when supported by backend) */
  tags?: string[];
  /** Additional backend-specific options */
  [key: string]: unknown;
}

export interface FetchCacheAdapter {
  get<T>(cacheKey: string): Promise<T | null>;
  set<T>(cacheKey: string, data: T, options?: FetchCacheOptions): Promise<T | null>;
  mget<T>(...cacheKeys: string[]): Promise<Array<T | null>>;
}

export interface FetchCacheResult<T> {
  data: T;
  fromCache: boolean;
  cacheSource?: 'memory' | 'backend';
}
101 changes: 0 additions & 101 deletions core/lib/kv/adapters/bc.ts

This file was deleted.

46 changes: 0 additions & 46 deletions core/lib/kv/adapters/memory.ts

This file was deleted.

30 changes: 0 additions & 30 deletions core/lib/kv/adapters/upstash.ts

This file was deleted.

30 changes: 0 additions & 30 deletions core/lib/kv/adapters/vercel.ts

This file was deleted.

112 changes: 0 additions & 112 deletions core/lib/kv/index.ts

This file was deleted.

10 changes: 0 additions & 10 deletions core/lib/kv/keys.ts

This file was deleted.

6 changes: 0 additions & 6 deletions core/lib/kv/types.ts

This file was deleted.

118 changes: 36 additions & 82 deletions core/middlewares/with-routes.ts
@@ -6,9 +6,8 @@ import { graphql } from '~/client/graphql';
 import { revalidate } from '~/client/revalidate-target';
 import { getVisitIdCookie, getVisitorIdCookie } from '~/lib/analytics/bigcommerce';
 import { sendProductViewedEvent } from '~/lib/analytics/bigcommerce/data-events';
-import { kvKey, STORE_STATUS_KEY } from '~/lib/kv/keys';
-
-import { kv } from '../lib/kv';
+import { fetchWithTTLCache, batchFetchWithTTLCache } from '~/lib/fetch-cache';
+import { routeCacheKey, storeStatusCacheKey } from '~/lib/fetch-cache/keys';

 import { type MiddlewareFactory } from './compose-middlewares';

@@ -122,25 +121,12 @@ const getStoreStatus = async (channelId?: string) => {
 type Route = Awaited<ReturnType<typeof getRoute>>;
 type StorefrontStatusType = ReturnType<typeof graphql.scalar<'StorefrontStatusType'>>;

-interface RouteCache {
-  route: Route;
-  expiryTime: number;
-}
-
-interface StorefrontStatusCache {
-  status: StorefrontStatusType;
-  expiryTime: number;
-}
-
-const StorefrontStatusCacheSchema = z.object({
-  status: z.union([
-    z.literal('HIBERNATION'),
-    z.literal('LAUNCHED'),
-    z.literal('MAINTENANCE'),
-    z.literal('PRE_LAUNCH'),
-  ]),
-  expiryTime: z.number(),
-});
+const StorefrontStatusSchema = z.union([
+  z.literal('HIBERNATION'),
+  z.literal('LAUNCHED'),
+  z.literal('MAINTENANCE'),
+  z.literal('PRE_LAUNCH'),
+]);

 const RedirectSchema = z.object({
   to: z.union([
@@ -171,45 +157,11 @@ const RouteSchema = z.object({
   node: z.nullable(NodeSchema),
 });

-const RouteCacheSchema = z.object({
-  route: z.nullable(RouteSchema),
-  expiryTime: z.number(),
-});
-
-const updateRouteCache = async (
-  pathname: string,
-  channelId: string,
-  event: NextFetchEvent,
-): Promise<RouteCache> => {
-  const routeCache: RouteCache = {
-    route: await getRoute(pathname, channelId),
-    expiryTime: Date.now() + 1000 * 60 * 30, // 30 minutes
-  };
-
-  event.waitUntil(kv.set(kvKey(pathname, channelId), routeCache));
-
-  return routeCache;
-};
+// Cache TTL configuration from environment variables
+const ROUTE_CACHE_TTL = parseInt(process.env.ROUTE_CACHE_TTL || '86400', 10); // Default: 24 hours
+const STORE_STATUS_CACHE_TTL = parseInt(process.env.STORE_STATUS_CACHE_TTL || '3600', 10); // Default: 1 hour

-const updateStatusCache = async (
-  channelId: string,
-  event: NextFetchEvent,
-): Promise<StorefrontStatusCache> => {
-  const status = await getStoreStatus(channelId);
-
-  if (status === undefined) {
-    throw new Error('Failed to fetch new storefront status');
-  }
-
-  const statusCache: StorefrontStatusCache = {
-    status,
-    expiryTime: Date.now() + 1000 * 60 * 5, // 5 minutes
-  };
-
-  event.waitUntil(kv.set(kvKey(STORE_STATUS_KEY, channelId), statusCache));
-
-  return statusCache;
-};
+// Functions removed - caching is now handled automatically by fetchWithTTLCache

 const clearLocaleFromPath = (path: string, locale: string) => {
   if (path === `/${locale}` || path === `/${locale}/`) {
@@ -231,31 +183,33 @@ const getRouteInfo = async (request: NextRequest, event: NextFetchEvent) => {
     // For route resolution parity, we need to also include query params, otherwise certain redirects will not work.
     const pathname = clearLocaleFromPath(request.nextUrl.pathname + request.nextUrl.search, locale);

-    let [routeCache, statusCache] = await kv.mget<RouteCache | StorefrontStatusCache>(
-      kvKey(pathname, channelId),
-      kvKey(STORE_STATUS_KEY, channelId),
-    );
-
-    // If caches are old, update them in the background and return the old data (SWR-like behavior)
-    // If cache is missing, update it and return the new data, but write to KV in the background
-    if (statusCache && statusCache.expiryTime < Date.now()) {
-      event.waitUntil(updateStatusCache(channelId, event));
-    } else if (!statusCache) {
-      statusCache = await updateStatusCache(channelId, event);
-    }
-
-    if (routeCache && routeCache.expiryTime < Date.now()) {
-      event.waitUntil(updateRouteCache(pathname, channelId, event));
-    } else if (!routeCache) {
-      routeCache = await updateRouteCache(pathname, channelId, event);
-    }
+    // Use batch fetch with TTL caching - much cleaner than manual cache management
+    const [route, status] = await batchFetchWithTTLCache<Route | StorefrontStatusType>([
+      {
+        fetcher: () => getRoute(pathname, channelId),
+        cacheKey: routeCacheKey(pathname, channelId),
+        options: { ttl: ROUTE_CACHE_TTL },
+      },
+      {
+        fetcher: async () => {
+          const fetchedStatus = await getStoreStatus(channelId);
+          if (fetchedStatus === undefined) {
+            throw new Error('Failed to fetch storefront status');
+          }
+          return fetchedStatus;
+        },
+        cacheKey: storeStatusCacheKey(channelId),
+        options: { ttl: STORE_STATUS_CACHE_TTL },
+      },
+    ]);

-    const parsedRoute = RouteCacheSchema.safeParse(routeCache);
-    const parsedStatus = StorefrontStatusCacheSchema.safeParse(statusCache);
+    // Simple validation of the fetched/cached data
+    const parsedRoute = RouteSchema.nullable().safeParse(route);
+    const parsedStatus = StorefrontStatusSchema.safeParse(status);

     return {
-      route: parsedRoute.success ? parsedRoute.data.route : undefined,
-      status: parsedStatus.success ? parsedStatus.data.status : undefined,
+      route: parsedRoute.success ? parsedRoute.data : undefined,
+      status: parsedStatus.success ? parsedStatus.data : undefined,
     };
   } catch (error) {
     // eslint-disable-next-line no-console
4 changes: 2 additions & 2 deletions core/package.json
@@ -34,7 +34,7 @@
     "@t3-oss/env-core": "^0.13.6",
     "@upstash/redis": "^1.35.0",
     "@vercel/analytics": "^1.5.0",
-    "@vercel/kv": "^3.0.0",
+    "@vercel/functions": "^2.2.0",
     "@vercel/speed-insights": "^1.2.0",
     "clsx": "^2.1.1",
     "content-security-policy-builder": "^2.3.0",
@@ -50,7 +50,7 @@
     "lodash.debounce": "^4.0.8",
     "lru-cache": "^11.1.0",
     "lucide-react": "^0.474.0",
-    "next": "15.4.0-canary.0",
+    "next": "15.4.0-canary.85",
     "next-auth": "5.0.0-beta.25",
     "next-intl": "^4.1.0",
     "nuqs": "^2.4.3",
274 changes: 85 additions & 189 deletions pnpm-lock.yaml

Large diffs are not rendered by default.