[dynamicIO] cacheHandlers are not used in production #72552
> I am having enormous problems adding Redis to caching

@snraets Why though? Since everything is so stable and well-documented. Just kidding. You can try your luck with https://github.com/caching-tools/next-shared-cache, but it doesn't support DIO yet. If you want to do it yourself, you need to cover both the ISR cache (`cacheHandler`) and the data cache for DIO (`experimental.cacheHandlers`).

Warning: The following can change at any time. The team is still cooking.

Caution: Please do not use the following code as-is. A slightly more complicated version (namespaces, skew protection, etc.) is working fine for us in production, but I can't vouch for it for external use.

Config

Your next.js config should look like this:

```js
import path from 'node:path';

export default {
cacheMaxMemorySize: 0,
cacheHandler: path.join(import.meta.dirname, 'lib/myISRCacheHandler.mjs'),
experimental: {
dynamicIO: true,
cacheHandlers: {
default: path.join(import.meta.dirname, 'lib/myDataCacheHandler.mjs'),
},
},
};
```

ISR

Note: With DynamicIO disabled this doubles as a data cache as well (for fetches).

A basic implementation would look like this:

```js
//@ts-check
import process from 'node:process';
import Redis from 'ioredis';
import { NEXT_CACHE_TAGS_HEADER } from 'next/dist/lib/constants.js';
import { CachedRouteKind } from 'next/dist/server/response-cache/index.js';
/**
* @typedef {import("ioredis").Cluster} Cluster
* @typedef {import("next/dist/server/lib/incremental-cache").CacheHandler} ICacheHandler
* @typedef {import("next/dist/server/lib/incremental-cache").CacheHandlerContext} CacheHandlerContext
* @typedef {import("next/dist/server/lib/incremental-cache").CacheHandlerValue} CacheHandlerValue
* @typedef {import("next/dist/server/lib/incremental-cache").IncrementalCache} IncrementalCacheType
* @typedef {Parameters<IncrementalCacheType['get']>} CacheGetParameters
* @typedef {Parameters<IncrementalCacheType['set']>} CacheSetParameters
* @typedef {Parameters<IncrementalCacheType['revalidateTag']>[0]} CacheRevalidateTagParam
* @typedef {ReturnType<ICacheHandler['get']>} CacheGetReturnType
*/
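// Connection settings are read from environment variables (a convention of this
// example, not a Next.js contract): REDIS_ENDPOINT (defaults to redis://localhost:6379)
// and REDIS_CLUSTER=1 to switch the client to cluster mode.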
// Redis endpoint
const endpoint = new URL(process.env.REDIS_ENDPOINT ?? 'redis://localhost:6379');
// Holds the tags that are shared between all applications that are using the same Redis instance
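// The manifest is a single Redis hash: each field is a tag name and each value is the
// timestamp (in ms) of that tag's most recent revalidation.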
const TAGS_MANIFEST_KEY = '#';
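// Millisecond wall-clock timestamp (with sub-millisecond precision), used both for
// lastModified and for the revalidation times written to the tags manifest.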
function getTimestamp() {
return performance.timeOrigin + performance.now();
}
/**
* @implements {ICacheHandler}
*/
export default class CacheHandler {
/** @type {Redis | Cluster} */
static #client;
static #inited = false;
/**
* @param {CacheHandlerContext} _ctx
*/
constructor(_ctx) {
if (CacheHandler.#inited) {
return;
}
CacheHandler.#inited = true;
// Create the Redis client
if (!CacheHandler.#client) {
CacheHandler.#client =
process.env.REDIS_CLUSTER === '1'
? new Redis.Cluster(
[
{
host: endpoint.hostname,
port: parseInt(endpoint.port, 10),
},
],
{
dnsLookup: (address, callback) => callback(null, address),
redisOptions: {
...(endpoint.protocol === 'rediss:' ? { tls: {} } : {}),
keepAlive: 7200,
},
},
)
: new Redis({
host: endpoint.hostname,
port: parseInt(endpoint.port, 10),
...(endpoint.protocol === 'rediss:' ? { tls: {} } : {}),
keepAlive: 7200,
});
// Terminate the process if we can't connect to Redis
CacheHandler.#client.on('error', error => {
console.error('Error connecting to Redis', error);
process.exit(1);
});
CacheHandler.#client.on('end', () => {
console.error('Redis connection terminated');
});
CacheHandler.#client.on('reconnecting', () => {
console.error('Redis reconnecting');
});
}
}
resetRequestCache() {}
/**
* Returns true if value is stale
* @param {string[]} tags
* @param {Array<null | string>} values
* @param {number} lastModified
* @returns boolean
*/
static #isStale(tags, values, lastModified) {
// tags && values will always have the same length and match positions (since the Redis protocol guarantees this)
return values.some((it, idx) => {
if (it === null) {
return false;
}
// Redis client will return hash value as string even if it was stored as number internally, so we need to convert it back to number
const revalidatedAt = parseInt(it, 10);
const isStale = revalidatedAt >= lastModified;
return isStale;
});
}
/**
*
* @param {CacheHandlerValue} cachedData
* @param {string[]=} tags
* @param {string[]=} softTags
* @returns {Promise<boolean>}
*/
static async #revalidateCachedData(cachedData, tags, softTags) {
/** @type string[] */
let cacheTags;
if (cachedData.value?.kind === CachedRouteKind.FETCH) {
// For FETCH entries, merge the request-context tags (tags & softTags are only
// provided for FETCH gets) with the tags stored at set() time
cacheTags = [...new Set([...(tags ?? []), ...(softTags ?? []), ...(cachedData.value.tags ?? [])])];
} else if (
cachedData.value?.kind === CachedRouteKind.APP_PAGE ||
cachedData.value?.kind === CachedRouteKind.APP_ROUTE
) {
let tagsHeader = cachedData.value.headers?.[NEXT_CACHE_TAGS_HEADER];
if (typeof tagsHeader !== 'string' || tagsHeader.length === 0) {
// This shouldn't happen, but if it does, we don't have any tags to revalidate. Consider it stale
console.warn(`No tags found in ${cachedData.value.kind} cache entry`);
return true;
}
cacheTags = tagsHeader.split(',');
} else {
console.warn(`Unsupported cache get - kind ${cachedData.value?.kind}`);
return true;
}
if (cacheTags.length) {
/** @type {Array<null | string>} */
const revalidatedTags = await CacheHandler.#client.hmget(TAGS_MANIFEST_KEY, ...cacheTags);
/*
A blocking validation is triggered if an ISR page had a tag revalidated.
If we want a background revalidation instead, we can return cachedData.lastModified = -1
*/
return CacheHandler.#isStale(cacheTags, revalidatedTags, cachedData?.lastModified || getTimestamp());
}
return false;
}
/**
* Stores a new entry in the cache
* @param {CacheSetParameters} args
* @returns {Promise<void>}
*/
async set(...args) {
let [cacheKey, data, ctx] = args;
if (data === null) {
return;
}
if (data.kind === CachedRouteKind.REDIRECT || data.kind === CachedRouteKind.IMAGE) {
return;
}
// Get a timestamp as soon as possible
const lastModified = getTimestamp();
// We only care about TTL. Data already contains tags, so they are stored anyway.
let revalidate = ctx.revalidate;
if (data.kind === CachedRouteKind.FETCH) {
// Sets for FETCH do not carry their tags, therefore we need to add them here in order to be able to check them later.
// The same is not the case for APP_PAGE and APP_ROUTE since tags are resolved statically (from the router path).
data.tags = ctx.tags;
} else if (data.kind === CachedRouteKind.APP_ROUTE || data.kind === CachedRouteKind.APP_PAGE) {
if (data.kind === CachedRouteKind.APP_ROUTE) {
// We need to store the body as base64 string, because Redis does not support binary data
data = structuredClone(data);
// @ts-expect-error - we revert it to Buffer before returning from the CacheHandler, so it doesn't break the rest of the app
data.body = data.body.toString('base64');
}
if (revalidate === undefined) {
if (data.status === undefined || data.status >= 300) {
revalidate = 3600; // cache error responses for an hour
}
}
}
/** @type {CacheHandlerValue} */
const cachedValue = { lastModified, value: data };
// Store the entry in Redis
if (typeof revalidate === 'number') {
await CacheHandler.#client
.multi()
.call('JSON.SET', cacheKey, '$', JSON.stringify(cachedValue))
.expire(cacheKey, revalidate)
.exec();
} else {
await CacheHandler.#client.call('JSON.SET', cacheKey, '$', JSON.stringify(cachedValue));
}
}
/**
* Gets cached value for the given cacheKey
* @param {CacheGetParameters} args
* @returns {CacheGetReturnType}
*/
async get(...args) {
let [cacheKey, ctx] = args;
// NOTE: tags & softTags are only sent for FETCH kind, otherwise they are null
const { tags, softTags, kind } = ctx;
/** @type {unknown} */
const json = await CacheHandler.#client.call('JSON.GET', cacheKey);
if (typeof json !== 'string') {
return null;
}
/** @type {CacheHandlerValue} */
const cachedData = JSON.parse(json);
const revalidated = await CacheHandler.#revalidateCachedData(cachedData, tags, softTags);
if (revalidated) {
return null;
}
// Convert the body back to a buffer
if (cachedData?.value?.kind === CachedRouteKind.APP_ROUTE) {
//@ts-expect-error - This is normal because we don't define separate types for cached and non-cached version of this object.
// The body is returned as base64 string from Redis so we need to convert it back to a Buffer
cachedData.value.body = Buffer.from(cachedData.value.body, 'base64');
}
return cachedData;
}
/**
* revalidatePath is a convenience layer on top of cache tags.
* Calling revalidatePath will call the revalidateTag function with a special default tag for the provided page.
* @param {CacheRevalidateTagParam} tagOrTags
* @returns {Promise<void>}
*/
async revalidateTag(tagOrTags) {
if (Array.isArray(tagOrTags)) {
if (tagOrTags.length === 1) {
tagOrTags = tagOrTags[0];
} else {
await this.revalidateTags(tagOrTags);
return;
}
}
await CacheHandler.#client.hset(TAGS_MANIFEST_KEY, tagOrTags, getTimestamp());
}
/**
* Handles revalidation of multiple tags at once
* @param {string[]} tags
* @returns {Promise<void>}
*/
async revalidateTags(tags) {
if (!tags.length) {
return;
}
const timestamp = getTimestamp();
await CacheHandler.#client.hset(TAGS_MANIFEST_KEY, ...tags.flatMap(tag => [tag, timestamp]));
}
}
```

Data Cache

Caution: This doesn't work in production builds (yet).

```js
//@ts-check
import process from 'node:process';
import Redis from 'ioredis';
/**
* @typedef {import("ioredis").Cluster} Cluster
* @typedef {import("next/dist/server/lib/cache-handlers/types").CacheHandler} CacheHandler
* @typedef {import("next/dist/server/lib/cache-handlers/types").CacheEntry} CacheEntry
* @typedef {Omit<CacheEntry, 'value'> & {value: string}} RedisCacheEntry
* @typedef {Parameters<CacheHandler['get']>} CacheGetParameters
* @typedef {ReturnType<CacheHandler['get']>} CacheGetReturnType
* @typedef {Parameters<CacheHandler['set']>} CacheSetParameters
* @typedef {ReturnType<CacheHandler['set']>} CacheSetReturnType
* @typedef {Parameters<CacheHandler['expireTags']>} CacheExpireTagsParameters
* @typedef {ReturnType<CacheHandler['expireTags']>} CacheExpireTagsReturnType
* @typedef {Parameters<CacheHandler['receiveExpiredTags']>} ReceiveExpiredTagsParameters
* @typedef {ReturnType<CacheHandler['receiveExpiredTags']>} ReceiveExpiredTagsReturnType
*/
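// Note: Promise.withResolvers() requires Node.js 22 or newer (or a polyfill).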
/** @type PromiseWithResolvers<void> */
let isReady = Promise.withResolvers();
// Redis endpoint
const endpoint = new URL(process.env.REDIS_ENDPOINT ?? 'redis://localhost:6379');
/** @type {Redis | Cluster} */
let client;
// Holds the tags that are shared between all applications that are using the same Redis instance
const TAGS_MANIFEST_KEY = '#';
/** @type {Map<string, Promise<void>>} */
const pendingSets = new Map();
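// Tracks in-flight set() promises per cache key so that a get() for the same key
// waits until the pending write has reached Redis before reading.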
/**
* Converts a ReadableStream to a Buffer
* @param {ReadableStream} stream
* @returns Promise<Buffer>
*/
async function streamToBuffer(stream) {
const chunks = [];
for await (const chunk of stream) {
chunks.push(chunk);
}
return Buffer.concat(chunks);
}
/**
* Converts a Buffer to a ReadableStream
* @param {Buffer} buffer
* @returns ReadableStream
*/
function bufferToReadableStream(buffer) {
return new ReadableStream({
start(controller) {
controller.enqueue(buffer);
controller.close();
},
});
}
function getTimestamp() {
return performance.timeOrigin + performance.now();
}
async function init() {
// Create the Redis client
client =
process.env.REDIS_CLUSTER === '1'
? new Redis.Cluster(
[
{
host: endpoint.hostname,
port: parseInt(endpoint.port, 10),
},
],
{
dnsLookup: (address, callback) => callback(null, address),
redisOptions: {
...(endpoint.protocol === 'rediss:' ? { tls: {} } : {}),
keepAlive: 7200,
},
}
)
: new Redis({
host: endpoint.hostname,
port: parseInt(endpoint.port, 10),
...(endpoint.protocol === 'rediss:' ? { tls: {} } : {}),
keepAlive: 7200,
});
// Terminate the process if we can't connect to Redis
client.on('error', (error) => {
console.error('Error connecting to Redis:', error);
process.exit(1);
});
client.on('end', () => {
console.error('Redis connection terminated');
});
client.on('reconnecting', () => {
console.error('Redis reconnecting');
});
}
// Initialize the cache handler
init().then(isReady.resolve, isReady.reject);
/** @type {CacheHandler} */
const RedisCacheHandler = {
/**
* Stores a new entry in the cache
* @param {string} cacheKey
* @param {Promise<CacheEntry>} pendingEntry
* @returns {Promise<void>}
*/
async set(cacheKey, pendingEntry) {
/** @type {PromiseWithResolvers<void>} */
let pending = Promise.withResolvers();
pendingSets.set(cacheKey, pending.promise);
const { value, ...rest } = await pendingEntry;
const cachedValue = {
value: (await streamToBuffer(value)).toString('base64'),
...rest,
};
await isReady.promise;
try {
await client
.multi()
.call('JSON.SET', cacheKey, '$', JSON.stringify(cachedValue))
.expire(cacheKey, cachedValue.expire)
.exec();
} catch (err) {
console.error(err.message);
} finally {
pending.resolve();
pendingSets.delete(cacheKey);
}
},
/**
* Gets cached value for the given cacheKey
* @param {string} cacheKey
* @param {string[]} softTags
* @returns {Promise<undefined | CacheEntry>}
*/
async get(cacheKey, softTags) {
await pendingSets.get(cacheKey);
await isReady.promise;
/** @type {unknown} */
let entry_json = await client.call('JSON.GET', cacheKey);
if (typeof entry_json !== 'string') {
return undefined;
}
/** @type {RedisCacheEntry} */
const entry = JSON.parse(entry_json);
if (getTimestamp() > entry.timestamp + entry.revalidate * 1000) {
// In memory caches should expire after revalidate time because it is unlikely that
// a new entry will be able to be used before it is dropped from the cache.
return undefined;
}
const stale = await isStale(entry.timestamp, entry.tags, softTags);
if (stale) {
return undefined;
}
let { value, ...rest } = entry;
let buffer = Buffer.from(value, 'base64');
return {
value: bufferToReadableStream(buffer), // TODO,
...rest,
};
},
/**
* This is called when expireTags('') is called
* and should update tags manifest accordingly
* @param {string[]} tags
* @returns {Promise<void>}
*/
async expireTags(...tags) {
if (!tags.length) {
return;
}
const timestamp = getTimestamp();
await client.hset(TAGS_MANIFEST_KEY, ...tags.flatMap((tag) => [tag, timestamp]));
},
/**
* This is called when an action request sends
* NEXT_CACHE_REVALIDATED_TAGS_HEADER and tells
* us these tags are expired and the manifest
* should be updated. This differs since in a multi-instance
* environment you don't propagate these, as they are
* request specific.
* @param {string[]} tags
*/
async receiveExpiredTags(...tags) {
if (!tags.length) {
return;
}
await this.expireTags(...tags);
},
};
/**
*
* @param {number} timestamp
* @param {string[]} tags
* @param {string[]} softTags
*/
async function isStale(timestamp, tags, softTags) {
const cacheTags = [...tags, ...softTags];
if (cacheTags.length) {
/** @type {Array<null | string>} */
const revalidatedTags = await client.hmget(TAGS_MANIFEST_KEY, ...cacheTags);
/*
A blocking validation is triggered if an ISR page had a tag revalidated.
If we want a background revalidation instead, we can return cachedData.lastModified = -1
*/
return revalidatedTags.some((it, idx) => {
if (it === null) {
// We've never revalidated this tag before
return false;
}
// Redis client will return hash value as string even if it was stored as number internally, so we need to convert it back to number
const revalidatedAt = parseInt(it, 10);
const isStale = revalidatedAt >= timestamp;
return isStale;
});
}
return false;
}
export default RedisCacheHandler;
```

Redis

On a final note, my advice is to NOT use …
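For completeness, here is a minimal sketch of the kind of `"use cache"` function the DIO data-cache handler above would serve. It assumes the current canary exports (`unstable_cacheTag` / `unstable_cacheLife` from `next/cache`); the function, URL, and tag names are made up for illustration:

```js
// lib/get-product.js — illustrative sketch only
import {
  unstable_cacheTag as cacheTag,
  unstable_cacheLife as cacheLife,
} from 'next/cache';

export async function getProduct(id) {
  'use cache';
  // Tag the entry so expireTags()/receiveExpiredTags() can mark it stale via the tags manifest
  cacheTag(`product-${id}`);
  // Pick a built-in cache profile; the handler sees the resulting timestamp/revalidate/expire fields
  cacheLife('hours');
  const res = await fetch(`https://api.example.com/products/${id}`);
  return res.json();
}
```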
Link to the code that reproduces this issue
https://github.com/apostolos/next-custom-cache-handlers
To Reproduce
1. pnpm run dev
2. Open https://localhost:3000 (note the [DEBUG] lines in stdout)
3. pnpm run build. You'll notice that customHandlers code is correctly called during build.
4. pnpm run start
Current vs. Expected behavior
Being able to use custom cache handlers in production is critical for self-hosting.
Following Next.js 15's recommendations, we migrated our app to "use cache" and implemented a custom shared cache handler (Redis-backed) in order to deploy the app in multiple instances. However, the custom cache handlers seem to be completely ignored in production builds.
During development our custom cache handler works fine:
During builds, it also seems to be working as expected:
However, when we launch the app our custom cache handler is never used.
Although it is initialized (the module is getting loaded), the `get()` / `set()` functions are never called. It looks like the configuration is ignored and it falls back to using the default memory cache.

This is a major deal breaker since we cannot self-host any serious app without control of the caching. Our workaround is to revert back to using `next: {revalidate, tags}` / `unstable_cache()` instead of `"use cache"` / `cacheLife()` / `cacheTags()`.
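For illustration, the workaround looks roughly like this (a sketch with made-up URLs and function names, not code from our app); these classic data-cache APIs do still hit the configured ISR `cacheHandler` in production:

```js
// lib/get-product.js — illustrative sketch only
import { unstable_cache } from 'next/cache';

// Option A: tag/revalidate the fetch itself via the `next` options
export async function getProductViaFetch(id) {
  const res = await fetch(`https://api.example.com/products/${id}`, {
    next: { revalidate: 3600, tags: [`product-${id}`] },
  });
  return res.json();
}

// Option B: wrap arbitrary data access in unstable_cache()
export const getProduct = unstable_cache(
  async (id) => {
    const res = await fetch(`https://api.example.com/products/${id}`);
    return res.json();
  },
  ['get-product'],
  { revalidate: 3600, tags: ['products'] },
);
```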
Provide environment information
Node.js v23.1.0

Operating System:
  Platform: darwin
  Arch: arm64
  Version: Darwin Kernel Version 24.1.0: Thu Oct 10 21:03:11 PDT 2024; root:xnu-11215.41.3~2/RELEASE_ARM64_T6020
  Available memory (MB): 32768
  Available CPU cores: 12
Binaries:
  Node: 23.1.0
  npm: 10.9.0
  Yarn: N/A
  pnpm: 9.12.3
Relevant Packages:
  next: 15.0.4-canary.4 // Latest available version is detected (15.0.4-canary.4).
  eslint-config-next: N/A
  react: 19.0.0-rc-66855b96-20241106
  react-dom: 19.0.0-rc-66855b96-20241106
  typescript: 5.6.3
Next.js Config:
  output: N/A
Which area(s) are affected? (Select all that apply)
dynamicIO
Which stage(s) are affected? (Select all that apply)
next start (local)
Additional context
Self-hosted