Deploying Next.js in Kubernetes – Caching and Redis Memorystore Challenges
Unanswered
Sage Grouse posted this in #help-forum
Sage Grouse (OP)
Hello,
We are deploying a standalone Next.js application with SSR using the [official Dockerfile example](https://github.com/vercel/next.js/blob/canary/examples/with-docker/Dockerfile) on GKE. Initially, we implemented caching with the [Neshca cache handler backed by Redis](https://caching-tools.github.io/next-shared-cache/redis). However, we found that the pre-generated HTML files produced during `yarn build` were never used; instead, the cache was populated in Redis only when requests hit the frontend. This introduced a critical failure point: if the backend or the Redis pod went down, the frontend returned a 500 error. I have not been able to determine the root cause of this behavior.
Next, we tried Memorystore for Redis on GCP. However, the cache handler required two Redis modules, RedisJSON and RediSearch, which Memorystore does not support. As an alternative, we switched to a disk cache solution, as discussed [here](https://github.com/vercel/next.js/discussions/38858#discussioncomment-6552331). This initially worked, but we later discovered that the cached HTML files referenced outdated CSS and JS bundles from previous deployments, because the incremental cache persisted across builds.
Has anyone successfully deployed Next.js in Kubernetes while handling caching effectively? What approach did you take? Additionally, is there a way to use Memorystore for Redis in GCP?
Looking forward to your insights!
5 Replies
Paper wasp
Didn't use GCP, but you can switch to the Redis strings handler in the Neshca cache handler if you don't have support for RediSearch and RedisJSON (very late response, sorry)
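For reference, a minimal sketch of what that Redis-strings variant can look like, based on the `@neshca/cache-handler` docs. The import paths and options (e.g. `keyPrefix`) have changed between package versions, so treat this as an outline rather than a drop-in file. Because it stores plain strings, it needs no RedisJSON/RediSearch modules and should work against a vanilla Redis such as Memorystore:

```javascript
// cache-handler.js – Neshca cache handler using plain Redis strings.
// No RedisJSON/RediSearch modules required, so Memorystore-compatible.
// Import paths follow the @neshca/cache-handler docs and may differ by version.
const { CacheHandler } = require('@neshca/cache-handler');
const createRedisHandler = require('@neshca/cache-handler/redis-strings').default;
const createLruHandler = require('@neshca/cache-handler/local-lru').default;
const { createClient } = require('redis');

CacheHandler.onCreation(async () => {
  const client = createClient({ url: process.env.REDIS_URL });
  // Swallow connection errors so a Redis outage degrades to the LRU fallback
  // instead of crashing request handling.
  client.on('error', () => {});
  await client.connect();

  return {
    handlers: [
      createRedisHandler({ client, keyPrefix: 'next-cache:' }),
      createLruHandler(), // in-memory fallback so a Redis outage doesn't 500 the frontend
    ],
  };
});

module.exports = CacheHandler;
```

The in-memory LRU fallback is what addresses the OP's single point of failure: if Redis is unreachable, reads and writes fall through to the local handler.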
Bengal
We run our prod environment in GCP with Memorystore and this package: https://github.com/trieb-work/nextjs-turbo-redis-cache
It has full Next.js 15 support.
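Whichever handler package you choose, it is wired up the same way via the standard `cacheHandler` option in `next.config.js` (available in Next.js 14+). A sketch; the `./cache-handler.js` path is a placeholder for whatever handler module you adopt:

```javascript
// next.config.js – point Next.js at a custom incremental cache handler.
// './cache-handler.js' is a placeholder path for the handler module you use.
module.exports = {
  output: 'standalone',
  cacheHandler: require.resolve('./cache-handler.js'),
  // Disable the default 50 MB in-memory cache so every pod reads and writes
  // the shared store instead of keeping diverging local copies.
  cacheMaxMemorySize: 0,
};
```

Setting `cacheMaxMemorySize: 0` matters in Kubernetes: with multiple replicas, per-pod in-memory caches drift apart, which is one source of stale-asset responses.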
Paper wasp
awesome I'll take a look thank you !
California pilchard
I have a problem with the default Next.js caching (stored locally in `.next/cache`, which grows very large). Does this package let us cache images and fetch requests directly in Redis, which would prevent the storage overflow?
Bengal
Yes, the whole fetch cache lives in Redis, and it works more efficiently: unlike the local cache, it cleans up after itself.
Images are not cached in there. I would recommend trying to put those in S3 instead, as that's more efficient; the image conversion also consumes a lot of resources in a single-threaded Node.js process.