Getting around 2MB Cache limit while self-hosted?
Answered
American Shorthair posted this in #help-forum
American ShorthairOP
Hi,
I have a server action that basically keeps a cache of WebGL drawing data. It exceeds 2MB, but I am self-hosting. So is there any way I can get around the 2MB limit without needing to write a whole custom cache handler?
⨯ unhandledRejection: Error: Failed to set Next.js data cache, items over 2MB can not be cached (31411169 bytes)
at async GetMapData (src/app/actions/GetMapData.tsx:29:14)
27 | );
28 |
> 29 | return (await mapData());
| ^
Answered by Rose-breasted Grosbeak
25 Replies
Rose-breasted Grosbeak
Answer
From google
American ShorthairOP
Huh, so the best solution is something incredibly hacky.
And while I can appreciate the "well, you shouldn't be caching more than 2MB of data!" argument, I'm not so sure about that. In my specific use case, why shouldn't I? It's similar to caching an image: a .webp of what this data renders to, cached rather than generated live, would also exceed 2MB, so what is the issue?
Rose-breasted Grosbeak
Idk I just searched
American ShorthairOP
I wasn't asking you directly.
I appreciate it though, the hacky solution does do the trick.
Rose-breasted Grosbeak
Yeah that's stupid
I agree with the last comment in the thread
Asian black bear
Caching an image client-side and caching certain data server-side are entirely different things, so it's not comparable. The solution suggested is hacky because it references internals that have no guarantee of being stable across version updates. You can make it non-hacky and idiomatic by attaching a custom handler yourself rather than relying on the basic implementation supplied by Next. In fact, it might even make more sense to hook up a custom Redis instance for that in the configuration.
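For reference, a custom cache handler doesn't have to be much code. The Next.js `cacheHandler` configuration option (documented for self-hosted deployments) expects a class exposing `get`, `set`, and `revalidateTag`; the sketch below is a minimal in-process version with no per-item size check, based on that documented shape rather than on Next internals. Swapping the Map for a Redis client is the usual production step.

```javascript
// cache-handler.js — a minimal sketch of a custom Next.js cache handler.
// It keeps entries in an in-process Map with no size limit; the 2MB check
// lives in Next's default handler, so supplying your own sidesteps it.
class MapCacheHandler {
  constructor(options) {
    this.cache = new Map();
  }

  async get(key) {
    return this.cache.get(key) ?? null;
  }

  async set(key, data, ctx) {
    this.cache.set(key, {
      value: data,
      lastModified: Date.now(),
      tags: ctx?.tags ?? [],
    });
  }

  async revalidateTag(tags) {
    // Next may pass a single tag or an array of tags.
    tags = Array.isArray(tags) ? tags : [tags];
    for (const [key, entry] of this.cache) {
      if (entry.tags.some((tag) => tags.includes(tag))) {
        this.cache.delete(key);
      }
    }
  }
}

module.exports = MapCacheHandler;
```

Wired up in next.config.js with `cacheHandler: require.resolve('./cache-handler.js')` (and `cacheMaxMemorySize: 0` to disable the default in-memory caching), per the Next.js self-hosting docs.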
American ShorthairOP
so I'd have to write a custom handler or stand up a custom Redis instance... to provide 3MB of cached vertex data? And that makes more sense than offering a configuration option to increase the limit?
I'm looking into Redis and it looks incredibly overkill for something so simple, and it would add another point of failure.
Asian black bear
I don't know the exact reason the cache size is limited; I can only speculate that it's purely to prevent unexpected behavior and costs for naive users. Take the timeouts on serverless functions, for example: somebody inexperienced might think it's better to remove the time limit than to fix the underlying code, and then end up with a huge bill. Being explicit and purposeful about the way you handle caching is probably more expressive anyway. That's just speculation, though.
Another thing to keep in mind: the default cache does this on the filesystem, if I'm not mistaken, so using a proper store such as Redis will make it even faster for large data sets, which might become relevant if you have a ton of load. It also helps sync caches across instances if you have a scalable architecture.
American ShorthairOP
unstable_cache uses the file system and doesn't use memory? So if cached data is loaded by 500 people in a second, it opens and closes a file 500 times? That can't be right, can it?
It's not constantly changing data, it's not complex, and it's not taking up a lot of memory. Redis is absolutely overkill for this.
Asian black bear
Skimming over the cache implementation shows it's using a conventional LRUCache in front of the filesystem. So the default seems to be a mix of both.
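That mixed design (a bounded LRU in memory, with a slower store behind it) is a common pattern. A rough sketch of the idea, not Next's actual implementation, with a Map standing in for the filesystem:

```javascript
// Sketch of an LRU cache sitting in front of a slower backing store.
// Hot entries are served from memory; reads only fall through on a miss.
class LruFrontedCache {
  constructor(maxEntries, backingStore) {
    this.maxEntries = maxEntries;
    this.lru = new Map(); // Map preserves insertion order, so it can act as an LRU
    this.store = backingStore; // e.g. the filesystem in the real handler
  }

  get(key) {
    if (this.lru.has(key)) {
      // Hit: refresh recency by re-inserting the entry at the end.
      const value = this.lru.get(key);
      this.lru.delete(key);
      this.lru.set(key, value);
      return value;
    }
    // Miss: fall through to the slow store and promote into memory.
    const value = this.store.get(key);
    if (value !== undefined) this.setInMemory(key, value);
    return value;
  }

  set(key, value) {
    this.store.set(key, value); // write-through to the backing store
    this.setInMemory(key, value);
  }

  setInMemory(key, value) {
    this.lru.delete(key);
    this.lru.set(key, value);
    if (this.lru.size > this.maxEntries) {
      // Evict the least recently used entry (the first key in the Map).
      const oldest = this.lru.keys().next().value;
      this.lru.delete(oldest);
    }
  }
}

module.exports = LruFrontedCache;
```

Under this pattern, 500 reads of the same key in a second would mostly hit memory, not the filesystem.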
American ShorthairOP
I think in this case it would just be nice to have an option to increase it. If there's going to be an arbitrary "this is clearly excessive" limit, then 2MB is really low, considering images oftentimes exceed that.
Asian black bear
I don't understand why you'd want to cache images server-side.
American ShorthairOP
vertex data lol
for WebGL
American ShorthairOP
bro would really rather try and pick holes in my implementation than just agree this is an area of possible improvement for Next.js. I don't get the need to be so defensive. It's not an attack, it's just a constructive critique for improvement.
also, if you want to understand why you might cache images server-side: if there were an API endpoint that served images and it had usage limits, you might want to cache the images server-side. So there are use cases even if you can't imagine them.
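The scenario described (a rate-limited upstream image API) can be sketched as a simple server-side memoization layer. All names here are hypothetical, and `fetchUpstream` is a stand-in for the real metered API call:

```javascript
// Sketch of server-side caching for a rate-limited upstream API: each
// URL is fetched at most once, and repeat requests are served from memory.
function createImageCache(fetchUpstream) {
  const cache = new Map(); // url -> Promise resolving to image bytes

  return function getImage(url) {
    if (!cache.has(url)) {
      // Store the promise itself so concurrent requests for the same
      // URL share one upstream call instead of racing.
      cache.set(url, fetchUpstream(url));
    }
    return cache.get(url);
  };
}

module.exports = createImageCache;
```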
Asian black bear
You wouldn't really do it that way; you'd use a CDN rather than caching on your application server. And similarly, if you scale your app to more than one instance, you'd have to use a shared cache anyway; otherwise all your instances would hit the limits of external APIs even faster and probably be out of sync.
On a different note, keep in mind we're neither Vercel employees nor part of the Next.js team. You can leave your feedback in the GitHub discussion thread, but there is no point in trying to convince me or another user here of how Next should handle caching. It's a restriction (regardless of whether it's good or bad), and you have the two options outlined: the hacky solution of referencing the default cache handler, or adding one yourself using Redis, R2, S3, or whatever option might be applicable.
American ShorthairOP
Not everything is going to scale, and not everything needs a CDN. Those are total assumptions, trying to apply one-size-fits-all logic. There will be times when just caching it on the server is totally fine. Why would you assume every app is going to have multiple instances and scale?
I get that you aren't a Vercel employee, and I'm not trying to convince you of anything. I'm letting you know why Redis doesn't make any sense in this case; if you wanted, you could have left the discussion already. If anything, you are the one trying to do the convincing. The only problem is, this is already (as clearly seen from the GitHub) agreeably bad.
American ShorthairOP
I didn't bring it up to convince anyone; I brought it up curious whether someone could shine a light on why it is the way it is. Maybe there is something I don't understand. But as far as I can see, this is just something that could be improved.