Next.js Discord

Discord Forum

High memory usage NextJS 16

Unanswered
California pilchard posted this in #help-forum
California pilchardOP
Hi, I’m seeing a weird memory behavior with Next.js in Docker.
Context:
- Next.js app running in Docker (docker stats)
- I just reload the same page in the browser for ~5 minutes
- No traffic besides that

Observed:
- RAM usage keeps increasing on every refresh
- It never goes down, even when idle
- Example:
MEM USAGE / LIMIT: 1.155GiB / 7.653GiB (15%)
PIDS: 11

97 Replies

Saint Hubert Jura Hound
Does it only happen on one specific page?
California pilchardOP
Well, after searching again, I found there was an issue in my Next.js code (like recreating the Supabase client on each server request). But now, when launching my Next.js app in production with a Dockerfile on my computer, the RAM is stable at ~200 MB whatever page I load. I do the exact same thing in my k3s cluster (with, of course, the same Dockerfile for the image) and after 5 minutes the RAM explodes:
kubectl top pod web-app-84c4987c64-dpndv
NAME                       CPU(cores)   MEMORY(bytes)   
web-app-84c4987c64-dpndv   359m         1989Mi          

But I have no idea what the reason is... Why only in the k3s cluster?
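(For reference, the earlier fix was basically to reuse one client instead of building a new one per request; a minimal sketch, assuming @supabase/supabase-js and env var names SUPABASE_URL / SUPABASE_ANON_KEY, not my actual file:)

// lib/supabase/anon.ts (hypothetical sketch)
import { createClient, type SupabaseClient } from "@supabase/supabase-js";

let client: SupabaseClient | undefined;

// reuse a single anon client across requests instead of recreating it each time
export function createAnonClient(): SupabaseClient {
  if (!client) {
    client = createClient(
      process.env.SUPABASE_URL!, // assumed env var names
      process.env.SUPABASE_ANON_KEY!
    );
  }
  return client;
}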
Saint Hubert Jura Hound
what runtime do u use?
California pilchardOP
What do u mean by runtime? I guess the default one (I didn't change anything)
Saint Hubert Jura Hound
like node, deno, bun
California pilchardOP
well node I guess 😅
Saint Hubert Jura Hound
try passing the --inspect arg when starting ur server and check the heap in devtools
@Saint Hubert Jura Hound https://nextjs.org/docs/app/guides/debugging#server-side-code
California pilchardOP
The thing is, in dev mode (on my laptop) RAM usage is okay. Same for the production build with Docker. But when running the image in my k3s cluster, memory goes crazy for no reason
Saint Hubert Jura Hound
Yeah but u can still use the inspector mode to see which part of ur process is actually allocating all that memory
To check where the problem occurs
It's likely some bug in node or next though, maybe switching versions around could work
California pilchardOP
Like adding the --inspect flag in the Dockerfile?:
...
ENV PORT=3000

# server.js is created by next build from the standalone output
# https://nextjs.org/docs/pages/api-reference/config/next-config-js/output
ENV HOSTNAME="0.0.0.0"
CMD ["node", "server.js"]
Saint Hubert Jura Hound
Which node version are u on btw?
California pilchardOP
The Dockerfile uses FROM node:20-alpine AS base but I've already tried with 22
I'm gonna try adding the --inspect flag, but the fact that the exact same Docker image runs locally without using that much RAM doesn't make any sense to me 😂
Saint Hubert Jura Hound
Yeah idrk this is a weird issue
Do the nodes themselves have enough free ram?
California pilchardOP
Well nothing seems weird no ?
California pilchardOP
Yeah, I've set this up in k3s:
resources:
  requests:
    cpu: "1"
    memory: "2Gi"
  limits:
    cpu: "2"
    memory: "4Gi"

Seems enough
Saint Hubert Jura Hound
As long as the snapshot was taken while the memory spike was occurring, then yea the problem isn't ur node app
Which is even weirder..
I guess try creating a shell into the pod and running top? See what process is taking up all the memory
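(roughly, assuming the alpine image so sh and busybox top are available:)

# open a shell in the pod and watch per-process memory
kubectl exec -it <pod-name> -- sh
top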
California pilchardOP
Well no, my bad, the snapshot was taken just after the pod was recreated. So RAM is low at the start, but it increases very fast. Now I'm trying to take a snapshot when it's around 600 MB, but the pod crashes before the snapshot finishes 😂
Saint Hubert Jura Hound
unrelated crash u mean?
also yea take the snapshot when the memory actually spikes lol that way u can see which of these object types is over allocating
California pilchardOP
Not unrelated. The pod crashes because of memory pressure. When RSS grows, taking a heap snapshot temporarily allocates more memory, so the pod hits the limit and gets OOM-killed before the snapshot finishes
Saint Hubert Jura Hound
damn
i guess just manually inspect ur code. do u know at which commit the issue started?
@Saint Hubert Jura Hound oh if its a steady increase and not a sudden spike it is most likely an issue in ur own code
California pilchardOP
if it was an issue with my code, running the Docker image locally should also show the issue, no?
California pilchardOP
well the issue appeared when I moved to k3s hosting instead of Vercel-like hosting (Coolify)
Saint Hubert Jura Hound
ah yea shit ur right
California pilchardOP
That's why I think it's something related to k3s, but I'm just beginning with it so it's just a supposition
Well, in k3s I use kubectl top pod <pod_id> to get the RAM usage, and locally with Docker I just read docker stats
I've asked ChatGPT about it and it told me to try a slim image instead of alpine, but nothing it suggested has worked so far 😂
Saint Hubert Jura Hound
could u try using kind locally? its a tool for testing k8s, takes like 2 mins to install. just to see if its specific to k3s
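(rough sketch of the kind flow, assuming a locally built image tagged web-app:test and an existing deployment manifest:)

# create a throwaway local cluster
kind create cluster --name memtest
# load the locally built image into the kind node (kind can't pull from the local Docker daemon otherwise)
kind load docker-image web-app:test --name memtest
# deploy and watch memory (kubectl top needs metrics-server installed in the cluster)
kubectl apply -f deployment.yaml
kubectl top pod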
Saint Hubert Jura Hound
it should be the same since both kubectl top and docker stats fetch the usage data from the container runtime im pretty sure
California pilchardOP
Yes I can try with Kind or K3d idk which one is more representative
Saint Hubert Jura Hound
Try just kind, since i wanna see if the issue is k3s specific
@Saint Hubert Jura Hound Maybe try this first tho its faster
California pilchardOP
kubectl top pod web-app-6fbf598d9d-m4r2p
NAME                       CPU(cores)   MEMORY(bytes)   
web-app-6fbf598d9d-m4r2p   1072m        2248Mi

and inside :
Mem: 14098808K used, 1889872K free, 22016K shrd, 588304K buff, 5603996K cached
CPU:  19% usr   0% sys   0% nic  79% idle   0% io   0% irq   0% sirq
Load average: 1.33 1.07 0.97 8/1005 86
  PID  PPID USER     STAT   VSZ %VSZ CPU %CPU COMMAND
    1     0 nextjs   R    12.8g  84%   3  18% next-server (v
   79     0 nextjs   S     1712   0%   6   0% sh
   86    79 nextjs   R     1636   0%   0   0% top
Saint Hubert Jura Hound
alr thats good
it'd be best to find a way to be able to take the snapshot during the spike but im not sure how u would if it crashes
California pilchardOP
Running the app in a local k3d cluster:
kubectl top pod
NAME                      CPU(cores)   MEMORY(bytes)   
web-app-d749b6446-r7f4t   31m          177Mi 
31m for CPU and 177Mi for ram damn 😂
Saint Hubert Jura Hound
yea thats cursed. i have no clue what could cause this tbh
are u using cache components / a cachehandler(s) implementation?
https://nextjs.org/docs/app/api-reference/config/next-config-js/cacheHandlers
California pilchardOP
It isn't only memory usage that's the problem, it's also the CPU usage:
# IN MY PROD K3S CLUSTER
NAME                       CPU(cores)   MEMORY(bytes)   
web-app-77bdfc9646-57b8t   743m         629Mi 

# IN MY LOCAL K3D CLUSTER
NAME                      CPU(cores)   MEMORY(bytes)   
web-app-d749b6446-r7f4t   1m           230Mi 
Saint Hubert Jura Hound
What does kubectl top node show for the k3s node with next running on it? Does it have at least the requested 2Gi of memory and 1 CPU available? Normally kubernetes would tell u in the pod status/events if it was OOM killed, but just to make sure
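(e.g. roughly:)

# check whether the previous container was OOM-killed, and look at recent events
kubectl describe pod <pod-name> | grep -A 5 "Last State"
kubectl get events --sort-by=.lastTimestamp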
California pilchardOP
I have :
kubectl logs web-app-77bdfc9646-57b8t --previous
   ▲ Next.js 16.0.10
   - Local:         http://localhost:3000
   - Network:       http://0.0.0.0:3000

 ✓ Starting...
 ✓ Ready in 217ms

<--- Last few GCs --->

[1:0x71d52a89b650]   261796 ms: Scavenge 471.3 (519.3) -> 468.5 (519.8) MB, 3.82 / 0.01 ms  (average mu = 0.255, current mu = 0.165) allocation failure; 
[1:0x71d52a89b650]   261815 ms: Scavenge 472.1 (519.8) -> 469.0 (520.1) MB, 3.12 / 0.00 ms  (average mu = 0.255, current mu = 0.165) task; 
[1:0x71d52a89b650]   262773 ms: Mark-Compact 472.7 (520.1) -> 465.9 (520.3) MB, 915.10 / 0.30 ms  (average mu = 0.328, current mu = 0.390) task; scavenge might not succeed


<--- JS stacktrace --->

FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
----- Native stack trace -----


And also :
kubectl top pod
NAME                        CPU(cores)   MEMORY(bytes)   
api-577cf79c6f-rp5s5        1m           98Mi            
imgproxy-85457bd6c9-xqzgh   1m           12Mi            
martin-6d985c5fbf-smphs     1m           10Mi            
notify-5dffd595b9-zmjrt     1m           55Mi            
typesense-0                 3m           2282Mi          
umami-7d6b498d98-nbb9j      5m           245Mi           
umami-postgres-0            7m           157Mi           
web-app-778f966597-ktcd2    642m         471Mi 

I've tried with the 20-slim image but same result
All the other apps in my cluster use low CPU and low memory (NestJS, ExpressJS, ...), so why Next.js 😂
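(Side note: the numbers in parentheses in the GC log suggest the V8 heap is capped around ~520 MB; if needed that can be raised via NODE_OPTIONS in the deployment, though I guess that only delays the OOM if something really leaks:)

# hypothetical deployment env tweak, not a fix for an actual leak
env:
  - name: NODE_OPTIONS
    value: "--max-old-space-size=2048"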
California pilchardOP
Nice new record :
web-app-778f966597-ktcd2    0m           0Mi 

FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
----- Native stack trace -----
 1: 0xb76dc5 node::OOMErrorHandler(char const*, v8::OOMDetails const&) [next-server (v]
 2: 0xee6120 v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, v8::OOMDetails const&) [next-server (v]
 3: 0xee6407 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, v8::OOMDetails const&) [next-server (v]
 4: 0x10f8055  [next-server (v]
 5: 0x10f85e4 v8::internal::Heap::RecomputeLimits(v8::internal::GarbageCollector) [next-server (v]
 6: 0x110f4d4 v8::internal::Heap::PerformGarbageCollection(v8::internal::GarbageCollector, v8::internal::GarbageCollectionReason, char const*) [next-server (v]
 7: 0x110fcec v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [next-server (v]
 8: 0x10e5ff1 v8::internal::HeapAllocator::AllocateRawWithLightRetrySlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [next-server (v]
...
15: 0x12365a1 v8::internal::JsonStringifier::Result v8::internal::JsonStringifier::Serialize_<true>(v8::internal::Handle<v8::internal::Object>, bool, v8::internal::Handle<v8::internal::Object>) [next-server (v]
16: 0x1237dd9 v8::internal::JsonStringifier::Result v8::internal::JsonStringifier::Serialize_<false>(v8::internal::Handle<v8::internal::Object>, bool, v8::internal::Handle<v8::internal::Object>) [next-server (v]
18: 0x123b2e6 v8::internal::JsonStringify(v8::internal::Isolate*, v8::internal::Handle<v8::internal::Object>, v8::internal::Handle<v8::internal::Object>, v8::internal::Handle<v8::internal::Object>) [next-server (v]
19: 0xf78222 v8::internal::Builtin_JsonStringify(int, unsigned long*, v8::internal::Isolate*) [next-server (v]
20: 0x1959df6  [next-server (v]
California pilchardOP
I've created a default Next.js Docker image, added it to my k3s cluster, and it works fine:
kubectl top pod next-mem-test-566b55ccb4-pncxh
NAME                             CPU(cores)   MEMORY(bytes)   
next-mem-test-566b55ccb4-pncxh   1m           25Mi  

It's just a nightmare to debug now, I can only see the problem in prod in my k3s cluster, so I need to build a Docker image each time. That's probably the worst scenario for me
Saint Hubert Jura Hound
Nodejs is hitting its max heap size, idk if the jsonstringify is the actual cause but i'd check if ur serializing any big objects
Also can i ask if u set the same resource requests/limits on the pod in k3d and docker as on k3s?
California pilchardOP
Well no, I've created a custom Docker image (on GitHub) where I add pages one by one to see when the memory increases. But it's super slow to do since I have to docker build each time...
With just 1 server action in my page, I can see this:
kubectl top pod next-mem-test-66c4d96f65-7jvl9
NAME                             CPU(cores)   MEMORY(bytes)   
next-mem-test-66c4d96f65-7jvl9   0m           149Mi 

The memory increases a little bit each time I go to other pages and never decreases...
Like after a few navigations:
NAME                             CPU(cores)   MEMORY(bytes)   
next-mem-test-66c4d96f65-7jvl9   0m           180Mi 

So there is something wrong I guess
Saint Hubert Jura Hound
There is indeed very much something wrong.. u might wanna open an issue on github. Or potentially try a different js runtime
California pilchardOP
Maybe there is something wrong with my code. I have sitemap routes like this:
import { createAnonClient } from "@/lib/supabase/anon";
import { NextResponse } from "next/server";

export async function GET(
  _: Request,
  { params }: { params: Promise<{ id: string }> }
) {
  const supabase = createAnonClient();
  const { id } = await params;
  const { data, error } = await supabase.storage
    .from("sitemaps")
    .download(`movies/${id}.xml.gz`);
  if (error || !data) {
    return new NextResponse("[sitemap] films page not found", { status: 404 });
  }
  return new NextResponse(data, {
    headers: {
      "Content-Type": "application/xml",
      "Content-Encoding": "gzip",
      "Cache-Control": "public, max-age=86400",
    },
  });
}


Each sitemap can weigh between 6-7 MB. Since in my prod cluster there are Google crawlers and bots, I can't reproduce it in local tests
@Saint Hubert Jura Hound Uncompressed im assuming u mean 6-7mb? That shouldnt cause any issues
California pilchardOP
My sitemaps were 6-7 MB compressed and 140 MB uncompressed; I've reduced them to 700 KB compressed and 14 MB uncompressed, but that isn't the issue I guess
California pilchardOP
I've deployed the same app in my k3s cluster, just without an ingress and no public access. And the memory goes crazy too:
<--- Last few GCs --->

[1:0x744b288f1650]  1125429 ms: Scavenge 470.9 (519.6) -> 467.9 (519.8) MB, 4.19 / 0.01 ms  (average mu = 0.223, current mu = 0.108) allocation failure; 
[1:0x744b288f1650]  1125501 ms: Scavenge 471.9 (519.8) -> 468.8 (520.1) MB, 3.06 / 0.01 ms  (average mu = 0.223, current mu = 0.108) allocation failure; 
[1:0x744b288f1650]  1126628 ms: Mark-Compact 472.6 (520.3) -> 467.0 (520.3) MB, 1098.96 / 0.04 ms  (average mu = 0.243, current mu = 0.262) allocation failure; scavenge might not succeed


<--- JS stacktrace --->

FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
----- Native stack trace -----

With :
 seq 1 10000 | shuf | head -n 2000 | \
xargs -n1 -P50 -I{} curl -s \
"http://localhost:3000/fr-FR/film/{}-test" > /dev/null
California pilchardOP
Waiiiiit :
CONTAINER ID   NAME            CPU %     MEM USAGE / LIMIT     MEM %     NET I/O          BLOCK I/O      PIDS
49c1256d4757   web-app-web-1   0.00%     1.391GiB / 7.653GiB   18.18%    33.9MB / 338MB   655kB / 31MB   11

My app's Docker image running locally increases in mem usage too. So maybe it's not related to k3s
I just forgot to stop the container and I saw this
California pilchardOP
With this :
seq 1 10000 | shuf | head -n 2000 | \
xargs -n1 -P50 -I{} curl -s \
"http://localhost:3000/fr-FR/film/{}-test" > /dev/null

My Docker goes crazy:
CONTAINER ID   NAME            CPU %     MEM USAGE / LIMIT     MEM %     NET I/O           BLOCK I/O       PIDS
dab9179d80ab   web-app-web-1   80.77%    6.518GiB / 7.653GiB   85.17%    65.2MB / 1.17GB   311MB / 887MB   12

That's good news. Something is wrong with my code
use php
prob a memory leak in your code
California pilchardOP
It's pretty simple code, just simple fetches, but memory is never released...
@"use php" prob a memory leak in your code
California pilchardOP
I have a simple test route /test/[id] :
export default async function MoviePage(
  props: {
    params: Promise<{
      lang: string;
      id: string;
    }>;
  }
) {
  const params = await props.params;
  const { id: movieId } = getIdFromSlug(params.id);
  let movie: MediaMovie;
  try {
    movie = await getMovie(params.lang, movieId);
  } catch {
    return notFound();
  }
  return (
    <div>
        <h1>{movie.title}</h1>
    </div>
  );
}

With :
export const getMovie = (
    async (locale: string, id: number) => {
        console.log('[getMovie] Fetching movie with ID:', id, 'for locale:', locale);
        const supabase = createAnonClient(locale);
        const { data: film, error } = await supabase
            .from('media_movie_full')
            .select(`
                *,
                cast:media_movie_casting(
                    *,
                    person:media_person(*)
                )
            `)
            .eq('id', id)
            .single()
            .overrideTypes<MediaMovie, { merge: true }>();
        if (error) throw error;
        return film;
    }
);

I use simple command to stress test :
seq 1 10000 | shuf | head -n 2000 | \
xargs -n1 -P50 -I{} curl -s \
"http://localhost:3000/fr-FR/test/{}-test" > /dev/null

During the test memory increases to 1.3 GB and afterwards never goes below 1.1 GB, even when the test ends
Saint Hubert Jura Hound
Still possible to get memory leaks in simple code. Can u show ur supabase client file
California pilchardOP
I've found something. I've created:
/test/[id]/
- page.tsx
- layout.tsx

In both I call the getMovie function, which just queries some stuff from my DB. In the layout.tsx I just have a header showing the title of the movie, and in the page.tsx I show the overview and other stuff. With the layout.tsx, when I do:
seq 1 10000 | shuf | head -n 2000 | \
xargs -n1 -P50 -I{} curl -s \
"http://localhost:3000/fr-FR/test/{}-test" > /dev/null

RAM completely explodes and is never released. When I delete the layout.tsx file, so I only have page.tsx, and I run the exact same stress test, the memory is cleaned up afterwards...
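(Maybe worth trying: wrapping getMovie in React's cache() so the layout and the page share one result per request instead of each fetching their own copy; just a sketch of my getMovie, not a confirmed fix:)

// features/server/media/mediaQueries.ts (hypothetical variant of getMovie)
import { cache } from "react";
import { createAnonClient } from "@/lib/supabase/anon";

// cache() deduplicates calls with the same arguments within one server render,
// so layout.tsx and page.tsx asking for the same movie only hit Supabase once
export const getMovie = cache(async (locale: string, id: number) => {
  const supabase = createAnonClient(locale);
  const { data: film, error } = await supabase
    .from("media_movie_full")
    .select("*, cast:media_movie_casting(*, person:media_person(*))")
    .eq("id", id)
    .single();
  if (error) throw error;
  return film;
});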
Saint Hubert Jura Hound
Can u show the layout
California pilchardOP
layout.tsx :
import { notFound } from 'next/navigation';
import { getIdFromSlug } from '@/utils/get-id-from-slug';
import { getMovie } from '@/features/server/media/mediaQueries';
import { MediaMovie } from '@recomendapp/types';

export default async function MovieLayout(
  props: {
      children: React.ReactNode;
      params: Promise<{
        lang: string;
        id: string;
      }>;
  }
) {
  const params = await props.params;

  const {
      children
  } = props;
  const { id: movieId } = getIdFromSlug(params.id);

  let movie: MediaMovie;
  try {
    movie = await getMovie(params.lang, movieId);
  } catch {
    return notFound();
  }
  return (
    <div>
      <h1>layout: {movie.title}</h1>
      {children}
    </div>
    );
};

and page.tsx :
import { notFound } from 'next/navigation';
import { getIdFromSlug } from '@/utils/get-id-from-slug';
import { getMovie } from '@/features/server/media/mediaQueries';
import { MediaMovie } from '@recomendapp/types';

export default async function MoviePage(
  props: {
    params: Promise<{
      lang: string;
      id: string;
    }>;
  }
) {
  const params = await props.params;
  const { id: movieId } = getIdFromSlug(params.id);
  let movie: MediaMovie;
  try {
    movie = await getMovie(params.lang, movieId);
  } catch {
    return notFound();
  }
  return (
    <>
      <div>
        <h1>Movie: {movie.title}</h1>
        <p>Description: {movie.overview}</p>
      </div>
    </>
  );
}
Saint Hubert Jura Hound
Okay nothing special there indeed
Is supabase at the latest version?
California pilchardOP
yes latest docker image
Saint Hubert Jura Hound
No i mean the supabase client package
California pilchardOP
Oh yes latest too
Even my root layout.tsx :
export default async function LangLayout({
  children,
  params
} : {
  children: React.ReactNode;
  params: Promise<{ lang: string }>;
}) {
  const { lang } = await params;
  return (
    <html lang={lang} suppressHydrationWarning>
      <head>
        <link rel="search" type="application/opensearchdescription+xml" title="Recomend" href="/opensearch.xml" />
      </head>
      {process.env.NODE_ENV === 'production' && (
        <Script
        defer
        src={process.env.ANALYTICS_URL}
        data-website-id={process.env.ANALYTICS_ID}
        />
      )}
      <body className={cn('font-sans antialiased', fontSans.variable)}>
        {/* <Providers locale={lang as SupportedLocale}>{children}</Providers> */}
        {children}
      </body>
    </html>
  );
}

If I enable the <Providers /> component, whether or not I use a layout.tsx in my /test/[id], memory usage increases immediately
California pilchardOP
When I have :
<ServerComponent> // layout.tsx (root)
   <ClientComponent> // layout.tsx (root)
      <ServerComponent /> // page.tsx (test/[id]/page.tsx)
   </ClientComponent>
</ServerComponent>

Seems the memory never decreases when a client-side component is the parent of a server-side component
California pilchardOP
This thing is really exhausting
Saint Hubert Jura Hound
What does ur providers component look like
California pilchardOP
Actually there's a bunch of stuff:
export default async function Provider({ children }: { children: React.ReactNode }) {
  const supabase = await createServerClient();
  const { data: { session } } = await supabase.auth.getSession();
  const { data: is_maintenance } = await supabase.rpc('is_maintenance').single();
  const isMaintenanceMode = is_maintenance && process.env.NODE_ENV !== 'development';
  const cookiesStore = await cookies();
  const layout = cookiesStore.get("ui:layout");
  const sidebarOpen = cookiesStore.get("ui-sidebar:open");
  const rightPanelOpen = cookiesStore.get("ui-right-panel:open");
  const defaultLayout = layout ? JSON.parse(layout.value) : undefined;
  const device = await getServerDevice();
  return (
    <NextIntlClientProvider>
     {...} // others react context
                          {isMaintenanceMode ? <MaintenancePage /> : children}
     {...}
    </NextIntlClientProvider>
  );
};

But imagine I remove the <Provider />: if in the root layout.tsx I just use:
export default async function LangLayout({
  children,
  params
} : {
  children: React.ReactNode;
  params: Promise<{ lang: string }>;
}) {
  const { lang } = await params;
  const cookiesStore = await cookies(); // FOR EXAMPLE, ADD ANY RANDOM AWAITED FUNCTION HERE
  return (
    <html lang={lang} suppressHydrationWarning>
      <head>
        <link rel="search" type="application/opensearchdescription+xml" title="Recomend" href="/opensearch.xml" />
      </head>
      {process.env.NODE_ENV === 'production' && (
        <Script
        defer
        src={process.env.ANALYTICS_URL}
        data-website-id={process.env.ANALYTICS_ID}
        />
      )}
      <body className={cn('font-sans antialiased', fontSans.variable)}>
        {children}
      </body>
    </html>
  );
}

The moment I use an awaited function (except the await params), my RAM goes crazy
Saint Hubert Jura Hound
If the leak only happens when u use an async function from nextjs its probably a bug on their end and id open an issue.
If it happens under other circumstances too then it could still be some issue with ur providers or a library u use in them. Maybe intl or something
Saint Hubert Jura Hound
Yeahh would make sense
Did u misconfigure it or is it a bug?
California pilchardOP
I'm gonna make a simple repo to show u, but it seems to be a bug
California pilchardOP
Look, with next-intl, memory during the stress test goes up to 800 MiB, then comes down to 600 MiB and never goes lower. If we run another stress test, memory goes up again and again.
Without next-intl, and with the exact same code (same requests...), memory only goes up to 180 MiB during the stress test and comes back down to ~60 MiB, like idle
California pilchardOP