Next.js Discord

Discord Forum

Difference between server component and API call

Answered
African Slender-snouted Crocodile posted this in #help-forum
African Slender-snouted CrocodileOP
What is the difference between fetching data right inside a server component and making an API route where I fetch the data, then calling that API from the client using SWR?
Answered by Roseate Spoonbill

26 Replies

African Slender-snouted CrocodileOP
@Arinji 👀 can u help maybe?
Asian black bear
When you render server-side, you will have all the data in the UI at once, after the short time it takes the server to retrieve the result. When you fetch client-side, you're back to SPA-like behavior, which can easily become a waterfall of continued requests back and forth.
The idea of SSR is typically to have the result of your request within a single round trip.
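For a concrete picture, a minimal sketch of the server-component side of that comparison (the endpoint and Doc shape are made up for illustration):
```tsx
// Minimal sketch of fetching directly in a server component (App Router).
// The endpoint and Doc shape are hypothetical.
type Doc = { id: string; title: string };

export default async function DocsPage() {
  // Runs on the server: the browser receives rendered HTML in one round trip,
  // instead of shipping JS that fires a second request back to an API route.
  const res = await fetch('https://api.example.com/docs');
  const docs: Doc[] = await res.json();

  return (
    <ul>
      {docs.map((d) => (
        <li key={d.id}>{d.title}</li>
      ))}
    </ul>
  );
}
```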
African Slender-snouted CrocodileOP
@Asian black bear But what if my dataset is pretty big? 🤔 Like, where's the cutoff at which SSR stops being worth it because the data is too big?
For example, I'm fetching 5000 documents. Is it still worth using SSR for this? Or I could simply use a service like Meilisearch: filter through the data and return only what I need to the client (so a fast request with a fast response). I could even handle infinite-scroll pagination on the server and request only the 30-40 docs I need at a given moment, rather than bringing in all 5000 docs at once, especially since I won't show them to the user all at once, but rather filter through them.
Roseate Spoonbill
It all depends on how you cache your data, how dynamic it is, etc. Typically servers have a better connection than users, so fetching data once on the server and then serving smaller subsets (e.g. via pagination) to clients via SSR is a good tradeoff for the end user.
African Slender-snouted CrocodileOP
OR another approach I've used in the past: fetch some initial data, say 100-200 docs, using SSR, and use it as fallback data inside my useSWR; then, as updates/deletes come in, just re-fetch the data for real-time updates.
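That fallback pattern looks roughly like this (the /api/docs route and Doc shape are hypothetical):
```tsx
'use client';
// Sketch of the fallback pattern described above: SSR provides the initial
// docs, SWR revalidates on the client. The /api/docs route is hypothetical.
import useSWR from 'swr';

type Doc = { id: string; title: string };

const fetcher = (url: string) => fetch(url).then((r) => r.json());

export function Docs({ initialDocs }: { initialDocs: Doc[] }) {
  // Renders immediately with the server-fetched data, then revalidates.
  const { data = initialDocs } = useSWR<Doc[]>('/api/docs', fetcher, {
    fallbackData: initialDocs,
  });

  return (
    <ul>
      {data.map((d) => (
        <li key={d.id}>{d.title}</li>
      ))}
    </ul>
  );
}
```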
@Roseate Spoonbill It all depends on how you cache your data, how dynamic it is, etc. Typically servers have a better connection than users, so fetching data once on the server and then serving smaller subsets (e.g. via pagination) to clients via SSR is a good tradeoff for the end user.
African Slender-snouted CrocodileOP
Wait wait, how? 🤔 U're saying to fetch all data on server and then serve smaller subsets via pagination to clients via ssr, but how can u do that tho?
Roseate Spoonbill
I don't want you to think this is the best approach. It's just one that came to mind when you mentioned fetching 5k docs.

But it can be done quite easily, especially with the newer "use cache" directive.
If I wanted to precache a larger set of documents on the server and serve smaller chunks to the user, I'd use the fetch cache. E.g.:
- The user wants to see documents 1-20
- The server always fetches segments rounded to 200, so it fetches 1-200 instead
- Upon response, the subset 1-20 is taken from it
- The user clicks "Next page" and requests documents 21-40
- The server still rounds this to the 1-200 segment, so it does the same fetch, which is already cached
- Documents 21-40 are taken from the cached response

Again, not saying this is the way to go, because it's all too specific to each use case, but with a slow backend API, I'd consider some precaching like the above; a rough sketch follows.
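A rough sketch of that rounded-segment idea using the Next.js fetch cache (the upstream endpoint, Doc shape, and 60s revalidation window are all assumptions):
```ts
// Rough sketch of the rounded-segment idea, using the Next.js fetch cache.
// The upstream endpoint, Doc shape, and 60s revalidation are assumptions.
const BUCKET_SIZE = 200;

type Doc = { id: string; title: string };

export async function getDocs(offset: number, limit: number): Promise<Doc[]> {
  // Round the requested window down to the start of its 200-doc segment,
  // so documents 1-20 and 21-40 both resolve to the same segment.
  const bucketStart = Math.floor(offset / BUCKET_SIZE) * BUCKET_SIZE;

  // Identical URLs hit the fetch cache, so the second page reuses the
  // response fetched for the first page instead of calling upstream again.
  const res = await fetch(
    `https://api.example.com/docs?offset=${bucketStart}&limit=${BUCKET_SIZE}`,
    { next: { revalidate: 60 } }
  );
  const bucket: Doc[] = await res.json();

  // Slice out only the window the user actually asked for.
  const start = offset - bucketStart;
  return bucket.slice(start, start + limit);
}
```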
@African Slender-snouted Crocodile Wait wait, how? 🤔 U're saying to fetch all data on server and then serve smaller subsets via pagination to clients via ssr, but how can u do that tho?
Asian black bear
Hypothetical examples are pretty pointless to talk about. Often people try to optimize prematurely where there is nothing to optimize, or without ever having benchmarked the real impact. For example, databases are designed to query millions of records in single-digit milliseconds.
Asian black bear
In the case of filtering, you have to measure the payload size of all the data sent to the client for client-side filtering against the speed at which server-side code can filter it there (often against cached data), and that comparison typically favors SSR.
@Asian black bear Hypothetical examples are pretty pointless to talk about. Often people try to optimize prematurely where there is nothing to optimize, or without ever having benchmarked the real impact. For example, databases are designed to query millions of records in single-digit milliseconds.
African Slender-snouted CrocodileOP
I'm pretty new, so I'm pretty clueless about benchmarks. I don't really know how fast a server is compared to a client; I only know that the client can be turbo slow xD
Roseate Spoonbill
I'd say don't focus on those aspects if you don't know how slow or fast things are.
Asian black bear
Then start with full SSR and filtering there, and only add client-side fetching/refetching and infinite scrolling if it becomes truly necessary.
It's a case of "if you're not sure whether you need it, you likely won't need it".
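For instance, a minimal sketch of full SSR with server-side filtering (the search endpoint is hypothetical; the async searchParams assumes Next.js 15):
```tsx
// Sketch of "full SSR with filtering on the server": the form submits a
// query param and the server component filters before rendering.
type Doc = { id: string; title: string };

// Hypothetical search helper; in practice a DB query or a Meilisearch call.
async function searchDocs(q: string): Promise<Doc[]> {
  const res = await fetch(
    `https://api.example.com/docs?q=${encodeURIComponent(q)}`
  );
  return res.json();
}

export default async function DocsPage({
  searchParams,
}: {
  searchParams: Promise<{ q?: string }>;
}) {
  const { q = '' } = await searchParams;

  // The client only ever receives the matching subset, already rendered.
  const docs = await searchDocs(q);

  return (
    <>
      <form>
        <input name="q" defaultValue={q} placeholder="Search docs" />
      </form>
      <ul>
        {docs.map((d) => (
          <li key={d.id}>{d.title}</li>
        ))}
      </ul>
    </>
  );
}
```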
African Slender-snouted CrocodileOP
The scenario is pretty straightforward:
I have 5k+ documents. Users might scroll through 200-300 docs (but not more), but they will for sure search across all 5k documents at some point. They can also delete/update those documents, and I would like fast, real-time feedback on their actions.
Asian black bear
There are tools to make the UX feel better such as optimistic updates etc.
Focus on getting it to work first.
And then evaluate whether it's "fast" enough or "feels right".
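A minimal sketch of an optimistic delete, assuming React 19's useOptimistic and a hypothetical deleteDoc server action:
```tsx
'use client';
// Sketch of an optimistic delete with React's useOptimistic. The deleteDoc
// server action and Doc shape are hypothetical.
import { useOptimistic, startTransition } from 'react';

type Doc = { id: string; title: string };

export function DocList({
  docs,
  deleteDoc,
}: {
  docs: Doc[];
  deleteDoc: (id: string) => Promise<void>;
}) {
  // The UI drops the deleted doc immediately, before the server confirms;
  // React reverts automatically if the underlying state doesn't change.
  const [optimisticDocs, removeDoc] = useOptimistic(
    docs,
    (current, deletedId: string) => current.filter((d) => d.id !== deletedId)
  );

  function handleDelete(id: string) {
    startTransition(async () => {
      removeDoc(id);       // update the UI right away
      await deleteDoc(id); // server action; revalidation syncs real state
    });
  }

  return (
    <ul>
      {optimisticDocs.map((doc) => (
        <li key={doc.id}>
          {doc.title} <button onClick={() => handleDelete(doc.id)}>Delete</button>
        </li>
      ))}
    </ul>
  );
}
```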
@Asian black bear There are tools to make the UX feel better such as optimistic updates etc.
African Slender-snouted CrocodileOP
The thing is, I want to think about the situation a bit longer-term. I'm working on a project that currently has 5k docs, but in a month it might have 15k docs.
Asian black bear
You are planning when to service your car without having a clue how far you will drive and btw without actually having a car to begin with.
Blanket statements about possible optimizations, without actual numbers to back them up, are not useful.
@Asian black bear You are planning when to service your car without having a clue how far you will drive and btw without actually having a car to begin with.
African Slender-snouted CrocodileOP
I already have an implementation that works pretty much SPA-like. It runs nice and fast, but I would like to make it better. What I technically do is send requests to an API that serves me data from Meilisearch based on the given filters, and for each page I send one more request to the API with page=2, page=3, and so on. But this sounds stupid because there is no real caching in this; I'm sending a new request on each scroll.
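For what it's worth, SWR's useSWRInfinite gives that client-side page caching almost for free. A sketch, with a hypothetical /api/search route:
```tsx
'use client';
// Sketch of the same page-by-page Meilisearch flow, but with useSWRInfinite
// so already-loaded pages stay cached client-side instead of being
// re-fetched on every scroll. The /api/search route is hypothetical.
import useSWRInfinite from 'swr/infinite';

type Doc = { id: string; title: string };

const fetcher = (url: string) => fetch(url).then((r) => r.json());

export function Results({ filters }: { filters: string }) {
  const { data, size, setSize } = useSWRInfinite<Doc[]>(
    // One cache key per page: scrolling back re-renders from cache for free.
    (page) => `/api/search?filters=${encodeURIComponent(filters)}&page=${page + 1}`,
    fetcher
  );

  const docs = data ? data.flat() : [];

  return (
    <>
      <ul>
        {docs.map((d) => (
          <li key={d.id}>{d.title}</li>
        ))}
      </ul>
      <button onClick={() => setSize(size + 1)}>Load more</button>
    </>
  );
}
```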
Asian black bear
It's always easier to add improvements to code later than to remove debt that was added because somebody thought it might be useful, especially when you don't have much experience combining server-side rendering and client-side features for better UX (the two are quite difficult to reconcile).
Roseate Spoonbill
Here's how I would approach this:
1. Start with full SSR and fetch data as needed, as @Near mentioned - this will feel faster than pulling all the data from the server in SPA-style fetch calls.
2. Observe how fast updates are and how it feels to the end user.
3. In case of bottlenecks, optimize on the server first - especially around caches and how often they are revalidated.
4. If the page still feels slow, then look for client-side optimizations like prefetching data.

In my experience, once you have good SSR, you rarely need to do more (we recently moved our blog to SSR + Next and it's so much faster that I don't think we even need to worry about static rendering). Any slowdowns happen mostly when the data source is slow, which won't happen with classical databases until you reach hundreds of thousands of documents. If a classical DB is slow with a smaller dataset, you can probably optimize further around indices, data structures, etc.

By the time you need to do some actual client-side optimization, you'll have much more experience with Next, which will allow you to take a more targeted approach.
Answer
African Slender-snouted CrocodileOP
🤔 Okay, I will think about it and move forward. Maybe I will find the answer myself.