Next.js Discord

I want to run an expensive operation that could take more than 60s

Answered
arnez posted this in #help-forum
arnezOP
I want to use Next.js since it's what I'm most comfortable with (at least for the frontend).
I will be using the OpenAI API, and a task could take a while.
Should I use Next.js for the backend, or Python, or Express?
On a recent project I ran into a timeout issue, and I don't want to hit that again on this one.
Answered by Yi Lon Ma
You can create a separate API using Express and host it on platforms like Render or Fly.io.

20 Replies

Yi Lon Ma
You can try streaming, but I suspect that will also run into timeouts. That said, I've never seen the GPT API take over 15s to generate a response.
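For what it's worth, the core of streaming from an App Router route handler is bridging the SDK's async token iterator into a `ReadableStream` that is returned as the `Response` body. A minimal, dependency-free sketch; the `fakeTokens` generator is a stand-in for the real OpenAI stream:

```typescript
// Bridge an async iterator of tokens into a web ReadableStream.
// `fakeTokens` stands in for the chunks the OpenAI SDK yields in
// streaming mode -- swap in the real iterator in an actual handler.

export async function* fakeTokens(): AsyncGenerator<string> {
  for (const t of ["Hello", ", ", "world"]) {
    yield t;
  }
}

export function iteratorToStream(
  iterator: AsyncIterator<string>
): ReadableStream<Uint8Array> {
  const encoder = new TextEncoder();
  return new ReadableStream({
    async pull(controller) {
      const { value, done } = await iterator.next();
      if (done) {
        controller.close();
      } else {
        controller.enqueue(encoder.encode(value));
      }
    },
  });
}

// In an App Router route handler you would then do:
//   export async function GET() {
//     return new Response(iteratorToStream(openaiStream));
//   }
// `collect` just drains a stream back into a string, for testing.
export async function collect(
  stream: ReadableStream<Uint8Array>
): Promise<string> {
  const reader = stream.getReader();
  const decoder = new TextDecoder();
  let out = "";
  for (;;) {
    const { value, done } = await reader.read();
    if (done) return out;
    out += decoder.decode(value, { stream: true });
  }
}
```

Because each chunk is flushed as it is produced, the client starts receiving output immediately instead of waiting for the whole completion.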
arnezOP
I can share the repo with you if you'd like to take a look.
Yi Lon Ma
You can create a separate API using Express and host it on platforms like Render or Fly.io.
Answer
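To make that concrete, here is a minimal sketch of such a standalone service, using Node's built-in `http` module to stay dependency-free (the Express version is analogous). `runExpensiveTask` and the `/generate` route are made-up names; on an always-on host like Render or Fly.io there is no serverless timeout, so the handler can take as long as the OpenAI call needs:

```typescript
import { createServer, type Server } from "node:http";

// Stand-in for the real long-running OpenAI call, which could take minutes.
async function runExpensiveTask(): Promise<string> {
  return "done";
}

// Build the API server; call `makeApi().listen(3001)` on the long-running host.
export function makeApi(): Server {
  return createServer(async (req, res) => {
    if (req.method === "POST" && req.url === "/generate") {
      const result = await runExpensiveTask();
      res.writeHead(200, { "content-type": "application/json" });
      res.end(JSON.stringify({ result }));
    } else {
      res.writeHead(404);
      res.end();
    }
  });
}
```

The Next.js frontend then just calls the service with `fetch("https://your-api.example.com/generate", { method: "POST" })` (hypothetical URL), keeping Next.js purely for the UI.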
Yi Lon Ma
Yes, I would love to read it so I can avoid this situation in the near future.
Giant panda
You can also use a custom server, which avoids the serverless timeout on route handlers.
arnezOP
The code is kind of a mess because the project was made in 2 days.
But yeah, how do I do that?
This was the thing in Pages, I think; I don't know how to do it in the App Router.
This works fine locally but fails in production. The error is on "/platform": timeout after 10s.
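For reference, a 10-second cutoff matches Vercel's default serverless function timeout on the Hobby plan. In the App Router you can raise it per route with the `maxDuration` route segment config (the ceiling depends on your plan; beyond that you need streaming or a separate host). A sketch, assuming the failing route lives at something like `app/api/platform/route.ts` and a hypothetical `runExpensiveTask` helper:

```typescript
// app/api/platform/route.ts -- path guessed from the "/platform" error
import { runExpensiveTask } from "@/lib/tasks"; // hypothetical helper

// Route segment config: allow this function to run up to 60 seconds.
export const maxDuration = 60;

export async function POST(req: Request) {
  const { prompt } = await req.json(); // assumed request shape
  const result = await runExpensiveTask(prompt);
  return Response.json({ result });
}
```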
American Crow
Just skimmed through your repo. Maybe you want to look into something like https://sdk.vercel.ai/docs/ai-sdk-rsc. That way you'd get server-side streaming from and to the LLM out of the box. You could move everything to RSC and wouldn't have to call your own LLM endpoints client-side anymore. From the looks of it, your project might also be a great use case for generative UI. Just a friendly suggestion.
arnezOP
Gonna try it for sure. Any feedback is welcome, and I am truly grateful that you took the time. Thank you.
American Crow
I implemented it in a project about 2 weeks ago. The paradigms/API are different, so in my case I needed two days to understand it. But having a server action that communicates with (any) LLM, with function and tool calling out of the box in the case of OpenAI, getting the response, and having the option to stream JSX (not just text) to the client, is really powerful.
arnezOP
Another topic, kind of related to this one, but you might know: what is the best/easiest/most performant way to convert PDF/DOCX to text so I can give it to an LLM as a prompt?
Also, this AI SDK is neat.
American Crow
Not my area of expertise, sorry. I'd most likely check out how LangChain does that as a first step.
Giant panda
arnezOP
?