Issue with Persisting Assistant State After Redirect in Next.js SPA
Unanswered
Barbary Lion posted this in #help-forum
Barbary Lion (OP)
Hey everyone! I'm building a single-page app using Next.js for an AI Assistant Chat experience. I'm using Vercel's `ai-sdk`, and specifically the `useAssistant` hook.

The app has two main pages:
- `/` (the homepage with a textarea and prompt suggestions)
- `/c/[id]` (the actual chat thread page)

The user starts on `/`, types a prompt, submits it, and then gets redirected to `/c/[id]` to see the real-time assistant response.

Here's the issue: the thread ID is generated server-side (using a Next.js route handler), and only after that can I redirect the user to `/c/[id]`. But since `useAssistant` is a client-side hook, when I redirect to `/c/[id]`, I lose the original `submitMessage()` call and the whole assistant interaction doesn't carry over.

I'm a bit stuck on how to preserve the assistant request during this transition. Do I need to restructure the logic? Maybe kick off the message on the `/c/[id]` page instead? Has anyone tackled something like this?

Any help or best practices would be much appreciated!
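Roughly, the homepage currently does something like this (simplified sketch; the `/api/threads` route and file name are illustrative placeholders, not from my actual code):

```tsx
// app/page.tsx (homepage) -- simplified; the /api/threads route is illustrative.
'use client';

import { useState, type FormEvent } from 'react';
import { useRouter } from 'next/navigation';

export default function Home() {
  const router = useRouter();
  const [prompt, setPrompt] = useState('');

  async function handleSubmit(e: FormEvent<HTMLFormElement>) {
    e.preventDefault();
    // The thread ID is generated server-side by a route handler.
    const res = await fetch('/api/threads', { method: 'POST' });
    const { id } = await res.json();
    // Redirecting unmounts this page, so any client-side useAssistant
    // state (and an in-flight submitMessage) started here is lost.
    router.push(`/c/${id}`);
  }

  return (
    <form onSubmit={handleSubmit}>
      <textarea value={prompt} onChange={(e) => setPrompt(e.target.value)} />
      <button type="submit">Send</button>
    </form>
  );
}
```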
2 Replies
American Fuzzy Lop
I am assuming with `submitMessage()` you mean the message the user entered? If so, it gets saved in the database, and then when you are on the chat page it gets requested. Otherwise you just have to convert to client components and save it in local storage.
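A rough sketch of that second option, using `sessionStorage` as the hand-off and the same assumed `/api/threads` route (a query param or any other client-side storage would work the same way):

```ts
// Homepage submit handler: create the thread via the (assumed) /api/threads
// route handler, stash the prompt keyed by thread ID, then redirect.
// The key name `pending-prompt:${id}` is just a convention for this sketch.
async function startChat(prompt: string, push: (href: string) => void) {
  const res = await fetch('/api/threads', { method: 'POST' });
  const { id } = await res.json();
  sessionStorage.setItem(`pending-prompt:${id}`, prompt);
  push(`/c/${id}`);
}
```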
Barbary Lion (OP)
Yes, exactly: the message is saved to the database, and when the user lands on `/c/[id]`, they can see the message there.
The problem is that the streaming is still in progress, so the user misses the real-time generation of the response. It just suddenly appears when it's done, instead of seeing it streamed word-by-word like it's supposed to.
That’s the core issue I’m trying to solve.
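One way to restructure this, along the lines of the OP's own suggestion, is to stash the prompt before redirecting (as in the sketch above) and then submit it from a client component on `/c/[id]`, so the stream renders on that page. A minimal sketch, assuming an `/api/assistant` route handler and an AI SDK version whose `useAssistant` exposes `append` (the import path is `ai/react` on older releases):

```tsx
// app/c/[id]/chat.tsx -- client component rendered by the /c/[id] page.
// Sketch only: /api/assistant, the sessionStorage key, and the file layout
// are assumptions; adjust to your project.
'use client';

import { useEffect, useRef } from 'react';
import { useAssistant } from '@ai-sdk/react'; // 'ai/react' on older AI SDK versions

export default function Chat({ threadId }: { threadId: string }) {
  const { messages, append, status } = useAssistant({
    api: '/api/assistant',
    threadId,
  });
  const started = useRef(false);

  useEffect(() => {
    // Pick up the prompt stashed on the homepage and submit it from here,
    // so the streamed response is rendered word-by-word on this page.
    const pending = sessionStorage.getItem(`pending-prompt:${threadId}`);
    if (pending && !started.current) {
      started.current = true;
      sessionStorage.removeItem(`pending-prompt:${threadId}`);
      append({ role: 'user', content: pending });
    }
  }, [threadId, append]);

  return (
    <div>
      {messages.map((m) => (
        <div key={m.id}>
          <strong>{m.role}:</strong> {m.content}
        </div>
      ))}
      {status === 'in_progress' && <p>Assistant is responding…</p>}
    </div>
  );
}
```

Later visits to `/c/[id]` would skip the `append` call and simply load the saved messages from the database, so only the very first navigation needs the live stream.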