Langchain.js response streaming from route handler to client page
Unanswered
Cynipid gall wasp posted this in #help-forum
Cynipid gall wasp (OP)
Hey everybody, super noob question here, but can anybody help me out with streaming OpenAI responses with LangChain.js? For some reason I've been stuck on this for a few days now.
I'm currently using a route handler, and every solution I could find either uses deprecated methods or the Vercel AI SDK's useCompletion, which wouldn't work for my use case.
I'll link my latest code below and in the comments. My last attempt was from a YouTube video I found, but that didn't seem to work either.
page.tsx
const response = await fetch("/api/coach-chat", {
  method: "POST",
  body: JSON.stringify(newMessages),
});
if (response.body) {
  const reader = response.body.getReader();
  while (true) {
    const { done, value } = await reader.read();
    if (done) {
      break;
    }
    // Note: this decodes only the current chunk and replaces the previous
    // assistant content with it, so the UI only ever shows the latest chunk.
    const text = new TextDecoder().decode(value);
    setMessages([
      ...newMessages,
      {
        role: "assistant",
        content: text,
      },
    ]);
  }
}
2 Replies
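For reference, a minimal sketch of that read loop with the accumulation issue fixed (assuming the same newMessages/setMessages state as the snippet above; untested):

// Sketch: accumulate chunks so the assistant message grows as tokens arrive.
// Assumes the same newMessages/setMessages state as the snippet above.
const response = await fetch("/api/coach-chat", {
  method: "POST",
  body: JSON.stringify(newMessages),
});
if (response.body) {
  const reader = response.body.getReader();
  const decoder = new TextDecoder(); // one decoder, reused across chunks
  let assistantText = "";
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    // { stream: true } keeps multi-byte characters intact across chunk boundaries
    assistantText += decoder.decode(value, { stream: true });
    setMessages([...newMessages, { role: "assistant", content: assistantText }]);
  }
}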
Cynipid gall wasp (OP)
route.ts
import { ChatPromptTemplate, MessagesPlaceholder } from "@langchain/core/prompts";
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage } from "@langchain/core/messages";

export async function runLLMChain(messagesRequest) {
  const encoder = new TextEncoder();
  const stream = new TransformStream();
  const writer = stream.writable.getWriter();

  const messages = await messagesRequest;
  const message = messages.at(-1)?.content;

  const prompt = ChatPromptTemplate.fromMessages([
    ["system", COACH_PROMPT],
    new MessagesPlaceholder("msgs"),
  ]);

  const openai = new ChatOpenAI({
    apiKey: process.env.OPENAI_API_KEY,
    modelName: "gpt-4o-mini",
    temperature: 0.7,
    streaming: true,
    callbacks: [
      {
        // Forward each token into the TransformStream as it arrives.
        async handleLLMNewToken(token) {
          await writer.ready;
          await writer.write(encoder.encode(`${token}`));
        },
        async handleLLMEnd() {
          await writer.ready;
          await writer.close();
        },
      },
    ],
  });

  const chain = prompt.pipe(openai);
  // Likely culprit: stream() is lazy, and this call is never awaited or
  // iterated, so the run may never execute and the callbacks never fire.
  chain.stream({
    msgs: [new HumanMessage({ content: message })],
  });

  return stream.readable;
}
export async function POST(req) {
  const messages = await req.json();
  // runLLMChain is async, so await its result to get the ReadableStream.
  const stream = runLLMChain(messages);
  return new Response(await stream);
}
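A common way around this, sketched below: drop the callback/TransformStream plumbing and consume chain.stream() directly in the POST handler, forwarding each chunk to the client. This reuses COACH_PROMPT from the snippet above and is an untested sketch rather than a canonical fix:

import { ChatPromptTemplate, MessagesPlaceholder } from "@langchain/core/prompts";
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage } from "@langchain/core/messages";

export async function POST(req) {
  const messages = await req.json();
  const message = messages.at(-1)?.content;

  const prompt = ChatPromptTemplate.fromMessages([
    ["system", COACH_PROMPT], // COACH_PROMPT as defined in the original route
    new MessagesPlaceholder("msgs"),
  ]);
  const model = new ChatOpenAI({
    apiKey: process.env.OPENAI_API_KEY,
    modelName: "gpt-4o-mini",
    temperature: 0.7,
  });
  const chain = prompt.pipe(model);

  // Consuming the stream here is what actually drives the model call.
  const llmStream = await chain.stream({
    msgs: [new HumanMessage({ content: message })],
  });

  const encoder = new TextEncoder();
  const body = new ReadableStream({
    async start(controller) {
      for await (const chunk of llmStream) {
        // Chat model chunks are AIMessageChunks; content is a string here.
        controller.enqueue(encoder.encode(String(chunk.content)));
      }
      controller.close();
    },
  });

  return new Response(body, {
    headers: { "Content-Type": "text/plain; charset=utf-8" },
  });
}

The client loop from page.tsx should be able to read this response unchanged, since it is still a plain text stream.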
West African Crocodile
Did you find a solution for this, @Cynipid gall wasp?