Vercel-prisma-supabase-AI next.js app
Unanswered
Spectacled bear posted this in #help-forum
Spectacled bearOP
Hi! I can't get my app, which uses Prisma, Supabase, LangChain, Pinecone, and OpenAI, to store things in the PostgreSQL DB properly. The app is for chatting with PDF files, and I intend to store the chat history: the user's questions and the assistant's responses (generated by an OpenAI LLM based on vectors stored in Pinecone, and so on). I manage to store the user's questions in Supabase, but never the assistant's responses. The problem doesn't occur on localhost; there everything is stored without any issues. Deployed, when I try to store the second user question, the previous assistant response is suddenly stored first at that same moment, then the second question, and again no response to the second question, and so on... I can't fix this. Need help.
I think I await the LLM response generation properly. I also updated the connection strings according to various sources; I'm guessing it's a serverless functions issue... What the heck, guys?
Jakub
I will post below the functions and connection strings.
2 Replies
Spectacled bearOP
Connection strings
DATABASE_URL="postgres://postgres.HOST:PASS@aws-0-eu-central-1.pooler.supabase.com:6543/postgres?pgbouncer=true&connection_limit=1&connect_timeout=300"
DIRECT_URL="postgres://postgres.HOST:PASS@aws-0-eu-central-1.pooler.supabase.com:5432/postgres?connect_timeout=300"
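For reference, these two variables are wired into the Prisma datasource the usual way for Supabase + PgBouncer (a sketch only; field names as in the Prisma docs, my actual schema may differ slightly):

datasource db {
  provider  = "postgresql"
  url       = env("DATABASE_URL")   // pooled connection (PgBouncer, port 6543) used at runtime
  directUrl = env("DIRECT_URL")     // direct connection (port 5432) used for migrations
}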
Functions: POST, plus saveMessage, which is called twice from within POST:
export async function POST(request: NextRequest) {
  try {
    const { messages, fileKey, documentId } = await request.json();
    const query = messages[messages.length - 1].content;

    // Store the user's question (userId comes from auth, omitted from this snippet)
    await saveMessage(documentId, "user", query, userId);

    const { stream, handlers } = LangChainStream();

    const pinecone = new Pinecone();
    const index = pinecone.Index(process.env.PINECONE_INDEX!);
    const vectorStore = await PineconeStore.fromExistingIndex(new OpenAIEmbeddings(), {
      pineconeIndex: index,
      namespace: fileKey,
    });

    const model = new OpenAI({
      modelName: "gpt-3.5-turbo",
      streaming: true,
      callbackManager: CallbackManager.fromHandlers(handlers),
    });

    const chain = VectorDBQAChain.fromLLM(model, vectorStore, {
      k: 3,
      returnSourceDocuments: false,
    });

    chain
      .invoke({ query })
      .then(async (res) => {
        if (res) {
          // Store the assistant's response
          await saveMessage(documentId, "assistant", res.text, userId);
        }
      })
      .catch(console.error);

    return new StreamingTextResponse(stream);
  } catch (error) {
    console.error(error);
    return NextResponse.json({ error: "Internal server error" }, { status: 500 });
  }
}
async function saveMessage(
documentId: string,
role: Role,
content: string,
userId: string
) {
const document = await prismadb.document.update({
where: {
id: documentId,
userId,
},
data: {
messages: {
create: {
content,
role,
},
},
},
});
return document;
}
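One pattern I've seen suggested (not verified here) is to persist the assistant message from the stream's completion callback rather than from a detached .then(), so the write happens as part of the stream lifecycle while the serverless function is still running. Roughly, assuming the ai SDK's LangChainStream accepts these callbacks:

const { stream, handlers } = LangChainStream({
  // onCompletion receives the full generated text once streaming finishes and is
  // awaited before the stream closes, so the DB write should complete while the
  // function is still alive.
  onCompletion: async (completion) => {
    await saveMessage(documentId, "assistant", completion, userId);
  },
});

// The chain result is no longer needed for persistence; tokens flow through `handlers`.
chain.invoke({ query }).catch(console.error);

return new StreamingTextResponse(stream);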
Spectacled bearOP
Anyone?