Why am I getting an empty message whenever the LLM uses toolcalling? Is this normal?
Unanswered
Exotic Shorthair posted this in #help-forum
Exotic ShorthairOP
Hello everyone!
I am building a RAG application which basically works like any other RAG.
1. User uploads doc, it is vectorised and saved
2. User asks question
3. Chatbot answers.
I want the LLM to choose between 3 tools:
1. search vectorstore
2. If no sufficient answer, load whole doc into context
3. search Bing if necessary
I am using:
"ai": 3.4.33
"next": "14.1.3"
For some reason, whenever the LLM calls a tool, an empty message is appended to the messages array, which leads to my UI rendering one or more empty chat bubbles: each tool call adds a message whose content field is empty (see image).
This is my Chat component:
'use client';

import { useChat } from 'ai/react';

export function Chat() {
  // chatId and pagesText are defined elsewhere in the component
  const { messages, input, handleInputChange, handleSubmit, isLoading } =
    useChat({
      api: '/api/chat',
      maxSteps: 5,
      body: {
        chatId,
        pagesText
      }
    });

  return (
    // ui etc
  );
}
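By "filter out those empty messages" I mean something like the sketch below (the `isRenderable` helper is just a name I made up, not part of the AI SDK): skip any message whose content is empty before mapping messages to chat bubbles.

```typescript
// Minimal shape of a chat message for this sketch; the real useChat
// message type has more fields (id, toolInvocations, ...).
type ChatMessage = { role: string; content: string };

// Hypothetical helper: a message gets a bubble only if it has visible text.
// Tool-call steps arrive as assistant messages with an empty content field,
// so they are skipped.
function isRenderable(message: ChatMessage): boolean {
  return message.content.trim().length > 0;
}

// In the component, render only the visible messages, e.g.:
// messages.filter(isRenderable).map((m) => /* chat bubble */ m.content)
```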
This is my route:
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';
import { formatDocumentsAsString } from 'langchain/util/document';

export const maxDuration = 300;

export async function POST(req: Request) {
  const { messages, pagesText } = await req.json();
  const question = messages[messages.length - 1].content;

  const result = await streamText({
    model: openai('gpt-4o-mini'),
    system: `Here is the system prompt`,
    messages,
    maxSteps: 3,
    temperature: 0.5,
    tools: {
      searchDocument: {
        description: `Use this tool to search the document for relevant information to answer the question.`,
        parameters: z.object({
          question: z.string().describe("The user's question")
        }),
        execute: async ({ question }) => {
          // function for vectorsearch
          const context = formatDocumentsAsString(searchResults);
          return context;
        }
      }
      // other tools
    }
  });

  return result.toDataStreamResponse();
}
Is this expected behaviour, meaning I need to adjust my UI and filter out those empty messages, or am I doing something wrong?