Unstable LLM output for a given structured schema
Unanswered
Audubon's Oriole posted this in #help-forum
Audubon's Oriole (OP)
I am using a GPT model to generate structured responses with the Vercel AI SDK, but I am not sure why the LLM sometimes returns structured output and sometimes does not.
I am using the streamText function to generate the response. Has anyone else faced this issue?
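Roughly what my setup looks like (the model name, prompt, and expected JSON shape below are placeholders, not my exact code):

```ts
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

// Placeholder input text for illustration.
const articleText = '...';

// Sketch of the setup: streamText with a prompt that asks for JSON.
const result = streamText({
  model: openai('gpt-4o'),
  prompt:
    'Summarize the article below as JSON with fields "title" (string) and ' +
    '"tags" (array of strings). Respond with JSON only.\n\n' + articleText,
});

// streamText returns free-form text, so the JSON shape is only enforced by
// the prompt wording.
let text = '';
for await (const chunk of result.textStream) {
  text += chunk;
}

// Sometimes this parses cleanly, sometimes it throws because the model adds
// extra prose or markdown fences around the JSON.
const parsed = JSON.parse(text);
console.log(parsed);
```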