ai sdk - reasoning chunks not provided
Unanswered
Holland Lop posted this in #help-forum
Holland Lop (OP)
When streaming text through the AI SDK using OpenAI's o3-mini, there are no chunks for reasoning, despite reasoning tokens being used, as evidenced at the end by `event.experimental_providerMetadata`:
```typescript
import { streamText } from "ai";

// registry is the app's provider registry (createProviderRegistry from "ai")
const result = streamText({
  model: registry.languageModel(`openai:${o3m}`),
  prompt: message.payload.message,
  maxSteps: 10,
  onChunk(event) {
    console.log("onChunk event.type", event.chunk.type);
  },
  onFinish: (event) => {
    console.log("onFinish event.reasoning", event.reasoning);
    console.log("onFinish providerMetadata", event.experimental_providerMetadata);
  },
});

const dataStreamResponse = result.toDataStreamResponse({
  sendReasoning: true,
});
```
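For context, when a provider does emit reasoning, the SDK surfaces it as stream parts of type `reasoning`. A minimal way to watch for them, reusing `result` from above (a sketch assuming AI SDK 4.x part shapes, where reasoning parts carry a `textDelta`):

```typescript
// Sketch: log any reasoning parts in the full stream.
// With openai('o3-mini') this loop never matches, because the
// Chat Completions API only reports reasoning token counts in
// usage metadata and never streams the reasoning itself.
for await (const part of result.fullStream) {
  if (part.type === "reasoning") {
    process.stdout.write(part.textDelta);
  }
}
```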
5 Replies
Mini Satin
@Holland Lop I'm having the same issue, did you figure out a solution?
Mini Satin
PS: I found a solution.
```typescript
model: openai.responses('o4-mini'),
providerOptions: {
  openai: {
    reasoningEffort: 'medium',
    reasoningSummary: 'detailed',
  },
},
```
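Why this works: `openai('model')` goes through OpenAI's Chat Completions API, which never streams reasoning content, while `openai.responses('model')` uses the Responses API, where `reasoningSummary` asks OpenAI to stream a summary of the model's reasoning that the SDK then surfaces as reasoning chunks. A minimal sketch wiring it into a `streamText` call (assuming AI SDK 4.2+ for `providerOptions` and the `textDelta` chunk shape; the prompt is illustrative):

```typescript
import { streamText } from "ai";
import { openai } from "@ai-sdk/openai";

const result = streamText({
  // Responses API model; openai('o4-mini') would use Chat Completions
  // and never stream reasoning.
  model: openai.responses("o4-mini"),
  prompt: "How many r's are in strawberry?", // illustrative prompt
  providerOptions: {
    openai: {
      reasoningEffort: "medium",
      reasoningSummary: "detailed", // 'auto' is also accepted
    },
  },
  onChunk(event) {
    // chunks of type 'reasoning' now arrive alongside 'text-delta'
    if (event.chunk.type === "reasoning") {
      console.log("reasoning:", event.chunk.textDelta);
    }
  },
});

// forward the reasoning parts to the client as well
const response = result.toDataStreamResponse({ sendReasoning: true });
```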
@Mini Satin PS: I found a solution.
Tropical Parula
I haven't seen openai.responses before
what does that do differently than openai("model")?
ahh nvm I googled it
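For anyone else who finds this thread: `openai('model')` targets OpenAI's Chat Completions API, while `openai.responses('model')` targets the Responses API, and reasoning summaries are only exposed through the latter. A quick illustration (the model name is just an example):

```typescript
import { openai } from "@ai-sdk/openai";

// Same underlying model, two different OpenAI APIs:
const chatModel = openai("o4-mini");                // Chat Completions API
const responsesModel = openai.responses("o4-mini"); // Responses API (supports reasoning summaries)
```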