Commit 478acd8

feat(mistral): added magistral reasoning models (vercel#6715)
## background

Mistral released two new reasoning models (`magistral-small-2506` and `magistral-medium-2506`) that use `<think>...</think>` tags to separate reasoning from final responses.

## summary

- add extract reasoning middleware examples with mistral
- add model ids: `magistral-small-2506`, `magistral-medium-2506`

## verification

- all tests pass, including a new reasoning-specific test with the middleware
- content separation works in both `generateText` and `streamText` modes
- examples demonstrate magistral reasoning using the extract reasoning middleware

## future work

- update provider to extract reasoning once the mistral api exposes native reasoning events

---

Co-authored-by: Josh [email protected]
1 parent d7cb6b4 commit 478acd8
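For orientation, here is a minimal sketch of the tag-splitting idea the middleware applies. This is illustrative only, not the middleware's actual implementation, and `splitReasoning` is a hypothetical helper name:

```ts
// Illustrative only: how <think>...</think> content can be separated from the
// final answer. In the AI SDK, extractReasoningMiddleware handles this
// (including the streaming case).
function splitReasoning(raw: string, tagName = 'think') {
  const pattern = new RegExp(`<${tagName}>([\\s\\S]*?)</${tagName}>`, 'g');
  const reasoningParts: string[] = [];
  const text = raw
    .replace(pattern, (_, inner: string) => {
      reasoningParts.push(inner.trim());
      return '';
    })
    .trim();
  return { reasoningText: reasoningParts.join('\n'), text };
}

// splitReasoning('<think>15 * 24 = 360</think>\n\n360')
// -> { reasoningText: '15 * 24 = 360', text: '360' }
```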

File tree

6 files changed: +158 −11 lines


.changeset/lemon-yaks-move.md

Lines changed: 5 additions & 0 deletions
@@ -0,0 +1,5 @@
+---
+'@ai-sdk/mistral': patch
+---
+
+feat(mistral): added magistral reasoning models

content/providers/01-ai-sdk-providers/20-mistral.mdx

Lines changed: 46 additions & 8 deletions
@@ -143,6 +143,42 @@ const result = await generateText({
 });
 ```
 
+### Reasoning Models
+
+Mistral offers reasoning models that provide step-by-step thinking capabilities:
+
+- **magistral-small-2506**: Smaller reasoning model for efficient step-by-step thinking
+- **magistral-medium-2506**: More powerful reasoning model balancing performance and cost
+
+These models return content that includes `<think>...</think>` tags containing the reasoning process. To properly extract and separate the reasoning from the final answer, use the [extract reasoning middleware](/docs/reference/ai-sdk-core/extract-reasoning-middleware):
+
+```ts
+import { mistral } from '@ai-sdk/mistral';
+import {
+  extractReasoningMiddleware,
+  generateText,
+  wrapLanguageModel,
+} from 'ai';
+
+const result = await generateText({
+  model: wrapLanguageModel({
+    model: mistral('magistral-small-2506'),
+    middleware: extractReasoningMiddleware({
+      tagName: 'think',
+    }),
+  }),
+  prompt: 'What is 15 * 24?',
+});
+
+console.log('REASONING:', result.reasoningText);
+// Output: "Let me calculate this step by step..."
+
+console.log('ANSWER:', result.text);
+// Output: "360"
+```
+
+The middleware automatically parses the `<think>` tags and provides separate `reasoningText` and `text` properties in the result.
+
 ### Example
 
 You can use Mistral language models to generate text with the `generateText` function:

@@ -162,14 +198,16 @@ Mistral language models can also be used in the `streamText`, `generateObject`,
 
 ### Model Capabilities
 
-| Model                  | Image Input         | Object Generation   | Tool Usage          | Tool Streaming      |
-| ---------------------- | ------------------- | ------------------- | ------------------- | ------------------- |
-| `pixtral-large-latest` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
-| `mistral-large-latest` | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
-| `mistral-small-latest` | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
-| `ministral-3b-latest`  | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
-| `ministral-8b-latest`  | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
-| `pixtral-12b-2409`     | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
+| Model                   | Image Input         | Object Generation   | Tool Usage          | Tool Streaming      |
+| ----------------------- | ------------------- | ------------------- | ------------------- | ------------------- |
+| `pixtral-large-latest`  | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
+| `mistral-large-latest`  | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
+| `mistral-small-latest`  | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
+| `magistral-small-2506`  | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
+| `magistral-medium-2506` | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
+| `ministral-3b-latest`   | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
+| `ministral-8b-latest`   | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
+| `pixtral-12b-2409`      | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
 
 <Note>
   The table above lists popular models. Please see the [Mistral
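For contrast with the documented middleware path, note what happens without it: the provider returns the raw tagged text, so `result.text` keeps the `<think>` block, which is exactly what the provider test added later in this commit asserts. A minimal sketch, with illustrative output:

```ts
import { mistral } from '@ai-sdk/mistral';
import { generateText } from 'ai';

// Without wrapLanguageModel + extractReasoningMiddleware, the reasoning stays
// inline in result.text as raw <think>...</think> markup.
const result = await generateText({
  model: mistral('magistral-small-2506'),
  prompt: 'What is 15 * 24?',
});

console.log(result.text);
// e.g. "<think>Let me calculate...</think>\n\n360" (illustrative output)
```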
Lines changed: 32 additions & 0 deletions
@@ -0,0 +1,32 @@
+import { mistral } from '@ai-sdk/mistral';
+import {
+  extractReasoningMiddleware,
+  generateText,
+  wrapLanguageModel,
+} from 'ai';
+import 'dotenv/config';
+
+async function main() {
+  const result = await generateText({
+    model: wrapLanguageModel({
+      model: mistral('magistral-medium-2506'),
+      middleware: extractReasoningMiddleware({
+        tagName: 'think',
+      }),
+    }),
+    prompt:
+      'Solve this step by step: If a train travels 60 mph for 2 hours, how far does it go?',
+    maxOutputTokens: 500,
+  });
+
+  console.log('\nREASONING:\n');
+  console.log(result.reasoningText);
+
+  console.log('\nTEXT:\n');
+  console.log(result.text);
+
+  console.log();
+  console.log('Usage:', result.usage);
+}
+
+main().catch(console.error);
Lines changed: 43 additions & 0 deletions
@@ -0,0 +1,43 @@
+import { mistral } from '@ai-sdk/mistral';
+import { extractReasoningMiddleware, streamText, wrapLanguageModel } from 'ai';
+import 'dotenv/config';
+
+async function main() {
+  const result = streamText({
+    model: wrapLanguageModel({
+      model: mistral('magistral-small-2506'),
+      middleware: extractReasoningMiddleware({
+        tagName: 'think',
+      }),
+    }),
+    prompt: 'What is 2 + 2?',
+  });
+
+  console.log('Mistral reasoning model with extracted reasoning:');
+  console.log();
+
+  let enteredReasoning = false;
+  let enteredText = false;
+
+  for await (const part of result.fullStream) {
+    if (part.type === 'reasoning') {
+      if (!enteredReasoning) {
+        enteredReasoning = true;
+        console.log('REASONING:');
+      }
+      process.stdout.write(part.text);
+    } else if (part.type === 'text') {
+      if (!enteredText) {
+        enteredText = true;
+        console.log('\n\nTEXT:');
+      }
+      process.stdout.write(part.text);
+    }
+  }
+
+  console.log();
+  console.log();
+  console.log('Usage:', await result.usage);
+}
+
+main().catch(console.error);
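A side note on consuming this stream: when only the final answer matters, `textStream` can be used instead of `fullStream`. This is a sketch under the assumption that the middleware routes all `<think>` content to reasoning parts, so text deltas no longer contain the reasoning:

```ts
import { mistral } from '@ai-sdk/mistral';
import { extractReasoningMiddleware, streamText, wrapLanguageModel } from 'ai';

async function main() {
  const result = streamText({
    model: wrapLanguageModel({
      model: mistral('magistral-small-2506'),
      middleware: extractReasoningMiddleware({ tagName: 'think' }),
    }),
    prompt: 'What is 2 + 2?',
  });

  // Final-answer deltas only; the extracted reasoning is not part of textStream.
  for await (const delta of result.textStream) {
    process.stdout.write(delta);
  }
}

main().catch(console.error);
```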

packages/mistral/src/mistral-chat-language-model.test.ts

Lines changed: 29 additions & 3 deletions
@@ -377,6 +377,34 @@ describe('doGenerate', () => {
       ]
     `);
   });
+
+  it('should return raw text with think tags for reasoning models', async () => {
+    const reasoningModel = provider.chat('magistral-small-2506');
+
+    prepareJsonResponse({
+      content:
+        "<think>\nLet me think about this problem step by step.\nFirst, I need to understand what the user is asking.\nThen I can provide a helpful response.\n</think>\n\nHello! I'm ready to help you with your question.",
+    });
+
+    const { content } = await reasoningModel.doGenerate({
+      prompt: TEST_PROMPT,
+    });
+
+    expect(content).toMatchInlineSnapshot(`
+      [
+        {
+          "text": "<think>
+      Let me think about this problem step by step.
+      First, I need to understand what the user is asking.
+      Then I can provide a helpful response.
+      </think>
+
+      Hello! I'm ready to help you with your question.",
+          "type": "text",
+        },
+      ]
+    `);
+  });
 });
 
 describe('doStream', () => {

@@ -772,9 +800,7 @@ describe('doStream with raw chunks', () => {
     includeRawChunks: true,
   });
 
-  const chunks = await convertReadableStreamToArray(stream);
-
-  expect(chunks).toMatchInlineSnapshot(`
+  expect(await convertReadableStreamToArray(stream)).toMatchInlineSnapshot(`
     [
       {
         "type": "stream-start",

packages/mistral/src/mistral-chat-options.ts

Lines changed: 3 additions & 0 deletions
Original file line numberDiff line numberDiff line change
@@ -8,6 +8,9 @@ export type MistralChatModelId =
88
| 'mistral-large-latest'
99
| 'mistral-small-latest'
1010
| 'pixtral-large-latest'
11+
// reasoning models
12+
| 'magistral-small-2506'
13+
| 'magistral-medium-2506'
1114
// free
1215
| 'pixtral-12b-2409'
1316
// legacy
