### Before submitting your bug report
- [x] I believe this is a bug. I'll try to join the Continue Discord for questions
- [x] I'm not able to find an open issue that reports the same bug
- [x] I've seen the troubleshooting guide on the Continue Docs
### Relevant environment info
- OS:
- Continue version:
- IDE version:
- Model:
- config:
OR link to assistant in Continue hub:
### Description
Ollama already supports streaming responses, but Continue still considers it unsupported, which prevents Continue from streaming the model's output back in Agent mode.

This also breaks models that use `<think>...</think>` blocks: the model's actual answer ends up inside the collapsed "Thought for ..." section instead of being rendered normally.

A workaround is to access the Ollama model through an OpenAI-compatible API instead, which works normally.
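For reference, a minimal `config.yaml` sketch of that workaround, assuming Ollama's default OpenAI-compatible endpoint at `http://localhost:11434/v1` (the model name is just an example):

```yaml
models:
  - name: Qwen3 (OpenAI-compatible)
    # Route the Ollama model through the openai provider so Continue
    # talks to Ollama's OpenAI-compatible API instead of the native one.
    provider: openai
    model: qwen3
    apiBase: http://localhost:11434/v1
    roles:
      - chat
```

With this config, Agent-mode output streams as expected.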
### To reproduce
Use the ollama provider with a model that supports reasoning and tool calls, such as qwen3 (see the config sketch below), and ask a question in Agent mode.
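A minimal `config.yaml` sketch that reproduces the issue (model name assumed for illustration):

```yaml
models:
  - name: Qwen3 (Ollama)
    # Native ollama provider: Continue treats this as not supporting
    # streaming in Agent mode, so output is not streamed back.
    provider: ollama
    model: qwen3
    roles:
      - chat
```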
### Log output