Auto-Compose think #446
Replies: 2 comments 1 reply
For the moment I did this:
Let’s look at this. Ollama offers a cloud service with powerful models. These cloud models don’t use local disk space and don’t consume your CPU, since everything runs on Ollama’s servers. The small models that run locally can’t really follow a full conversation thread, so to get good answers, the AI needs proper context in the prompt. It’s also useful to strip the “thinking” section included in some responses. A more elegant solution would be possible if the cloud version supported `--json` output, but I couldn’t find a way to enable it. In the nchat configuration, I changed the auto-compose command so it points to a bash script:

`%1` is the temporary .txt path provided by nchat.
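The script itself isn’t shown here, so as a rough sketch only: a helper along these lines could read the draft from the temp file, ask the model, strip the `<think>…</think>` block, and write the reply back. The model name `qwen3`, the reliance on GNU sed, and the exact `<think>` tag format are assumptions, not the original script.

```shell
#!/usr/bin/env bash
# Hypothetical nchat auto-compose helper (a sketch, not the author's script).

# Delete the <think>...</think> block some models prepend to their answer.
# GNU sed trick: slurp all input into the pattern space, then substitute
# across newlines in one pass.
strip_think() {
  sed -e ':a' -e 'N' -e '$!ba' -e 's/<think>.*<\/think>[[:space:]]*//'
}

main() {
  prompt_file="$1"   # nchat substitutes %1 with this temporary .txt path
  # Send the drafted message as the prompt, clean the reply, and overwrite
  # the temp file so nchat inserts the result into the compose box.
  ollama run qwen3 "$(cat "$prompt_file")" | strip_think > "$prompt_file.tmp"
  mv "$prompt_file.tmp" "$prompt_file"
}

# Only run when invoked with a file argument (as nchat would).
if [ $# -ge 1 ]; then
  main "$1"
fi
```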
I have Ollama running locally with Qwen3. The model has a “think” step, and its output gets included in the response. Is there a way to avoid that? Right now I get the answer, but it also shows everything inside the `<think>` tags.