The Qwen3.5:397b-cloud model is now available on Ollama. I dropped it into TGPT just to see what would happen, and it loaded. The typing is so fast it seems to blink across the screen.
Here is the command I used to load it:
tgpt -provider ollama -model qwen3.5:397b-cloud -i
You will need Ollama running in the background.
These large cloud models also require you to be logged into Ollama.
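For anyone setting this up from scratch, the prerequisites above boil down to roughly the following sequence. This is a sketch assuming a standard Ollama install; `ollama signin` is the login command Ollama provides for its cloud models.

```shell
# Start the Ollama server in the background (skip if it is already running as a service)
ollama serve &

# Log in to your Ollama account; cloud models like qwen3.5:397b-cloud require this
ollama signin

# Launch tgpt in interactive mode, pointed at the cloud model
tgpt -provider ollama -model qwen3.5:397b-cloud -i
```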
That is a 10 MB chat app driving a roughly 400B (397B) parameter LLM!
No GPU or storage needed.
No API keys.
Thank you, aandrew-me.