Using TGPT, a 10 MB chat app, with a free 400B model is crazy! #421

@jkemp814

Description

The qwen3.5:397b-cloud model is available on Ollama. I dropped it into TGPT just to see what would happen, and it loaded. The output is so fast it seems to blink across the screen.

Here is the command I used to load it:

tgpt -provider ollama -model qwen3.5:397b-cloud -i

You will need Ollama running in the background. These large cloud models also require you to log in to Ollama.
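For anyone reproducing this, the whole setup can be sketched as below. The install script and the `ollama signin` step are based on Ollama's documented workflow for cloud models, but treat the exact commands (and the model tag) as assumptions; check what the Ollama site and `ollama list` actually show for your setup.

```shell
# Install Ollama (Linux install script; see ollama.com for other platforms)
curl -fsSL https://ollama.com/install.sh | sh

# Run the Ollama server in the background
ollama serve &

# Cloud models need an Ollama account; this opens a browser sign-in
# (assumed command, per Ollama's cloud-model workflow)
ollama signin

# Point tgpt at the hosted model -- nothing is downloaded locally;
# the "-cloud" tag makes Ollama proxy requests to its hosted model
tgpt -provider ollama -model qwen3.5:397b-cloud -i
```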

That is a 10 MB chat app using a 400B-parameter (397B) LLM!

No GPU or storage needed.

No API keys.

Thank you, AAndrew-me.
