Universal Local Model Support via Ollama/OpenAI-Compatible Providers #24166
magudeshhmw started this conversation in Ideas
Problem Statement
Gemini CLI currently supports only Google Gemini models, plus a limited local option (LiteRT-LM). Many users want to run AI workflows entirely offline with local LLM providers such as Ollama, LM Studio, or LocalAI.
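One detail worth noting (not stated in the original post, but true of all three providers named above): Ollama, LM Studio, and LocalAI each expose an OpenAI-compatible `/v1/chat/completions` endpoint on localhost, so a single request shape can cover them. The helper and default base URLs below are illustrative assumptions, not part of any proposed API:

```typescript
// Minimal sketch: one OpenAI-compatible request body serves all three
// local providers. `buildChatRequest` is a hypothetical helper.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

// Build the JSON body for an OpenAI-style chat-completions call.
function buildChatRequest(model: string, messages: ChatMessage[]): string {
  return JSON.stringify({ model, messages, stream: false });
}

// Common default base URLs for the providers mentioned above.
const DEFAULT_BASE_URLS = {
  ollama: "http://localhost:11434/v1", // Ollama's default port
  lmStudio: "http://localhost:1234/v1", // LM Studio's default port
  localAI: "http://localhost:8080/v1", // LocalAI's default port
};

// Example body for a local model served by any of the three:
const body = buildChatRequest("llama3", [{ role: "user", content: "Hello" }]);
```

The same body could then be POSTed to `${baseUrl}/chat/completions` for whichever provider is configured.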
Proposed Solution
Extend the ContentGenerator interface to support multiple local model providers:
Implementation Plan
- Add `AuthType.OLLAMA` and `AuthType.OPENAI_COMPATIBLE` to the enum
- Create `OllamaContentGenerator` and `OpenAiCompatibleContentGenerator` implementing the `ContentGenerator` interface
- Update `createContentGenerator()` to route to the appropriate provider based on AuthType
- Read provider endpoints from environment variables (e.g., `OPENAI_BASE_URL`)
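The plan above can be sketched roughly as follows. Note this is an illustrative outline, not Gemini CLI's actual code: the enum values, class bodies, and environment-variable names (`OLLAMA_HOST`, `OPENAI_BASE_URL`) are assumptions, and the generators are stubs.

```typescript
// Assumed shapes mirroring the names in the plan above; not the real
// gemini-cli definitions.
enum AuthType {
  USE_GEMINI = "gemini-api-key",
  OLLAMA = "ollama", // proposed
  OPENAI_COMPATIBLE = "openai-compatible", // proposed
}

interface ContentGenerator {
  generateContent(prompt: string): Promise<string>;
}

class OllamaContentGenerator implements ContentGenerator {
  constructor(readonly baseUrl: string) {}
  async generateContent(prompt: string): Promise<string> {
    // A real implementation would POST to `${this.baseUrl}/api/generate`.
    throw new Error("stub: not implemented in this sketch");
  }
}

class OpenAiCompatibleContentGenerator implements ContentGenerator {
  constructor(readonly baseUrl: string) {}
  async generateContent(prompt: string): Promise<string> {
    // A real implementation would POST to `${this.baseUrl}/chat/completions`.
    throw new Error("stub: not implemented in this sketch");
  }
}

// Routing step: pick a provider from AuthType. `env` stands in for
// process.env so the sketch stays self-contained; variable names are
// assumptions.
function createContentGenerator(
  authType: AuthType,
  env: Record<string, string | undefined> = {},
): ContentGenerator {
  switch (authType) {
    case AuthType.OLLAMA:
      return new OllamaContentGenerator(
        env["OLLAMA_HOST"] ?? "http://localhost:11434",
      );
    case AuthType.OPENAI_COMPATIBLE:
      return new OpenAiCompatibleContentGenerator(
        env["OPENAI_BASE_URL"] ?? "http://localhost:1234/v1",
      );
    default:
      throw new Error(`Unsupported auth type: ${authType}`);
  }
}
```

The factory keeps provider selection in one place, so adding another local backend later only means a new enum value, a new generator class, and one more `case`.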
Timeline (350 hours)
Relevant Experience
I've built offline AI applications using Ollama + FastAPI, and I'm familiar with the OpenAI API
specification.
Questions for Maintainers
Looking forward to feedback!