An intelligent CLI companion for developers: an AI coding assistant powered by Ollama that executes your requests instead of just generating code blocks.
The Problem with Other AI Assistants:

```
You: "create a Python script that fetches weather data and run it"
Other AI: Here's the code... [shows a code block]
You: 🤔 Now I have to copy, paste, save, and run it manually...
```

The Ollamacode CLI Solution:

```
You: "create a Python script that fetches weather data and run it"
Ollamacode CLI: 🔧 Creating weather_fetcher.py...
🔧 Running python weather_fetcher.py...
✅ Current weather: 72°F, sunny ☀️
You: 🎉 It actually works!
```

See Ollamacode CLI in action with these real-world demonstrations:
Watch Ollamacode CLI create a Python script from natural language and execute it automatically.
See how Ollamacode CLI can initialize a git repository and create commits with intelligent commit messages.
Observe Ollamacode CLI studying your codebase and automatically updating documentation.
- 🎯 Direct Tool Execution - LLM calls tools directly instead of generating code blocks
- 🤖 AI-Powered Actions - Create files, run commands, manage git - all automatically
- 📝 Smart File Operations - Intelligent code generation and execution from natural language
- 🔧 Git Workflow Integration - Complete version control operations with AI assistance
- 🔍 Code Search & Analysis - Find patterns, TODOs, and functions across your codebase
- 🎨 Syntax Highlighting - Beautiful code display with auto-language detection (14+ languages)
- ⚡ Caching System - Fast responses with intelligent caching
- 🛡️ Safety First - Permission system for secure file operations
- 💡 Auto-Completion - Smart slash commands and file reference completion
- 🚨 Enhanced Error Messages - Contextual error handling with actionable suggestions
- 🎯 Project Context - Automatically understands your project structure
- 📋 Session Management - Save and resume coding sessions
- 🌐 Network Endpoints - Connect to remote Ollama servers for powerful models
- ⚙️ Persistent Config - Save preferred endpoints and models
- Python 3.9+
- Ollama with a model that supports tool calling (function calling)
- Tested with:
  - `qwen3:latest` - recommended for best tool calling support
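In practice, "tool calling" means the model returns structured function calls instead of prose. As a rough sketch (the `write_file` tool schema below is hypothetical, not Ollamacode CLI's actual tool set), a request to Ollama's `/api/chat` endpoint advertising one callable tool looks like this:

```python
import json

def build_tool_call_request(model: str, prompt: str) -> dict:
    """Build a chat request that advertises one callable tool to the model."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
        # OpenAI-style function schema, as accepted by Ollama's /api/chat
        "tools": [{
            "type": "function",
            "function": {
                "name": "write_file",  # hypothetical example tool
                "description": "Create a file with the given contents",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "path": {"type": "string"},
                        "content": {"type": "string"},
                    },
                    "required": ["path", "content"],
                },
            },
        }],
    }

request = build_tool_call_request("qwen3:latest", "create hello.py that prints hi")
print(json.dumps(request, indent=2))
```

A model that supports tool calling replies with a `tool_calls` entry naming the function and its arguments, which the CLI can then execute directly.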
```bash
# Install from source
git clone https://github.com/tooyipjee/ollamacode_cli.git
cd ollamacode_cli
pip install -e .

# Make sure Ollama is running
ollama serve

# Pull a compatible model with tool calling support
ollama pull qwen3:latest

# Start coding!
ollamacode
```

```bash
# Interactive mode
ollamacode

# Direct command
ollamacode "explain this code" < script.py

# With specific model
ollamacode --model gemma3 "optimize this function"

# Use remote Ollama server
ollamacode --endpoint http://192.168.1.100:11434

# Set default endpoint permanently
ollamacode --set-endpoint http://gpu-server:11434
```

Create functional code files from natural language descriptions:
You: write a file that generates sine wave data and saves as CSV

🦙 Ollamacode CLI creates: `sine_wave_generator.py`

```python
import numpy as np
import pandas as pd

def generate_sine_wave_data(frequency, amplitude, duration, sample_rate):
    """Generate sine wave data with specified parameters."""
    t = np.linspace(0, duration, int(sample_rate * duration))
    sine_wave = amplitude * np.sin(2 * np.pi * frequency * t)
    return pd.DataFrame({
        'time': t,
        'amplitude': sine_wave
    })

def save_to_csv(data, filename='sine_wave.csv'):
    """Save data to CSV file."""
    data.to_csv(filename, index=False)
    print(f"✅ Saved {len(data)} data points to {filename}")

if __name__ == "__main__":
    # Generate 5 seconds of a 440 Hz sine wave
    data = generate_sine_wave_data(frequency=440, amplitude=1.0, duration=5, sample_rate=1000)
    save_to_csv(data)
```

All code responses feature automatic language detection and rich formatting:
You: show me a Python function for fibonacci

🦙 Ollamacode CLI responds with highlighted code:

```python
def fibonacci(n: int) -> int:
    """Calculate the nth Fibonacci number using dynamic programming."""
    if n <= 1:
        return n
    a, b = 0, 1
    for _ in range(2, n + 1):
        a, b = b, a + b
    return b
```

Get intelligent suggestions as you type:
```
You: /h
💡 Completions: /help, /headless

You: @mai
💡 File suggestions: @main.py, @main.js

You: help me with git
💡 Suggestions: git status, git diff, git log
```

Streamline your workflow with built-in commands:
```
/help                # Show all available commands
/model gemma3        # Switch AI models
/status              # View session information
/clear               # Clear conversation history
/cache clear         # Clear response cache
/permissions status  # Check operation permissions
/config              # View current configuration
```

Easily reference files in your conversations:
You: explain @main.py and suggest improvements

🦙 Ollamacode CLI automatically reads and analyzes the file:

```
File: main.py [Content displayed with syntax highlighting]
Based on your main.py file, here are some improvements...
```
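File references like `@main.py` can be resolved with a simple substitution pass before the prompt reaches the model. This is an illustrative sketch, not the actual implementation; `expand_file_refs` and its `@path` pattern are assumptions:

```python
import re
from pathlib import Path

FILE_REF = re.compile(r"@([\w./-]+)")

def expand_file_refs(prompt: str, root: str = ".") -> str:
    """Replace each @path token with the file's contents, when the file exists."""
    def replace(match: re.Match) -> str:
        path = Path(root) / match.group(1)
        if path.is_file():
            return f"\n--- File: {match.group(1)} ---\n{path.read_text()}\n"
        return match.group(0)  # leave unknown references untouched
    return FILE_REF.sub(replace, prompt)
```

Unmatched references stay in the prompt verbatim, so a stray `@` in normal text does no harm.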
### 🛡️ Smart Permission System
Safe file operations with granular control:
```bash
⚠️ Permission needed to modify files: write to script.py
Allow? (1=once, 2=session, 3=no): 2
✅ All operations approved for this session
```

```
# Use /permissions to manage:
/permissions status       # Check current permissions
/permissions reset        # Reset all permissions
/permissions approve-all  # Approve all for session
```
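The once/session/no choice maps naturally onto a small in-memory store. A minimal sketch of the idea (class and method names here are hypothetical, not Ollamacode CLI's internals):

```python
from enum import Enum

class Decision(Enum):
    """The three answers offered by the permission prompt."""
    ONCE = "1"
    SESSION = "2"
    DENY = "3"

class PermissionStore:
    """Track which operation kinds have been approved for the session."""

    def __init__(self):
        self.session_approved = set()

    def record(self, operation: str, choice: Decision) -> bool:
        """Record the user's answer; return True if the operation may proceed now."""
        if choice is Decision.SESSION:
            self.session_approved.add(operation)
            return True
        return choice is Decision.ONCE

    def is_allowed(self, operation: str) -> bool:
        """A session-approved operation never prompts again."""
        return operation in self.session_approved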
Get helpful, actionable error guidance:
```
❌ FileNotFoundError
File not found: config.txt

💡 Suggestions:
• Check if the file path is correct
• Use tab completion or `/complete @filename` to find files
• Try using absolute paths instead of relative paths

For more help:
• Type `/help` for available commands
• Report issues: https://github.com/tooyipjee/ollamacode_cli/issues
```

Lightning-fast responses for repeated queries.
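A prompt-level cache can be as simple as hashing the (model, prompt) pair. A sketch of the idea, not the actual cache implementation:

```python
import hashlib

class ResponseCache:
    """Cache responses so identical (model, prompt) pairs reuse the same answer."""

    def __init__(self):
        self._store = {}
        self.hits = 0
        self.misses = 0

    @staticmethod
    def _key(model: str, prompt: str) -> str:
        # Hash both fields so a model switch never returns a stale answer.
        return hashlib.sha256(f"{model}\x00{prompt}".encode()).hexdigest()

    def get(self, model: str, prompt: str):
        entry = self._store.get(self._key(model, prompt))
        if entry is None:
            self.misses += 1
        else:
            self.hits += 1
        return entry

    def put(self, model: str, prompt: str, response: str) -> None:
        self._store[self._key(model, prompt)] = response

    @property
    def hit_rate(self) -> float:
        total = self.hits + self.misses
        return self.hits / total if total else 0.0
```

The hit rate reported by `/cache status` would fall out of the same counters.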
```
You: explain how bubble sort works
⏳ Thinking...          # First time: AI generates response

You: explain how bubble sort works
💨 Cached response      # Instant response from cache!

/cache status           # View cache statistics
Cache: 15 entries, 2.3 MB, 85% hit rate
```

Ollamacode CLI automatically understands your project.
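Project detection can be approximated by checking for well-known marker files. A sketch with an illustrative subset of markers (the real detection presumably inspects more signals):

```python
from pathlib import Path

# Marker files mapped to a project-type label (illustrative subset).
MARKERS = {
    "requirements.txt": "Python",
    "pyproject.toml": "Python",
    "package.json": "JavaScript/TypeScript",
    "Cargo.toml": "Rust",
    "go.mod": "Go",
    "pom.xml": "Java (Maven)",
}

def detect_project_type(root: str = ".") -> str:
    """Return a label for the first recognised marker file found in root."""
    for marker, label in MARKERS.items():
        if (Path(root) / marker).exists():
            return label
    return "Unknown"
```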
```
📁 Project context automatically loaded
📋 Project: Python Web API (FastAPI)
📄 Found: requirements.txt, main.py, models/, tests/
```

You: add error handling to my API

🦙 Response considers your FastAPI project structure:
"I'll help you add comprehensive error handling to your FastAPI application..."

Save and resume your coding sessions:
```
# Sessions are auto-saved
Session saved as session_1234567890

# Resume later
ollamacode --resume
Continuing session session_1234567890

# List all sessions
/sessions
• session_1234567890 - 15 messages - 2024-01-15 14:30
• session_1234567891 - 8 messages - 2024-01-15 15:45
```

Ollamacode CLI supports connecting to Ollama servers running on different machines, allowing you to leverage more powerful hardware for heavy models.
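When several sources can supply the server URL (the `--endpoint` flag, the saved config file, the `OLLAMA_URL` environment variable), the client has to pick one. The precedence sketched below (flag, then config, then environment, then localhost default) is an illustrative assumption, not documented behaviour:

```python
import os

DEFAULT_URL = "http://localhost:11434"

def resolve_endpoint(cli_flag=None, saved_config=None, env=None) -> str:
    """Pick the Ollama URL from the first source that provides one.

    Assumed precedence: --endpoint flag > config file > OLLAMA_URL env > default.
    """
    env = os.environ if env is None else env
    return (
        cli_flag
        or (saved_config or {}).get("ollama_url")
        or env.get("OLLAMA_URL")
        or DEFAULT_URL
    )
```

Under this scheme a temporary `--endpoint` always wins, while `--set-endpoint` simply rewrites the config-file value that sits one rung lower.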
```bash
# Use a remote server temporarily
ollamacode --endpoint http://192.168.1.100:11434 "explain this algorithm"

# Set as your default endpoint
ollamacode --set-endpoint http://gpu-server:11434
ollamacode --set-model llama3.1:70b

# Now all sessions use the remote server
ollamacode "help me optimize this code"
```

Home Lab Setup:

```bash
# Powerful desktop as Ollama server (192.168.1.100)
# Laptop for development
ollamacode --set-endpoint http://192.168.1.100:11434
ollamacode --set-model llama3.1:70b
```

Cloud Deployment:

```bash
# Remote server with GPU acceleration
ollamacode --set-endpoint https://my-ollama-server.com:11434
ollamacode --set-model qwen2.5-coder:32b
```

Development Team:

```bash
# Shared team server
ollamacode --set-endpoint http://team-ai-server:11434
ollamacode --set-model codellama:34b
```

```bash
# View current configuration
ollamacode --config-only
```

```json
{
  "ollama_url": "http://gpu-server:11434",
  "default_model": "llama3.1:70b"
}
```

```bash
# Switch back to local temporarily
ollamacode --endpoint http://localhost:11434 "quick test"

# Switch back to local permanently
ollamacode --set-endpoint http://localhost:11434
```

Switch between different Ollama models seamlessly:
```
You: /model
Current model: gemma3

You: /model codellama
✅ Switched to model: codellama

You: /model qwen2
✅ Switched to model: qwen2
```

Perfect for scripts and automation:
```bash
# Single command
echo "def hello():" | ollamacode "complete this function"

# Batch processing
ollamacode "review this code for security issues" < app.py > review.md

# With specific model
cat large_file.py | ollamacode --model gemma3 "summarize this code"
```

Built-in git operations awareness.
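Status displays like the one below are typically built from `git status --porcelain`, whose two leading columns encode index (staged) and worktree (modified) state. A parsing sketch, assuming the short porcelain format:

```python
def parse_git_status(porcelain: str) -> dict:
    """Group `git status --porcelain` lines into staged/modified/untracked buckets."""
    status = {"staged": [], "modified": [], "untracked": []}
    for line in porcelain.splitlines():
        if not line:
            continue
        if line.startswith("??"):
            status["untracked"].append(line[3:])
            continue
        index, worktree, path = line[0], line[1], line[3:]
        if index != " ":      # column 1: change already staged in the index
            status["staged"].append(path)
        if worktree != " ":   # column 2: change in the worktree, not yet staged
            status["modified"].append(path)
    return status
```

A file staged and then edited again (`MM`) lands in both buckets, matching what `git status` itself reports.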
You: what files have I changed?

```
🦙 Git Repository Status:
🌿 Branch: feature/new-api
✨ Clean: False
📝 Modified: api/routes.py, models/user.py
📋 Staged: tests/test_auth.py
❓ Untracked: config/new_settings.py
```

You: help me write a commit message

🦙 Based on your changes, here's a suggested commit message:
"feat: enhance user authentication with new routes and tests"

Syntax highlighting and intelligent assistance for:
- Python - Full support with pip, virtual environments
- JavaScript/TypeScript - Node.js, React, frameworks
- Rust - Cargo projects, error handling
- Go - Modules, standard library
- Java - Maven/Gradle projects
- C/C++ - CMake, standard libraries
- HTML/CSS - Web development
- Bash/Shell - Script automation
- SQL - Database queries
- JSON/YAML - Configuration files
- Markdown - Documentation
- And more!
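Auto-detection for highlighting can start from a simple extension-to-language table. A sketch with an illustrative subset (the real detection presumably covers more cases and file-content heuristics):

```python
from pathlib import Path

# Illustrative subset of an extension-to-language table for highlighting.
EXT_TO_LANG = {
    ".py": "python", ".js": "javascript", ".ts": "typescript",
    ".rs": "rust", ".go": "go", ".java": "java",
    ".c": "c", ".cpp": "cpp", ".html": "html", ".css": "css",
    ".sh": "bash", ".sql": "sql", ".json": "json",
    ".yaml": "yaml", ".yml": "yaml", ".md": "markdown",
}

def detect_language(filename: str, default: str = "text") -> str:
    """Map a filename to a highlighting language by its (case-insensitive) extension."""
    return EXT_TO_LANG.get(Path(filename).suffix.lower(), default)
```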
```
# View current config
/config
Ollama URL: http://localhost:11434
Model: gemma3
Project Root: /Users/dev/my-project
Context Dir: .ollamacode
```

```bash
# View current configuration
ollamacode --config-only

# Set default endpoint (saves to ~/.ollamacode/config.json)
ollamacode --set-endpoint http://192.168.1.100:11434

# Set default model
ollamacode --set-model llama3.1:70b

# Use different endpoint temporarily (doesn't save)
ollamacode --endpoint http://localhost:11434
```

```bash
export OLLAMA_URL="http://localhost:11434"
export OLLAMA_MODEL="gemma3"
export OLLAMACODE_CONTEXT_DIR=".ollamacode"
```

Create `.ollamacode/config.json` in your project:
```json
{
  "ollama_url": "http://gpu-server:11434",
  "default_model": "llama3.1:70b",
  "auto_save_sessions": true,
  "show_diff_preview": true,
  "cache_enabled": true,
  "syntax_highlighting": true
}
```

```bash
# Local development
ollamacode --set-endpoint http://localhost:11434
ollamacode --set-model gemma3

# Use powerful GPU server for heavy models
ollamacode --set-endpoint http://192.168.1.100:11434
ollamacode --set-model llama3.1:70b

# Cloud deployment
ollamacode --set-endpoint http://my-ollama-server.com:11434
ollamacode --set-model qwen2.5-coder:32b
```

We welcome contributions! Please see our Contributing Guide for details.
```bash
git clone https://github.com/tooyipjee/ollamacode_cli.git
cd ollamacode_cli
pip install -e ".[dev]"

# Run tests
python -m pytest tests/

# Run specific test suites
python tests/test_file_creation.py
python tests/test_error_handling.py
```

Ollama not running:
```bash
# Start Ollama service
ollama serve

# Verify it's running
curl http://localhost:11434/api/version
```

Model not available:

```bash
# Pull the model
ollama pull gemma3

# List available models
ollama list
```

Permission errors:

```
# Use auto-approval for session
/permissions approve-all
# Or approve individual operations as prompted
```

Cache issues:

```
# Clear the cache
/cache clear
# Check cache status
/cache status
```

Endpoint connection issues:

```bash
# Test if remote Ollama server is accessible
curl http://192.168.1.100:11434/api/version

# Check current endpoint configuration
ollamacode --config-only

# Reset to localhost if having issues
ollamacode --set-endpoint http://localhost:11434

# Test with temporary endpoint override
ollamacode --endpoint http://localhost:11434 "test connection"
```

This project is licensed under the MIT License - see the LICENSE file for details.
- Ollama for the amazing local LLM platform
- Rich for beautiful terminal formatting
- The open-source community for inspiration and feedback
Ready to supercharge your coding workflow? 🚀

```bash
pip install ollamacode_cli
ollamacode
```

Happy coding with AI! 🦙✨


