local-code

An offline AI coding assistant for the Raspberry Pi 5. It works without internet - perfect for airplanes.

What it does

local-code is a wrapper around aider, which lets you chat with an AI that can:

  • Read your code files
  • Write and edit files for you
  • Suggest shell commands to run
  • Auto-commit changes to git

It runs entirely on your Pi using Ollama - no cloud, no API keys, no internet needed.

Quick start

cd ~/your-project
local-code

Then just tell it what you want:

  • "add a login page"
  • "fix the bug in auth.py"
  • "write tests for the utils module"

How it works

┌─────────────┐     ┌─────────────┐     ┌─────────────┐
│  local-code │────▶│    aider    │────▶│   ollama    │
│   (script)  │     │ (AI coding) │     │ (runs LLM)  │
└─────────────┘     └─────────────┘     └─────────────┘
  1. Ollama runs the AI model locally on your Pi's CPU
  2. Aider talks to Ollama and handles file reading/writing
  3. local-code is a simple script that launches aider with the right settings
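For reference, here is a minimal sketch of the kind of launcher described in step 3. It is hypothetical, not the actual lc script, and assumes aider accepts an ollama/ model prefix and reads the OLLAMA_API_BASE environment variable:

#!/usr/bin/env bash
# Hypothetical launcher sketch - the real lc script may differ.
MODEL="qwen2.5-coder:1.5b"                       # default model
[ "$1" = "-m" ] && MODEL="$2"                    # local-code -m <model>
export OLLAMA_API_BASE="http://127.0.0.1:11434"  # tell aider where Ollama listens
exec aider --model "ollama/${MODEL}"             # hand off to aider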

Commands

local-code                    # Start coding assistant
local-code -m qwen2.5-coder:3b  # Use a bigger model (better quality, slower)
local-code --status           # Check if everything is running
local-code --list             # Show available models
local-code --help             # Show all options
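Under the hood, --status amounts to a few basic checks. Something along these lines (illustrative only, not the script's actual implementation):

systemctl is-active ollama   # is the Ollama service running?
ollama list                  # which models are available locally?
command -v aider             # is aider installed and on the PATH?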

Inside aider

Once running, you can:

  • /add file.py - Add a file to the conversation
  • /drop file.py - Remove a file
  • /run npm test - Run a command
  • /diff - See pending changes
  • /help - All commands
  • /exit or Ctrl+C - Quit
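A typical exchange might look like this (illustrative; actual output varies by model):

/add auth.py
fix the bug where login fails for valid users
(aider proposes an edit, writes the file, and auto-commits it to git)
/diff
/exit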

Models

Model                Size    Speed   Quality
qwen2.5-coder:1.5b   986 MB  Fast    Good (default)
qwen2.5-coder:3b     1.9 GB  Slower  Better
codegemma:2b         1.6 GB  Medium  Good

Add more models with:

ollama pull <model-name>
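For example, to pull codegemma:2b and use it for a session:

ollama pull codegemma:2b
local-code -m codegemma:2b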

Files

~/dev/local-code/
├── lc              # The main script
└── README.md       # This file

~/.local/bin/
└── local-code -> ~/dev/local-code/lc   # Symlink so you can run it anywhere

Troubleshooting

"Ollama not running"

sudo systemctl start ollama
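If it keeps stopping, enable it at boot and confirm the API answers. The /api/tags endpoint just lists your local models, so any JSON response means the server is up:

sudo systemctl enable ollama
curl http://127.0.0.1:11434/api/tags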

Slow responses

  • First response is always slow (model loading)
  • Use the 1.5b model (default) for speed
  • Responses stream in as they are generated

Model not found

ollama pull qwen2.5-coder:1.5b
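If pulling doesn't help, check the exact tags you have installed - the model name must match exactly (e.g. qwen2.5-coder:1.5b, not qwen2.5-coder):

ollama list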

Setup (first time)

1. Install Ollama

curl -fsSL https://ollama.com/install.sh | sh
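You can confirm the install worked with:

ollama --version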

2. Pull a coding model

ollama pull qwen2.5-coder:1.5b

3. Install uv (Python package manager)

curl -LsSf https://astral.sh/uv/install.sh | sh

4. Install aider

~/.local/bin/uv tool install --python 3.12 aider-chat
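uv installs tools into ~/.local/bin. Assuming --version behaves as in most CLIs, you can sanity-check the install with:

~/.local/bin/aider --version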

5. Create the symlink

mkdir -p ~/.local/bin
ln -sf ~/dev/local-code/lc ~/.local/bin/local-code
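Verify the link points where you expect:

ls -l ~/.local/bin/local-code   # should show -> ~/dev/local-code/lc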

6. Add to PATH (if not already)

Add this to your ~/.bashrc or ~/.zshrc:

export PATH="$HOME/.local/bin:$PATH"

Then restart your terminal or run source ~/.bashrc.

7. Test it

local-code --status

Requirements

  • Raspberry Pi 5 (8 GB RAM recommended) or a similar ARM64 Linux machine
  • Ollama
  • Aider (via uv)
  • ~1-2 GB disk space per model
