awaescher/llmaid

Large Language Maid

Throw files against LLMs.

llmaid is a command-line tool that automates AI-supported file changes using large language models. It reads source files, sends them to Ollama, LM Studio, MiniMax, or any OpenAI-compatible API, and writes the model's answers back. The tool is highly configurable and supports all text-based file formats as well as most common image formats.


Note

  1. Paid services such as ChatGPT can cause high API costs if they are used with many files. Double check your config.
  2. Local models via Ollama or LM Studio may produce lower-quality results, but they are completely free and your files never leave your computer.

💬 Is this thing competing with GitHub Copilot, Cursor, Claude Code, OpenCode and so many more?

No. It serves a slightly different use-case.

These partly autonomous agents are amazing and far more sophisticated. But they cannot be used to fix typos or add documentation in every single code file of a repository. They work differently: to keep their context small while navigating your file structure, they need something specific to find and work on. They can't possibly change everything.

llmaid is different: every file is a new conversation. While there is no autonomous intelligence, it can review or edit every single file based on your instructions. This is handy for finding or editing things in your files that you could not search for. Since the LLM response can be written back, it also enables batch-processing of every single file in the codebase, like "fix all typos".

tl;dr

If you ask Claude Code to fix every typo in your codebase, it will try to find the most common typos with a regex, but that won't catch them all. llmaid sends every single file you want, together with a prompt, to an LLM and outputs the answer, or even rewrites these files with the LLM response, and with that eliminates every typo in every text file.


Installation

Homebrew (macOS / Linux)

brew install awaescher/tap/llmaid

Manual Download

Download the latest binary for your platform from GitHub Releases.

Building from Source

If you want to build llmaid yourself, you need the .NET 10.0 SDK.

dotnet run --project llmaid -- --profile ./profiles/code-documenter.yaml --targetPath ./testfiles/code

What can it do?

llmaid will run through every file in a path you specify and rewrite, analyze, or summarize it. It can do pretty much anything you can come up with, as long as you can write a good system prompt.

This repository provides a few profile files, for example:

Documenting code

With this profile, llmaid scans and rewrites each code file, generating missing summaries, fixing typos, removing incorrect comments, and much more:

Code documenter

Finding unprofessional slang

This prompt outputs one JSON code block per file, listing findings such as insults, cringe comments, and more, each with a severity rating and a short assessment of the finding: Review files

Profiles & Examples

Each profile has demo files in ./testfiles/ you can run right away. Replace MODEL-HERE with a model available on your LLM provider.

Code profiles

Document public members in source code — rewrites files with XML/JSDoc/docstring comments:

llmaid --profile ./profiles/code-documenter.yaml --targetPath ./testfiles/code

Find unprofessional language in code — outputs a JSON report of findings per file:

llmaid --profile ./profiles/unprofessional-content-finder.yaml --targetPath ./testfiles/code

Fix unprofessional language in code — rewrites files with neutralized comments:

llmaid --profile ./profiles/unprofessional-content-fixer.yaml --targetPath ./testfiles/code

Image profiles

Rate content by age classification — outputs YAML with FSK/USK/PEGI/ESRB ratings for text and images:

llmaid --profile ./profiles/age-rater.yaml --targetPath ./testfiles/age-rater

Detect brand logos in images — outputs YAML listing all visible brands with confidence and location:

llmaid --profile ./profiles/brand-detector.yaml --targetPath ./testfiles/brand-detector

Generate alt text for images — outputs YAML with three detail levels (short ≤125 chars, medium, long):

llmaid --profile ./profiles/image-alt-text-generator.yaml --targetPath ./testfiles/alt-text-generator

Configuration

llmaid uses a layered configuration system where each layer can override the previous:

  1. appsettings.json – Connection settings only (provider, URI, API key)
  2. Profile file (.yaml) – Complete task configuration (model, paths, files, system prompt)
  3. Command line arguments – Runtime overrides (highest priority)

This means you can prepare self-contained profiles for different tasks and still override individual settings via CLI.
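The layering can be pictured as a simple dictionary merge where later layers win. A minimal Python sketch, not llmaid's actual implementation (key names are illustrative):

```python
# Sketch of layered configuration: later layers override earlier ones.
# Key names are illustrative, not llmaid's real internals.

def resolve_settings(appsettings, profile, cli_args):
    """Merge the three layers; CLI arguments have the highest priority."""
    merged = {}
    for layer in (appsettings, profile, cli_args):
        # Only keys a layer actually sets override the previous layers.
        merged.update({k: v for k, v in layer.items() if v is not None})
    return merged

settings = resolve_settings(
    {"provider": "lmstudio", "uri": "http://localhost:1234/v1"},  # appsettings.json
    {"model": "deepseek-coder-v2:16b", "temperature": 0.25},      # profile .yaml
    {"temperature": 0.1},                                         # CLI override
)
# temperature comes from the CLI layer, everything else from the layers below
```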

appsettings.json (Connection Settings)

This file only contains your LLM provider connection settings:

{
  "Provider": "lmstudio",
  "Uri": "http://localhost:1234/v1",
  "ApiKey": "",
  "WriteResponseToConsole": true,
  "CooldownSeconds": 0,
  "MaxFileTokens": 102400,
  "OllamaMinNumCtx": 24000 // Minimum context length for the Ollama provider to prevent unnecessary model reloads (default 20480)
}

Profile Files (.yaml)

Profiles are self-contained task definitions that include everything needed to run a specific job: model, target path, file patterns, and system prompt.

# profiles/code-documenter.yaml

model: deepseek-coder-v2:16b
targetPath: ./src
temperature: 0.25
applyCodeblock: true
maxRetries: 1

files:
  include:
    - "**/*.{cs,vb,js,ts}"
  exclude:
    - bin/
    - obj/

systemPrompt: |
  You are an AI documentation assistant.
  The user will provide a code snippet. Review and improve its documentation:
  
  - Add missing summaries for public members
  - Fix typos in comments
  - Do NOT change the executable code
  
  Return the entire file in a markdown code block.
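The include/exclude semantics above can be sketched roughly as follows. This is illustrative Python, not llmaid's actual matcher; brace sets like `{cs,vb,js,ts}` are pre-expanded into one pattern per extension:

```python
from fnmatch import fnmatch

# Sketch of include/exclude file selection (illustrative, not llmaid's matcher).
includes = ["*.cs", "*.vb", "*.js", "*.ts"]  # "**/*.{cs,vb,js,ts}" expanded
excludes = ["bin/", "obj/"]

def is_selected(path: str) -> bool:
    # A file inside any excluded directory is skipped entirely.
    if any(part + "/" in excludes for part in path.split("/")[:-1]):
        return False
    # Otherwise the file name must match at least one include pattern.
    name = path.split("/")[-1]
    return any(fnmatch(name, pattern) for pattern in includes)
```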

Command Line Arguments

All settings can be overridden via CLI:

# Use a specific profile
llmaid --profile ./profiles/code-documenter.yaml

# Run without profile (all settings via CLI)
llmaid --provider openai --model gpt-4o --targetPath ./src --systemPrompt "..."

# Dry run to see which files would be processed
llmaid --profile ./profiles/code-documenter.yaml --dryRun

# Verbose output with detailed token and timing information
llmaid --profile ./profiles/code-documenter.yaml --verbose

Available arguments:

  • --provider – ollama, openai, lmstudio, openai-compatible, minimax
  • --uri – API endpoint URL
  • --apiKey – API key (if required)
  • --model – Model identifier
  • --profile – Path to YAML profile
  • --targetPath – Directory with files to process, or a single file (when specifying a file, glob patterns are ignored)
  • --applyCodeblock – true extracts the code block and overwrites the file, false outputs the response to the console
  • --temperature – Model temperature (0-2)
  • --systemPrompt – System prompt text
  • --assistantStarter – String to start the assistant's message (can guide model output format)
  • --dryRun – Simulate without changes
  • --maxRetries – Retry count on failures
  • --verbose – Show detailed output (tokens, timing, settings)
  • --writeResponseToConsole – Whether to write the model's response to the console (default: true)
  • --cooldownSeconds – Cooldown time after processing each file to prevent overheating (default: 0)
  • --maxFileTokens – Maximum tokens a file may contain before it is skipped (default: 102400)
  • --resumeAt – Resume processing from a specific file (skips all files until a filename containing this pattern is found)
  • --ollamaMinNumCtx – Minimum context length for Ollama provider to prevent unnecessary model reloads (default: 20480)
  • --preserveWhitespace – Preserve original leading and trailing whitespace when writing files to avoid diff noise (default: false)
  • --showProgress – Show progress indicator during file processing (default: true)
  • --reasoningTimeoutSeconds – Maximum seconds a model may spend reasoning before the request is cancelled (default: 600, 0 = disabled)
  • --maxImageDimension – Maximum image dimension in pixels, images are resized to fit while preserving aspect ratio (default: 2048)
  • --judgeMaxRetries – Maximum judge review cycles per file (0 or omit = disabled). The judge evaluates the AI output against your task instructions and triggers a retry with specific violation feedback when rejected.
  • --judgeMode – Judge evaluation mode: response (default), git-diff, or both (see Judge section)
  • --judgeModel – Model for judge calls (falls back to --model when not set)
  • --judgeProvider – Provider for judge calls (falls back to --provider when not set)
  • --judgeUri – API endpoint for judge calls (falls back to --uri when not set)
  • --judgeApiKey – API key for judge calls (falls back to --apiKey when not set)
  • --judgeSystemPrompt – Custom system prompt for the judge LLM (uses a built-in default when not set)
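The judge* fallbacks described above can be sketched like this (illustrative Python with hypothetical key names; each judge option falls back to its non-judge counterpart when not set):

```python
# Sketch of the judge-setting fallbacks (illustrative, hypothetical key names).

def judge_setting(settings: dict, key: str):
    judge_key = "judge" + key[0].upper() + key[1:]   # "model" -> "judgeModel"
    # Use the judge-specific value when present, otherwise fall back.
    return settings.get(judge_key) or settings.get(key)

cfg = {"model": "gpt-4o", "provider": "openai", "judgeModel": "some-larger-model"}
```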

Judge (Optional Quality Gate)

The judge is an optional second LLM call that verifies the AI's output against the task instructions. When it rejects the output, it feeds the specific violations back to the editing LLM so it can retry — up to judgeMaxRetries times.
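The retry cycle can be sketched roughly like this (illustrative Python with toy stubs standing in for the LLM calls; llmaid's actual control flow may differ, including what happens once retries are exhausted):

```python
# Sketch of the judge retry cycle (illustrative; stubs replace real LLM calls).

def process_with_judge(file_text, instructions, edit, judge, judge_max_retries=2):
    """Edit the file, let the judge verify, retry with feedback on rejection."""
    feedback = None
    for _ in range(judge_max_retries + 1):
        response = edit(file_text, instructions, feedback)
        passed, violations = judge(file_text, response, instructions)
        if passed:
            return response
        feedback = violations  # the next attempt sees the concrete violations
    return None  # retries exhausted; treated as rejected (illustrative choice)

# Toy stubs: the "editor" only succeeds once it has seen judge feedback.
attempts = []
def edit(text, instr, feedback):
    attempts.append(feedback)
    return "good" if feedback else "bad"
def judge(text, response, instr):
    return (response == "good", ["changed executable code"])

result = process_with_judge("file", "document only", edit, judge, judge_max_retries=2)
```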

Judge Modes

| Mode | When it evaluates | Requirements |
|---|---|---|
| response | Before writing – evaluates the raw LLM response | None – works always, even without git or applyCodeblock |
| git-diff | After writing – evaluates the actual git diff | applyCodeblock: true + target files inside a git repository |
| both | Response judge first (pre-write), then git-diff judge after writing | applyCodeblock: true + git repository (for the git-diff step) |

response mode is the default. It works in all situations: with or without git, with or without applyCodeblock. The judge sees the original file content and the AI's full response and checks whether every change complies with the instructions.

git-diff mode is particularly useful for profiles that modify long files (such as code-documenter), because the diff shows only the changed lines — making it much easier for the judge to spot subtle, forbidden code changes.

both mode combines the two: the response is validated before writing (catching formatting or hallucination issues early), and the final diff is validated after writing (catching subtle line-level changes).

Example configuration

judgeMaxRetries: 2
judgeMode: response      # or git-diff / both

# Optional: use a larger model for judging than for editing
# judgeModel: qwen3-235b-a22b
# judgeProvider: lmstudio
# judgeUri: http://localhost:1234/v1

Custom judge system prompt

Override the built-in judge prompt for fine-grained control:

judgeSystemPrompt: |
  You are a strict code review judge for documentation-only changes.
  You receive the task instructions and either the original + new file (response mode)
  or a git diff (git-diff mode).

  PASS if only documentation comments were added or improved and no executable code was touched.

  FAIL with a bullet list of violations if ANY of these occurred:
  - Executable code was modified
  - Existing documentation was removed without a replacement
  - Private, protected, or internal members received new documentation

Supported Providers

| Provider | URI | API Key Required |
|---|---|---|
| ollama | http://localhost:11434 | No |
| lmstudio | http://localhost:1234/v1 | No (use empty string or any placeholder) |
| openai | https://api.openai.com/v1 | Yes |
| minimax | https://api.minimax.io/v1 | Yes |
| openai-compatible | Your server's URL | Depends on server |

MiniMax

MiniMax provides high-performance models with a 204,800 token context window. Available models:

| Model | Description |
|---|---|
| MiniMax-M2.5 | Default model – peak performance, ultimate value |
| MiniMax-M2.5-highspeed | Same performance, faster and more agile |

llmaid --provider minimax --apiKey YOUR_MINIMAX_API_KEY --model MiniMax-M2.5 --profile ./profiles/code-documenter.yaml --targetPath ./src

Note

MiniMax requires a temperature value between 0 and 1 (exclusive of 0). If you use --temperature, make sure to set it to a value greater than 0 (e.g. 0.01 instead of 0).

System Prompt Placeholders

llmaid automatically replaces {{PLACEHOLDER}} tokens in the systemPrompt with live system values before sending the prompt to the model. This lets you write date-aware, locale-aware, or environment-aware prompts without hardcoding anything.

Note

Placeholders are only replaced inside the systemPrompt. They are never applied to file contents or any other settings.

File (per-file, resolved for each processed file)

| Placeholder | Description |
|---|---|
| {{CODE}} | Full content of the current file being processed |
| {{CODELANGUAGE}} | Programming language derived from the file extension (e.g. csharp, javascript) |
| {{FILENAME}} | Name of the current file without its directory path |

Date & Time

| Placeholder | Example value | Description |
|---|---|---|
| {{TODAY}} | 2026-03-12 | Current date in ISO 8601 format |
| {{NOW}} | 2026-03-12T15:44:58 | Current date and time in ISO 8601 format |
| {{YEAR}} | 2026 | Current four-digit year |
| {{MONTH}} | 03 | Current month (01–12) |
| {{WEEKDAY}} | Thursday | Current day of the week (English) |

System & Environment

| Placeholder | Example value | Description |
|---|---|---|
| {{USERNAME}} | awaescher | OS login name of the current user |
| {{MACHINENAME}} | my-macbook | Network hostname of the machine |
| {{TIMEZONE}} | Europe/Berlin | IANA time zone of the local system |
| {{NEWLINE}} | (platform newline) | Platform-specific line break (\n or \r\n) |

Locale & Formatting

| Placeholder | Example value (de-DE) | Example value (en-US) | Description |
|---|---|---|---|
| {{CULTURE}} | de-DE | en-US | BCP 47 locale tag of the current UI culture |
| {{DATEFORMAT}} | dd.MM.yyyy | M/d/yyyy | Short date pattern of the current culture |
| {{TIMEFORMAT}} | HH:mm:ss | h:mm:ss tt | Long time pattern of the current culture |
| {{DATESEPARATOR}} | . | / | Date separator character |
| {{TIMESEPARATOR}} | : | : | Time separator character |
| {{DECIMALSEPARATOR}} | , | . | Decimal separator character |
| {{GROUPSEPARATOR}} | . | , | Thousands group separator character |
| {{CURRENCYSYMBOL}} | € | $ | Currency symbol |

Example usage

systemPrompt: |
  Today is {{TODAY}} ({{WEEKDAY}}). The user's locale is {{CULTURE}}.
  Numbers use '{{DECIMALSEPARATOR}}' as decimal separator and '{{CURRENCYSYMBOL}}' as currency.
  Dates are formatted as {{DATEFORMAT}}.
  
  Analyze the provided invoice and check all calculations.
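A rough sketch of the substitution mechanics, in Python (illustrative only, not llmaid's implementation; only a few placeholders are shown):

```python
import datetime
import os
import re

# Sketch of {{PLACEHOLDER}} substitution (illustrative, not llmaid's code).
def fill_placeholders(system_prompt: str) -> str:
    now = datetime.datetime.now()
    values = {
        "TODAY": now.date().isoformat(),        # e.g. 2026-03-12
        "YEAR": f"{now.year}",
        "MONTH": f"{now.month:02d}",
        "WEEKDAY": now.strftime("%A"),          # English day name
        "USERNAME": os.environ.get("USER", "unknown"),  # simplified lookup
    }
    # Unknown placeholders are left untouched in this sketch.
    return re.sub(r"\{\{(\w+)\}\}",
                  lambda m: values.get(m.group(1), m.group(0)),
                  system_prompt)
```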

FAQ

Can I continue where I left off earlier?

Yes! Use the --resumeAt parameter with a pattern matching the filename where you want to resume. All files before that match will be skipped (like a dry run).

Example: If you interrupted llmaid after hundreds of files while it was processing ~/Developer/MyApp/UserService.cs, you can continue like this:

llmaid --profile ./profiles/code-documenter.yaml --resumeAt UserService

The pattern is case-insensitive and matches any part of the file path, so you don't need to specify the full path.
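The skip-until-match behavior can be sketched like this (illustrative Python, not llmaid's implementation):

```python
# Sketch of --resumeAt semantics: skip every file until the first path that
# contains the pattern (case-insensitive), then process everything from there.
def apply_resume_at(paths, pattern):
    resumed = False
    for path in paths:
        if not resumed and pattern.lower() in path.lower():
            resumed = True
        if resumed:
            yield path

files = ["A.cs", "UserService.cs", "Z.cs"]
remaining = list(apply_resume_at(files, "userservice"))
```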

I get a 404 (Not Found)

It is very likely that Ollama returns this 404 because it doesn't know the model that's specified in the appsettings.json. Make sure to specify a model you have downloaded.


❤ Made with Spectre.Console and OllamaSharp.