This file is the default bootstrap for AI agents entering the Looma repository.
Use it to understand what Looma is, what matters most, what constraints should guide implementation, and which existing docs are the source of truth.
Looma is an open-source, privacy-first, local-first, AI-native knowledge operating system.
It is not just a note-taking app with an AI sidebar. The product direction is to turn a user's knowledge base into working memory that AI can read, cite, organize, and act on.
Core positioning:
- open-source
- privacy-first
- local-first
- BYO API-key
- AI-native
- file-backed and inspectable
When making product or architecture decisions, preserve these principles:
- file-first: user knowledge should live in portable files, not only in app-internal storage
- card-first: cards are the primary semantic unit for thinking, linking, and AI operations
- source-linked AI: important AI outputs should keep evidence, citations, and backlinks whenever possible
- BYO API-key: users choose their own providers, models, keys, and spend
- open and inspectable runtime: prompts, actions, and provider layers should be understandable and replaceable
This repository is currently in product-definition and architecture-shaping mode.
Important current state:
- the repo is docs-first and does not yet contain the application implementation
- the current work is to converge on workspace format, schemas, provider abstraction, retrieval with citation, and AI action design
- avoid acting as if major runtime or schema decisions are already implemented unless a later commit adds them
Looma is organized around four layers:
- Files: the evidence layer; original source material such as PDFs, web captures, Markdown, images, transcripts, and attachments
- Cards: the thinking layer; the smallest semantic unit for concepts, claims, questions, quotes, tasks, and related knowledge
- Boards: the structure layer; spatial and relational organization of cards into themes, arguments, timelines, and outlines
- Agents/Actions: the execution layer; auditable AI workflows that read from and act on the knowledge base
If a proposed feature does not strengthen one or more of these layers, question whether it fits the core product.
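To make the card layer concrete, here is one possible shape for a `cards/c_*.md` file. The real card schema has not yet been decided in the docs, so every field name below is hypothetical and purely illustrative:

```markdown
---
# Hypothetical frontmatter -- NOT a decided Looma schema.
id: c_example
type: claim
sources:
  - s_example        # backlink to a sources/s_*.json record (illustrative)
---
A card holds one claim, question, or quote, with links back to its evidence.
```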
The intended workspace is file-backed and rebuildable. Current recommended structure:
workspace/
  workspace.yaml
  cards/
    c_*.md
  boards/
    b_*.json
  files/
    inbox/
    library/
  sources/
    s_*.json
  annotations/
    a_*.json
  tasks/
    t_*.md
  views/
    v_*.json
  exports/
  .looma/
    index.sqlite
    embeddings/
    cache/
    logs/
    providers.json

Operating rules for this workspace model:
- `cards/`, `boards/`, `sources/`, and related user-authored objects are canonical knowledge assets
- `.looma/` is derived state for indexes, caches, embeddings, logs, and runtime data
- deleting `.looma/` should not destroy the user's knowledge base; the system should be able to rebuild derived state from canonical files
- favor human-readable, portable storage for durable knowledge objects
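The "rebuildable derived state" rule above can be sketched as follows. This is a minimal illustration, not a decided index schema: the table layout, file names, and function are all assumptions.

```python
# Sketch: rebuild derived state in .looma/ from canonical card files.
# The real index schema is not yet decided; this shape is hypothetical.
import sqlite3
from pathlib import Path

def rebuild_index(workspace: Path) -> Path:
    """Recreate .looma/index.sqlite from cards/*.md (derived, disposable state)."""
    looma = workspace / ".looma"
    looma.mkdir(exist_ok=True)
    db_path = looma / "index.sqlite"
    db_path.unlink(missing_ok=True)  # derived state: always safe to throw away
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE cards (id TEXT PRIMARY KEY, body TEXT)")
    for card in sorted((workspace / "cards").glob("c_*.md")):
        con.execute("INSERT INTO cards VALUES (?, ?)",
                    (card.stem, card.read_text(encoding="utf-8")))
    con.commit()
    con.close()
    return db_path
```

The key property is directional: canonical files are the input, `.looma/` is pure output, so deleting the index loses nothing.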
When choosing what to design or implement first, bias toward these foundations:
- workspace format
- source-linked cards
- Ask Workspace
- Draft from Board
These are the current "make or break" capabilities for proving Looma is more than a generic note app.
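As one concrete illustration of what "Ask Workspace" with citation integrity could mean, here is a deliberately naive sketch: every returned snippet keeps the id of the card it came from, so downstream AI output can cite its evidence. The types and function are hypothetical, and a real version would use the embeddings index rather than keyword matching:

```python
# Hypothetical sketch of retrieval-with-citation; not a decided Looma API.
from dataclasses import dataclass

@dataclass
class Citation:
    card_id: str   # which canonical card the evidence came from
    snippet: str   # the quoted evidence itself

def ask_workspace(question: str, cards: dict[str, str]) -> list[Citation]:
    """Naive keyword retrieval over card bodies, preserving provenance."""
    terms = {t.lower().strip(".,?!") for t in question.split()}
    hits = []
    for card_id, body in cards.items():
        words = {w.lower().strip(".,?!") for w in body.split()}
        if terms & words:
            hits.append(Citation(card_id=card_id, snippet=body[:120]))
    return hits
```

Whatever retrieval method ultimately ships, the invariant worth protecting is that evidence never gets detached from its source id.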
When contributing in this repo:
- do not drift into a cloud-first product shape
- do not reduce Looma to a chat shell or AI sidebar assistant
- preserve local-first and user-controlled data assumptions
- preserve BYO API-key and avoid designs that depend on proxying all inference through an official Looma service
- prefer auditable AI flows over opaque magic
- prefer outputs with citations, evidence links, and traceable provenance
- prefer suggestion, draft, and patch-preview flows before direct writeback for higher-risk AI actions
- keep provider abstractions replaceable; avoid coupling core product logic to one model vendor
- separate canonical knowledge assets from caches and indexes
- avoid inventing schema details that have not yet been decided in the docs
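The "replaceable provider abstraction" rule above can be sketched as a thin structural interface: core product logic depends only on this boundary, never on a vendor SDK. All names here are assumptions, not a decided Looma API:

```python
# Hypothetical provider boundary -- illustrative only.
from typing import Protocol

class ChatProvider(Protocol):
    def complete(self, prompt: str) -> str: ...

class EchoProvider:
    """Stand-in provider for tests; a real one would call the user's own BYO key."""
    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"

def summarize_card(body: str, provider: ChatProvider) -> str:
    # Core logic sees only the interface, so model vendors stay swappable.
    return provider.complete(f"Summarize:\n{body}")
```

Because `ChatProvider` is structural, any object with a matching `complete` method plugs in, which keeps one-vendor coupling out of core code.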
If you are proposing or implementing new work, these heuristics should guide you:
- start with stable data formats before advanced UI or agent behavior
- treat citation integrity as a core product feature, not a polish item
- design AI outputs in levels such as suggestion, draft, patch, and automation where appropriate
- make writebacks reviewable
- keep model/provider interfaces interchangeable
- favor features that strengthen ingest, connect, recall, compose, maintain, or execute flows over generic assistant behavior
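The leveled-output and reviewable-writeback heuristics above can be sketched together: an action proposes a patch, a human (or policy) approves it, and only then is the canonical file changed. The structure below is hypothetical, not a decided schema:

```python
# Hypothetical leveled AI output with reviewable writeback -- illustrative only.
from dataclasses import dataclass
from enum import Enum

class Level(Enum):
    SUGGESTION = "suggestion"   # display only, no artifact
    DRAFT = "draft"             # editable text, no writeback
    PATCH = "patch"             # concrete change awaiting review
    AUTOMATION = "automation"   # pre-approved, low-risk writeback

@dataclass
class Patch:
    card_id: str
    old: str            # preview of what the change expects to replace
    new: str
    level: Level = Level.PATCH
    approved: bool = False

def apply_patch(store: dict[str, str], patch: Patch) -> bool:
    """Write back only when approved AND the preview still matches the file."""
    if not patch.approved or store.get(patch.card_id) != patch.old:
        return False
    store[patch.card_id] = patch.new
    return True
```

The `old`-must-still-match check is what makes the patch a reviewable preview rather than a blind overwrite: if the card changed after review, the writeback fails safe.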
Avoid these common failure modes:
- building a "notes app + chat sidebar" instead of an AI-native knowledge system
- making hosted inference a hard dependency
- storing critical knowledge only in opaque internal databases
- shipping AI generation without evidence or review paths
- prioritizing surface polish over workspace format and source-linked knowledge primitives too early
For deeper context, use these documents as canonical references:
If this file and those docs ever diverge, update this file to match the current product documents rather than inventing a new direction here.