AGENTS.md

Purpose

This file is the default bootstrap for AI agents entering the Looma repository.

Use it to understand what Looma is, what matters most, what constraints should guide implementation, and which existing docs are the source of truth.

Project Mission

Looma is an open-source, privacy-first, local-first, AI-native knowledge operating system.

It is not just a note-taking app with an AI sidebar. The product direction is to turn a user's knowledge base into working memory that AI can read, cite, organize, and act on.

Core positioning:

  • open-source
  • privacy-first
  • local-first
  • BYO API-key
  • AI-native
  • file-backed and inspectable

Product Pillars

When making product or architecture decisions, preserve these principles:

  • file-first: user knowledge should live in portable files, not only in app-internal storage
  • card-first: cards are the primary semantic unit for thinking, linking, and AI operations
  • source-linked AI: important AI outputs should keep evidence, citations, and backlinks whenever possible
  • BYO API-key: users choose their own providers, models, keys, and spend
  • open and inspectable runtime: prompts, actions, and provider layers should be understandable and replaceable

Current Repo State

This repository is currently in product-definition and architecture-shaping mode.

Important current state:

  • the repo is docs-first and does not yet contain the application implementation
  • the current work is to converge on workspace format, schemas, provider abstraction, retrieval with citation, and AI action design
  • do not assume that major runtime or schema decisions are already implemented unless a later commit adds them

Canonical Concepts

Looma is organized around four layers:

  • Files: the evidence layer; original source material such as PDFs, web captures, Markdown, images, transcripts, and attachments
  • Cards: the thinking layer; the smallest semantic unit for concepts, claims, questions, quotes, tasks, and related knowledge
  • Boards: the structure layer; spatial and relational organization of cards into themes, arguments, timelines, and outlines
  • Agents / Actions: the execution layer; auditable AI workflows that read from and act on the knowledge base

If a proposed feature does not strengthen one or more of these layers, question whether it fits the core product.
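The four layers can be sketched as minimal data types. This is a hypothetical illustration only: the field names, id prefixes, and shapes below are assumptions for clarity, not the decided schema (which, per the working rules, must come from the docs).

```python
from dataclasses import dataclass, field

@dataclass
class SourceFile:          # evidence layer
    id: str                # e.g. "s_0001"
    path: str              # location under files/library/

@dataclass
class Card:                # thinking layer
    id: str
    title: str
    source_ids: list[str] = field(default_factory=list)  # citations back to evidence

@dataclass
class Board:               # structure layer
    id: str
    card_ids: list[str] = field(default_factory=list)

@dataclass
class Action:              # execution layer
    name: str
    reads: list[str] = field(default_factory=list)   # ids the action consumed
    writes: list[str] = field(default_factory=list)  # objects it proposes to change

# A card keeps its evidence link, so downstream AI output stays source-linked.
pdf = SourceFile(id="s_0001", path="files/library/paper.pdf")
card = Card(id="c_0001", title="Key claim", source_ids=[pdf.id])
board = Board(id="b_0001", card_ids=[card.id])
```

The point of the sketch is the direction of the links: cards cite files, boards arrange cards, and actions record what they read and what they propose to write.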

Workspace Model

The intended workspace is file-backed and rebuildable. Current recommended structure:

workspace/
  workspace.yaml
  cards/
    c_*.md
  boards/
    b_*.json
  files/
    inbox/
    library/
  sources/
    s_*.json
  annotations/
    a_*.json
  tasks/
    t_*.md
  views/
    v_*.json
  exports/
  .looma/
    index.sqlite
    embeddings/
    cache/
    logs/
    providers.json

Operating rules for this model:

  • cards/, boards/, sources/, and related user-authored objects are canonical knowledge assets
  • .looma/ is derived state for indexes, caches, embeddings, logs, and runtime data
  • deleting .looma/ should not destroy the user's knowledge base; the system should be able to rebuild derived state from canonical files
  • favor human-readable, portable storage for durable knowledge objects
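The rebuild guarantee above can be sketched in a few lines: derived state is recreated purely by scanning canonical files. The index schema and paths here are illustrative assumptions, not a decided design.

```python
import pathlib
import sqlite3
import tempfile

def rebuild_index(workspace: pathlib.Path) -> sqlite3.Connection:
    """Recreate .looma/index.sqlite from canonical card files alone."""
    looma = workspace / ".looma"
    looma.mkdir(exist_ok=True)          # safe even if .looma/ was deleted
    con = sqlite3.connect(looma / "index.sqlite")
    con.execute("DROP TABLE IF EXISTS cards")
    con.execute("CREATE TABLE cards (id TEXT PRIMARY KEY, path TEXT)")
    for md in sorted((workspace / "cards").glob("c_*.md")):
        con.execute("INSERT INTO cards VALUES (?, ?)", (md.stem, str(md)))
    con.commit()
    return con

# Usage: seed a throwaway workspace, then rebuild derived state from scratch.
ws = pathlib.Path(tempfile.mkdtemp())
(ws / "cards").mkdir()
(ws / "cards" / "c_0001.md").write_text("# Example card\n")
con = rebuild_index(ws)
count = con.execute("SELECT COUNT(*) FROM cards").fetchone()[0]
```

Because the index reads only from cards/, deleting .looma/ and rerunning the rebuild yields the same result.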

MVP Priorities

When choosing what to design or implement first, bias toward these foundations:

  • workspace format
  • source-linked cards
  • Ask Workspace
  • Draft from Board

These are the current "make or break" capabilities for proving Looma is more than a generic note app.

Working Rules for Agents

When contributing in this repo:

  • do not drift into a cloud-first product shape
  • do not reduce Looma to a chat shell or AI sidebar assistant
  • preserve local-first and user-controlled data assumptions
  • preserve BYO API-key and avoid designs that depend on proxying all inference through an official Looma service
  • prefer auditable AI flows over opaque magic
  • prefer outputs with citations, evidence links, and traceable provenance
  • prefer suggestion, draft, and patch-preview flows before direct writeback for higher-risk AI actions
  • keep provider abstractions replaceable; avoid coupling core product logic to one model vendor
  • separate canonical knowledge assets from caches and indexes
  • avoid inventing schema details that have not yet been decided in the docs

Delivery Heuristics

If you are proposing or implementing new work, these heuristics should guide you:

  • start with stable data formats before advanced UI or agent behavior
  • treat citation integrity as a core product feature, not a polish item
  • design AI outputs in levels such as suggestion, draft, patch, and automation where appropriate
  • make writebacks reviewable
  • keep model/provider interfaces interchangeable
  • favor features that strengthen ingest, connect, recall, compose, maintain, or execute flows over generic assistant behavior
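The interchangeable-provider heuristic can be sketched with a structural interface: core logic depends only on a small protocol, never on a concrete vendor SDK. The names below (ChatProvider, ask_workspace) are illustrative assumptions, not the decided abstraction.

```python
from typing import Protocol

class ChatProvider(Protocol):
    """Minimal surface the core product is allowed to depend on."""
    def complete(self, prompt: str) -> str: ...

class EchoProvider:
    """Stand-in provider for tests; a real one would call a user-keyed API."""
    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"

def ask_workspace(provider: ChatProvider, question: str) -> str:
    # Core logic sees only the protocol, so vendors stay swappable.
    return provider.complete(question)

answer = ask_workspace(EchoProvider(), "What does card c_0001 claim?")
```

Swapping vendors then means adding another class that satisfies ChatProvider, with no change to core product logic.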

What To Avoid

Avoid these common failure modes:

  • building a "notes app + chat sidebar" instead of an AI-native knowledge system
  • making hosted inference a hard dependency
  • storing critical knowledge only in opaque internal databases
  • shipping AI generation without evidence or review paths
  • prioritizing surface polish over workspace format and source-linked knowledge primitives too early

Source of Truth

For deeper context, use these documents as canonical references:

If this file and those docs ever diverge, update this file to match the current product documents rather than inventing a new direction here.