
refactor(queue): simplify schema and remove conversation state #213

Merged
jlia0 merged 10 commits into main from jlia0/simplify-queue-schema
Mar 18, 2026

Conversation

@jlia0 (Collaborator) commented Mar 14, 2026

Summary

Flatten agent communication by removing the conversation tracker entirely. Agents now communicate via direct flat DMs with immediate response streaming — no pending counters, no response aggregation, no conversation state. This significantly simplifies the architecture while maintaining all existing functionality.

Changes

  • Message IDs: Replace long timestamp+random IDs with shorter nanoid (8 char) prefixed IDs (api_a1b2c3d4, internal_x9y8z7w6)
  • Schema cleanup: Remove unused files column from messages table (channels download files but agents never receive them). Remove conversation_id tracking.
  • Remove conversation state: Delete Conversation/ChainStep types, conversations Map, pending counters, response aggregation, chat history file saving, and conversation TTL cleanup
  • Stateless team response: handleTeamResponse() now simply streams response → extracts mentions → enqueues flat DMs (no state machine)
  • Per-agent monitoring: Add getAgentQueueStatus() query and GET /api/queue/agents endpoint to track queue depth per agent
  • Dependencies: Add nanoid to core package for shorter ID generation
  • Documentation: Update MESSAGE-PATTERNS.md and QUEUE.md to reflect flat messaging model and new schema
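The prefixed-ID scheme above can be sketched as follows. This is an illustrative stand-in for `genId(prefix)` in `packages/core/src/ids.ts`: the real helper uses nanoid's `customAlphabet`, while this version hand-rolls the same 8-char lowercase-alphanumeric suffix so the example is dependency-free.

```typescript
// Illustrative stand-in for genId(prefix) from packages/core/src/ids.ts.
// The real helper uses nanoid's customAlphabet; this dependency-free version
// produces the same shape: <prefix>_<8 lowercase alphanumeric chars>.
const ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyz";

function genId(prefix: string, size = 8): string {
  let suffix = "";
  for (let i = 0; i < size; i++) {
    suffix += ALPHABET[Math.floor(Math.random() * ALPHABET.length)];
  }
  return `${prefix}_${suffix}`;
}

console.log(genId("api"));      // e.g. "api_a1b2c3d4"
console.log(genId("internal")); // e.g. "internal_x9y8z7w6"
```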

Testing

  • Core and teams packages compile cleanly (tsc --build)
  • Migrations handle existing DBs gracefully (drop files/conversation_id columns)
  • No behavior changes to end users — responses still stream immediately, team orchestration via @mentions still works
  • agent_messages and chat_messages tables persist indefinitely as audit logs
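The migration bullet above can be sketched as a pure helper that emits `DROP COLUMN` statements only for columns that actually exist. `dropColumnStatements` and its arguments are hypothetical names, not this PR's code; the real migration in `packages/core/src/queues.ts` would learn the existing columns from `PRAGMA table_info(messages)` at runtime.

```typescript
// Hypothetical sketch of a defensive column-drop migration (names invented).
// `existing` would be populated from `PRAGMA table_info(messages)` in real code.
function dropColumnStatements(
  table: string,
  existing: string[],
  toDrop: string[],
): string[] {
  return toDrop
    .filter((col) => existing.includes(col))
    .map((col) => `ALTER TABLE ${table} DROP COLUMN ${col}`);
}

// Fresh DB created after this PR: neither column exists, so the migration is a no-op.
console.log(dropColumnStatements("messages", ["id", "message"], ["files", "conversation_id"]));
// → []

// Pre-existing DB: both legacy columns get dropped.
console.log(dropColumnStatements(
  "messages",
  ["id", "message", "files", "conversation_id"],
  ["files", "conversation_id"],
));
// → ["ALTER TABLE messages DROP COLUMN files", "ALTER TABLE messages DROP COLUMN conversation_id"]
```

Note that `ALTER TABLE … DROP COLUMN` requires SQLite 3.35+; older engines would need the table-rebuild pattern instead.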

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>

Flatten agent communication by removing the conversation tracker entirely. Agents now communicate via direct flat DMs — no pending counters, no response aggregation, no conversation state. This simplifies the architecture significantly while maintaining the same functionality: responses stream immediately, teammate mentions become new messages.

Key changes:
- Replace timestamp+random message IDs with shorter nanoid (8 char) with prefixes
- Remove unused `files` column from messages table (not consumed by agent processing)
- Remove `conversation_id` column from messages table (no conversation tracking needed)
- Remove Conversation/ChainStep types, conversations Map, pending counters
- Make handleTeamResponse() stateless: stream response → extract mentions → enqueue flat DMs
- Add getAgentQueueStatus() query for per-agent queue depth monitoring
- Add GET /api/queue/agents endpoint
- Update server startApiServer() to not require conversations parameter
- Add nanoid dependency to core package
- Simplify docs to reflect flat messaging model

No behavior changes to end users — responses still stream immediately, team orchestration still works via @mentions and #chat_rooms. The agent_messages and chat_messages tables persist forever (never pruned) as permanent audit logs.

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
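The stream → extract mentions → enqueue flow hinges on mention parsing. A minimal sketch of the idea follows; the real `extractTeammateMentions` also validates mentions against team membership and takes more parameters, all of which is omitted here.

```typescript
// Minimal sketch of parsing [@teammate: message] mentions from a response.
// The real extractTeammateMentions additionally checks team membership.
interface Mention {
  teammateId: string;
  message: string;
}

function extractMentions(response: string): Mention[] {
  const re = /\[@([\w-]+):\s*([^\]]+)\]/g;
  const out: Mention[] = [];
  for (const m of response.matchAll(re)) {
    out.push({ teammateId: m[1], message: m[2].trim() });
  }
  return out;
}

extractMentions("Done. [@alice: please review] [@bob: ship it]");
// → [{ teammateId: "alice", message: "please review" },
//    { teammateId: "bob",   message: "ship it" }]
```

Each extracted mention then becomes one flat DM enqueued with a fresh `internal_`-prefixed message ID, with no shared state linking it back to the message that produced it.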

vercel bot commented Mar 14, 2026

The latest updates on your projects. Learn more about Vercel for GitHub.

| Project | Deployment | Actions | Updated (UTC) |
| --- | --- | --- | --- |
| tinyoffice | Ready | Preview, Comment | Mar 14, 2026 5:37pm |



greptile-apps bot commented Mar 14, 2026

Greptile Summary

This PR replaces the stateful conversation-tracker model with a flat, stateless DM model: agents now stream responses immediately rather than aggregating them, and every [@teammate: …] mention is enqueued as an independent message. It also introduces nanoid-based prefixed IDs, removes the files/conversation_id columns from the messages table, adds a new agent_messages audit table, and exposes a per-agent queue-depth endpoint.

Key changes:

  • handleTeamResponse() rewritten as a stateless stream-then-enqueue function — all Conversation, ChainStep, and pending-counter machinery deleted
  • groupChatroomMessages() added to batch consecutive chatroom messages into a single agent invocation
  • genId(prefix) added in packages/core/src/ids.ts using nanoid with an 8-char lowercase-alphanumeric alphabet
  • queues.ts reduced from ~427 to ~130 lines; includes migrations to drop files/conversation_id, add metadata to responses, and create the new agent_messages table
  • New GET /api/queue/agents endpoint for per-agent queue-depth monitoring
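The batching behaviour of `groupChatroomMessages` can be sketched like this. The field names and combined-message format are assumptions, not the code in `packages/teams/src/chatroom.ts`; the sketch only shows the core idea of merging consecutive chatroom messages into one invocation.

```typescript
// Hedged sketch of groupChatroomMessages: consecutive chatroom messages are
// merged into a single combined message; all other messages pass through.
interface QueuedMsg {
  channel: string;
  sender: string;
  message: string;
}

function groupChatroomMessages(msgs: QueuedMsg[]): QueuedMsg[] {
  const out: QueuedMsg[] = [];
  let batch: QueuedMsg[] = [];
  const flush = () => {
    if (batch.length === 1) out.push(batch[0]);
    else if (batch.length > 1) {
      out.push({
        channel: "chatroom",
        sender: batch[0].sender,
        message: batch.map((m) => `[${m.sender}]: ${m.message}`).join("\n"),
      });
    }
    batch = [];
  };
  for (const m of msgs) {
    if (m.channel === "chatroom") batch.push(m);
    else {
      flush();
      out.push(m);
    }
  }
  flush();
  return out;
}
```

As the review below notes, discriminating on `channel === 'chatroom'` is exactly where the batching bug lives when agent-originated broadcasts carry their originating channel instead.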

Issues found:

  • No loop protection: the old 50-message cap and pending-indicator that prevented "re-ask spirals" are gone with no replacement. Cyclic agent mention chains (A → B → A → …) will grow the queue unboundedly until the process is killed.
  • Chatroom grouping only applies to API-sourced posts: groupChatroomMessages checks msg.channel === 'chatroom', but agent-originated chatroom broadcasts carry the originating channel (e.g. 'discord'), so they are never batched — contradicting the documented behaviour.
  • getAgentMessages sort order: returns DESC (newest-first) while the analogous getChatMessages returns ASC (oldest-first), making chronological queries inconsistent.

Confidence Score: 2/5

  • Safe to merge for simple single-agent use; risky for multi-agent team configurations due to no loop protection and broken chatroom batching.
  • Two logic-level issues: (1) no circuit breaker for cyclic agent mention chains — a misconfigured or adversarial prompt can create an unbounded enqueue loop that was previously capped at 50 messages; (2) groupChatroomMessages silently skips batching for agent-originated chatroom posts because it discriminates on channel === 'chatroom' rather than message-ID prefix or from_agent, contradicting the documented behaviour. Both issues only manifest in multi-agent team scenarios, so single-agent deployments are unaffected.
  • packages/teams/src/conversation.ts (loop protection absent) and packages/teams/src/chatroom.ts (incorrect chatroom discriminator) need the most attention before merging for any team-enabled deployment.

Important Files Changed

| Filename | Overview |
| --- | --- |
| packages/teams/src/conversation.ts | Core orchestration rewrite — removes all conversation state, pending counters, and response aggregation; now a simple stream-then-enqueue flow. Critical: no loop protection for cyclic agent mention chains. |
| packages/teams/src/chatroom.ts | New chatroom grouping helper; groupChatroomMessages incorrectly uses msg.channel === 'chatroom' to identify chatroom broadcasts, missing all agent-originated posts (which carry the original channel name). |
| packages/core/src/queues.ts | Simplified SQLite queue (~130 lines vs 427). Adds agent_messages table, migrations for dropped columns, and getAgentQueueStatus. getAgentMessages sort order is DESC while getChatMessages is ASC. |
| packages/core/src/ids.ts | New nanoid-based ID generator using 8-char lowercase alphanumeric alphabet with configurable prefix. Clean, correct implementation. |
| packages/main/src/index.ts | Queue processor entry point updated to use groupChatroomMessages, insertAgentMessage, and the new stateless handleTeamResponse. Logic is straightforward; relies on the loop-protection concern addressed in conversation.ts. |
| packages/server/src/routes/queue.ts | Adds GET /api/queue/agents endpoint backed by getAgentQueueStatus. Clean, no issues. |
| packages/channels/src/discord.ts | Updated to use genId('discord') for message IDs. Attachment file paths are now embedded in message text rather than a dedicated files column. No issues. |
| docs/MESSAGE-PATTERNS.md | Documentation accurately reflects the flat-DM model and removal of conversation lifecycle. Pending-response-indicator and loop-protection sections removed without replacement guidance. |
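The per-agent queue-depth aggregation behind `GET /api/queue/agents` can be modeled in memory like this. In the real code `getAgentQueueStatus` is presumably a SQL `GROUP BY` over the messages table; the row shape and status values here are assumptions.

```typescript
// In-memory model of getAgentQueueStatus: count unfinished messages per agent.
// (The real implementation is a SQL aggregate; this just illustrates the shape.)
interface Row {
  agent: string;
  status: "pending" | "processing" | "done";
}

function getAgentQueueStatus(
  rows: Row[],
): Record<string, { pending: number; processing: number }> {
  const out: Record<string, { pending: number; processing: number }> = {};
  for (const r of rows) {
    if (r.status === "done") continue; // finished messages don't count toward depth
    out[r.agent] ??= { pending: 0, processing: 0 };
    out[r.agent][r.status]++;
  }
  return out;
}
```

An endpoint like this is what makes the unbounded-loop risk observable in production: a runaway mention cycle would show up as monotonically growing pending counts for the agents involved.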

Sequence Diagram

```mermaid
sequenceDiagram
    participant User
    participant Channel as Channel (Discord/Telegram)
    participant Queue as SQLite Queue
    participant Processor as Queue Processor
    participant Leader as Leader Agent
    participant Teammate as Teammate Agent

    User->>Channel: Send message
    Channel->>Queue: enqueueMessage(genId('discord'))
    Queue-->>Processor: message:enqueued event
    Processor->>Queue: claimAllPendingMessages(agentId)
    Processor->>Leader: invokeAgent()
    Leader-->>Processor: response text
    Processor->>Queue: insertAgentMessage(role='assistant')
    Processor->>Channel: streamResponse() → enqueueResponse()
    Channel-->>User: Response streamed immediately

    alt Agent mentions teammate [@teammate: msg]
        Processor->>Queue: enqueueMessage(genId('internal'), agent=teammate)
        Queue-->>Processor: message:enqueued event
        Processor->>Queue: claimAllPendingMessages(teammate)
        Processor->>Teammate: invokeAgent()
        Teammate-->>Processor: response text
        Processor->>Channel: streamResponse() → enqueueResponse()
        Channel-->>User: Teammate response streamed independently
    end

    alt Agent posts to chatroom [#team_id: msg]
        Processor->>Queue: postToChatRoom() → enqueueMessage per recipient
        Note over Queue: channel='chatroom' only if API-originated
    end
```

Comments Outside Diff (2)

  1. packages/teams/src/chatroom.ts, line 21-24 (link)

    Chatroom grouping only works for API-originated messages

    groupChatroomMessages identifies chatroom messages by checking msg.channel === 'chatroom', but agent-originated chatroom broadcasts (posted via postToChatRoom from handleTeamResponse) are enqueued with the original channel (e.g. 'discord', 'telegram'), not 'chatroom'. Only messages posted through the REST API endpoint (POST /api/chatroom/:teamId) pass channel: 'chatroom'.

    As a result, when an agent posts a [#team_id: message] broadcast and multiple teammates receive it simultaneously, those messages will not be batched into a single invocation — each recipient gets a separate invocation per chatroom message. This contradicts the documented behaviour in QUEUE.md:

    "When multiple chat room messages are pending for an agent, they're batched into a single invocation via groupChatroomMessages()."

    A more reliable discriminator would be based on the message_id prefix (chat_), the from_agent field being set, or always setting channel: 'chatroom' inside postToChatRoom regardless of the call site.

  2. packages/core/src/queues.ts, line 217-221 (link)

    Inconsistent sort order vs. analogous function

    getAgentMessages returns rows in ORDER BY created_at DESC (newest-first), while getChatMessages on line 237 uses ORDER BY created_at ASC (oldest-first). For a history/audit table where callers typically want chronological order (oldest first), the descending sort is surprising. If the intent is to keep API parity with getChatMessages, consider aligning the order.

    Note: If this suggestion doesn't match your team's coding style, reply to this and let me know. I'll remember it for next time!

Last reviewed commit: b93b411

Comment on lines +96 to 115

```diff
 // Extract teammate mentions and enqueue as flat DMs
 const teammateMentions = extractTeammateMentions(response, agentId, teamContext.teamId, teams, agents);
 if (teammateMentions.length > 0) {
   log('INFO', `Teammate mentions from @${agentId}: ${teammateMentions.map(m => `@${m.teammateId}`).join(', ')}`);
 }

-if (teammateMentions.length > 0 && conv.totalMessages < conv.maxMessages) {
-  incrementPending(conv, teammateMentions.length);
-  conv.outgoingMentions.set(agentId, teammateMentions.length);
-
-  for (const mention of teammateMentions) {
-    conv.pendingAgents.add(mention.teammateId);
-    log('INFO', `@${agentId} → @${mention.teammateId}`);
-    emitEvent('chain_handoff', { teamId: conv.teamContext.teamId, fromAgent: agentId, toAgent: mention.teammateId });
+for (const mention of teammateMentions) {
+  log('INFO', `@${agentId} → @${mention.teammateId}`);
+  emitEvent('chain_handoff', { teamId: teamContext.teamId, fromAgent: agentId, toAgent: mention.teammateId });

-    const internalMsg = `[Message from teammate @${agentId}]:\n${mention.message}`;
-    enqueueInternalMessage(conv.id, agentId, mention.teammateId, internalMsg, {
-      channel, sender, senderId: data.senderId, messageId,
-    });
-  }
-} else if (teammateMentions.length > 0) {
-  log('WARN', `Conversation ${conv.id} hit max messages (${conv.maxMessages}) — not enqueuing further mentions`);
+  const internalMsg = `[Message from teammate @${agentId}]:\n${mention.message}`;
+  enqueueMessage({
+    channel,
+    sender,
+    senderId: data.senderId ?? undefined,
+    message: internalMsg,
+    messageId: genId('internal'),
+    agent: mention.teammateId,
+    fromAgent: agentId,
+  });
 }
```

No loop-protection for agent mention chains

The old conversation tracker capped the total number of in-flight messages at 50 (totalMessages / loop protection) and sent agents a [N other teammate response(s) are still being processed…] indicator so they wouldn't re-mention already-queued teammates. Both mechanisms are gone in this PR.

With the flat-DM model, if two agents end up in a ping-pong mention cycle (A mentions B, B mentions A, …), the queue will grow without bound until the process is killed or the DB fills up. A minimal circuit breaker — e.g. refusing to enqueue a genId('internal') message when the same (fromAgent, toAgent) pair already has a pending/processing row — would prevent runaway chains without reintroducing full conversation state:

```typescript
// Before enqueueMessage inside the mentions loop
const existing = getDb().prepare(
    `SELECT 1 FROM messages WHERE agent=? AND from_agent=? AND status IN ('pending','processing') LIMIT 1`
).get(mention.teammateId, agentId);
if (existing) {
    log('WARN', `Loop guard: @${agentId}→@${mention.teammateId} already pending, skipping`);
    continue;
}
```

Without any guard, adversarial or misconfigured agent prompts can trivially create infinite enqueue loops.
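The effect of that guard on a ping-pong cycle can be modeled without a database. The queue shape and `tryEnqueueMention` helper below are illustrative, not the project's API: a mention is enqueued only when the same directed (from, to) pair has no unfinished message.

```typescript
// In-memory model of the (fromAgent → toAgent) loop guard described above.
interface PendingMention {
  from: string;
  to: string;
}

const queue: PendingMention[] = []; // stands in for pending/processing rows

function tryEnqueueMention(from: string, to: string): boolean {
  // Refuse to enqueue when this directed pair already has an unfinished message.
  if (queue.some((m) => m.from === from && m.to === to)) return false;
  queue.push({ from, to });
  return true;
}

// A ping-pong cycle stalls after the first round trip:
console.log(tryEnqueueMention("A", "B")); // true  — first A→B message
console.log(tryEnqueueMention("B", "A")); // true  — reply B→A
console.log(tryEnqueueMention("A", "B")); // false — duplicate A→B blocked, loop broken
```

A real implementation would also clear entries when messages complete, so legitimate repeat mentions are only deferred, never lost.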

… exports)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@jlia0 (Collaborator, Author) commented Mar 17, 2026

@greptileai review again is it safe to merge


greptile-apps bot commented Mar 17, 2026

Not safe for multi-agent teams without loop protection.

The critical blocker from my previous review remains: no circuit breaker for cyclic agent mentions. If two agents get into a mention loop (A→B→A→...), the queue will grow unbounded until the process crashes. The old 50-message cap is gone with no replacement.

Safe to merge if:

  • Single-agent deployments only, OR
  • You add the loop protection I suggested in my review comment (check for existing pending/processing messages between the same agent pair before enqueuing)

Still present:

  • Chatroom batching won't work for agent-originated posts (logic bug, not critical)
  • getAgentMessages returns DESC while getChatMessages returns ASC (minor inconsistency)

For production multi-agent teams: Add loop protection first. The suggested fix in my review comment is minimal (~5 lines) and doesn't require reintroducing conversation state.

For controlled/single-agent use: Safe to merge as-is.

jlia0 and others added 2 commits March 18, 2026 00:43
- Set channel to 'chatroom' in postToChatRoom() so groupChatroomMessages()
  correctly batches agent-originated broadcasts (was passing originating
  channel like 'discord', bypassing the batching logic)
- Make getChatMessages use DESC order to match getAgentMessages/getAllAgentMessages
- Remove sinceId parameter from all message query functions and their API routes

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Show full response text in streamResponse log instead of char count
- Show full message in processing and API enqueue logs (no truncation)
- Remove redundant "Routing to agent" log (agent is already in the processing line)
- Demote "Using Codex/Claude/OpenCode CLI" logs to DEBUG
- Collapse redundant per-mention logs into single line (@agent → @a, @b, @c)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
jlia0 and others added 6 commits March 18, 2026 15:31
…nature

- Log combined chatroom messages when batching in groupChatroomMessages
- Use fromAgent as sender in agent_messages for internal/chatroom messages
  (was using the original user e.g. "Web", now correctly shows the agent name)
- Remove "— sender via channel" signature appended to API messages
  (was adding "— Web via web" noise to every web UI message)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Update CLAUDE_MODEL_IDS and OPENCODE_MODEL_IDS so the 'sonnet' shorthand
resolves to claude-sonnet-4-6. Old explicit model IDs (4-5) still work.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Update default model references across code, CLI, docs, and UI.
Backward-compat mappings for claude-sonnet-4-5 remain in CLAUDE_MODEL_IDS
and OPENCODE_MODEL_IDS so existing configs still resolve.

Also includes system prompt caching in agent.ts (user change).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Remove CLAUDE_MODEL_IDS, CODEX_MODEL_IDS, OPENCODE_MODEL_IDS and their
three resolve functions. Replace with a single MODEL_ALIASES map and
resolveModel(model, provider). Only shorthand aliases ('sonnet', 'opus')
are kept — everything else passes through as-is to the CLI.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
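The single alias map plus resolver described in this commit can be sketched as follows. Only the 'sonnet' mapping is shown, since its target is stated in the earlier commit message; the 'opus' target is omitted rather than guessed, and the `_provider` parameter is shown only to match the described signature.

```typescript
// Sketch of the MODEL_ALIASES map + resolveModel(model, provider) described above.
// Only shorthand aliases live in the map; everything else passes through to the CLI.
const MODEL_ALIASES: Record<string, string> = {
  sonnet: "claude-sonnet-4-6",
  // 'opus' would map to the corresponding Opus ID (omitted here, not stated in the PR)
};

function resolveModel(model: string, _provider?: string): string {
  return MODEL_ALIASES[model] ?? model;
}

console.log(resolveModel("sonnet"));        // → "claude-sonnet-4-6"
console.log(resolveModel("gpt-5.4-codex")); // → "gpt-5.4-codex" (pass-through)
```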
Add gpt-5.4-codex, gpt-5.4, gpt-5.4-mini, gpt-5.4-nano to OpenAI options.
Add openai/gpt-5.4-codex to OpenCode options. Remove gpt-5.2.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add gpt-5.3-codex-spark, glm-5, kimi-k2.5, kimi-k2.5-free,
minimax-m2.5, minimax-m2.5-free to OpenCode model selection.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@jlia0 jlia0 merged commit 8690ece into main Mar 18, 2026
@jlia0 jlia0 deleted the jlia0/simplify-queue-schema branch March 18, 2026 10:37