refactor(queue): simplify schema and remove conversation state #213
Flatten agent communication by removing the conversation tracker entirely. Agents now communicate via direct flat DMs — no pending counters, no response aggregation, no conversation state. This simplifies the architecture significantly while maintaining the same functionality: responses stream immediately, and teammate mentions become new messages.

Key changes:
- Replace timestamp+random message IDs with shorter nanoid (8 char) IDs with prefixes
- Remove unused `files` column from messages table (not consumed by agent processing)
- Remove `conversation_id` column from messages table (no conversation tracking needed)
- Remove Conversation/ChainStep types, conversations Map, pending counters
- Make handleTeamResponse() stateless: stream response → extract mentions → enqueue flat DMs
- Add getAgentQueueStatus() query for per-agent queue depth monitoring
- Add GET /api/queue/agents endpoint
- Update server startApiServer() to not require a conversations parameter
- Add nanoid dependency to core package
- Simplify docs to reflect the flat messaging model

No behavior changes for end users — responses still stream immediately, and team orchestration still works via @mentions and #chat_rooms. The agent_messages and chat_messages tables persist forever (never pruned) as permanent audit logs.

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
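The prefixed short-ID scheme described above could look roughly like this. A minimal sketch, assuming a nanoid-style 8-character generator (stood in here by Node's `crypto`, since only the shape matters) and a `genId(prefix)` helper; the alphabet and helper name are assumptions, not the PR's actual code:

```typescript
import { randomBytes } from "crypto";

// Assumed alphabet; the real implementation uses nanoid(8).
const ALPHABET =
  "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ";

// Generate an 8-char random suffix from the alphabet.
function shortId(len = 8): string {
  const bytes = randomBytes(len);
  let out = "";
  for (let i = 0; i < len; i++) out += ALPHABET[bytes[i] % ALPHABET.length];
  return out;
}

// Hypothetical genId: prefix identifies the message source
// (e.g. 'discord', 'internal', 'api'), suffix is the random part.
function genId(prefix: string): string {
  return `${prefix}_${shortId()}`;
}
```

Compared to timestamp+random IDs, the prefix makes a message's origin visible at a glance in logs and queue dumps.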
Greptile Summary

This PR replaces the stateful conversation-tracker model with a flat, stateless DM model: agents now stream responses immediately rather than aggregating them, and every teammate mention becomes a new queued message.

Key changes:
Issues found:
Confidence Score: 2/5
Important Files Changed
Sequence Diagram

sequenceDiagram
participant User
participant Channel as Channel (Discord/Telegram)
participant Queue as SQLite Queue
participant Processor as Queue Processor
participant Leader as Leader Agent
participant Teammate as Teammate Agent
User->>Channel: Send message
Channel->>Queue: enqueueMessage(genId('discord'))
Queue-->>Processor: message:enqueued event
Processor->>Queue: claimAllPendingMessages(agentId)
Processor->>Leader: invokeAgent()
Leader-->>Processor: response text
Processor->>Queue: insertAgentMessage(role='assistant')
Processor->>Channel: streamResponse() → enqueueResponse()
Channel-->>User: Response streamed immediately
alt Agent mentions teammate [@teammate: msg]
Processor->>Queue: enqueueMessage(genId('internal'), agent=teammate)
Queue-->>Processor: message:enqueued event
Processor->>Queue: claimAllPendingMessages(teammate)
Processor->>Teammate: invokeAgent()
Teammate-->>Processor: response text
Processor->>Channel: streamResponse() → enqueueResponse()
Channel-->>User: Teammate response streamed independently
end
alt Agent posts to chatroom [#team_id: msg]
Processor->>Queue: postToChatRoom() → enqueueMessage per recipient
Note over Queue: channel='chatroom' only if API-originated
end
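The mention branch in the diagram assumes agent responses embed handoffs in a `@teammate: message` shape. A minimal sketch of what extraction might look like — the real `extractTeammateMentions` also takes team and agent registries and a team ID; the regex, the `Mention` shape, and the simplified signature here are all assumptions:

```typescript
interface Mention {
  teammateId: string;
  message: string;
}

// Hypothetical simplified extractor: scan the response for
// "@name: text" patterns and keep only known teammates.
function extractMentions(response: string, teammates: Set<string>): Mention[] {
  const out: Mention[] = [];
  const re = /@([A-Za-z0-9_-]+):\s*([^\n]+)/g;
  let m: RegExpExecArray | null;
  while ((m = re.exec(response)) !== null) {
    // Ignore mentions of names that are not on the team.
    if (teammates.has(m[1])) {
      out.push({ teammateId: m[1], message: m[2].trim() });
    }
  }
  return out;
}
```

Each extracted mention then becomes one flat DM enqueued for that teammate, with no shared conversation record tying the messages together.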
+ // Extract teammate mentions and enqueue as flat DMs
+ const teammateMentions = extractTeammateMentions(response, agentId, teamContext.teamId, teams, agents);
+ if (teammateMentions.length > 0) {
+   log('INFO', `Teammate mentions from @${agentId}: ${teammateMentions.map(m => `@${m.teammateId}`).join(', ')}`);
+ }
- if (teammateMentions.length > 0 && conv.totalMessages < conv.maxMessages) {
-   incrementPending(conv, teammateMentions.length);
-   conv.outgoingMentions.set(agentId, teammateMentions.length);
  for (const mention of teammateMentions) {
-   conv.pendingAgents.add(mention.teammateId);
    log('INFO', `@${agentId} → @${mention.teammateId}`);
-   emitEvent('chain_handoff', { teamId: conv.teamContext.teamId, fromAgent: agentId, toAgent: mention.teammateId });
+   emitEvent('chain_handoff', { teamId: teamContext.teamId, fromAgent: agentId, toAgent: mention.teammateId });
    const internalMsg = `[Message from teammate @${agentId}]:\n${mention.message}`;
-   enqueueInternalMessage(conv.id, agentId, mention.teammateId, internalMsg, {
-     channel, sender, senderId: data.senderId, messageId,
+   enqueueMessage({
+     channel,
+     sender,
+     senderId: data.senderId ?? undefined,
+     message: internalMsg,
+     messageId: genId('internal'),
+     agent: mention.teammateId,
+     fromAgent: agentId,
+   });
  }
- } else if (teammateMentions.length > 0) {
-   log('WARN', `Conversation ${conv.id} hit max messages (${conv.maxMessages}) — not enqueuing further mentions`);
- }
No loop-protection for agent mention chains
The old conversation tracker capped the total number of in-flight messages at 50 (totalMessages / loop protection) and sent agents a [N other teammate response(s) are still being processed…] indicator so they wouldn't re-mention already-queued teammates. Both mechanisms are gone in this PR.
With the flat-DM model, if two agents end up in a ping-pong mention cycle (A mentions B, B mentions A, …), the queue will grow without bound until the process is killed or the DB fills up. A minimal circuit breaker — e.g. refusing to enqueue a genId('internal') message when the same (fromAgent, toAgent) pair already has a pending/processing row — would prevent runaway chains without reintroducing full conversation state:
// Before enqueueMessage inside the mentions loop
const existing = getDb().prepare(
`SELECT 1 FROM messages WHERE agent=? AND from_agent=? AND status IN ('pending','processing') LIMIT 1`
).get(mention.teammateId, agentId);
if (existing) {
log('WARN', `Loop guard: @${agentId}→@${mention.teammateId} already pending, skipping`);
continue;
}

Without any guard, adversarial or misconfigured agent prompts can trivially create infinite enqueue loops.
… exports) Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@greptileai review again — is it safe to merge?
Not safe for multi-agent teams without loop protection. The critical blocker from my previous review remains: no circuit breaker for cyclic agent mentions. If two agents get into a mention loop (A→B→A→...), the queue will grow unbounded until the process crashes. The old 50-message cap is gone with no replacement.

Safe to merge if:

Still present:

For production multi-agent teams: add loop protection first. The suggested fix in my review comment is minimal (~5 lines) and doesn't require reintroducing conversation state. For controlled/single-agent use: safe to merge as-is.
- Set channel to 'chatroom' in postToChatRoom() so groupChatroomMessages() correctly batches agent-originated broadcasts (was passing the originating channel like 'discord', bypassing the batching logic)
- Make getChatMessages use DESC order to match getAgentMessages/getAllAgentMessages
- Remove sinceId parameter from all message query functions and their API routes

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
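Why the channel value matters for batching can be sketched as follows. This is a hypothetical reconstruction of `groupChatroomMessages` — the real function's signature and merge format are not shown in the PR; the `QueuedMessage` shape and the `[@agent]: text` join format are assumptions:

```typescript
interface QueuedMessage {
  id: string;
  channel: string;      // 'discord', 'telegram', 'chatroom', ...
  fromAgent?: string;   // set for agent-originated messages
  message: string;
}

// Hypothetical sketch: merge all pending chatroom rows for one agent
// into a single combined message, leaving other channels untouched.
// If postToChatRoom() passes the originating channel (e.g. 'discord')
// instead of 'chatroom', the filter below never matches and batching
// is silently bypassed — which is the bug this commit fixes.
function groupChatroomMessages(batch: QueuedMessage[]): QueuedMessage[] {
  const chatroom = batch.filter((m) => m.channel === "chatroom");
  const rest = batch.filter((m) => m.channel !== "chatroom");
  if (chatroom.length <= 1) return batch;
  const combined: QueuedMessage = {
    id: chatroom[0].id,
    channel: "chatroom",
    message: chatroom
      .map((m) => `[@${m.fromAgent ?? "unknown"}]: ${m.message}`)
      .join("\n"),
  };
  return [...rest, combined];
}
```

With the fix, an agent invocation sees one combined chatroom digest instead of N separate invocations, one per broadcast.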
- Show full response text in streamResponse log instead of char count
- Show full message in processing and API enqueue logs (no truncation)
- Remove redundant "Routing to agent" log (agent is already in the processing line)
- Demote "Using Codex/Claude/OpenCode CLI" logs to DEBUG
- Collapse redundant per-mention logs into a single line (@agent → @a, @b, @c)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…nature

- Log combined chatroom messages when batching in groupChatroomMessages
- Use fromAgent as sender in agent_messages for internal/chatroom messages (was using the original user, e.g. "Web"; now correctly shows the agent name)
- Remove "— sender via channel" signature appended to API messages (was adding "— Web via web" noise to every web UI message)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Update CLAUDE_MODEL_IDS and OPENCODE_MODEL_IDS so the 'sonnet' shorthand resolves to claude-sonnet-4-6. Old explicit model IDs (4-5) still work. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Update default model references across code, CLI, docs, and UI. Backward-compat mappings for claude-sonnet-4-5 remain in CLAUDE_MODEL_IDS and OPENCODE_MODEL_IDS so existing configs still resolve. Also includes system prompt caching in agent.ts (user change). Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Remove CLAUDE_MODEL_IDS, CODEX_MODEL_IDS, OPENCODE_MODEL_IDS and their
three resolve functions. Replace with a single MODEL_ALIASES map and
resolveModel(model, provider). Only shorthand aliases ('sonnet', 'opus')
are kept — everything else passes through as-is to the CLI.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
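The single-map design described in this commit could look roughly like this. A minimal sketch: the provider names, the `opencode` alias targets, and the opus model ID are assumptions (only the `sonnet → claude-sonnet-4-6` mapping is stated elsewhere in this thread); everything not in the map passes through unchanged:

```typescript
type Provider = "claude" | "codex" | "opencode";

// Hypothetical MODEL_ALIASES: only shorthand aliases are listed;
// per-provider targets are assumptions for illustration.
const MODEL_ALIASES: Record<string, Partial<Record<Provider, string>>> = {
  sonnet: {
    claude: "claude-sonnet-4-6",
    opencode: "anthropic/claude-sonnet-4-6",
  },
  opus: {
    claude: "claude-opus-4-6",
    opencode: "anthropic/claude-opus-4-6",
  },
};

// Resolve a shorthand for the given provider; anything unknown
// (including full model IDs) is passed through as-is to the CLI.
function resolveModel(model: string, provider: Provider): string {
  return MODEL_ALIASES[model]?.[provider] ?? model;
}
```

This replaces three provider-specific maps and resolve functions with one lookup, and keeps backward compatibility trivially: explicit model IDs never hit the alias map.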
Add gpt-5.4-codex, gpt-5.4, gpt-5.4-mini, gpt-5.4-nano to OpenAI options. Add openai/gpt-5.4-codex to OpenCode options. Remove gpt-5.2. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add gpt-5.3-codex-spark, glm-5, kimi-k2.5, kimi-k2.5-free, minimax-m2.5, minimax-m2.5-free to OpenCode model selection. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Summary
Flatten agent communication by removing the conversation tracker entirely. Agents now communicate via direct flat DMs with immediate response streaming — no pending counters, no response aggregation, no conversation state. This significantly simplifies the architecture while maintaining all existing functionality.
Changes
- Replace timestamp+random message IDs with shorter prefixed nanoid IDs (e.g. api_a1b2c3d4, internal_x9y8z7w6)
- Remove unused files column from messages table (channels download files but agents never receive them)
- Remove conversation_id tracking

Testing
Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>