feat: add AgentSessionConfig for session governance (turn limits, handoff) #2321
ixiadao wants to merge 11 commits into agentscope-ai:main
Conversation
…pt/TodoReminder middleware, teams tools, mailbox queue modes, task board, /poll command, wake agent callback
….art) - filter x-stainless-* telemetry headers - replace the AsyncAnthropic User-Agent with a generic httpx UA - inject an x-api-key header for compatibility with non-standard relay authentication - only applies to custom base_url values other than api.anthropic.com
…hroma vector search
- shell.py: sudo detection, permission-error checks, temp files instead of pipes, Windows cmd fix - file_search.py: full rewrite with timeout control, cancellation, output truncation, skipping _SKIP_DIRS - customized_skills: agent-teams/self-improvement/skill-creator/workspace-standard/role-factory
…nlock, skill docs, workflow summary)
…lock, TeamManager/RelationshipStore impl, workflow summary in team_task, SKILL.md docs update
…Hook (session governance + compression metadata), token counter list content fix
…ndard (name+description required, triggers/metadata optional)
…doff) - Add AgentSessionConfig model with max_session_turns, handoff_enabled, handoff_auto_interval, compression_mark fields - Add session field to AgentProfileConfig (default_factory, backward compat) - Enables MemoryCompactionHook to read session config from agent profile - Default: max_session_turns=0 (unlimited), handoff_enabled=True
Hi @ixiadao, this is your 8th Pull Request.

📋 About PR Template

To help maintainers review your PR faster, please make sure to include the requested details. Complete PR information helps speed up the review process. You can edit the PR description to add these details.

🙌 Join Developer Community

Thanks so much for your contribution! We'd love to invite you to join the official CoPaw developer group! You can find the Discord and DingTalk group links under the "Developer Community" section on our docs page. We truly appreciate your enthusiasm and look forward to your future contributions! 😊 We'll review your PR soon.
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request significantly upgrades the agent's operational intelligence and collaborative capabilities. It introduces robust session management features, a complete framework for multi-agent team collaboration, and a suite of middleware to enhance agent reliability and context awareness. These changes aim to make agents more autonomous, better integrated into team workflows, and more resilient to common operational challenges.
Code Review
This pull request introduces significant enhancements to CoPaw's agent capabilities, focusing on multi-agent collaboration, session management, and robustness. Key changes include a new local embedding server, a skill validation script, and a comprehensive health check script. The agent's command handler now supports a /poll command for detailed status updates. New hooks and middleware have been integrated for session continuity (HandoffHook), memory compaction with metadata, loop detection, todo reminders, and graceful interruption handling.

A major addition is the teams module, providing tools for managing agent teams, shared task boards with state transitions and skill-gated claiming, inter-agent mailbox communication, discussion rooms, and relationship management. File search and shell execution tools have been refined for better performance and cross-platform compatibility. Additionally, the Anthropic provider now handles custom proxy services by stripping specific headers, and a new security rule detects potentially hanging sudo commands.

Several issues were identified: an AttributeError in AutoPoll's followup message handling, a Unix-specific file locking mechanism in task_board.py, a hardcoded path in health-check.sh, an inefficient sorting method in _process_poll, and unconventional imports in auto_poll.py. The _process_poll method was also noted for its excessive length and multiple responsibilities, suggesting a need for refactoring.
```python
try:
    auto_poll = getattr(self, "_auto_poll_middleware", None)
    if auto_poll is not None:
        followup_msg = await auto_poll.get_followup_msg()
```
The code attempts to call auto_poll.get_followup_msg(), but the AutoPollMiddleware class defines get_followup_summary(), which is not async and returns a string. This will result in an AttributeError at runtime. You should call the correct method and construct a Msg object from its result.
```diff
-        followup_msg = await auto_poll.get_followup_msg()
+        summary_str = auto_poll.get_followup_summary()
+        if summary_str:
+            followup_msg = Msg(name="system", role="system", content=summary_str)
```
```python
import json
import logging
import time
import fcntl
```
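As the review summary notes, `fcntl` is Unix-only, so `task_board.py` cannot run on Windows as written. A minimal cross-platform sketch (hypothetical helpers, not code from this PR) dispatches to `fcntl` on POSIX and `msvcrt` on Windows:

```python
import os

def lock_file(f) -> None:
    """Take an exclusive lock on an open file object, on Unix or Windows."""
    if os.name == "posix":
        import fcntl
        fcntl.flock(f.fileno(), fcntl.LOCK_EX)
    else:
        import msvcrt
        # Lock the first byte of the file as the lock region.
        msvcrt.locking(f.fileno(), msvcrt.LK_LOCK, 1)

def unlock_file(f) -> None:
    """Release the lock taken by lock_file."""
    if os.name == "posix":
        import fcntl
        fcntl.flock(f.fileno(), fcntl.LOCK_UN)
    else:
        import msvcrt
        msvcrt.locking(f.fileno(), msvcrt.LK_UNLCK, 1)
```

An alternative is a lock-directory or lock-file convention (`os.mkdir` is atomic on all platforms), which avoids platform-specific APIs entirely.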
```shell
set -uo pipefail

PORT="${1:-8088}"
COPAW_DIR="${COPAW_DIR:-/home/ixiadao/.copaw}"
```
The default value for COPAW_DIR is hardcoded to a specific user's home directory (/home/ixiadao). This will cause the script to fail for any other user. It's better to use a more generic path, such as $HOME/.copaw.
```diff
-COPAW_DIR="${COPAW_DIR:-/home/ixiadao/.copaw}"
+COPAW_DIR="${COPAW_DIR:-$HOME/.copaw}"
```
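For illustration, `${VAR:-default}` only falls back when the variable is unset or empty, so the suggested fix keeps explicit `COPAW_DIR` overrides working while defaulting to the current user's home:

```shell
# Falls back when COPAW_DIR is unset or empty:
unset COPAW_DIR
echo "${COPAW_DIR:-$HOME/.copaw}"   # prints $HOME/.copaw, expanded for the current user

# An explicit override still wins:
COPAW_DIR=/tmp/custom
echo "${COPAW_DIR:-$HOME/.copaw}"   # prints /tmp/custom
```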
```python
for tid, msgs in by_thread.items():
    # Latest message in thread drives display
    latest = sorted(msgs, key=lambda m: float(m.get("created_at", 0) or 0))[-1]
```
Using sorted(msgs, key=...)[-1] to find the latest message is inefficient as it sorts the entire list. For better performance, especially with a large number of messages, you should use max() with a key function.
```diff
-latest = sorted(msgs, key=lambda m: float(m.get("created_at", 0) or 0))[-1]
+latest = max(msgs, key=lambda m: float(m.get("created_at", 0) or 0))
```
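A quick sanity check with toy messages (hypothetical data, same key function as the snippet) shows both forms select the same element; `max()` just does it in one O(n) pass instead of an O(n log n) sort:

```python
msgs = [
    {"created_at": "3.0"},
    {"created_at": None},   # falsy timestamps fall back to 0, as in the key function
    {"created_at": "5.5"},
]
key = lambda m: float(m.get("created_at", 0) or 0)

# Both pick the message with created_at == "5.5".
assert max(msgs, key=key) == sorted(msgs, key=key)[-1]
```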
```python
async def _process_poll(
    self,
    _messages: list[Msg],
    _args: str = "",
) -> Msg:
    """Process /poll command to expand and display all pending mailbox & task updates.

    Reads inbox files and task board directly (bypasses AutoPoll collapse rules),
    then returns a structured digest covering:
    - Urgent / blocker items
    - Pending followups (submit/review/rework)
    - Silent / collapsed items (count)
    - Per-thread summary
    """
    try:
        # Get workspace dir from agent config
        agent_config = self._get_agent_config()
        workspace_dir = Path(agent_config.workspace_dir)
    except Exception:
        return await self._make_system_msg(
            "**无法获取 workspace 目录**\n\n"
            "- Agent 配置可能未加载",
        )

    parts: list[str] = ["**📡 轮询详情**\n"]
    total_count = 0

    # 1) Inbox messages grouped by thread
    inbox_dir = workspace_dir / "mailbox" / "inbox"
    if inbox_dir.exists():
        files = sorted(inbox_dir.glob("*.json"), key=lambda p: p.stat().st_mtime)
        if files:
            parsed: list[dict] = []
            for f in files:
                try:
                    parsed.append(json.loads(f.read_text(encoding="utf-8")))
                except Exception:
                    continue

            # Group by thread
            by_thread: dict[str, list[dict]] = {}
            for msg in parsed:
                tid = str(msg.get("thread_id") or msg.get("task_id") or msg.get("id", ""))
                by_thread.setdefault(tid, []).append(msg)

            urgent_items: list[str] = []
            followup_items: list[str] = []
            silent_count = 0

            for tid, msgs in by_thread.items():
                # Latest message in thread drives display
                latest = sorted(msgs, key=lambda m: float(m.get("created_at", 0) or 0))[-1]
                kind = str(latest.get("msg_kind", latest.get("msg_type", "general"))).lower()
                priority = str(latest.get("priority", "normal")).lower()
                queue_mode = str(latest.get("queue_mode", ""))
                effective_mode = queue_mode or (
                    "steer" if priority == "urgent" or kind in {"blocker", "blocked", "urgent"}
                    else "followup" if kind in {"submit", "review", "rework"}
                    else "collect"
                )

                if effective_mode == "steer" or kind in {"blocker", "blocked"}:
                    lines = []
                    for m in msgs[:5]:
                        content = str(m.get("content", "")).strip().splitlines()[0][:120]
                        agent = m.get("from_agent", "?")
                        ts = datetime.fromtimestamp(
                            float(m.get("created_at", 0) or 0)
                        ).strftime("%H:%M")
                        lines.append(f" [{ts}] {agent}: {content}")
                    urgent_items.append(
                        f"**[{tid}]** *(blocker/urgent)*\n" + "\n".join(lines)
                    )
                elif effective_mode == "followup":
                    content = str(latest.get("content", "")).strip().splitlines()[0][:100]
                    agent = latest.get("from_agent", "?")
                    ts = datetime.fromtimestamp(
                        float(latest.get("created_at", 0) or 0)
                    ).strftime("%H:%M")
                    kind_label = {"submit": "📤 submit", "review": "🔍 review", "rework": "🔧 rework"}.get(
                        kind, f"📋 {kind}"
                    )
                    followup_items.append(
                        f"**[{tid}]** {kind_label} · {agent} @ {ts}\n"
                        f" {content}"
                    )
                else:
                    silent_count += 1

            total_count = len(parsed)

            if urgent_items:
                parts.append(f"\n🚨 **阻断/紧急** ({len(urgent_items)} 个线程)")
                for item in urgent_items[:10]:
                    parts.append(item)
            if followup_items:
                parts.append(f"\n📋 **待跟进** ({len(followup_items)} 条)")
                for item in followup_items[:15]:
                    parts.append(item)
            if silent_count > 0:
                parts.append(f"\n📦 **已折叠常规消息** ({silent_count} 条)")

            if not urgent_items and not followup_items and not silent_count:
                parts.append("\n✅ 暂无待处理消息")
        else:
            parts.append("\n📭 收件箱为空")
    else:
        parts.append("\n📭 无收件箱目录")

    # 2) Task board quick snapshot
    try:
        teams_dir = workspace_dir.parent / "shared" / "teams"
        if teams_dir.exists():
            team_lines: list[str] = []
            for team_d in sorted(teams_dir.iterdir()):
                if not team_d.is_dir():
                    continue
                tasks_file = team_d / "tasks.json"
                if not tasks_file.exists():
                    continue
                try:
                    tasks = json.loads(tasks_file.read_text(encoding="utf-8"))
                except Exception:
                    continue
                blocker_tasks = [
                    t for t in tasks
                    if str(t.get("status", "")).lower() == "blocked"
                ]
                urgent_tasks = [
                    t for t in tasks
                    if str(t.get("priority", "normal")).lower() == "urgent"
                ]
                if blocker_tasks or urgent_tasks:
                    team_lines.append(f"\n**团队: {team_d.name}**")
                    for t in blocker_tasks:
                        team_lines.append(
                            f" 🔴 BLOCKED [{t.get('id', '-')}]: {str(t.get('title', ''))[:80]}"
                        )
                    for t in urgent_tasks:
                        team_lines.append(
                            f" 🟠 URGENT [{t.get('id', '-')}]: {str(t.get('title', ''))[:80]}"
                        )
            if team_lines:
                parts.append("\n---\n**🚦 任务板急事**")
                parts.extend(team_lines)
    except Exception as e:
        parts.append(f"\n⚠️ 任务板读取失败: {e}")

    parts.append(f"\n---\n总计 {total_count} 条消息 · `输入 /poll 刷新`")

    # F3: structured summary — 4 modules
    try:
        import json as _json
        # Module 1: mailbox backlog by queue_mode
        inbox_dir = workspace_dir / "mailbox" / "inbox"
        if inbox_dir.exists():
            mode_counts: dict[str, int] = {"steer": 0, "collect": 0, "followup": 0, "other": 0}
            for f in inbox_dir.glob("*.json"):
                try:
                    d = _json.loads(f.read_text(encoding="utf-8"))
                    mode = d.get("queue_mode", "other") or "other"
                    if mode not in mode_counts:
                        mode = "other"
                    mode_counts[mode] += 1
                except Exception:
                    pass
            parts.append(
                f"\n📬 **收件箱积压**:steer={mode_counts['steer']} "
                f"followup={mode_counts['followup']} collect={mode_counts['collect']} other={mode_counts['other']}"
            )

        # Module 2: task board status distribution
        try:
            teams_root = workspace_dir.parent
            status_total: dict[str, int] = {}
            for tf in teams_root.glob("*/teams/*/tasks.json"):
                try:
                    tasks_data = _json.loads(tf.read_text(encoding="utf-8"))
                    for t in tasks_data:
                        s = t.get("status", "unknown")
                        status_total[s] = status_total.get(s, 0) + 1
                except Exception:
                    pass
            if status_total:
                status_str = " ".join(f"{k}={v}" for k, v in sorted(status_total.items()))
                parts.append(f"\n📋 **任务状态分布**:{status_str}")
        except Exception:
            pass

        # Module 3: active rooms
        try:
            rooms_dir = workspace_dir / "mailbox" / "rooms"
            if rooms_dir.exists():
                active_rooms = []
                for meta_f in rooms_dir.glob("*/meta.json"):
                    try:
                        meta = _json.loads(meta_f.read_text(encoding="utf-8"))
                        if meta.get("status") == "active":
                            active_rooms.append(f"{meta.get('name','?')}(round={meta.get('current_round',0)}")
                    except Exception:
                        pass
                if active_rooms:
                    parts.append(f"\n💬 **活跃讨论室** ({len(active_rooms)}个):" + "、".join(active_rooms[:5]))
                else:
                    parts.append("\n💬 **活跃讨论室**:无")
        except Exception:
            pass

        # Module 4: AutoPoll metrics
        try:
            metrics_file = workspace_dir / "autopoll_metrics.json"
            if metrics_file.exists():
                m = _json.loads(metrics_file.read_text(encoding="utf-8"))
                last_updated = m.get("last_updated", "未知")
                sent = m.get("notice_sent", 0)
                skipped = m.get("cooldown_skipped", 0)
                parts.append(f"\n📡 **AutoPoll**:已推送 {sent} 次,冷却跳过 {skipped} 次,最后更新 {last_updated}")
        except Exception:
            pass
    except Exception as _fe:
        logger.debug("F3 structured summary failed: %s", _fe)

    return await self._make_system_msg("\n".join(parts))
```
The _process_poll method is over 200 lines long and handles multiple distinct responsibilities (processing inbox, task board, generating summaries). This complexity makes the method difficult to read, test, and maintain. Consider refactoring it by breaking it down into smaller, more focused helper methods for each logical block (e.g., _process_inbox_for_poll, _get_task_board_snapshot, _generate_structured_summary).
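A hypothetical skeleton of that refactor might look like the following; the helper names come from the review comment, and the bodies are stand-ins rather than the real implementation:

```python
import asyncio

class PollCommandDemo:
    """Sketch only: each logical block of _process_poll becomes a
    focused helper that returns its own list of digest lines."""

    def _process_inbox_for_poll(self) -> list[str]:
        # Stand-in for the inbox grouping/urgency logic.
        return ["📭 收件箱为空"]

    def _get_task_board_snapshot(self) -> list[str]:
        # Stand-in for the task-board blocker/urgent scan.
        return []

    def _generate_structured_summary(self) -> list[str]:
        # Stand-in for the F3 four-module summary.
        return ["📡 AutoPoll: ok"]

    async def process_poll(self) -> str:
        parts = ["**📡 轮询详情**"]
        parts += self._process_inbox_for_poll()
        parts += self._get_task_board_snapshot()
        parts += self._generate_structured_summary()
        return "\n".join(parts)

print(asyncio.run(PollCommandDemo().process_poll()))
```

Each helper then becomes independently unit-testable, and the top-level method reads as a table of contents for the digest.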
```python
existing["last_updated"] = __import__('time').strftime("%Y-%m-%dT%H:%M:%SZ", __import__('time').gmtime())
metrics_file.write_text(
    __import__('json').dumps(existing, ensure_ascii=False, indent=2),
    encoding="utf-8",
```
Using __import__('time') and __import__('json') inside a function is unconventional and harms readability. It's standard practice to place all imports at the top of the file. If there's a concern about name clashes, you can use aliasing (e.g., import json as json_lib).
```diff
-existing["last_updated"] = __import__('time').strftime("%Y-%m-%dT%H:%M:%SZ", __import__('time').gmtime())
-metrics_file.write_text(
-    __import__('json').dumps(existing, ensure_ascii=False, indent=2),
-    encoding="utf-8",
+import time
+
+existing["last_updated"] = time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime())
+metrics_file.write_text(
+    json.dumps(existing, ensure_ascii=False, indent=2),
```
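To illustrate the aliasing point with a self-contained example (hypothetical payload; `gmtime(0)` pins the timestamp to the epoch so the output is deterministic):

```python
import json as json_lib  # top-of-file alias instead of __import__('json')
import time

payload = {"last_updated": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime(0))}

# Round-trips cleanly through the aliased module.
assert json_lib.loads(json_lib.dumps(payload))["last_updated"] == "1970-01-01T00:00:00Z"
```

`__import__` performs the same import machinery on every call, so beyond readability there is nothing gained by deferring it inside a function here.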

**Summary**

Adds `AgentSessionConfig` to `config.py` to wire up the session governance features already present in `MemoryCompactionHook` and `HandoffHook`.

**Changes**

- New `AgentSessionConfig` model with fields:
  - `max_session_turns` (int, default 0 = unlimited): warn the user to start a new session when reached
  - `handoff_enabled` (bool, default True): generate a handoff manifest on compression or at the turn limit
  - `handoff_auto_interval` (int, default 0): generate a handoff manifest every N turns
  - `compression_mark` (bool, default True): annotate compressed messages with token metadata
- New `session: AgentSessionConfig` field on `AgentProfileConfig`, using `default_factory` for backward compatibility

**Why**

`MemoryCompactionHook` already reads `agent_config.session` (see the `_pre_reasoning` hook), but `AgentSessionConfig` was never defined in `config.py`, causing an `AttributeError` at runtime. This PR completes the implementation.

**Backward Compatibility**

Fully backward compatible: existing `agent.json` files without a `session` key get sensible defaults (unlimited turns, handoff enabled, no auto-interval).
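As a rough sketch of the shape described above (the actual PR likely uses the project's config base classes; a plain dataclass is shown here just to make the `default_factory` backward-compatibility point concrete):

```python
from dataclasses import dataclass, field

@dataclass
class AgentSessionConfig:
    max_session_turns: int = 0      # 0 = unlimited; warn when the limit is reached
    handoff_enabled: bool = True    # handoff manifest on compression / turn limit
    handoff_auto_interval: int = 0  # manifest every N turns; 0 disables
    compression_mark: bool = True   # annotate compressed messages with token metadata

@dataclass
class AgentProfileConfig:
    # default_factory means profiles loaded without a "session" key
    # transparently get the defaults, keeping old agent.json files valid.
    session: AgentSessionConfig = field(default_factory=AgentSessionConfig)

profile = AgentProfileConfig()  # no session supplied
assert profile.session.max_session_turns == 0
assert profile.session.handoff_enabled
```

The `default_factory` (rather than a shared default instance) also ensures each profile gets its own mutable `session` object.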