
feat: add ClawdHub skill registry as import source #183

Merged
marcusquinn merged 1 commit into main from feature/clawdhub-skill-import
Jan 24, 2026
Conversation

@marcusquinn (Owner) commented Jan 24, 2026

Summary

  • Adds ClawdHub (clawdhub.com) as a new skill import source alongside GitHub
  • Uses Playwright browser automation to extract SKILL.md content from ClawdHub's SPA (no public API for file content)
  • Imports two initial skills: caldav-calendar and proxmox-full
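Because ClawdHub exposes no public API for raw file content, the helper drives a headless browser. A minimal sketch of the approach, generating a throwaway Playwright script into a temp project, with hypothetical function and file names (the real helper's internals differ):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical sketch: write a small Playwright ESM script that renders a
# ClawdHub skill page and prints its content. The helper would run this with
# node after `npm install playwright` in a temp project.
write_playwright_fetcher() {
  local dir="$1" owner="$2" slug="$3"
  cat > "${dir}/fetch-skill.mjs" <<EOF
import { chromium } from 'playwright';

const browser = await chromium.launch();
const page = await browser.newPage();
await page.goto('https://clawdhub.com/${owner}/${slug}');
// Selector is an assumption; the real helper targets the rendered SKILL.md.
const content = await page.textContent('main');
console.log(content);
await browser.close();
EOF
}

tmp=$(mktemp -d)
write_playwright_fetcher "$tmp" mSarheed proxmox-full
grep -q "chromium.launch" "${tmp}/fetch-skill.mjs" && echo "fetcher script written"
rm -rf "$tmp"
```

Generating the script per-invocation keeps the Node project disposable, which matches the "temporary Playwright install" behaviour described later in the review.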

Changes

New Files

  • .agent/scripts/clawdhub-helper.sh — Playwright-based SKILL.md fetcher with API metadata, search, and CLI fallback
  • .agent/tools/productivity/caldav-calendar-skill.md — CalDAV calendar sync via vdirsyncer + khal (ClawdHub v1.0.1)
  • .agent/services/hosting/proxmox-full-skill.md — Complete Proxmox VE management via REST API (ClawdHub v1.0.0)

Modified Files

  • .agent/scripts/add-skill-helper.sh — Added clawdhub: prefix and clawdhub.com URL detection, new cmd_add_clawdhub function, expanded category detection (proxmox, calendar)
  • .agent/configs/skill-sources.json — Registered both new skills with format_detected: "clawdhub"
  • .agent/scripts/commands/add-skill.md — Updated slash command docs with ClawdHub examples
  • .agent/tools/build-agent/add-skill.md — Updated architecture docs, category table, popular skills section

Usage

# Import from ClawdHub (shorthand)
aidevops skill add clawdhub:caldav-calendar

# Import from ClawdHub (full URL)
aidevops skill add https://clawdhub.com/mSarheed/proxmox-full

# Search ClawdHub
.agent/scripts/clawdhub-helper.sh search "kubernetes"

# Get skill info
.agent/scripts/clawdhub-helper.sh info caldav-calendar

Quality

  • ShellCheck: zero violations on both scripts
  • JSON: valid skill-sources.json
  • Follows existing patterns: -skill.md suffix, category detection, imported_from: clawdhub frontmatter
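The `imported_from: clawdhub` frontmatter pattern above can be sketched as below. Only the `imported_from` key is confirmed by this PR; the `slug` and `version` fields are illustrative stand-ins for the registry metadata:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Write minimal frontmatter for an imported skill file (field names beyond
# imported_from are assumptions, not the project's actual schema).
write_skill_frontmatter() {
  local file="$1" slug="$2" version="$3"
  cat > "$file" <<EOF
---
imported_from: clawdhub
slug: ${slug}
version: ${version}
---
EOF
}

demo=$(mktemp)
write_skill_frontmatter "$demo" caldav-calendar 1.0.1
head -5 "$demo"
```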

Summary by CodeRabbit

Release Notes

  • New Features

    • Added ClawdHub as a new skill source alongside GitHub for importing skills
    • Registered two new skills: CalDAV Calendar for calendar management and Proxmox VE for virtualization management
  • Documentation

    • Added comprehensive guides for CalDAV Calendar integration with command examples
    • Added Proxmox VE REST API documentation with endpoint references and setup instructions
    • Updated skill import documentation to reflect ClawdHub support


…ation

- Add clawdhub-helper.sh: Playwright-based SKILL.md fetcher for ClawdHub SPA
- Add clawdhub: prefix and clawdhub.com URL detection to add-skill-helper.sh
- Import caldav-calendar skill (v1.0.1) from ClawdHub to tools/productivity/
- Import proxmox-full skill (v1.0.0) from ClawdHub to services/hosting/
- Add productivity and virtualization category detection
- Update add-skill documentation with ClawdHub examples and architecture
@gemini-code-assist

Summary of Changes

Hello @marcusquinn, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the aidevops system's ability to acquire and manage external skills by introducing ClawdHub as a new source alongside GitHub. It establishes a mechanism for fetching skill definitions from ClawdHub's web interface using browser automation, since ClawdHub exposes no public API for raw file content, and broadens the agent's access to specialized functionality.

Highlights

  • ClawdHub Integration: The system now supports importing skills directly from ClawdHub (clawdhub.com), significantly expanding the available skill sources beyond GitHub repositories.
  • Playwright-based Content Extraction: A new helper script (.agent/scripts/clawdhub-helper.sh) leverages Playwright for browser automation to extract SKILL.md content from ClawdHub's single-page application, addressing the lack of a public API for raw file content.
  • New Skills Added: Two initial skills, 'caldav-calendar' (for CalDAV calendar sync) and 'proxmox-full' (for Proxmox VE management), have been imported from ClawdHub and registered within the system.
  • Enhanced Skill Management Script: The add-skill-helper.sh script has been updated to recognize and process ClawdHub URLs and slugs, and includes new category detection logic for 'proxmox' and 'calendar' related skills.
  • Comprehensive Documentation Updates: All relevant documentation, including usage examples, architectural overviews, and troubleshooting guides, has been updated to reflect the new ClawdHub integration and its usage patterns.


coderabbitai bot commented Jan 24, 2026

Walkthrough

This PR integrates ClawdHub as a skill source into the aidevops framework alongside GitHub. It introduces a new helper script for fetching skill content via ClawdHub APIs and Playwright automation, extends the add-skill workflow to recognize and process ClawdHub imports, registers two new skills in the registry, and adds corresponding documentation.

Changes

  • Skill Registry — .agent/configs/skill-sources.json
    Added two new skill entries: caldav-calendar (CalDAV/vdirsyncer/khal integration) and proxmox-full (Proxmox VE REST API management), both sourced from ClawdHub with metadata tracking (imported_at, last_checked, merge_strategy).
  • ClawdHub Integration Core — .agent/scripts/clawdhub-helper.sh
    New 489-line script providing a CLI interface for ClawdHub interaction: cmd_fetch() retrieves SKILL.md via Playwright-based HTML extraction or an npx fallback; cmd_search() queries the ClawdHub API; cmd_info() fetches skill metadata. Includes input parsing, API interaction, and error handling with graceful degradation.
  • Skill Import Workflow — .agent/scripts/add-skill-helper.sh
    Extended by 208 lines to support ClawdHub sources: the new cmd_add_clawdhub() function detects clawdhub:slug shorthand and clawdhub.com URLs, invokes clawdhub-helper for content fetching, converts to aidevops format with frontmatter, handles conflicts (Replace/Separate/Skip), and registers skills in skill-sources.json. Maintains backward compatibility with GitHub imports.
  • Command Documentation — .agent/scripts/commands/add-skill.md, .agent/tools/build-agent/add-skill.md
    Updated user-facing docs: broadened descriptions from GitHub-only to multi-source (GitHub + ClawdHub), added ClawdHub examples and shorthand syntax, expanded the Supported Sources & Formats sections, updated troubleshooting guidance, and added references to the clawdhub-helper script.
  • Skill Documentation — .agent/tools/productivity/caldav-calendar-skill.md, .agent/services/hosting/proxmox-full-skill.md
    New skill guide files: CalDAV provides a vdirsyncer + khal workflow for calendar sync/management; Proxmox provides a comprehensive REST API reference with endpoint categorization, setup instructions, token auth, and usage patterns for VE cluster/node/VM/container operations.
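The source detection in cmd_add_clawdhub can be sketched as follows: accept either the clawdhub:slug shorthand or a full clawdhub.com URL and normalize both to a bare slug. This is a hypothetical parser, not the actual code from add-skill-helper.sh:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Normalize either input form to a slug; reject anything else.
parse_clawdhub_source() {
  local input="$1"
  case "$input" in
    clawdhub:*)
      # Shorthand form: strip the prefix.
      printf '%s\n' "${input#clawdhub:}"
      ;;
    https://clawdhub.com/*/*)
      # URL form is https://clawdhub.com/{owner}/{slug}; keep the last segment.
      printf '%s\n' "${input##*/}"
      ;;
    *)
      return 1
      ;;
  esac
}

parse_clawdhub_source clawdhub:caldav-calendar                    # → caldav-calendar
parse_clawdhub_source https://clawdhub.com/mSarheed/proxmox-full  # → proxmox-full
```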

Sequence Diagram

sequenceDiagram
    participant User
    participant AddSkill as add-skill-helper.sh
    participant ClawdHub as clawdhub-helper.sh
    participant API as ClawdHub API
    participant Playwright
    participant Registry as skill-sources.json

    User->>AddSkill: add-skill clawdhub:slug
    AddSkill->>AddSkill: parse_url (detect ClawdHub)
    AddSkill->>ClawdHub: fetch slug --output /tmp
    ClawdHub->>API: resolve owner/slug
    API-->>ClawdHub: owner info
    ClawdHub->>Playwright: launch temp project + script
    Playwright->>Playwright: navigate to skill page
    Playwright->>Playwright: extract HTML → markdown
    Playwright-->>ClawdHub: SKILL.md content
    ClawdHub-->>AddSkill: SKILL.md file
    AddSkill->>AddSkill: convert to aidevops format
    AddSkill->>AddSkill: check for conflicts
    AddSkill->>Registry: register_skill (upstream: clawdhub)
    Registry-->>AddSkill: ack
    AddSkill-->>User: import complete

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Multiple new scripts with substantial logic density, API integration patterns, Playwright automation setup, bash error handling edge cases, and integration points across skill import workflow require careful review of security posture, CLI argument parsing, and shell script best practices.

Possibly related PRs

  • PR #170: Modifies .agent/configs/skill-sources.json by adding skill entries; directly overlaps with skill registry updates in this PR.
  • PR #135: Extends the add-skill system architecture and introduces script/registry patterns that this PR builds upon for ClawdHub support.
  • PR #154: Updates add-skill tooling, skill naming conventions, and conflict handling workflows that align with the ClawdHub-specific extensions here.

Poem

🌊 ClawdHub flows into aidevops streams,
Playwright draws skill.md from digital dreams,
Two new companions—Proxmox prowess, caldav care,
Shell scripts unite them with DevOps flair,
Zero debt maintained, A-grade prepared! ✨

🚥 Pre-merge checks | ✅ 2 passed | ❌ 1 warning

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning — Docstring coverage is 47.06%, below the required 80.00% threshold. Resolution: write docstrings for the functions missing them.

✅ Passed checks (2 passed)
  • Description Check ✅ — Check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check ✅ — The PR title clearly summarizes the main change: adding ClawdHub as a new skill import source, which is the primary objective reflected across all modified and new files.



@sonarqubecloud

@github-actions

🔍 Code Quality Report

[MONITOR] Code Review Monitoring Report

[INFO] Latest Quality Status:
SonarCloud: 0 bugs, 0 vulnerabilities, 446 code smells

[INFO] Recent monitoring activity:
Sat Jan 24 22:27:11 UTC 2026: Code review monitoring started
Sat Jan 24 22:27:11 UTC 2026: SonarCloud - Bugs: 0, Vulnerabilities: 0, Code Smells: 446
Sat Jan 24 22:27:11 UTC 2026: Qlty - 0 issues found, auto-formatting applied
Sat Jan 24 22:27:13 UTC 2026: Codacy analysis completed with auto-fixes
Sat Jan 24 22:27:14 UTC 2026: Applied 1 automatic fixes

📈 Current Quality Metrics

  • BUGS: 0
  • CODE SMELLS: 446
  • VULNERABILITIES: 0

Generated on: Sat Jan 24 22:28:36 UTC 2026


Generated by AI DevOps Framework Code Review Monitoring


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces ClawdHub as a new skill import source, which is a great addition. The implementation includes a new helper script, clawdhub-helper.sh, that cleverly uses Playwright for web scraping due to the lack of a public API for skill content. The main add-skill-helper.sh script is updated to integrate this new source. The changes are well-structured and the documentation has been updated accordingly. My review focuses on improving the robustness of the new helper script by making error handling more transparent and promoting consistency by using jq for JSON processing, which is already a project dependency.


# Install playwright and run the fetch script
log_info "Installing Playwright (temporary)..."
if (cd "$pw_dir" && npm install --silent 2>/dev/null && npx playwright install chromium --with-deps 2>/dev/null); then

high

Suppressing all error output from npm install and npx playwright install with 2>/dev/null can make debugging very difficult if the installation fails. It's better to let error messages be displayed so that the user can understand and fix any issues with their Node.js environment, network, or permissions.

Suggested change
if (cd "$pw_dir" && npm install --silent 2>/dev/null && npx playwright install chromium --with-deps 2>/dev/null); then
if (cd "$pw_dir" && npm install --silent && npx playwright install chromium --with-deps); then

Comment on lines +792 to +795
display_name=$(echo "$api_response" | python3 -c "import sys,json; print(json.load(sys.stdin).get('skill',{}).get('displayName',''))" 2>/dev/null)
summary=$(echo "$api_response" | python3 -c "import sys,json; print(json.load(sys.stdin).get('skill',{}).get('summary',''))" 2>/dev/null)
owner_handle=$(echo "$api_response" | python3 -c "import sys,json; print(json.load(sys.stdin).get('owner',{}).get('handle',''))" 2>/dev/null)
version=$(echo "$api_response" | python3 -c "import sys,json; print(json.load(sys.stdin).get('latestVersion',{}).get('version',''))" 2>/dev/null)

medium

These repeated calls to python3 to parse the JSON response are inefficient as each call starts a new Python interpreter. Since jq is a dependency of this project, you can use it to extract all required values. This is more performant and consistent with other parts of the codebase.

Suggested change
display_name=$(echo "$api_response" | python3 -c "import sys,json; print(json.load(sys.stdin).get('skill',{}).get('displayName',''))" 2>/dev/null)
summary=$(echo "$api_response" | python3 -c "import sys,json; print(json.load(sys.stdin).get('skill',{}).get('summary',''))" 2>/dev/null)
owner_handle=$(echo "$api_response" | python3 -c "import sys,json; print(json.load(sys.stdin).get('owner',{}).get('handle',''))" 2>/dev/null)
version=$(echo "$api_response" | python3 -c "import sys,json; print(json.load(sys.stdin).get('latestVersion',{}).get('version',''))" 2>/dev/null)
display_name=$(echo "$api_response" | jq -r '.skill.displayName // ""')
summary=$(echo "$api_response" | jq -r '.skill.summary // ""')
owner_handle=$(echo "$api_response" | jq -r '.owner.handle // ""')
version=$(echo "$api_response" | jq -r '.latestVersion.version // ""')

local response
response=$(curl -s --connect-timeout 10 --max-time 30 "${CLAWDHUB_API}/skills/${slug}")

if echo "$response" | python3 -c "import sys,json; json.load(sys.stdin)" 2>/dev/null; then

medium

You're using python3 to validate the JSON response. Since jq is a project dependency, it would be more consistent and potentially more performant to use jq for this check. The -e flag in jq sets the exit code based on the result of the last filter, which is perfect for checks in if statements.

Suggested change
if echo "$response" | python3 -c "import sys,json; json.load(sys.stdin)" 2>/dev/null; then
if echo "$response" | jq -e . >/dev/null 2>&1; then

info=$(fetch_skill_info "$slug") || return 1

local owner
owner=$(echo "$info" | python3 -c "import sys,json; print(json.load(sys.stdin).get('owner',{}).get('handle',''))" 2>/dev/null)

medium

For consistency and performance, you can use jq here instead of python3 to parse the JSON and extract the owner's handle. This avoids spawning a separate Python process.

Suggested change
owner=$(echo "$info" | python3 -c "import sys,json; print(json.load(sys.stdin).get('owner',{}).get('handle',''))" 2>/dev/null)
owner=$(echo "$info" | jq -r '.owner.handle // ""')

Comment on lines +426 to +443
echo "$response" | python3 -c "
import sys, json
data = json.load(sys.stdin)
skill = data.get('skill', {})
owner = data.get('owner', {})
version = data.get('latestVersion', {})
stats = skill.get('stats', {})

print(f' Name: {skill.get(\"displayName\", \"?\")}')
print(f' Slug: {skill.get(\"slug\", \"?\")}')
print(f' Owner: @{owner.get(\"handle\", \"?\")}')
print(f' Version: {version.get(\"version\", \"?\")}')
print(f' Summary: {skill.get(\"summary\", \"\")}')
print(f' Stars: {stats.get(\"stars\", 0)}')
print(f' Downloads: {stats.get(\"downloads\", 0)}')
print(f' Installs: {stats.get(\"installsCurrent\", 0)}')
print()
"

medium

This Python script for formatting the output can be replaced with a jq command. This would make the script more consistent by relying on a single tool for JSON processing and avoid the overhead of starting a Python interpreter.

Suggested change
echo "$response" | python3 -c "
import sys, json
data = json.load(sys.stdin)
skill = data.get('skill', {})
owner = data.get('owner', {})
version = data.get('latestVersion', {})
stats = skill.get('stats', {})
print(f' Name: {skill.get(\"displayName\", \"?\")}')
print(f' Slug: {skill.get(\"slug\", \"?\")}')
print(f' Owner: @{owner.get(\"handle\", \"?\")}')
print(f' Version: {version.get(\"version\", \"?\")}')
print(f' Summary: {skill.get(\"summary\", \"\")}')
print(f' Stars: {stats.get(\"stars\", 0)}')
print(f' Downloads: {stats.get(\"downloads\", 0)}')
print(f' Installs: {stats.get(\"installsCurrent\", 0)}')
print()
"
echo "$response" | jq -r '
. as $data |
" Name: \($data.skill.displayName // "?")",
" Slug: \($data.skill.slug // "?")",
" Owner: @\($data.owner.handle // "?")",
" Version: \($data.latestVersion.version // "?")",
" Summary: \($data.skill.summary // "")",
" Stars: \($data.skill.stats.stars // 0)",
" Downloads: \($data.skill.stats.downloads // 0)",
" Installs: \($data.skill.stats.installsCurrent // 0)",
""
'

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
.agent/scripts/add-skill-helper.sh (2)

265-299: ClawdHub imports can land in the wrong category.

determine_target_path only scans file content, but in the ClawdHub flow it runs before fetching, so content is empty and keyword matching (calendar/proxmox/etc.) never triggers. Consider incorporating the summary/description into the scan.

♻️ Proposed fix
 determine_target_path() {
     local skill_name="$1"
     local description="$2"
     local source_dir="$3"
@@
-    local content=""
+    local content="${description}"
     if [[ -f "$source_dir/SKILL.md" ]]; then
-        content=$(cat "$source_dir/SKILL.md")
+        content="${content}"$'\n'"$(cat "$source_dir/SKILL.md")"
     elif [[ -f "$source_dir/AGENTS.md" ]]; then
-        content=$(cat "$source_dir/AGENTS.md")
+        content="${content}"$'\n'"$(cat "$source_dir/AGENTS.md")"
     fi

Also applies to: 807-810
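The keyword scan this comment proposes can be sketched as a small function applied to the API summary plus any fetched content, so the ClawdHub flow can categorize before SKILL.md exists on disk. The keyword lists and the fallback path are assumptions; the category paths come from the files in this PR:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Map skill text to a target category directory via case-insensitive keywords.
detect_category() {
  local text
  text=$(printf '%s' "$1" | tr '[:upper:]' '[:lower:]')
  case "$text" in
    *proxmox*|*virtualization*) printf 'services/hosting\n' ;;
    *caldav*|*calendar*)        printf 'tools/productivity\n' ;;
    *)                          printf 'tools/misc\n' ;;  # fallback bucket (assumed)
  esac
}

detect_category "Complete Proxmox VE management via REST API"  # → services/hosting
detect_category "CalDAV calendar sync via vdirsyncer + khal"   # → tools/productivity
```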


969-979: Skip ClawdHub URLs in update checks to prevent noisy warnings.

The cmd_check_updates function calls parse_github_url on all skills, but ClawdHub URLs stored as https://clawdhub.com/{owner}/{slug} fail parsing and trigger a warning for every ClawdHub-sourced skill during update checks. Add a check to skip ClawdHub entries cleanly, deferring version checking support until it's implemented.

Suggested fix
     while IFS='|' read -r name url commit; do
+        if [[ "$url" == *clawdhub.com/* ]]; then
+            log_info "Skipping ClawdHub skill ($name) — update checks not yet supported"
+            continue
+        fi
         # Extract owner/repo from URL
         local parsed
🤖 Fix all issues with AI agents
In @.agent/scripts/add-skill-helper.sh:
- Around line 774-784: The ClawdHub `slug` is used to build filesystem paths
(e.g., `fetch_dir`) and must be strictly validated and sanitized before any path
operations or `rm -rf`; update the code that reads `slug` to (1) strip any URL
query/fragment and whitespace, (2) reject or normalize values containing path
separators or traversal tokens (like `../`, `/`, `\`, or leading `-`), and (3)
allow only a safe whitelist of characters (e.g., lowercase letters, digits,
hyphen, underscore, dot). Apply this validation where `slug` is used
(referencing the `slug` variable and the `fetch_dir` usage and the removal block
around lines 851-856) and fail early with an error if validation fails to
prevent directory escape.

In @.agent/scripts/clawdhub-helper.sh:
- Around line 335-346: The --output option parsing in the while loop can access
an unbound $2 (with set -u) when the argument is missing; update the case for
--output to validate that a next argument exists and is not another option
(e.g., check [[ -n "${2-}" && "${2:0:1}" != "-" ]]) before assigning to
output_dir, and if the check fails call log_error with a clear message and
return 1 instead of blindly using $2; adjust shifts only on success.
- Around line 379-381: Replace the current interpolation of $query into the
Python one-liner by passing the query as an argv parameter to Python to avoid
quoting issues: call python3 -c that uses sys.argv[1] and urllib.parse.quote to
produce the encoded value, and pass "$query" as the argument (preserving the
stderr redirect and the fallback to echo "$query"); update the assignment to
encoded_query so it uses this argv-based Python invocation and still falls back
to echo "$query" on error.
- Around line 304-317: The current fallback using find/head to locate SKILL.md
is non-deterministic; instead construct the deterministic path
"$output_dir/skills/$slug/SKILL.md" and check that file first (use that as
installed_skill), and only if that path doesn't exist fall back to the existing
find "$output_dir" -name "SKILL.md" -type f logic; update the block that sets
installed_skill (and the subsequent existence check that compares to
$output_file) to prefer the deterministic path and then fallback to the find
result so the script reliably copies the correct SKILL.md for the given $slug.
🧹 Nitpick comments (2)
.agent/scripts/clawdhub-helper.sh (2)

121-135: Surface HTTP/network failures before JSON validation.
curl -s can mask transport/HTTP errors; using -fS with explicit handling keeps A‑grade reliability.

♻️ Suggested change
-    response=$(curl -s --connect-timeout 10 --max-time 30 "${CLAWDHUB_API}/skills/${slug}")
+    response=$(curl -fsS --connect-timeout 10 --max-time 30 "${CLAWDHUB_API}/skills/${slug}") || {
+        log_error "Failed to fetch skill info (HTTP/network) for: $slug"
+        return 1
+    }
As per coding guidelines, improve error feedback for automation scripts.

162-168: Add a trap to guarantee temp directory cleanup.
Prevents orphaned temp dirs on early exits.

♻️ Suggested change
     pw_dir=$(mktemp -d "${TMPDIR:-/tmp}/clawdhub-pw-XXXXXX")
+    trap 'rm -rf "$pw_dir"' EXIT

Comment on lines +774 to +784
if [[ -z "$slug" ]]; then
log_error "ClawdHub slug required"
return 1
fi

log_info "Importing from ClawdHub: $slug"

# Get skill metadata from API
local api_response
api_response=$(curl -s --connect-timeout 10 --max-time 30 "${CLAWDHUB_API:-https://clawdhub.com/api/v1}/skills/${slug}" 2>/dev/null)


⚠️ Potential issue | 🟠 Major

Validate ClawdHub slugs before using them in filesystem paths.

slug is user-controlled and is embedded in fetch_dir that gets removed with rm -rf. A crafted value like ../ can escape the temp directory. Add strict validation (and strip query/fragment) before any path usage to keep the automation A‑grade secure.

🔒 Proposed fix
 cmd_add_clawdhub() {
     local slug="$1"
@@
     if [[ -z "$slug" ]]; then
         log_error "ClawdHub slug required"
         return 1
     fi
+    # Strip query/fragment and validate to prevent path traversal
+    slug="${slug%%\?*}"
+    slug="${slug%%#*}"
+    if [[ ! "$slug" =~ ^[A-Za-z0-9][A-Za-z0-9-]{0,63}$ ]]; then
+        log_error "Invalid ClawdHub slug: $slug"
+        return 1
+    fi

Also applies to: 851-856


Comment on lines +304 to +317
# Fallback: try clawdhub CLI
if command -v npx &>/dev/null; then
log_info "Trying: npx clawdhub install $slug"
if (cd "$output_dir" && npx --yes clawdhub@latest install "$slug" --force 2>/dev/null); then
# clawdhub installs to ./skills/<slug>/SKILL.md
local installed_skill
installed_skill=$(find "$output_dir" -name "SKILL.md" -type f 2>/dev/null | head -1)
if [[ -n "$installed_skill" && -f "$installed_skill" ]]; then
if [[ "$installed_skill" != "$output_file" ]]; then
cp "$installed_skill" "$output_file"
fi
log_success "Fetched via clawdhub CLI"
return 0
fi

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# First, let's examine the actual file and its context
cat -n .agent/scripts/clawdhub-helper.sh | sed -n '295,325p'

Repository: marcusquinn/aidevops

Length of output: 1435


🏁 Script executed:

# Search for any documentation or comments about clawdhub output structure
rg -i "clawdhub" --type sh -A 5 -B 5 .agent/scripts/

Repository: marcusquinn/aidevops

Length of output: 27391


🌐 Web query:

clawdhub CLI install output directory structure where does npx clawdhub install write SKILL.md

💡 Result:

npx clawdhub install <skill-slug> installs a skill as a folder containing SKILL.md at:

  • Default (no flags): ./skills/<skill-slug>/SKILL.md (relative to your current working directory) [1]
  • If a Clawdbot workspace is configured: it may instead use the workspace root as the workdir, installing to <workspace>/skills/<skill-slug>/SKILL.md (unless you override) [1][2]

You can change where it writes by setting:

  • --workdir <dir> (or CLAWDHUB_WORKDIR) to change the base directory
  • --dir <dir> to change the skills subdirectory name (default is skills) [1]

Sources: [1] [2]


Use deterministic path when clawdhub CLI copies SKILL.md.

When npx clawdhub install is run in $output_dir, it writes to ./skills/<slug>/SKILL.md relative to the working directory. The current find ... | head -1 approach is non-deterministic and will grab any SKILL.md in the output directory if multiple skills exist, potentially copying the wrong file. Use the expected path directly with a fallback:

Suggested fix
-            installed_skill=$(find "$output_dir" -name "SKILL.md" -type f 2>/dev/null | head -1)
+            installed_skill="${output_dir}/skills/${slug}/SKILL.md"
+            if [[ ! -f "$installed_skill" ]]; then
+                installed_skill=$(find "$output_dir" -name "SKILL.md" -type f -path "*/${slug}/*" 2>/dev/null | head -1)
+            fi

Comment on lines +335 to +346
    # Parse options
    while [[ $# -gt 0 ]]; do
        case "$1" in
            --output)
                output_dir="$2"
                shift 2
                ;;
            *)
                log_error "Unknown option: $1"
                return 1
                ;;
        esac

⚠️ Potential issue | 🟡 Minor

Guard --output against a missing argument.
Under set -u, referencing $2 when no value follows aborts with an unbound-variable error, and an empty value would silently set an empty output_dir.

🐛 Suggested fix
             --output)
-                output_dir="$2"
-                shift 2
+                if [[ $# -lt 2 || -z "${2:-}" ]]; then
+                    log_error "--output requires a directory path"
+                    return 1
+                fi
+                output_dir="$2"
+                shift 2
                 ;;
As per coding guidelines, fail fast with clear CLI errors.
📝 Committable suggestion


Suggested change
     # Parse options
     while [[ $# -gt 0 ]]; do
         case "$1" in
             --output)
-                output_dir="$2"
-                shift 2
+                if [[ $# -lt 2 || -z "${2:-}" ]]; then
+                    log_error "--output requires a directory path"
+                    return 1
+                fi
+                output_dir="$2"
+                shift 2
                 ;;
             *)
                 log_error "Unknown option: $1"
                 return 1
                 ;;
         esac
🤖 Prompt for AI Agents
In @.agent/scripts/clawdhub-helper.sh around lines 335 - 346, The --output
option parsing in the while loop can access an unbound $2 (with set -u) when the
argument is missing; update the case for --output to validate that a next
argument exists and is not another option (e.g., check [[ -n "${2-}" &&
"${2:0:1}" != "-" ]]) before assigning to output_dir, and if the check fails
call log_error with a clear message and return 1 instead of blindly using $2;
adjust shifts only on success.

Comment on lines +379 to +381
    local encoded_query
    encoded_query=$(python3 -c "import urllib.parse; print(urllib.parse.quote('$query'))" 2>/dev/null || echo "$query")


⚠️ Potential issue | 🟠 Major

Avoid interpolating the search query into the Python one-liner.
A single quote in the query breaks the command (and permits code injection); passing the query via argv is safer.

🐛 Suggested fix
-    encoded_query=$(python3 -c "import urllib.parse; print(urllib.parse.quote('$query'))" 2>/dev/null || echo "$query")
+    encoded_query=$(python3 -c 'import urllib.parse, sys; print(urllib.parse.quote(sys.argv[1]))' "$query" 2>/dev/null || echo "$query")
📝 Committable suggestion


Suggested change
     local encoded_query
-    encoded_query=$(python3 -c "import urllib.parse; print(urllib.parse.quote('$query'))" 2>/dev/null || echo "$query")
+    encoded_query=$(python3 -c 'import urllib.parse, sys; print(urllib.parse.quote(sys.argv[1]))' "$query" 2>/dev/null || echo "$query")
🤖 Prompt for AI Agents
In @.agent/scripts/clawdhub-helper.sh around lines 379 - 381, Replace the
current interpolation of $query into the Python one-liner by passing the query
as an argv parameter to Python to avoid quoting issues: call python3 -c that
uses sys.argv[1] and urllib.parse.quote to produce the encoded value, and pass
"$query" as the argument (preserving the stderr redirect and the fallback to
echo "$query"); update the assignment to encoded_query so it uses this
argv-based Python invocation and still falls back to echo "$query" on error.
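To make the quoting hazard concrete, here is a minimal standalone repro (the `query` value is hypothetical, not from the script) showing the interpolated form failing while the argv form encodes correctly:

```shell
# A query containing a single quote terminates the Python string literal in
# the interpolated form, so python3 exits non-zero and the fallback echoes
# the raw (unencoded) query.
query="o'reilly books"

encoded=$(python3 -c "import urllib.parse; print(urllib.parse.quote('$query'))" 2>/dev/null || echo "$query")
echo "interpolated: $encoded"   # falls back to the raw query

# The argv form passes the query as a separate argument, so no value the
# user supplies can alter the Python source being executed.
encoded=$(python3 -c 'import urllib.parse, sys; print(urllib.parse.quote(sys.argv[1]))' "$query" 2>/dev/null || echo "$query")
echo "argv: $encoded"           # o%27reilly%20books
```

The same reasoning applies to any shell-to-interpreter boundary: interpolate nothing, pass everything as arguments.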

@marcusquinn marcusquinn merged commit ab7de61 into main Jan 24, 2026
25 checks passed
@marcusquinn marcusquinn deleted the feature/clawdhub-skill-import branch February 21, 2026 01:59
@marcusquinn
Owner Author

marcusquinn commented Feb 25, 2026

Parent task t012 (Separate generic code from app-specific code for webapp) is tracked with 8 subtasks. Current subtask status:

Subtasks are being worked through the pipeline. This parent issue tracks overall progress.


Posted by AI Supervisor (automated reasoning cycle)

marcusquinn added a commit that referenced this pull request Feb 26, 2026
…urrency awareness

Three refinements from live testing of the opus strategic review (t1340):

1. Cross-repo: read pulse-repos.json and iterate ALL managed repos,
   not just aidevops. Product repos flagged as higher priority.
   Caught awardsapp #183 (parent open, all subtasks done) that the
   first version missed entirely.

2. Action/TODO split: act directly on safe mechanical things (prune,
   merge green PRs, file issues), create TODOs for state changes
   that need verification (marking tasks complete, unblocking chains).

3. Concurrency: treat worker count as informational, not a hard limit.
   Only flag if there's evidence of harm (rate limits, OOM, timeouts).
marcusquinn added a commit that referenced this pull request Feb 26, 2026
…urrency awareness (#2336)

Three refinements from live testing of the opus strategic review (t1340):

1. Cross-repo: read pulse-repos.json and iterate ALL managed repos,
   not just aidevops. Product repos flagged as higher priority.
   Caught awardsapp #183 (parent open, all subtasks done) that the
   first version missed entirely.

2. Action/TODO split: act directly on safe mechanical things (prune,
   merge green PRs, file issues), create TODOs for state changes
   that need verification (marking tasks complete, unblocking chains).

3. Concurrency: treat worker count as informational, not a hard limit.
   Only flag if there's evidence of harm (rate limits, OOM, timeouts).
marcusquinn added a commit that referenced this pull request Mar 14, 2026
…l helpers

Address medium quality-debt review feedback from PR #183 (Gemini):
- clawdhub-helper.sh: replace python3 JSON validation with jq -e in fetch_skill_info
- clawdhub-helper.sh: surface HTTP/network errors with curl -fsS instead of -s
- clawdhub-helper.sh: replace python3 owner extraction with jq in fetch_skill_content_playwright
- clawdhub-helper.sh: add EXIT trap for guaranteed temp dir cleanup
- clawdhub-helper.sh: surface npm/playwright install errors (redirect stderr to stdout)
- clawdhub-helper.sh: replace python3 info display with jq in cmd_info
- clawdhub-helper.sh: replace python3 search output with jq in cmd_search
- clawdhub-helper.sh: pass query as argv to python3 in URL encoding (injection-safe)
- add-skill-helper.sh: replace python3 metadata extraction with jq in cmd_add_clawdhub
- add-skill-helper.sh: skip ClawdHub URLs in cmd_check_updates with informational log

Closes #3353
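As a sketch of the `jq -e` validation pattern the first bullet describes (variable names are illustrative, not taken from the helper):

```shell
# jq -e exits 0 only when the filter's last output is neither false nor
# null, and exits non-zero on a JSON parse error, so its exit code can
# gate the rest of the function directly.
response='{"slug": "caldav-calendar", "version": "1.0.1"}'

if printf '%s' "$response" | jq -e 'has("slug")' >/dev/null; then
  slug=$(printf '%s' "$response" | jq -r '.slug')
  echo "valid skill metadata: $slug"
else
  echo "invalid or incomplete JSON response" >&2
fi
```

Unlike a python3 round-trip, this keeps validation and extraction in one tool and surfaces malformed responses through the exit code.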
marcusquinn added a commit that referenced this pull request Mar 14, 2026
…l helpers (#4810)

Address medium quality-debt review feedback from PR #183 (Gemini):
- clawdhub-helper.sh: replace python3 JSON validation with jq -e in fetch_skill_info
- clawdhub-helper.sh: surface HTTP/network errors with curl -fsS instead of -s
- clawdhub-helper.sh: replace python3 owner extraction with jq in fetch_skill_content_playwright
- clawdhub-helper.sh: add EXIT trap for guaranteed temp dir cleanup
- clawdhub-helper.sh: surface npm/playwright install errors (redirect stderr to stdout)
- clawdhub-helper.sh: replace python3 info display with jq in cmd_info
- clawdhub-helper.sh: replace python3 search output with jq in cmd_search
- clawdhub-helper.sh: pass query as argv to python3 in URL encoding (injection-safe)
- add-skill-helper.sh: replace python3 metadata extraction with jq in cmd_add_clawdhub
- add-skill-helper.sh: skip ClawdHub URLs in cmd_check_updates with informational log

Closes #3353
superdav42 added a commit to superdav42/aidevops that referenced this pull request Mar 16, 2026
* fix(t3578): address PR #311 review notes in GLM-OCR docs (#4716)

* fix(t3588): align google analytics MCP tool filter naming (#4714)

* fix: add dspyground install command to docs note (#4713)

* fix(t3571): use portable ERE task-id boundaries in PR matching (#4723)

* fix(t3576): sort LAZY_MCPS entries for maintainability (#4722)

* fix(t3575): clarify gh_grep on-demand guidance (#4721)

* fix(t3170): separate local declaration from assignment in get_shell_rc (#4711)

Addresses Gemini cross-PR review feedback (PR #1253): local var="$1"
combined form masks exit codes; use separate declare+assign per styleguide.

Closes #3170

* fix: replace repeated grep-per-field with single-pass while/case in email-signature-parser-helper.sh (#4724)

Address 3 Gemini review findings from PR #3055:
- merge_toon_contact: replace 6 grep calls with one while/case pass over $existing (HIGH)
- resolve_contact_filename: replace 2 grep calls with one while/case pass over file (MEDIUM)
- list_contacts: replace 2 grep calls per file with one while/case pass per file (MEDIUM)

All three sites now parse fields in a single read loop, eliminating redundant
subshell forks and grep invocations. ShellCheck: zero new violations.

Closes #3161

* fix(t3591): externalize worker efficiency dispatch prompt (#4725)

* fix: refactor build_curl_args to accept protocol param, add SSL warning (#4726)

- Pass pre-computed protocol to build_curl_args() in both cron-dispatch.sh
  and cron-helper.sh, eliminating the redundant get_protocol() subshell call
  inside the function when callers already have the value
- Add log_warn when OPENCODE_INSECURE=1 in cron-helper.sh to match the
  existing warning in cron-dispatch.sh (operator visibility parity)
- Use log_warn (stderr) instead of log_info (stdout) for the SSL warning in
  cron-dispatch.sh for correct severity routing

Addresses gemini review feedback from PR #305.
Closes #3529

* docs(remote-dispatch): address PR #2109 review feedback (#4728)

- Clarify credential transport: keys embedded as export lines in uploaded
  shell script, not via AcceptEnv/SendEnv (which cannot silently fail)
- Add /proc/<pid>/environ exposure note for security-conscious deployments
- Add mitigation guidance: restrict host access or use short-lived tokens
- Add opencode-ai as preferred npm install option; keep @anthropic-ai/claude-code
  as the claude CLI alternative

Closes #3445

* fix(supervisor): guard cooldown file write against unset SUPERVISOR_STATE_DIR and move timestamp after success (#4729)

- Use ${SUPERVISOR_STATE_DIR:-/var/lib/supervisor} instead of SUPERVISOR_DIR for
  task_creation_cooldown_file to guard against unset variable under set -u/-e
- Add mkdir -p before writing the cooldown timestamp to ensure directory exists
- Move date +%s write to AFTER confirming TODO.md exists and task creation runs,
  preventing the cooldown from throttling retries when prerequisites are missing

Addresses PR #1170 review feedback (issue #3526).

* fix(t3528): address PR #317 review feedback (#4730)

- Move NODE_PATH snippet from build-agent.md into node-helpers.md and
  replace with a file:line reference (CodeRabbit feedback)
- Add FTS5 capability probe to sqlite3 status check in onboarding-helper.sh
  so partial installs (sqlite3 present, FTS5 missing) are correctly reported
  as 'partial' rather than 'ready' (CodeRabbit + Gemini feedback)
- Add fts5 field to JSON output using jq for type-safe boolean construction

Closes #3528

* fix(supervisor): redirect jq stderr to SUPERVISOR_LOG in dismiss_bot_reviews (#4731)

Replace 2>/dev/null with 2>>"${SUPERVISOR_LOG:-/dev/null}" on three jq
commands in the dismiss_bot_reviews function and check_pr_status function.

This aligns with the repository style guide (no blanket stderr suppression)
and allows jq parsing errors from malformed gh api responses to be captured
in the supervisor log for debugging, rather than silently discarded.

Closes #3564
Addresses gemini-code-assist review feedback on PR #952

* fix(t3566): deduplicate ai bot review verification guidance (#4732)

* fix: align merge-conflict error token with retry filter in deploy.sh (#4734)

Change the error string written on auto-rebase failure from the human-readable
'Merge conflict — auto-rebase failed' to the machine-readable token
'merge_conflict:auto_rebase_failed'. This aligns with the case-match in
evaluate.sh:718 and dispatch.sh:673 which filter on 'merge_conflict' — without
this fix, tasks blocked by auto-rebase failure were not picked up for retry.

Fix 2 (git add before diff --check) was already present in deploy.sh at line
2248 from a prior refactor — no change needed there.

Closes #3524

* chore(version-manager): extract badge patterns into local variables (#4735)

Addresses Gemini code review suggestion from PR #134: store the
dynamic and hardcoded badge grep patterns in local variables to
improve maintainability and avoid repeating the pattern strings.

Closes #3522

* fix(linters): handle markdownlint execution errors separately from rule violations (#4737)

When markdownlint fails due to bad config, invalid arguments, or other
non-rule errors, the output won't match the MD[0-9] pattern, causing
violation_count=0 and a false success return—even in blocking mode.

Capture lint_exit separately (|| lint_exit=$?) and treat non-zero exit
codes as blocking errors in changed-file mode and advisory warnings in
full-scan mode. Covers both cases: output present (non-rule error message)
and no output (silent config parse failure).

Closes #3505
Addresses CodeRabbit review on PR #271

* refactor(dispatch): consolidate pro tier into sonnet case statement (#4736)

Merge the redundant `pro` case into `sonnet | eval | health | pro` since
both resolve to the same model (anthropic/claude-sonnet-4-6). Reduces
duplication and improves maintainability as suggested in PR #799 review.

Closes #3519

* fix(t3521): harden version validator script invocation and JSON parsing (#4738)

* fix(t3510): harden grep count handling in setup modules (#4739)

* docs(#3492): restore sentry setup context and token access note (#4740)

* chore(t3517): convert status label lists to bash arrays (#4741)

Replace comma-separated string iteration with bash arrays in:
- supervisor-archived/issue-sync.sh: ALL_STATUS_LABELS constant + sync_issue_status_label() loop
- issue-sync-helper.sh: _DONE_REMOVE_LABELS constant + _mark_issue_done()

Eliminates IFS manipulation and here-string splitting for safer, more
idiomatic bash iteration. Addresses Gemini review feedback on PR #1375.

ShellCheck: zero new violations.

* fix(t3504): clarify 4-hour max runtime comment (#4743)

* fix(t3490): improve terminal capability guidance readability (#4744)

* fix(markdown): normalize remember command example spacing (#4745)

* fix: consolidate duplicate pro tier mapping (#4746)

* fix(t3496): escape task ID regex in completion filters (#4747)

* fix(auto-update): detect script drift when VERSION matches to prevent stale pulse (#4749)

When a script fix is merged without a version bump, the deployed copy in
~/.aidevops/ stays stale until setup.sh is run manually. The auto-update
stale check only compared VERSION files, missing intra-version script changes.

Add a sentinel-based script drift check: compare SHA-256 of
gh-failure-miner-helper.sh between repo and deployed. If they differ,
re-deploy all agents via setup.sh --non-interactive.

Root cause of GH#4727: PR #4704 fixed gh-failure-miner-helper.sh (merged
07:43) but the pulse ran at 08:40 using the old deployed version, which
still treated Codacy ACTION_REQUIRED as a CI failure and produced a false
systemic cluster, causing the pulse LLM to create a duplicate issue.

Closes #4727

* feat: add --include-positive flag to scan-merged for debugging positive-review filters (#4748)

Closes #4733

Adds --include-positive to quality-feedback-helper.sh scan-merged to bypass
the positive-review suppression filters (summary-only, approval-only,
no-actionable-sentiment). Intended for use with --dry-run to audit which
reviews are being suppressed and verify the filters are working correctly.

Changes:
- cmd_scan_merged: parse --include-positive flag, pass to _scan_single_pr
- _scan_single_pr: accept include_positive arg; bypass summary_only filter
  and approval/sentiment select() when true; use select() pattern instead
  of pipe-through-boolean to avoid jq object-construction errors
- Help text: document --include-positive with usage example
- Tests: 5 new tests covering flag unit behaviour and _scan_single_pr
  integration (27/27 passing, 0 shellcheck violations)

* fix: address PR #254 review feedback on worktree cleanup and divergence handling (#4750)

- Extract worktree cleanup bash block from full-loop.md into new
  worktree-cleanup.md subagent doc; replace inline snippet with
  progressive-disclosure pointer (CodeRabbit finding)
- Replace destructive git reset --hard suggestion in version-manager.sh
  diverged-branch path with safer guidance: inspect divergence first,
  then choose reset (squash-merged) or rebase (unmerged commits)
  (CodeRabbit finding)

Closes #3518

* fix(t3455): add Windows and Linux Claude Desktop config paths to mcp-integrations.md (#4751)

The Claude Desktop config path was macOS-only. Added cross-platform paths
for all three OS in both the OpenAPI Search MCP and Cloudflare Code Mode MCP
sections, addressing Gemini review feedback from PR #2077.

Closes #3455

* fix: standardize Claude Code terminology in mcp-integrations.md (#4752)

Remove 'CLI' suffix from 'Claude Code CLI' comment in mcp-integrations.md
to match the consistent 'Claude Code' naming used throughout the project.

Addresses Gemini review feedback from PR #217.
Closes #3485

* fix(clawdhub-helper): remove 2>/dev/null suppressions to improve debuggability (#4754)

Addresses Gemini code review feedback from PR #288. Three stderr suppressions
were hiding error output from npm install, npx playwright install, node fetch.mjs,
and npx clawdhub install, making it impossible to diagnose failures in the
Playwright/CLI skill-fetch fallback chain.

The find 2>/dev/null on line 293 is intentionally retained — it suppresses
permission-denied noise from filesystem traversal, not operational errors.

Closes #3474

* fix(t3480): mark Twilio governance template fields as informational (#4756)

* docs: clarify runtime identity guidance (#4757)

* fix: address Gemini style violations from PR #1401 review (t3487) (#4758)

Three style guide violations flagged by Gemini on PR #1401 but not
addressed before the supervisor was archived in PR #2291:

1. Replace 2>/dev/null with 2>>"$SUPERVISOR_LOG" on db() call (line 576)
   — blanket error suppression hides db failures; log them instead
2. Split local repo_slug="$1" / local pr_number="$2" in check_review_threads()
   — separate declaration from assignment for set -e safety (Rule #11)
3. Split local repo_slug="$1" / local pr_number="$2" in resolve_bot_review_threads()
   — same Rule #11 fix
4. Replace 2>/dev/null with 2>>"$SUPERVISOR_LOG" on jq call (line 1061)
   — log jq parse errors for diagnostics (Rule #50)

ShellCheck: zero violations. File is archived but kept consistent with
the style guide for reference integrity.

Closes #3487

* fix(t3484): address PR #187 MCP review feedback (#4761)

* fix(t3472): align agent tool map line wrapping (#4760)

* fix: explicit return propagation and remove jq stderr suppression in muapi-helper.sh (#4759)

Address PR #2013 review feedback (issue #3372):
- Add explicit 'return $?' to submit_specialized() and all cmd_* functions
  that call submit_specialized, so exit codes propagate to callers
- Remove '2>/dev/null' from jq calls in cmd_balance() and cmd_usage()
  so jq parse errors are visible for debugging

* fix(#3475): reduce repeated option variable declarations in SEO export parsers (#4762)

* fix: address PR #327 review feedback on blank line and phrasing clarity (#4763)

- Replace print_info "" with echo for blank line in setup-mcp-integrations.sh
  (print_info adds [INFO] prefix, making blank lines non-blank in output)
- Clarify @github-search note in github-search.md to avoid implying it
  replaces grep_app when Oh-My-OpenCode is installed; now explicit it is
  the built-in aidevops alternative

Closes #3463

* fix: improve add-skill-helper.sh comment clarity and tighten diagram pattern (#4764)

- Add ordering notes to database, diagrams, and programming language
  category comments (Gemini PR #297 review suggestions)
- Remove overly broad 'diagram' token from diagrams grep pattern; retain
  specific alternatives (mermaid, flowchart, sequence.diagram, er.diagram,
  uml) to avoid false-positive matches in architecture docs (Augment)
- Update add-skill.md Category Detection table to include all new
  categories added in PR #297 (architecture, database, diagrams,
  programming) so maintainer docs stay in sync with the script (Augment)

Closes #3461

* fix: guard empty eval arrays and unset _emit_token in dispatch.sh (#4766)

Address CodeRabbit review feedback from PR #2053 (t3459):

1. check_cli_health: guard against empty version_cmd after eval — if
   build_cli_cmd returns non-zero or produces no tokens, log an error
   and return 1 instead of executing an empty command.

2. check_model_health: same guard for probe_cmd — prevents silent
   empty exec when build_cli_cmd fails for probe action.

3. build_cli_cmd: unset -f _emit_token before returning — the nested
   helper leaked into the global function namespace after first call;
   cleanup prevents unexpected collisions with future callers.

shellcheck -x -S warning: zero violations
bash -n: syntax OK

Closes #3459

* fix: add cross-platform Claude Desktop paths and clarify --transport http flag in cloudflare-mcp.md (#4765)

Addresses PR #2077 review feedback (issue #3456):
- Add Windows (%APPDATA%) and Linux (~/.config/Claude) config paths alongside macOS
- Add note explaining --transport http is the MCP transport type name, not the URL
  scheme; http is correct even for HTTPS endpoints (selects protocol framing, not TLS)

Closes #3456

* fix: remove stale 'API references' phrase and fix broken path refs in cloudflare-platform.md (#4767)

- Line 11: remove 'and API references' from Role declaration — api.md files
  are superseded by Code Mode MCP; phrase implied they still exist
- Lines 13, 21, 57: fix 3 broken path refs tools/api/cloudflare-mcp.md →
  ../../tools/api/cloudflare-mcp.md (correct relative path from services/hosting/)

Closes #3454

* fix: remove openai/gpt-4o from DEFAULT_HEADLESS_MODELS — only anthropic configured (#4768)

openai provider is not configured in opencode.json, causing ProviderModelNotFoundError
on every other worker dispatch. DEFAULT_HEADLESS_MODELS now uses only the configured
anthropic/claude-sonnet-4-6 model.

Closes #4755

* fix(t3441): address Gemini review feedback from PR #2143 (#4769)

- Separate local self_pid declaration from assignment (local self_pid; self_pid=$$)
  to follow the repo style guide (declare and assign separately for exit code safety)
- Remove 2>/dev/null from while condition ([[ "$self_pid" -gt 1 ]]) — blanket
  suppression on control structures masks syntax errors and is unnecessary
- Remove 2>/dev/null from cat pid_file — file existence already checked by [[ -f ]]
  guard on the preceding line; suppression is redundant and hides permission errors
- Remove 2>/dev/null from pgrep — pgrep returns exit 1 on no match (already guarded
  by || true); suppression masks real errors like missing binary or invalid args

Closes #3441

* fix: address Gemini review feedback from PR #2120 (t3442) (#4770)

- Separate local declaration from assignment for ai_pid_file (exit code safety)
- Remove blanket 2>/dev/null suppression on ai_pid_file write and rm — use || true
  so filesystem errors (permission denied, missing dir) remain visible in logs
- Remove 2>/dev/null from _list_descendants call — errors should surface for diagnosis
- Normalize newline-delimited PID output from _list_descendants to space-delimited
  protected_pids via while/read loop, fixing grep -q match reliability

Closes #3442

* fix(t3471): use generic placeholders in file discovery table (#4772)

Replace '*.md'/-e md examples with '<pattern>'/<ext>/<dir> placeholders
in context-guardrails.md so AI agents understand the commands apply to
any file type, not just Markdown files.

Addresses Gemini review feedback from PR #125.
Closes #3471

* fix(t3462): tighten task ID extraction matching and simplify parsing loop (#4773)

* fix(t3419): address PR #156 review feedback in video-prompt-design (#4774)

- Fix typo: 'thats' → 'that's' in camera positioning instruction (line 43)
- Fix typo: 'thats' → 'that's' in camera positioning example (line 95)
- Align dialogue format in quick-reference with detailed example:
  '(Character Name): "Speech" (Tone: descriptor)' — colon syntax prevents
  subtitle generation (Gemini suggestion for clarity/consistency)

* fix: address PR #2219 review feedback in ai-deploy-decisions.sh (#4775)

- Fix severity merge to use id-based lookup instead of positional index
  (prevents misassignment when AI reorders/omits threads)
- Add hard gate: PR must be MERGED before AI can set verified=true
  (prevents non-merged PRs from being marked as verified)
- Include thread id in AI prompt so AI can echo it back for stable mapping

Closes #3431

* fix: address PR #2156 quality-debt review feedback (t3440) (#4776)

- Remove 2>/dev/null from blocked/retrying/verify_failed task-detail DB
  queries so errors (DB locked, SQL syntax) surface instead of being
  silently swallowed; || echo "" fallback preserved for set -e safety
- Extract _format_task_alert_list() helper to eliminate duplicated
  blocked/verify_failed alert formatting logic (DRY refactor)
- ShellCheck: zero violations

* fix(t3420): address PR #219 review feedback (#4777)

- get_git_context: capture toplevel before basename to prevent '.' output
  when not inside a git repo (CodeRabbit)
- generate-opencode-agents: update UPDATE_AVAILABLE example from 3-field
  to 4-field format matching actual output (CodeRabbit)
- detect_app fallback: normalize parent process name to lowercase before
  case matching to handle capitalized names on some platforms (Augment)
- detect_app fallback: add windsurf and continue process name patterns
  to match env-var detection coverage (Gemini)

Closes #3420

* fix(t3416): mark t1330 acceptance criteria complete, confirm MD031 clean (#4784)

The MD031 fenced-block spacing violations in todo/tasks/t1330-brief.md were
fixed in PR #2273 (commits 21e4b9e..2807eb9). markdownlint now reports 0
errors. This commit marks the acceptance criteria checkboxes as complete to
reflect t1330's verified delivery (pr:#2389 completed:2026-02-26).

Closes #3416

* fix: make cloudflare-platform.md references clickable links (#4778)

Addresses review feedback from PR #147 (gemini-code-assist, augmentcode):
inline code formatting on cross-references is not clickable in GitHub
render. Convert all three Markdown-renderable references to link syntax.

Closes #3405

* fix(t3388): correct t1332-brief.md inaccuracies from PR #2274 review (#4790)

- Phase 4 → Phase 0.75 (actual phase where stuck detection runs in pulse.sh)
- suggestions: [string] → suggested_actions: string (single string, not array)
- ai-reason.sh → dispatch.sh (stuck-detection.sh depends on dispatch.sh for
  resolve_ai_cli and resolve_model, not ai-reason.sh)
- Update Estimate Breakdown table to match corrected phase reference

Closes #3388

* fix(t3391): address PR #2284 review feedback on circuit breaker (#4792)

- Serialize manual 'trip' path with _cb_with_state_lock via new
  _cb_trip_impl() to prevent interleaving with concurrent pulse writes
- Count ENVIRONMENT failures in circuit-breaker accounting so repeated
  infra failures can trip the breaker (prefixed 'environment:' for
  downstream reporting distinction)

All other review findings (numeric validation, lock wrapper, repo-scoping,
jq hardening, tripped_at parse safety, cb_record_success on early-success
path, jq empty-string fallbacks, elapsed helper deduplication) were
already addressed in prior commits on this branch.

Closes #3391

* fix: t3381 address PR #2201 review feedback on t1305 opencode streaming hooks doc (#4791)

- Add date (2026-02-22) for PR #14727 in timeline for consistency
- Add reasoning-delta case to processor.ts code sketch for completeness
- Backtick processor.ts, Bun.file(), Filesystem in architecture section
- Use 109k+ star count consistently with Target Repository section

Closes #3381

* fix: add missing GET /payments/credits and /payments/usage endpoints to muapi.md (#4789)

Addresses quality-debt review feedback from PR #2013 (gemini-code-assist).
The Payments & Credits section was missing the balance and usage check endpoints
that are implemented in muapi-helper.sh. Also corrects the checkout session
endpoint method (GET→POST) and adds the /api/v1/ version prefix for consistency
with the rest of the document.

Closes #3373

* fix(t3403): make setup-aidevops repo path resolution dynamic (#4787)

* fix(t3410): centralize supervisor terminal status SQL fragments (#4785)

* fix(t3415): harden blocked task DB registration (#4782)

* fix(t3407): return non-zero from cloudron log_error (#4781)

* fix(t3412): remove 2>/dev/null suppression from gh/jq calls in supervisor scripts (#4780)

Addresses gemini review feedback on PR #2114. The 2>/dev/null redirections
on parse_pr_url(), gh pr view, and detect_repo_slug() calls were hiding
authentication failures, network errors, and jq parse errors. The || fallback
constructs already handle failure safely under set -e; stderr is now routed
to SUPERVISOR_LOG for diagnostics.

Closes #3412

* fix(t3409): apply PR #184 review feedback from gemini (#4779)

- AGENTS.md: clarify runtime identity to include MCP persona guidance
  without the restrictive '(backup tools)' phrasing that could cause
  agents to avoid using MCP tools unless a primary tool fails
- architecture.md: sort tier 2 MCP tool list alphabetically for
  readability and to prevent duplicate additions

Closes #3409

* fix(t3401): replace hardcoded ~/.aidevops/ paths with ${AIDEVOPS_DIR:-$HOME/.aidevops}/ in generate-opencode-commands.sh (#4783)

Addresses Gemini code review feedback from PR #95: hardcoded ~/.aidevops/
paths make it difficult to customise the install location. Replaces all 43
occurrences (tilde and $HOME variants) with the ${AIDEVOPS_DIR:-$HOME/.aidevops}
pattern, consistent with the convention already used in git-workflow.md and
deploy.sh.

Closes #3401

* fix(t3397): ignore no-suggestion review summaries in debt scan (#4786)

* fix(t3427): preserve AI stderr context in staleness checks (#4793)

* fix(t3428): keep timeout classifier stderr and clarify AI prompt (#4794)

Route stderr from timeout-classification AI calls into SUPERVISOR_LOG so failures remain diagnosable instead of being silently dropped.

Also remove contradictory prompt instructions by requiring a single JSON-only response format for category output.

Closes #3428

* fix(t3393): fix mentions type to Record<string, number> and add BigInt/Int64 JSON string note (#4795)

- Replace {string: int64} with Record<string, number> for mentions field (correct TypeScript index map syntax)
- Replace int64 with number for quotedItemId (TypeScript/JSON representation)
- Add type note explaining Int64→number mapping and recommending JSON string encoding for large IDs to avoid precision loss with BigInt(value) on already-rounded JSON numbers

Addresses CodeRabbit CHANGES_REQUESTED on PR #4788.

* fix: deduplicate React.memo bullet and clarify useDeferredValue in expo.md performance section (#4796)

Resolves redundancy introduced by PR #2011 review feedback application.
Line 181 covers React.memo for list items; line 182 now distinctly covers
useDeferredValue for heavy components, removing the duplicate React.memo reference.

Closes #3369

* fix(t3421): add uv tool subcommand check to setup prerequisites (#4797)

Addresses Gemini review feedback from PR #186: checking only
'command -v uv' is insufficient when 'uv tool' subcommand is the
actual invocation, since older uv versions lack the 'tool' subcommand
and would pass the guard but fail at runtime.

- setup-modules/mcp-setup.sh: guard outscraper-mcp-server install with
  'uv tool --help' check; add descriptive warning + update hint when uv
  is present but too old
- setup-modules/plugins.sh: guard cisco-ai-skill-scanner uv-path with
  same 'uv tool --help' check (fallback chain to pipx/venv/pip3 still
  applies when uv tool is unavailable)

Closes #3421

* fix(t3367): fail check on stale TOON subagent counts (#4798)

* fix(issue3359): replace brittle sleep dispatch checks with PID tracking guidance (#4799)

* fix(t3365): add regression coverage for non-actionable Gemini summary reviews (#4806)

* fix(t3362): dedupe pulse timestamp parsing via helper (#4805)

* fix(t3368): clean phase1 eval checkpoint on signal (#4804)

* fix(issue3363): add regression test for non-actionable Gemini review (#4803)

* fix(t3433): reduce jq churn and surface adopt-untracked errors (#4802)

* fix(t3366): remove blanket tail stderr suppression in stale diagnosis (#4800)

* fix(t3424): preserve supervisor health issue lookup reliability (#4801)

Remove stderr suppression from supervisor health issue gh/jq calls so auth and API failures remain debuggable, and ensure supervisor labels are backfilled/created consistently to keep label-based lookup effective after title edits.

* fix: improve DOM style extraction to use representative set with deduplication (#4807)

Addresses medium quality-debt review feedback from PR #2693 (GH#3350).
Replaces fixed tag-list querySelectorAll approach with a representative-set
traversal + style grouping strategy, which is more efficient on large pages
and captures styled elements beyond the original fixed tag list (e.g. div cards).

Closes #3350

* fix: correct imported_at timestamps for Cloudron skills to actual merge time (#4808)

PR #2651 merged at 2026-03-01T16:19:15Z. The three Cloudron skill entries
(cloudron-app-packaging, cloudron-app-publishing, cloudron-server-ops) had
placeholder timestamps of 18:00:00Z instead of the actual import time.

Addresses quality-debt review feedback from gemini on PR #2651 (issue #3351).

Closes #3351

* fix: bump Google model tiers to Gemini 3 in MODEL_TIERS (#4809)

Update flash/pro tier mappings from gemini-2.5 to gemini-3-preview models,
addressing CodeRabbit review feedback on PR #2126. Both models are confirmed
present in OpenCode's Google provider config and available via OpenRouter.

Closes #3341

* fix(t3353): replace python3 JSON parsing with jq in clawdhub/add-skill helpers (#4810)

Address medium quality-debt review feedback from PR #183 (Gemini):
- clawdhub-helper.sh: replace python3 JSON validation with jq -e in fetch_skill_info
- clawdhub-helper.sh: surface HTTP/network errors with curl -fsS instead of -s
- clawdhub-helper.sh: replace python3 owner extraction with jq in fetch_skill_content_playwright
- clawdhub-helper.sh: add EXIT trap for guaranteed temp dir cleanup
- clawdhub-helper.sh: surface npm/playwright install errors (redirect stderr to stdout)
- clawdhub-helper.sh: replace python3 info display with jq in cmd_info
- clawdhub-helper.sh: replace python3 search output with jq in cmd_search
- clawdhub-helper.sh: pass query as argv to python3 in URL encoding (injection-safe)
- add-skill-helper.sh: replace python3 metadata extraction with jq in cmd_add_clawdhub
- add-skill-helper.sh: skip ClawdHub URLs in cmd_check_updates with informational log

Closes #3353
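The jq -e substitution works because jq's exit status doubles as the validation result. A minimal sketch, assuming a hypothetical API response (the real helper reads the ClawdHub API body):

```shell
# jq -e exits nonzero when the filter yields false or null, so one
# if-test performs both JSON validation and field extraction.
response='{"name":"caldav-calendar","version":"1.0.1"}'
if skill_name=$(printf '%s' "$response" | jq -er '.name'); then
  status=ok
else
  status=invalid
fi
```

Unlike a python3 round-trip, this composes cleanly with `set -e` guards and keeps the parse error on stderr where it is visible.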

* fix: address PR #2173 review feedback in resolve_ai_cli() (#4811)

- Correct npm package name from 'opencode' to 'opencode-ai' in install instructions
- Use proper name 'OpenCode' (capitalised) in log messages
- Rename unused `resolved_model` to `_resolved_model` to signal intentional non-use

Closes #3332

* fix: correct imported_at timestamps for Cloudron skills to actual merge time (#4812)

PR #2651 merged at 2026-03-01T16:19:15Z. The three Cloudron skill entries
(cloudron-app-packaging, cloudron-app-publishing, cloudron-server-ops) had
placeholder timestamps of 18:00:00Z instead of the actual import time.

Addresses quality-debt review feedback from gemini on PR #2651 (issue #3351).

Closes #3351

* fix: bump Google model tiers to Gemini 3 in MODEL_TIERS (#4813)

Update flash/pro tier mappings from gemini-2.5 to gemini-3-preview models,
addressing CodeRabbit review feedback on PR #2126. Both models are confirmed
present in OpenCode's Google provider config and available via OpenRouter.

Closes #3341

* fix: add blank lines around fenced code blocks in t1349-brief.md (MD031) (#4815)

Resolves MD031 (blanks-around-fences) violations flagged by CodeRabbit in PR #2462.
All 8 fenced code blocks in the Acceptance Criteria section now have blank lines
before and after, satisfying the 'Lint clean' acceptance criterion in the brief itself.

Closes #3315

* fix(t3329): use here-string with || true for eligible task count in ai-lifecycle.sh (#4816)

Fixes buggy line-counting logic flagged in PR #2113 review (Gemini).
The previous pattern `printf '%s\n' "$eligible_tasks" | grep -c '.' || echo "0"`
produces a multi-line value ("0\n0") when eligible_tasks is empty, because
grep -c exits 1 on no match and the || echo "0" appends a second zero.

Replace with `grep -c . <<< "$eligible_tasks" || true` which correctly
returns a single "0" on empty input and aligns with the repo style guide
requirement to use || true (not || echo) under set -e.

Closes #3329
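The difference between the two patterns is easy to reproduce:

```shell
# Buggy: on empty input grep -c prints "0" AND exits 1, so the
# || echo "0" fallback appends a second zero, yielding "0\n0".
eligible_tasks=""
buggy=$(printf '%s\n' "$eligible_tasks" | grep -c '.' || echo "0")

# Fixed: || true discards the exit status without printing anything,
# leaving the single "0" grep already emitted (safe under set -e).
fixed=$(grep -c . <<< "$eligible_tasks" || true)
```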

* fix: clarify enhancement rollout priority in t1311 research (#4818)

* fix(t3311): regenerate pattern-3 skills after clean (#4819)

* fix(t3326): add regression test for non-actionable Gemini summary (#4820)

* fix(t3342): filter 'no suggestions for improvement' review summaries (#4822)

* fix(issue3313): restore portable proof-log and dedup regressions (#4826)

Replace removed local integration scripts with in-repo regression coverage, and harden contact filename resolution so same-name contacts with different emails are handled predictably.

* fix(issue3350): refine representative style extraction guidance (#4823)

* fix(t3353): surface ClawdHub API fetch and JSON errors (#4817)

* fix(issue3303): add regression test for non-actionable Gemini review (#4825)

* fix(t3325): add regression test for non-actionable gemini review (#4824)

* fix: use <PLACEHOLDER> style values in matterbridge config examples (#4830)

Addresses CodeRabbit review feedback on PR #2255 (issue #3309):
- Add explicit acceptance criterion to t1328-brief.md requiring all config
  examples use <PLACEHOLDER> style values for tokens/credentials, with a
  note referencing tools/credentials/ agents for secure storage
- Replace all bare credential values in matterbridge.md with <PLACEHOLDER>
  tokens (MATRIX_PASSWORD, DISCORD_BOT_TOKEN, TELEGRAM_BOT_TOKEN, etc.)
  and add aidevops secret set guidance inline
- Update matterbridge-helper.sh config template to use <PLACEHOLDER> values
  and add a header comment directing users to tools/credentials/ agents

Closes #3309

* refactor: deduplicate scheduler detection in setup.sh (#4828)

* fix(t3488): parameterize review fix-cycle count query (#4827)

Address Gemini medium review feedback from PR #1388 by replacing task_id SQL interpolation with db_param binding in supervisor deploy triage logic.

Closes #3488

* fix(t3422): remove 2>/dev/null suppression from resolve_model calls (#4829)

Addresses Gemini review feedback on PR #2256. The 2>/dev/null redirections
on resolve_model() calls in ai-actions.sh, batch.sh, and dispatch.sh were
hiding syntax errors in helper scripts and configuration issues. The ||
fallback constructs already handle failure safely; stderr is now visible
for diagnostics.

- ai-actions.sh: resolve_model in _exec_escalate_model()
- batch.sh: resolve_model in cmd_add() tier resolution
- dispatch.sh: resolve_model in verify-mode dispatch branch

Closes #3422

* t3307: clarify PR triage merge step with explicit two-step issue close (#4831)

* fix: clarify PR triage merge step with explicit two-step issue close

Address gemini-code-assist review feedback from PR #2474. The 'Green CI +
all gates passed' bullet was ambiguous — 'closed with a comment' could be
read as a single gh issue close --comment action. Rephrase to match the
document's established convention: comment first to link the merged PR,
then close as two separate steps. Also aligns placeholder style with the
rest of the document (<number>/<slug> vs NUMBER/SLUG).

Closes #3307

* docs: clarify two-step comment-then-close in PR triage audit trail

Make the order of operations explicit in the PR triage bullet: comment
on the issue first (linking the merged PR), then close it. Previously
the sentence implied a single action; now it shows the two separate gh
commands, consistent with the rest of the document's style.

Addresses Gemini review feedback on PR #2474. Closes #3307.

* fix(t3296): add memory-helper.sh references to README domain index entries (#4832)

Address PR #2650 review feedback (gemini findings):
- README.md:505 Pattern Tracking row: replace 'memory system' with 'memory-helper.sh'
- README.md:759 Review row: replace '(memory system)' with '(memory-helper.sh)'

Closes #3296

* fix: parameterize review-triage fix-cycle queries (#4833)

* fix: include .sh files in AI framework audit glob (#4834)

* fix(t3295): remove Docling from PDF OCR overview link (#4835)

* fix: clarify milestone validation blocking vs diagnostics behavior (#4836)

* docs: clarify persistent-label CI guard behavior (#4837)

Clarify that pulse must not use close-keyword references on persistent issues and document the guard-persistent-issues safety-net behavior to prevent accidental closure loops.

* fix: correct XMTP npm module init instructions (#4838)

* fix(issue3282): use fake timers in approval timeout test (#4841)

* fix(t3281): harden health dashboard task issue linking (#4842)

* test: add regression for issue #3323 positive Gemini review filtering (#4821)

* fix: address PR #2475 review feedback in runners-check.md (#4843)

- Replace `cat ... 2>/dev/null` with `test -r` readability check to surface
  file permission errors instead of silently falling back to default value
- Remove `2>/dev/null` from launchctl/crontab calls so system errors
  (command not found, permission issues) are visible for debugging

Closes #3298
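The readability-check pattern can be sketched as follows (file path and default value are hypothetical, not the runners-check.md originals):

```shell
# Instead of `cat file 2>/dev/null || echo default`, test readability
# first so a permission error produces a visible warning rather than
# a silent fallback to the default.
config_file="/tmp/runners-check-demo.$$"
printf 'interval=300\n' > "$config_file"

if [ -r "$config_file" ]; then
  value=$(cat "$config_file")
else
  echo "warning: $config_file not readable" >&2
  value="interval=60"   # hypothetical default
fi
rm -f "$config_file"
```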

* fix(GH#4814): add regression tests for positive-only review filter (#4840)

Adds two regression tests to prevent re-filing quality-debt issues
for purely positive bot reviews:

1. Exact incident body from GH#4814 (PR #2166 Gemini review):
   'The changes are well-implemented and improve the script's
   robustness and quality.' — COMMENTED state, 0 inline comments,
   bot reviewer. Must produce 0 findings (filtered by $summary_only).

2. Positive review body with actionable inline comments present:
   When a bot posts a positive review body but also has inline
   comments with actionable content, the review body is filtered
   but the inline comment is kept. Verifies $summary_only does
   not suppress inline findings.

The filtering logic was already correct (added in prior commits);
these tests lock in the behaviour and prevent regression.

Closes #4814

* fix: add env overrides for simplex bot runtime config (#4850)

* fix: add explicit bun-types to simplex bot tsconfig (#4849)

* fix: address onboarding-helper review feedback from PR #2729 (#4848)

* fix: avoid reconstructing session ID in executeCommand (GH#3266) (#4847)

The sessionId was being reconstructed in command-executor.ts using
hardcoded 'direct:<id>' / 'group:<id>' string patterns, duplicating
the format defined in session.ts. If the format ever changed, this
code would break silently.

Fix: trackSession() now returns the session ID directly from
SessionStore. The ID is threaded through processItem → routeCommand →
buildCommandContext and stored in CommandContext.sessionId.
executeCommand uses ctx.sessionId instead of reconstructing it.

Addresses PR #2375 review feedback (gemini, medium severity).

* fix: surface scanner and jq stderr in skill-scan instead of suppressing (#4846)

Addresses PR #2493 review feedback (issue #3249):
- Redirect skill-scanner background process stderr to indexed .err files
  instead of /dev/null, so Python env/dependency/syntax errors are visible
- Report .err file contents to stderr in the collection loop when non-empty
- Remove 2>/dev/null from jq skill_sources parse so malformed JSON errors
  are visible and set -e can abort on failure as intended

* fix(t3273): replace hardcoded oh-my-pi local path in plans (#4845)

* fix: avoid SIGPIPE false negatives in old-label migration checks (#4844)

Under set -o pipefail, 'launchctl list | grep -qF' can return exit code
141 (SIGPIPE) when grep exits early after a match, causing the migration/
unload path to be skipped even when the old label is present.

Apply the same variable-capture pattern already used in _launchd_is_loaded:
capture launchctl list output into a variable first, then pipe to grep.
Fixes both cmd_enable (line 571) and cmd_disable (line 688).

Closes #3270 (PR #2365 review feedback)
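The variable-capture pattern looks like this (stand-in data in place of real launchctl output):

```shell
# Under pipefail, a writer killed by SIGPIPE (exit 141) can fail the
# whole pipeline even though grep -q already found its match.
# Capturing the output first removes the race entirely.
set -o pipefail
list_output="com.example.old-label
com.example.other"
if grep -qF "com.example.old-label" <<< "$list_output"; then
  migrate=yes
else
  migrate=no
fi
```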

* t3599: extract scheduler detection and migration helpers in setup.sh (#4839)

* fix(t3599): extract scheduler detection and migration helpers to eliminate duplication

Extract two helper functions from the duplicated scheduler setup logic in setup.sh:
- _detect_scheduler_installed: checks both launchd and cron for an existing scheduler
- _migrate_scheduler_cron_to_launchd: handles cron→launchd migration with proper
  failure handling (on failure, signals caller to re-attempt install rather than
  silently marking the scheduler as installed)

Refactor the auto-update and supervisor pulse detection blocks to use these helpers.
Also fixes the migration failure guard: previously a failed cron→launchd migration
would still set _auto_update_installed=true, permanently skipping re-installation.

Addresses review feedback from PR #1971 (gemini-code-assist).
Closes #3599

* fix(t3599): consolidate scheduler setup detection logic

* fix(t3215): generalize dependency-detection search guidance (#4851)

* fix: apply PR #2652 readability feedback to Cloudron packaging docs (#4852)

* fix: remove blanket 2>/dev/null suppression in cleanup_osgrep (#4854)

Per PR #2170 review feedback (issue #3214): pgrep stdout redirect
is sufficient for existence checks; pkill errors are already guarded
by || true and should remain visible for debugging per style guide.

* fix: handle root commit in git diff shortstat for session miner (#4855)

When the oldest commit in a session window is a root commit (no parent),
`git diff --shortstat <hash>~1` fails with an invalid ref error, silently
dropping diff stats for that session.

Fix: check for a parent via `rev-parse --verify --quiet <hash>^`. If none
exists, diff from the canonical empty-tree object (4b825dc...) instead of
`<hash>~1`. This correctly captures insertions from the initial commit.

Closes #3230
Addresses PR #2658 review feedback (gemini-code-assist, extract.py:494)
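The parent check can be sketched in a throwaway repo (demo identities are hypothetical; the empty-tree OID shown is for SHA-1 repositories, which is the git default):

```shell
# A root commit has no ^ parent, so <hash>~1 is an invalid ref.
# Diff against the canonical empty-tree object instead.
repo=$(mktemp -d)
git -C "$repo" init -q
git -C "$repo" -c user.email=demo@example.com -c user.name=demo \
  -c commit.gpgsign=false commit -q --allow-empty -m 'root'
hash=$(git -C "$repo" rev-parse HEAD)

if git -C "$repo" rev-parse --verify --quiet "${hash}^" >/dev/null; then
  base="${hash}~1"
else
  base=4b825dc642cb6eb9a060e54bf8d69288fbee4904   # empty tree (SHA-1)
fi
git -C "$repo" diff --shortstat "$base" "$hash" >/dev/null
rm -rf "$repo"
```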

* fix(tests): fix VERBOSE passthrough in test-verify-brief.sh (#4858)

Replace misleading `export VERBOSE` with proper CLI flag forwarding.
verify-brief.sh parses --verbose from its own $@, not from an env var,
so the export had no effect. Now uses VERBOSE_ARG string variable with
${VERBOSE_ARG:+$VERBOSE_ARG} expansion (bash 3.2 compatible) passed
directly to all verify-brief.sh invocations.

Closes #3255
Addresses PR #2187 review feedback (coderabbit finding)
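The forwarding pattern can be sketched with a stand-in child function (the real target is verify-brief.sh):

```shell
# The child parses --verbose from its own $@, so `export VERBOSE`
# has no effect. Build an argv-forwarding string instead; the :+
# expansion contributes zero words when the variable is empty
# (bash 3.2 compatible, no stray empty argument).
child() { echo "$#"; }   # stand-in: reports how many args it received

VERBOSE_ARG=""
quiet_count=$(child build ${VERBOSE_ARG:+$VERBOSE_ARG})    # flag omitted

VERBOSE_ARG="--verbose"
verbose_count=$(child build ${VERBOSE_ARG:+$VERBOSE_ARG})  # flag passed
```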

* fix: address PR #2357 review feedback on full-loop-helper.sh (#4856)

- Add PR_NUMBER to load_state allowlist so cmd_resume preserves the
  PR reference recorded during pr-create phase instead of overwriting
  it with an empty string (fixes reviewer finding at line 331)
- Extract _run_foreground() function with EXIT trap that removes the
  PID file on process exit, preventing status/logs from falsely
  reporting an active background loop after the subprocess terminates
  (fixes reviewer finding at lines 292/294/307)
- Add explicit return 0 to load_state() per shell quality standards

Closes #3238

* fix(t3244): align /role description and usage with all 5 valid roles (#4857)

The description and usage message for the /role command only listed
observer/member/admin, but validRoles also included author and moderator.
This inconsistency would confuse users who couldn't discover the full
role set from the bot's own help text.

Closes #3244

* fix: correct strategic review scheduling claims in onboarding.md (#4862)

Address PR #2344 review feedback (issue #3217):
- Clarify strategic review is a separate scheduled process, not a pulse step
- Fix misleading claim that 'enabling the pulse enables everything'
- Session miner and circuit breaker correctly noted as pulse exit steps
- Add runners.md reference for strategic review setup

Closes #3217

* fix: address PR #2336 review feedback in strategic-review.md (#4861)

- Fix worktree list to iterate per-repo (git worktree list is repo-scoped,
  not global); removes incorrect 'all repos share the worktree namespace' note
- Fix gh pr merge to require explicit PR number and --repo flag to prevent
  acting on the wrong PR in a cross-repo context

Closes #3216

* fix: address PR #2694 review feedback in brand-identity.md (#4860)

- formality_spectrum: change type from string to number (0) in template
  and example ("4" -> 4) for consistency with brand_positioning numeric scales
- destructive.style: split mixed visual+behavioural value into separate
  style and behaviour fields in both template schema and example
- Brand-identity.toon: fix capitalisation typo to brand-identity.toon

Closes #3224

* fix: address PR #2680 gemini review feedback in per-tenant-rag.md (#4859)

- Remove duplicate alpha default from reciprocalRankFusion signature;
  DEFAULT_QUERY_CONFIG.hybridAlpha is the single source of truth (line 453)
- Fix token budget in assembleContext to account for attribution string
  and separator tokens, preventing context window overrun

Closes #3226

* fix: remove grep stderr suppression in sanity pipeline checks (#4868)

* fix: alphabetize communications links in AGENTS index (#4867)

* fix(t3201): add timeout to Telegram runner dispatch example (#4866)

* fix: clarify Nostr and Matrix AI training policies in privacy-comparison (#4865)

Nostr's AI training policy is relay-dependent (relay operators can process
and monetize public notes), not 'None' at the protocol level. Matrix's
policy is server-dependent (homeserver admins can access unencrypted
messages), not 'None (Foundation)'.

Addresses PR #2776 review feedback from gemini-code-assist.
Closes #3204

* fix: is_model_available returns failure for unknown providers (#4864)

PR #2366 review feedback (issue #3210): the unknown-provider branch
previously returned 0 (success), allowing resolve_chain to emit a model
string with no credential or health verification. This produces silent
runtime failures when the routing table is extended beyond the known
providers (anthropic/openai/google).

Change: return 1 with a warning for unknown providers, consistent with
the existing behaviour for known providers with missing API keys. The
model-availability-helper.sh delegation path is unaffected.

* fix: sort communications keywords alphabetically in subagent-index.toon (#4863)

Address PR #2765 review feedback (gemini-code-assist, medium severity):
- Sort services/communications/ keywords alphabetically for readability
- Improve description to list bot types consistently (Discord bot, Matrix bot)

Closes #3208

* fix(t3189): add no-further-feedback review regression test (#4873)

* fix(issue3188): add regression test for non-actionable Gemini approval review (#4872)

PR #2887 Gemini review ('I approve of this refactoring') was incorrectly
captured as a quality-debt finding by scan-merged before the summary_praise_only
filter was added. The filter already handles this body (summary_praise_only=true
via 'improves', 'consistent', 'good improvement'). Add a regression test to
prevent reintroduction.

Resolves #3188.

* fix(t3209): harden ampcode result files and error visibility (#4871)

* fix: harden budget cost parsing and 7-day burn-rate metric (#4870)

* fix: remove 2>/dev/null from check_permission_failure_pr (GH#3195) (#4869)

Extends PR #2825 fix to check_permission_failure_pr function:
- Remove stderr suppression from gh pr view (exit code captured separately)
- Remove stderr suppression from gh pr comment (|| true retained)

Consistent with check_external_contributor_pr pattern fixed in #2825.

Closes #3195

* fix: eliminate newline-injection vulnerability in shellcheck-wrapper arg filtering (#4875)

Replace printf/read serialization round-trip in _filter_args with direct
global array population. The previous pattern used printf '%s\n' to serialize
args and while IFS= read -r to deserialize them in main(), which was vulnerable
to argument splitting if any arg contained a newline — an attacker could embed
a newline in a filename to inject a second argument and bypass the
--external-sources stripping that prevents 11 GB RSS memory explosions.

Addresses GH#3176 (quality-debt review feedback from PR #2918, Gemini finding
at shellcheck-wrapper.sh:93).
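A minimal sketch of the direct-array approach (function and flag names mirror the description above; this is not the wrapper source):

```shell
# Serializing argv with printf '%s\n' and re-reading it line by line
# splits any argument containing a newline. Appending to a global
# array preserves each argument boundary exactly.
filtered_args=()
_filter_args() {
  filtered_args=()
  local arg
  for arg in "$@"; do
    [ "$arg" = "--external-sources" ] && continue   # strip the flag
    filtered_args+=("$arg")
  done
}

# The newline-embedded filename stays one argument, so the flag
# smuggled inside it is never seen as a standalone --external-sources.
_filter_args "lib.sh" $'evil\n--external-sources'
```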

* fix: align grep -c pattern in setup_terminal_title with established || : convention (#4877)

PR #3003 review feedback (issue #3167): the Tabby disabled_count assignment
used '|| true' on the outer statement rather than '|| :' inside the command
substitution, inconsistent with the pattern established at lines 214-219.

Move || : inside the command substitution so grep -c's exit 1 (no matches) is
suppressed at the source; the "0" grep prints already covers the empty-input case.
ShellCheck: zero violations.
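The convention in miniature (input data is illustrative, not the Tabby config):

```shell
# grep -c prints "0" but exits 1 when nothing matches; suppressing
# that status inside the command substitution keeps set -e happy
# without needing a second fallback value on the outer statement.
set -e
tabby_config=$'enabled=false\nenabled=false'
match_count=$(printf '%s\n' "$tabby_config" | grep -c 'enabled=true' || :)
```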

* fix(t3175): remove redundant wc whitespace trimming (#4878)

* fix(issue3158): add regression test for non-actionable Gemini approval review (#4882)

PR #3060 Gemini review ("The changes are correct and well-justified.") was a
false-positive quality-debt finding. The summary_praise_only filter already
handles this body correctly (via 'effectively' and 'improves'); this test
prevents reintroduction.

Resolves #3158.

* fix(t3178): remove kill stderr suppression in timeout fallback (#4880)

* fix(issue3117): correct mcporter security doc path reference (#4879)

* fix(t3186): harden supervisor state-machine regression tests (#4884)

* fix(t3159): revert out-of-scope indentation changes in backup safety test (#4883)

* fix(t3120): surface jq dataset parse errors in bench parsing (#4881)

* fix(issue3174): sanitize flag file reads in cmd_start and cmd_stop (#4876)

Apply tr sanitization to started_at reads in cmd_start() and cmd_stop()
to prevent terminal escape injection, consistent with the existing pattern
already applied in cmd_status(). Addresses remaining unsanitized reads
from PR #2943 review feedback (GH#3174).

Closes #3174
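The tr pattern can be sketched as (character allowlist and sample payload are illustrative):

```shell
# A flag file may contain hostile bytes such as ANSI escapes; keep
# only timestamp-safe characters before echoing to the terminal.
raw=$'2026-01-24T10:00:00Z\033[H'            # simulated tampered content
started_at=$(printf '%s' "$raw" | tr -cd '0-9TZ:.-')
```

`tr -cd` deletes everything outside the allowed set, so the escape sequence never reaches the terminal.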

* fix(issue3145): add regression test for PR #3077 Gemini summary-only review (#4885)

The Gemini Code Assist review on PR #3077 was a positive summary with no
actionable critique (state=COMMENTED, no inline comments). The scan-merged
command created a false-positive quality-debt issue (#3145) before the
summary_only and summary_praise_only filters were added.

This commit adds a regression test using the exact PR #3077 review body to
ensure the two-layer filter (summary_only rule + summary_praise_only heuristic)
continues to suppress this class of non-actionable Gemini summaries.

30/30 tests pass, shellcheck clean.

Closes #3145

* fix(t3173): filter praise-only Gemini review summaries (#4886)

* fix(t4874): prevent false-positive issues when suggestion already applied before merge (#4887)

scan-merged was creating quality-debt issues for review comments whose
suggestion had already been applied by the author before merging.

Root cause (two interacting bugs):
1. _extract_verification_snippet treated suggestion fences the same as diff
   fences, skipping all lines starting with '-'. Markdown list items like
   '- **Enhances:** t1393' start with '-' and were silently dropped, leaving
   no extractable snippet → finding marked unverifiable → issue created.

2. Even when a snippet was extracted from a suggestion fence, the semantics
   were inverted: the code treated 'snippet found in file' as 'problem still
   exists → keep', but for suggestion fences the snippet IS the proposed fix
   text — finding it in HEAD means the fix was already applied → resolved.

Fix:
- Split diff vs suggestion fence handling in _extract_verification_snippet:
  diff fences skip +/- lines (unified-diff markers); suggestion fences do NOT
  skip '-' lines (they are literal replacement content, not removal markers).
- Add _body_has_suggestion_fence() helper to detect suggestion fence presence.
- Invert snippet semantics in _finding_still_exists_on_main for suggestion
  fences: snippet found in HEAD → fix applied → resolved → skip;
  snippet absent from HEAD → fix not yet applied → keep → create issue.
- Fix grep -Fq to use -e flag so patterns starting with '-' are not
  misinterpreted as grep options (macOS BSD grep does not support '--').

Regression tests (GH#4874):
- suggestion fence: skip finding when markdown list item already applied
- suggestion fence: create issue when markdown list item not yet applied
- Updated test_handles_suggestion_fence_and_comments to reflect correct
  semantics (suggestion text in file → fix applied → no issue)

Incident: false-positive issue #3183 from PR #2871 (merge d8d438a5).
Closes #4874
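The -e fix in isolation (snippet text taken from the incident above; file handling is illustrative):

```shell
# A fixed-string pattern beginning with '-' looks like an option to
# grep; -e forces it to be read as a pattern, which works where the
# '--' end-of-options marker reportedly does not on BSD grep.
snippet='- **Enhances:** t1393'
target=$(mktemp)
printf '%s\n' "$snippet" > "$target"

if grep -Fq -e "$snippet" "$target"; then
  applied=yes   # suggestion text present in HEAD -> fix applied
else
  applied=no
fi
rm -f "$target"
```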

* fix(t3116): remove stderr suppression in security posture helper (#4888)

* fix(t3110): update actions/checkout pin in opencode template (#4889)

Align the checkout action with the newer v4.1.7 commit referenced in the Gemini quality-debt finding from PR #3103. This keeps the pinned SHA current within the v4 line while preserving immutable action pinning.

* feat: add Qlty maintainability tracking to daily quality sweep and local linting (#4890)

Systemic improvement to the daily code quality auditing routines:

1. Enhanced daily quality sweep (stats-functions.sh):
   - Use SARIF output from qlty CLI for structured smell analysis
   - Report per-rule breakdown (function-complexity, file-complexity, etc.)
   - Report per-file breakdown (top 10 files by smell density)
   - Fetch Qlty Cloud badge grade (A/B/C/D/F) from badge SVG
   - Track qlty_smells and qlty_grade in sweep state for delta detection
   - Add Qlty grade + smell count to quality review dashboard
   - Include Qlty in composite badge status indicator
   - Add Qlty grade to issue title for at-a-glance monitoring

2. Enhanced local linting (linters-local.sh):
   - New check_qlty_maintainability() function
   - SARIF-based smell count with severity thresholds
   - Top smell types and top files for targeted fixes
   - Qlty Cloud badge grade check with colour-coded output
   - Respects bundle skip_gates for project-specific overrides

This makes Qlty maintainability a first-class metric alongside SonarCloud,
tracked daily with deltas, surfaced in dashboards, and actionable by the
supervisor for creating targeted quality-debt issues.

* feat: add all-time model usage table and comma-format dollar amounts (#4891)

- Add _format_cost() helper for comma-separated dollar amounts
- Parameterize _get_model_usage() and _get_token_totals() with period (30d/all)
- Extract _render_model_usage_table() to eliminate ~90 lines of duplication
- Profile README now shows both 30-day and all-time model usage tables

* feat: source all-time model usage from opencode.db for full history (#4893)

- Query opencode.db message table (data back to Nov 2025) for all-time stats
- Add _compute_costs_from_tokens() to calculate costs from pricing table
- Add GPT-5.x, grok, kimi, big-pickle to _model_cost_rates()
- Merge model name variants (e.g., claude-opus-4-5-20251101 -> claude-opus-4-5)
- 30-day table still uses llm-requests.db (accurate recorded costs)

* refactor: reduce Qlty maintainability smells in Python scripts (batch 1)

Extract-function refactoring across 3 Python scripts to reduce Qlty
maintainability smells from 32 to 5 (84% reduction in these files,
139 → 111 total across the codebase).

entity-extraction.py (8 → 2 smells):
- Extract _extract_json_from_response, _validate_and_clean_entities
  from _parse_llm_response (complexity 24 → below threshold)
- Extract _build_arg_parser, _format_entity_summary from main
- Replace 6-return extract_entities with dispatch dict _METHOD_DISPATCH
- Flatten nested control flow with guard clauses
- Extract _run_ollama_list to deduplicate similar code

session-miner/extract.py (11 → 1 smells):
- Extract 16 helpers from 5 high-complexity functions
- Replace _summarize_tool_input 9-return chain with _TOOL_SUMMARIZERS dict
- Unify 3 duplicated chunk-building loops into _chunk_records
- Flatten nested control flow with _parse_json_safe helper
- File complexity 208 → 116

email-to-markdown.py (13 → 2 smells):
- Extract 20+ helpers from 7 high-complexity functions
- normalise_email_sections (complexity 44) → _SectionState + 7 handlers
- build_frontmatter (complexity 44) → 3 YAML formatting helpers
- main (complexity 37) → _build_arg_parser + _run_batch + _run_single
- generate_summary 10 returns → _try_llm_summary dispatch
- File complexity 303 → 223

Remaining smells are irreducible without module splits:
- File-level complexity (inherent to module scope)
- email_to_markdown 9 params (public API, cannot change)

* refactor: reduce Qlty maintainability smells in JS/TS files (batch 2)

Extract-function refactoring across 2 JavaScript files to reduce Qlty
maintainability smells from 45 to 23 (49% reduction in these files,
111 → 89 total across the codebase).

index.mjs (12 → 1 smell):
- Extract helpers from all 8 high-complexity functions
- messagesTransformHook (27): 4 helpers for violation detection pipeline
- validateReturnStatements (27): walkFunctionsForReturns + sub-helpers
- validatePositionalParams (21): checkPositionalParamLine + predicates
- runMarkdownQualityPipeline (25): checkMD031, checkTrailingWhitespace
- loadAgentIndex (23): parseToonSubagentBlock + collectLeafAgents
- Replace complex binary expressions with named pattern arrays
- File complexity 359 → 316

playwright-automator.mjs (33 → 22 smells):
- Extract runBatchJob to deduplicate identical batch patterns
- Convert 4 functions from positional params to options objects
- Extract helpers from top 5 complexity functions:
  login (42): tryFillField, tryClickSubmit, isNonAuthUrl
  runDiscovery (43): categoriseRoutes, diffRoutesAgainstCache
  fetchProjectApiWithPolling (40): fetchProjectApiData, evaluateNewestJobStatus
  waitForVideoGeneration (33): logVideoPollingProgress
  batchVideo (32): submitVideoBatch, pollAndRecordVideoResults

* fix: bash 3.2 compatibility — unblock pulse dispatch and add automated checker (#4896)

Root cause: headless-runtime-helper.sh passed the opencode command array as a
single printf-escaped string to sandbox-exec-helper.sh. The sandbox received
one argument and passed it to env as a single executable path, causing 'No such
file or directory' on every pulse cycle. Additionally, tee stdout contaminated
the $() exit code capture, causing arithmetic parse errors.

Fixes:
- headless-runtime-helper.sh: pass cmd array elements as separate args to
  sandbox; write exit code to temp file instead of $() capture
- setup.sh: replace declare -A with string-based dedup (bash 4.0+ feature)
- mission-dashboard-helper.sh: replace local -A with string-based dedup
- aidevops-update-check.sh: replace ${var,,} with tr case conversion
- quality-feedback-helper.sh: replace ${BASH_REMATCH[1],,} with tr

Prevention:
- build.txt: add comprehensive bash 3.2 compatibility rules covering forbidden
  features, subshell traps, and array passing across process boundaries
- linters-local.sh: add check_bash32_compat() gate that scans all shell scripts
  for declare -A, mapfile, ${var,,}, namerefs, coproc, and &>> patterns
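A minimal sketch of the bash 3.2-safe replacements described above (names and values are illustrative, not the actual helper code):

```shell
# String-based dedup instead of declare -A (associative arrays are bash 4.0+):
seen=""
add_unique() {
  case " $seen " in
    *" $1 "*) ;;                  # already recorded, skip
    *) seen="$seen $1" ;;
  esac
}
add_unique jq
add_unique curl
add_unique jq                     # duplicate, ignored

# tr instead of ${var,,} (lowercase expansion is also bash 4.0+):
lower=$(printf '%s' "MixedCase" | tr '[:upper:]' '[:lower:]')
```

Passing a command across a process boundary follows the same spirit: forward `"$@"` element by element rather than collapsing it into one printf-escaped string, so the receiving script sees the original argv.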

* chore: bump version to 2.172.30

* fix: bash escape quoting — "\t" produces literal backslash-t, not tab (#4897)

setup.sh built plist XML with "\t\t<string>..." which produced literal
\t\t in the plist file. launchd rejected the invalid XML with I/O error,
silently killing the supervisor pulse. The auto-update triggered by
v2.172.30 regenerated the plist, exposing this latent bug.

Fix: use $'\t\t' (ANSI-C quoting) for actual tab characters.

Prevention:
- build.txt: add escape sequence quoting rules explaining that bash
  double quotes do NOT interpret \t \n \r (unlike C/Python/JS)
- linters-local.sh: add "\t"/"\n" detection in string concatenation
  context to the bash 3.2 compat checker
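The difference is easy to demonstrate (a standalone sketch, not the setup.sh plist code itself):

```shell
literal="\t\t"     # double quotes keep \t as literal characters: backslash, t, backslash, t
tabs=$'\t\t'       # ANSI-C quoting yields two actual tab characters

printf '%s' "$literal" | wc -c    # 4 bytes
printf '%s' "$tabs" | wc -c       # 2 bytes
```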

* chore: bump version to 2.172.31

* feat: add ripgrep (rg) to required dependencies in setup (#4892)

ripgrep is used extensively by the framework and agents for fast content
search. Previously it was only offered as an optional file discovery tool
in a separate setup step. Now it is installed alongside jq, curl, and ssh
as a required dependency.

Introduces a separate missing_packages array to handle the command-to-
package name mapping (rg command -> ripgrep package). The ripgrep package
name itself is the same across all supported platforms (brew, apt, dnf,
yum, pacman, apk), so a single mapping entry suffices.
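A hypothetical sketch of that mapping (function and list are illustrative, not the setup code):

```shell
# Map a missing command name to its installable package name; ripgrep is
# the one case where the command (rg) differs from the package.
cmd_to_package() {
  case "$1" in
    rg) echo "ripgrep" ;;
    *)  echo "$1" ;;
  esac
}

missing_packages=""
for cmd in jq rg; do                          # example missing commands
  missing_packages="$missing_packages $(cmd_to_package "$cmd")"
done
# missing_packages now holds installer-ready package names: " jq ripgrep"
```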

* docs: note pulse supervisor requires Anthropic sonnet, OpenAI unreliable for orchestration

* docs: add pulse model constraint to model-routing.md — sonnet only, openai unreliable for orchestration

* fix: remove ssh from required deps in setup-modules/core.sh (#4899)

ssh is pre-installed on virtually all Unix systems. The package name
'ssh' doesn't exist on most package managers (correct names vary:
openssh-client, openssh-clients, openssh). Removing the check avoids
a broken install attempt on any platform.

Closes #4898

* chore(release): bump version to 2.173.0

* refactor: reduce Qlty maintainability smells in Python/JS scripts (batch 3a)

* refactor: reduce Qlty maintainability smells in Python scripts (batch 3a)

Extract-function refactoring in 2 Python scripts to reduce smells.
Total codebase: 89 → 69 smells.

extraction_pipeline.py (6 → 1 smell):
- Extract _get_field_rules, _score_field from compute_confidence (29)
- Extract _validate_total_check, _validate_date_field from validate_extraction (31)
- Extract _parse_extract_options from cmd_extract (19, 6 returns)
- File complexity 137 → 103

email-summary.py (6 → 2 smells):
- Extract _filter_meaningful_lines, _split_sentences from _extract_first_sentences (25)
- Extract _build_cli_parser, _handle_json_output from main (18)
- Consolidate summarise_ollama returns with early-exit pattern
- File complexity 121 → 112

* refactor: reduce Qlty maintainability smells in OpenCode TS files (batch 3b)

Extract-function refactoring across 5 TypeScript files.
Total codebase: 69 → 64 smells.

toon.ts lib (5 → 0 smells):
- Extract convertPrimitive, convertArrayToToon, convertObjectToToon
- Extract parseLiteral, parseTabularBlock, parseKeyValuePair
- Extract detectKnownValue for value type dispatch

github-release.ts (2 → 0 smells):
- Extract normalizeVersion, requireVersion, createRelease, createDraftRelease

toon.ts tool (2 → 0 smells):
- Extract resolveInputData, handleEncode/Decode/Compare/Stats

use-streaming.ts (1 → 0 smells):
- Extract dispatchStreamEvent, parseSseLines, readSseStream
- StreamEventHandlers interface for dependency injection

parallel-quality.ts (1 → 0 smells):
- Extract resolveChecksToRun, buildResultSummary, formatQualityResults

* chore: claim t1484

* refactor: reduce Qlty maintainability smells (batch3c)

* refactor: reduce Qlty smells in playwright-automator.mjs (batch 3c)

Extract-function refactoring in playwright-automator.mjs.
Total codebase: 64 → 52 smells (-12).

Extracted helpers from 6 high-complexity functions:
- dismissInterruptions (29): dismissModalsAndBanners, dismissOverlaysAndAgreements
- configureImageOptions (31): setAspectRatio, setEnhanceToggle
- waitForImageGeneration (30): checkImageGenCompletion, retryGenerateIfStalled
- motionPreset (28): listMotionPresets, resolveMotionPresetUrl
- apiRequest (26): apiExecuteFetch, parseApiErrorDetail
- smokeTest (23): smokeTestNavigation, smokeTestCredits

* chore: claim t1485

* chore: claim t1486

* chore: claim t1487

* chore: add module-split tasks for top 4 file-complexity smells (t1485-t1488)

Create detailed task briefs for splitting the 4 highest file-complexity
files into focused modules. Each brief includes proposed module structure,
acceptance criteria, and model tier (opus) for pulse dispatch.

t1485: playwright-automator.mjs (1272) → 6 modules ref:GH#4905
t1486: opencode-aidevops/index.mjs (316) → 5 modules ref:GH#4906
t1487: email-to-markdown.py (223) → 4 modules ref:GH#4907
t1488: seo-content-analyzer.py (177) → 3 modules ref:GH#4908

These are the remaining irreducible file-complexity smells from the
quality sweep session that reduced Qlty smells from 139 → 52.

* fix: split footer text into separate paragraphs for readability (#4909)

Each sentence in the model usage and work-with-AI table footers is now
its own italic paragraph, fixing broken markdown rendering where multi-line
_..._ blocks didn't render correctly on GitHub.

* chore: claim t1488

* chore: claim t1489

* chore: add Codacy quality gate adjustment task (t1489)

Codacy quality gate (0 max new issues) trips on extract-function
refactoring — observed 4x during quality sweep session. Project grade
stays A; only the gate threshold is violated. ref:GH#4910

* chore: claim t1490

* fix: add comma thousands separators to token counts (e.g., 10,425.3M) (#4911)

* feat: bridge daily quality sweep to code-simplifier pipeline (t1490)

Add _create_simplification_issues() to the daily quality sweep that
auto-creates simplification-debt issues for files with high Qlty smell
density (>5 smells). This bridges the gap between the sweep's SARIF
analysis and the code-simplifier's human-gated dispatch pipeline.

Behaviour:
- Only creates issues for files with >5 smells
- Max 3 new issues per sweep (rate limiting)
- Deduplicates against existing open simplification-debt issues
- Issues created with simplification-debt + needs-maintainer-review labels
- Assigned to repo maintainer (from repos.json)
- Issue body includes per-rule breakdown, suggested approach, and
  verification steps — matching code-simplifier.md format
- Maintainer approves/declines via comment (approved/declined: reason)
- Approved issues enter pulse dispatch queue at priority 8

Closes #4912

* perf: tune worker RAM allocation — 512MB per worker, 6GB reserve (was 1GB/8GB)

opencode headless sessions are lightweight (~200-400MB). The previous
1GB/worker + 8GB reserve limited a 24GB machine to 10 workers despite
a max_workers_cap of 24. New values allow full utilisation.
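With the numbers from this commit, the arithmetic works out as follows (a sketch of the sizing logic, not the scheduler code):

```shell
# Worker sizing under the new tuning: 512MB per worker, 6GB reserve.
total_mb=$((24 * 1024))       # 24GB machine
reserve_mb=$((6 * 1024))      # system reserve
per_worker_mb=512
max_workers_cap=24

workers=$(( (total_mb - reserve_mb) / per_worker_mb ))   # 36 by RAM
if [ "$workers" -gt "$max_workers_cap" ]; then
  workers=$max_workers_cap    # the cap now binds, not RAM
fi
```

Under the old 1GB/8GB values the RAM term, not the cap, was the binding constraint.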

* fix: ensure simplification-debt labels exist before issue creation

gh issue create fails with 'label not found' if the simplification-debt
or needs-maintainer-review labels don't exist on the repo yet. Add
label creation (idempotent — gh label create is a no-op if exists) before
the issue creation loop.

* fix: auto-assign issues on creation to prevent duplicate dispatch

claim-task-id.sh now assigns the current GitHub user to newly created
issues immediately. This closes the race window where multiple pulses
or machines could dispatch workers for the same unassigned issue.

* fix: make pulse-wrapper.sh source-safe in zsh/supervisor sessions (GH#4904) (#4920)

- shared-constants.sh: guard all BASH_SOURCE[0] references with :-fallback
  so sourcing from zsh (where BASH_SOURCE is unset) with set -u does not
  abort with 'BASH_SOURCE[0]: parameter not set'
- config-helper.sh: same BASH_SOURCE guard; fix main() guard to check
  BASH_SOURCE is non-empty before comparing to $0, preventing main() from
  running when sourced from zsh (where $0 == 'zsh' == BASH_SOURCE fallback)
- pulse-wrapper.sh: replace [[ $cmd =~ /full-loop|Supervisor Pulse|... ]]
  with case statement — zsh parses the | as a pipe operator inside [[ ]],
  causing 'parse error near |' at lines 3037 and 3062

Verified: sourcing in zsh and bash 3.2 both succeed; all 14 pulse-wrapper
tests pass; shellcheck zero violations on all three files.
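Both guards can be condensed into a short sketch (paraphrased from the fix description, not the actual pulse-wrapper code):

```shell
set -u

# Guard BASH_SOURCE for shells where it is unset (zsh): the :- fallback
# prevents 'parameter not set' aborts under set -u.
script_source="${BASH_SOURCE[0]:-}"
if [ -n "$script_source" ] && [ "$script_source" = "$0" ]; then
  : # executed directly in bash; safe to run main()
fi

# case instead of [[ $cmd =~ a|b ]], because zsh parses an unquoted |
# inside [[ ]] as a pipe operator.
cmd="run Supervisor Pulse now"
case "$cmd" in
  */full-loop*|*"Supervisor Pulse"*) is_pulse=1 ;;
  *) is_pulse=0 ;;
esac
```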

* t1486: split opencode-aidevops index into focused modules (#4915)

* refactor: split opencode plugin index into focused modules

* chore: re-trigger Codacy analysis after gate threshold adjustment (GH#4910)

* t1487: Split email-to-markdown.py into focused modules (complexity 223 → 43) (#4914)

* refactor(t1487): split email-to-markdown.py into focused modules

Reduces file-complexity from 223 to ~43 by extracting three pipeline modules:
- email_parser.py: MIME parsing, headers, body, attachment extraction (~274 lines, complexity ~48)
- email_normaliser.py: section normalisation, thread reconstruction, frontmatter (~455 lines, complexity ~68)
- email_md_summary.py: LLM/heuristic summary generation (~259 lines, complexity ~35)
- email-to-markdown.py: pipelin…