feat: add 81 new features (batch 2-9) #271
Conversation
…ule, and 3 hooks

Add 5 new features to improve production readiness coverage:
- agents/cicd-pipeline.md: GitHub Actions workflow generation, pipeline diagnosis, deploy strategies
- commands/perf-audit.md: Bundle, API, DB, frontend performance audit with A-F grading
- skills/database-migrations.md: Migration patterns for Prisma, Drizzle, Supabase, Knex with rollback and zero-downtime techniques
- rules/environment.md: .env file structure, naming conventions, runtime validation with zod
- hooks/hooks.json: 3 new hooks (secret detection in .env files, file size warnings at 600/800 lines, dependency bloat warnings)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…ntmatter, comment)

- Add SHA pinning note to workflow templates and permissions to Basic CI / Vercel Deploy
- Add YAML frontmatter description to perf-audit.md for command discoverability
- Fix hook comment from "transitive" to "direct" dependencies to match actual behavior

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…e, dependency-updater agent, and 3 hooks

Add 5 new features to Everything Claude Code (batch 2):
- commands/dependency-audit.md: dependency health audit with A-F grading
- skills/observability-patterns.md: structured logging, error tracking, health checks, metrics
- rules/accessibility.md: WCAG 2.1 AA frontend guidelines
- agents/dependency-updater.md: safe incremental dependency update workflow
- hooks/hooks.json: 3 new PostToolUse hooks (img alt check, TODO/FIXME warning, TS any warning)

Final count: 11 agents, 11 commands, 9 skills, 10 rules, 15 hooks

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add 4 new skills and 1 agent to expand coverage into high-demand areas:
- skills/i18n-localization: next-intl, react-i18next, Flask-Babel, Django i18n, RTL, ICU MessageFormat
- skills/monorepo-management: Turborepo, Nx, pnpm workspaces, shared configs, CI optimization
- skills/resilience-patterns: retry/backoff, circuit breaker, bulkhead, timeout, fallback, DLQ (TS/Python/Go)
- skills/infrastructure-as-code: Terraform modules, state mgmt, CI/CD pipelines, drift detection, security
- agents/iac-reviewer: IaC security/cost/reliability/modularity review with A-F grading

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
> **Note (Reviews paused):** It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the settings.
**📝 Walkthrough:** Adds a large set of new documentation (agents, skills, commands, rules, contexts) and updates.
**Estimated code review effort:** 🎯 3 (Moderate) | ⏱️ ~25 minutes
**🚥 Pre-merge checks:** ✅ 2 passed | ❌ 1 failed (inconclusive)
Actionable comments posted: 7
🧹 Nitpick comments (9)
agents/iac-reviewer.md (1)
**4-4:** `tools` field format inconsistency with canonical Claude Code syntax.

The `tools` field uses a JSON array (`["Read", "Grep", "Glob", "Bash"]`), while official Claude Code documentation and `agents/dependency-updater.md` in this same PR use a comma-separated string. The canonical format is the comma-separated string.

♻️ Proposed fix

```diff
-tools: ["Read", "Grep", "Glob", "Bash"]
+tools: Read, Grep, Glob, Bash
```

Based on official Claude Code docs: "To restrict tools, use the tools field (allowlist): `tools: Read, Grep, Glob, Bash`" — confirming the comma-separated string is the documented format.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@agents/iac-reviewer.md` at line 4: the `tools` field uses JSON array syntax instead of the canonical comma-separated string expected by Claude Code; update the entry from `tools: ["Read", "Grep", "Glob", "Bash"]` to `tools: Read, Grep, Glob, Bash` so it matches `agents/dependency-updater.md` and the official Claude Code docs.

skills/monorepo-management/SKILL.md (1)
**1-17:** Add required "How It Works" and "Examples" sections.

Same guideline gap as `skills/i18n-localization/SKILL.md` — the `skills/**/*.md` guideline requires How It Works and Examples sections. The "When to Activate" section is present but the other two required sections are absent.

As per coding guidelines: `skills/**/*.md` — Skills should be formatted as Markdown with clear sections for When to Use, How It Works, and Examples.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@skills/monorepo-management/SKILL.md` around lines 1-17: this skill lacks the required "How It Works" and "Examples" sections; add a "How It Works" section describing the core mechanics (package layout, dependency hoisting, caching, affected-only builds, shared config strategies) and an "Examples" section with at least one concrete example (e.g., Turborepo or pnpm workspace setup steps, sample npm scripts or pipeline commands, and common config snippets) so the file matches the `skills/**/*.md` guideline that mandates When to Use/Activate, How It Works, and Examples.

skills/i18n-localization/SKILL.md (1)
**1-18:** Add required "How It Works" and "Examples" sections.

Per the `skills/**/*.md` coding guideline, skill documents must have clear sections for When to Use, How It Works, and Examples. This file has "When to Activate" (≈ When to Use) and inline code snippets but is missing dedicated `## How It Works` and `## Examples` headings.

♻️ Suggested additions

```diff
 ## When to Activate
 ...
+## How It Works
+
+This skill activates Claude's i18n expertise, guiding you through:
+1. Selecting and configuring a framework-specific i18n library (next-intl, react-i18next, vue-i18n, Flask-Babel, Django)
+2. Structuring translation keys using dot-notation namespaces
+3. Applying ICU MessageFormat for plurals and gender selection
+4. Configuring locale detection (URL → cookie → Accept-Language → default)
+5. Enforcing locale-aware formatting via the native `Intl` API
+
+## Examples
+
+- "Set up i18n in my Next.js project with English, Japanese, and Arabic support"
+- "Add German locale — what translation keys am I missing?"
+- "Our dates show as MM/DD/YYYY in Japan — fix the locale formatting"
+- "Migrate these hardcoded button labels to our translation system"
+
 ## Core Principles
```

As per coding guidelines: `skills/**/*.md` — Skills should be formatted as Markdown with clear sections for When to Use, How It Works, and Examples.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@skills/i18n-localization/SKILL.md` around lines 1-18: the SKILL.md is missing the required "## How It Works" and "## Examples" sections; add a "How It Works" section after "When to Activate" that explains the core concepts (translation key management, locale config, message catalogs, formatting and RTL handling) and references common implementations like next-intl, react-i18next, vue-i18n and Flask-Babel, and add an "Examples" section with concise example scenarios/snippets (translation key structure, locale detection flow, date/number formatting, a short react-i18next integration), ensuring the headings are exactly "## How It Works" and "## Examples" to satisfy the `skills/**/*.md` guideline.

rules/accessibility.md (2)
**129-135:** Escalation protocol references "security-reviewer" for accessibility issues.

The security-reviewer agent is designed for security concerns, not accessibility impact assessment. Consider referencing a more appropriate reviewer or generalizing the guidance (e.g., "request a senior accessibility review") to avoid confusion about agent responsibilities.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@rules/accessibility.md` around lines 129-135: update the Escalation Protocol entry to stop referencing the "security-reviewer" agent for accessibility issues; locate the "Escalation Protocol" section and replace the "Use **security-reviewer** agent for impact assessment" line with a reference to the correct accessibility reviewer or a generalized phrasing (e.g., "request a senior accessibility review") so responsibilities are clear.
**57-67:** Focus trap selector misses several focusable element types.

The `querySelector` on line 61 only targets `'button, [href], input'`, but modals commonly contain `<select>`, `<textarea>`, and elements with explicit `tabindex`. A more robust selector would be:

📝 Suggested improvement

```diff
-  const firstFocusable = modalRef.current?.querySelector('button, [href], input')
+  const firstFocusable = modalRef.current?.querySelector(
+    'button, [href], input, select, textarea, [tabindex]:not([tabindex="-1"])'
+  )
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@rules/accessibility.md` around lines 57-67: the `useEffect` that sets initial focus when the modal opens uses `modalRef.current?.querySelector` with a selector limited to `'button, [href], input'`, which misses common focusable elements; expand the selector to include `a[href]`, `button`, `input`, `select`, `textarea`, and elements with `[tabindex]` (excluding `tabindex="-1"`), and exclude disabled elements so the first truly focusable element is chosen.

hooks/hooks.json (2)
**163-172:** `<img` grep won't catch self-closing tags without a space (`<img/>`).

The pattern `grep -n '<img ' "$file_path"` requires a space after `<img`, so it won't match `<img/>` or `<img` followed by a newline. In JSX, `<img src=.../>` with no space before `src` is uncommon, but a multiline `<img` with attributes on the next line is common and also missed. Consider a looser pattern:

📝 Suggested improvement

```diff
-  missing_alt=$(grep -n '<img ' "$file_path" 2>/dev/null | grep -v 'alt=' || true)
+  missing_alt=$(grep -nP '<img[\s/>]' "$file_path" 2>/dev/null | grep -v 'alt=' || true)
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@hooks/hooks.json` around lines 163-172: the grep pattern only matches `<img ` with a trailing space and misses self-closing tags or tags followed by a newline; update the hook command that builds `missing_alt` to use a looser token-boundary pattern such as `grep -n -E '<img[[:space:]/]'` or `grep -n -P '<img\b'` (or use perl/pcregrep if you need to match tags broken across lines), so `missing_alt` detects `<img/>`, `<img .../>`, and tags whose attributes start on the next line.
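The difference between the two patterns can be checked in isolation. The sample file, its path, and the tag shapes below are made up for this sketch; the loose pattern uses a POSIX-class equivalent of the PCRE suggestion above:

```shell
# Throwaway sample with three <img> shapes
cat > /tmp/img_demo.jsx <<'EOF'
<img src="a.png" alt="ok" />
<img/>
<img src="b.png"/>
EOF

# Original pattern: requires a literal space after <img, so it misses line 2
strict=$(grep -n '<img ' /tmp/img_demo.jsx | grep -v 'alt=' || true)

# Looser pattern: <img followed by whitespace, '/' or '>' also catches <img/>
loose=$(grep -nE '<img[[:space:]/>]' /tmp/img_demo.jsx | grep -v 'alt=' || true)

echo "strict: $strict"   # strict: 3:<img src="b.png"/>
echo "loose:  $loose"
```

The strict variant reports only line 3, while the loose variant also flags the `<img/>` on line 2.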
**173-182:** TODO/FIXME hook only matches patterns with a trailing colon (`TODO:`, `FIXME:`).

The grep pattern `grep -cE '(TODO|FIXME):'` requires a colon, so `// TODO add tests` or `# FIXME this is broken` without colons won't be counted. This could significantly undercount TODOs. Consider making the colon optional:

📝 Suggested fix

```diff
-  todo_count=$(grep -cE '(TODO|FIXME):' "$file_path" 2>/dev/null) || todo_count=0
+  todo_count=$(grep -cE '\b(TODO|FIXME)\b' "$file_path" 2>/dev/null) || todo_count=0
```

Apply the same change to the display grep on the next line.
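The undercount is easy to reproduce. The sample file below is invented for this sketch; note that `\b` in an ERE is a GNU grep extension:

```shell
# Throwaway sample: one colon TODO, one bare TODO, one bare FIXME
cat > /tmp/todo_demo.ts <<'EOF'
// TODO: add tests
// TODO add more tests
// FIXME this is broken
EOF

colon_count=$(grep -cE '(TODO|FIXME):' /tmp/todo_demo.ts)    # colon required
word_count=$(grep -cE '\b(TODO|FIXME)\b' /tmp/todo_demo.ts)  # GNU grep \b

echo "colon=$colon_count word=$word_count"   # colon=1 word=3
```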
🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@hooks/hooks.json` around lines 173-182: update the two grep regexes that count and display TODO/FIXME so the trailing colon is optional and comments like `// TODO add tests` are matched; specifically change the pattern used in the `todo_count` assignment (`grep -cE '(TODO|FIXME):'`) and the display grep (`grep -n -E '(TODO|FIXME):'`) to use an optional colon (e.g., `'(TODO|FIXME):?'`) so non-colon TODO/FIXME instances are included.

agents/cicd-pipeline.md (2)
**135-140:** Vercel URL capture is fragile.

`2>&1 | tail -1` merges stderr into stdout and takes the last line. If the Vercel CLI prints warnings, the captured URL could be corrupted. Consider using `--no-color` and parsing structured output, or capturing only stdout:

📝 Safer alternative

```diff
-          url=$(npx vercel --token=${{ secrets.VERCEL_TOKEN }} 2>&1 | tail -1)
+          url=$(npx vercel --token=${{ secrets.VERCEL_TOKEN }} --no-color 2>/dev/null | tail -1)
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@agents/cicd-pipeline.md` around lines 135-140: in the "Deploy to Vercel Preview" step the current capture (`npx vercel --token=... 2>&1 | tail -1`) can include warnings from stderr; change the command that sets `url` to avoid merging stderr into stdout and to disable colorized output (e.g., `npx vercel --token=${{ secrets.VERCEL_TOKEN }} --no-color 2>/dev/null | tail -n1`) so `url` only contains the CLI stdout, or, if the Vercel CLI in your environment supports structured output (e.g., `--output=json`), parse that to extract the deployment URL instead of tailing the last line.
**86-92:** Consider using glob patterns in flat config instead of the `--ext` flag for ESLint v9.

The `--ext` flag was reintroduced in ESLint v9.21.0+ (your repo uses v9.39.2), so the command works. However, with flat config (the v9 default), the recommended pattern is to specify file globs directly in `eslint.config.js` using `files: ["**/*.{ts,tsx,js,jsx}"]` rather than via CLI flags. This aligns with the flat config design and makes intent explicit in your configuration.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@agents/cicd-pipeline.md` around lines 86-92: the CI "Lint" step passes file types via the `--ext` CLI flag, which is less idiomatic with ESLint v9 flat config; update `eslint.config.js` to declare the target globs instead by adding `files: ["**/*.{ts,tsx,js,jsx}"]` (or the narrower repo globs you need), then simplify the CI step to just `npx eslint .` so linting is driven by `eslint.config.js`.
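For reference, a minimal flat-config sketch of the shape this suggests. This is illustrative, not taken from the repo; the `typescript-eslint` import assumes that package is installed:

```javascript
// eslint.config.js — hypothetical flat-config sketch
import tseslint from "typescript-eslint";

export default [
  ...tseslint.configs.recommended,
  {
    // Declaring the globs here lets CI run plain `npx eslint .`
    // instead of passing `--ext .ts,.tsx,.js,.jsx` on the command line.
    files: ["**/*.{ts,tsx,js,jsx}"],
  },
];
```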
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@skills/monorepo-management/SKILL.md`:
- Around line 354-400: The comment claiming "fetch-depth: 2" is needed for Turbo
to detect changes is misleading because the workflow runs pnpm turbo run build
lint test typecheck with no --filter, so Turbo runs everything; either update
the comment to state that a full run happens when no --filter is used, or change
the build invocation to an affected-only command (e.g., add a Turbo filter like
--filter=...[HEAD^1] if you keep fetch-depth: 2, or --filter=...[origin/main]
and set fetch-depth: 0) and ensure the fetch-depth value matches the chosen
filter; adjust the lines around fetch-depth and the pnpm turbo run build lint
test typecheck step accordingly.
In `@skills/observability-patterns.md`:
- Around line 519-549: Add a brief comment in the middleware function (around
the console.warn usage in middleware) explaining that this Next.js middleware
runs in the Edge Runtime where typical Node.js structured loggers (e.g., Pino)
are unavailable, so console is used as a pragmatic fallback for the example;
mention that for server handlers (not edge middleware) readers should prefer
structured logging and OpenTelemetry spans, and reference the anti-patterns
guidance to avoid confusion.
- Around line 1-8: The skill file is in the wrong place and missing required
sections; move the markdown into a skill folder named observability-patterns and
rename the file to SKILL.md (i.e., create observability-patterns/SKILL.md with
the current content), then add the required sections "When to Activate", "How It
Works", and "Examples" beneath the title (populate each with brief, actionable
content consistent with the other skills in this PR) so the skill discovery tool
recognizes and loads it.
In `@skills/resilience-patterns/SKILL.md`:
- Around line 220-238: The HALF_OPEN handling is fragile: when entering
CircuitState.HALF_OPEN only successCount is reset, leaving failureCount >=
failureThreshold to implicitly trigger reopening; update onFailure to explicitly
treat any failure while in HALF_OPEN as a transition to OPEN. Concretely, inside
the onFailure() method check if this.state === CircuitState.HALF_OPEN and if so
set this.state = CircuitState.OPEN, reset/adjust counters as appropriate (e.g.,
set failureCount = 1 and lastFailureTime = Date.now()), otherwise keep the
existing failureThreshold logic using this.options.failureThreshold; reference
onFailure, onSuccess, CircuitState.HALF_OPEN, failureCount, successCount,
options.failureThreshold, and options.halfOpenMax.
```json
{
  "matcher": "tool == \"Write\" && tool_input.file_path matches \"\\\\.env\"",
  "hooks": [
    {
      "type": "command",
      "command": "#!/bin/bash\n# Block writing real secrets to .env files\ninput=$(cat)\nfile_path=$(echo \"$input\" | jq -r '.tool_input.file_path // \"\"')\ncontent=$(echo \"$input\" | jq -r '.tool_input.content // \"\"')\n\n# Check for real secret patterns\nif echo \"$content\" | grep -qE '(sk-[a-zA-Z0-9]{20,}|ghp_[a-zA-Z0-9]{36}|AKIA[A-Z0-9]{16}|eyJ[a-zA-Z0-9._-]{50,})'; then\n echo \"[Hook] BLOCKED: Real secret pattern detected in $file_path\" >&2\n echo \"[Hook] Detected patterns: sk-*, ghp_*, AKIA*, eyJ* (JWT)\" >&2\n echo \"[Hook] Use placeholder values or reference env vars from your secret manager\" >&2\n exit 1\nfi\n\n# Warn if .env file is not in .gitignore\nif [ -f .gitignore ]; then\n basename_env=$(basename \"$file_path\")\n if ! grep -qE \"(\\.env|$basename_env)\" .gitignore 2>/dev/null; then\n echo \"[Hook] WARNING: $basename_env may not be in .gitignore\" >&2\n echo \"[Hook] Ensure .env files are gitignored to prevent secret leaks\" >&2\n fi\nfi\n\necho \"$input\""
    }
  ],
  "description": "Block real secrets in .env files and warn about .gitignore"
},
```
🧩 Analysis chain
🌐 Web query:
Claude Code hooks exit code 2 block tool use semantics
💡 Result:
In Claude Code hooks, exit code 2 means a blocking error:
- General meaning: the hook is signaling "block this action," and `stderr` is fed back to Claude for automatic processing (rather than just shown to the user). (docs.anthropic.com)
- For tool calls (`PreToolUse`): exit `2` blocks the tool invocation (the tool does not run). Claude receives the hook's `stderr` as feedback. (docs.anthropic.com)
- After tool calls (`PostToolUse`): exit `2` cannot undo the tool execution (the tool already ran); it instead sends `stderr` back to Claude as feedback. (docs.anthropic.com)
- Contrast: exit `0` = success; other non-zero = non-blocking error (execution continues; `stderr` shown to the user). (docs.anthropic.com)
Citations:
- https://docs.anthropic.com/en/docs/claude-code/hooks
Secret-detection hook uses exit 1 instead of exit 2—will not block tool execution.
For PreToolUse hooks in Claude Code, exit code 2 blocks the tool invocation, while exit code 1 is a non-blocking error. The hook will run and detect secrets, but the Write tool will still execute. This defeats the security purpose of blocking real secrets in .env files.
Fix required
On line 50 of the inline bash script, change:
```diff
-  exit 1
+  exit 2
```

Also verify that the existing blocking hooks (lines 10, 40) correctly use exit 2 for consistency.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@hooks/hooks.json` around lines 45 - 54, The inline hook with matcher "tool ==
\"Write\" && tool_input.file_path matches \"\\\\.env\"" currently uses exit 1 in
its bash command which is non-blocking; change that exit 1 to exit 2 to ensure
the PreToolUse hook blocks the Write tool when secrets are detected, and while
you’re here, scan the same "command" blocks (the secret-detection bash command
and any other blocking hooks referenced in this file) to confirm they use exit 2
for true blocking behavior to keep semantics consistent.
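As a sanity check of these semantics, a stripped-down stand-in for the secret-detection hook (the `sk-` key is fake and the pattern simplified; this is not the real hook body) shows the exit codes a blocking hook should return:

```shell
# Minimal stand-in: return 2 (blocking) when a secret-like token appears
check_content() {
  if printf '%s\n' "$1" | grep -qE 'sk-[a-zA-Z0-9]{20,}'; then
    echo "[Hook] BLOCKED: secret pattern detected" >&2
    return 2   # exit 2 = blocking error: PreToolUse suppresses the tool call
  fi
  return 0     # exit 0 = allow; a 'return 1' here would NOT block
}

rc_secret=0;      check_content "OPENAI_API_KEY=sk-abcdefghijklmnopqrstu" || rc_secret=$?
rc_placeholder=0; check_content "OPENAI_API_KEY=placeholder"              || rc_placeholder=$?
echo "secret=$rc_secret placeholder=$rc_placeholder"   # secret=2 placeholder=0
```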
```yaml
  apply:
    needs: plan
    if: github.ref == 'refs/heads/main' && github.event_name == 'push'
    runs-on: ubuntu-latest
    environment: ${{ matrix.environment }}  # each env has its own approval gate
    strategy:
      max-parallel: 1  # apply sequentially: dev → staging → prod
      matrix:
        environment: [dev, staging, prod]
    steps:
      - uses: actions/checkout@v4

      - uses: hashicorp/setup-terraform@v3
        with:
          terraform_version: ${{ env.TF_VERSION }}

      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::${{ secrets.AWS_ACCOUNT_ID }}:role/terraform-ci
          aws-region: us-east-1

      - name: Terraform Init
        working-directory: infrastructure/environments/${{ matrix.environment }}
        run: terraform init -input=false

      - name: Terraform Apply
        working-directory: infrastructure/environments/${{ matrix.environment }}
        run: terraform apply -input=false -auto-approve
```
Matrix ordering is not guaranteed — sequential dev → staging → prod apply is not assured.
GitHub Actions does not guarantee the execution order of matrix combinations, even with `max-parallel: 1`. The apply job could run prod before staging. For a sequential promotion pipeline, use explicit jobs with `needs` dependencies instead of a matrix strategy.
📝 Suggested approach (sequential jobs)

```yaml
apply-dev:
  needs: plan
  if: github.ref == 'refs/heads/main' && github.event_name == 'push'
  uses: ./.github/workflows/terraform-apply.yml
  with:
    environment: dev
apply-staging:
  needs: apply-dev
  uses: ./.github/workflows/terraform-apply.yml
  with:
    environment: staging
apply-prod:
  needs: apply-staging
  uses: ./.github/workflows/terraform-apply.yml
  with:
    environment: prod
```
environment: prod🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@skills/infrastructure-as-code/SKILL.md` around lines 366 - 394, The apply job
currently uses a matrix (job name "apply", matrix: environment, and
max-parallel: 1) which does not guarantee dev→staging→prod ordering; change the
workflow to explicit sequential jobs instead (e.g., create apply-dev,
apply-staging, apply-prod jobs) and wire them with needs dependencies so
apply-staging needs: apply-dev and apply-prod needs: apply-staging; keep the
same steps (checkout, setup-terraform, configure-aws-credentials, Terraform
Init/Apply) but parameterize working-directory/environment per job (pass
environment value like dev/staging/prod) rather than relying on the matrix
strategy.
```yaml
- name: Check for drift
  working-directory: infrastructure/environments/prod
  run: |
    terraform init -input=false
    if ! terraform plan -detailed-exitcode -input=false; then
      echo "::warning::Infrastructure drift detected in production!"
    fi
```
Drift detection script doesn't distinguish between errors and actual drift.
`terraform plan -detailed-exitcode` returns exit code 0 for no changes, 1 for errors, and 2 for drift. The `if !` check treats 1 and 2 identically. Capture the exit code to differentiate:
📝 Suggested fix
```diff
 - name: Check for drift
   working-directory: infrastructure/environments/prod
   run: |
     terraform init -input=false
-    if ! terraform plan -detailed-exitcode -input=false; then
-      echo "::warning::Infrastructure drift detected in production!"
-    fi
+    set +e
+    terraform plan -detailed-exitcode -input=false
+    EXIT_CODE=$?
+    set -e
+    if [ "$EXIT_CODE" -eq 2 ]; then
+      echo "::warning::Infrastructure drift detected in production!"
+    elif [ "$EXIT_CODE" -eq 1 ]; then
+      echo "::error::Terraform plan failed with an error!"
+      exit 1
+    fi
```
+ fi📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| - name: Check for drift | |
| working-directory: infrastructure/environments/prod | |
| run: | | |
| terraform init -input=false | |
| if ! terraform plan -detailed-exitcode -input=false; then | |
| echo "::warning::Infrastructure drift detected in production!" | |
| fi | |
| ``` | |
| - name: Check for drift | |
| working-directory: infrastructure/environments/prod | |
| run: | | |
| terraform init -input=false | |
| set +e | |
| terraform plan -detailed-exitcode -input=false | |
| EXIT_CODE=$? | |
| set -e | |
| if [ "$EXIT_CODE" -eq 2 ]; then | |
| echo "::warning::Infrastructure drift detected in production!" | |
| elif [ "$EXIT_CODE" -eq 1 ]; then | |
| echo "::error::Terraform plan failed with an error!" | |
| exit 1 | |
| fi |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@skills/infrastructure-as-code/SKILL.md` around lines 476 - 483, The current
CI step uses "if ! terraform plan -detailed-exitcode" which conflates terraform
exit codes (1=error, 2=drift); modify the step that runs "terraform plan
-detailed-exitcode -input=false" to capture its exit code (e.g., store "$?") and
branch on it: if exit code == 2 emit the existing warning about infrastructure
drift, if exit code == 1 emit an error log and fail the job (exit non-zero),
otherwise continue for exit code 0; keep the surrounding "terraform init
-input=false" and the same working-directory/infrastructure context.
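The three-way exit-code contract above (0 = no changes, 1 = error, 2 = drift) can also be handled from a Node-based tool. The sketch below is illustrative only: `classifyPlanExit` and `runPlan` are hypothetical helpers, not part of this PR's workflow files; only terraform's documented `-detailed-exitcode` behavior is assumed.

```typescript
// Sketch: interpreting `terraform plan -detailed-exitcode` from Node.
// 0 = clean, 2 = drift (changes pending), anything else = failure.
import { spawnSync } from "node:child_process";

type PlanOutcome = "clean" | "error" | "drift";

export function classifyPlanExit(code: number): PlanOutcome {
  if (code === 0) return "clean";
  if (code === 2) return "drift";
  return "error"; // exit code 1 (or anything unexpected) is a real failure
}

export function runPlan(cwd: string): PlanOutcome {
  const result = spawnSync(
    "terraform",
    ["plan", "-detailed-exitcode", "-input=false"],
    { cwd, stdio: "inherit" },
  );
  // A missing status (e.g. terraform not installed) is treated as an error.
  return classifyPlanExit(result.status ?? 1);
}
```

A caller can then warn on `"drift"` and fail the job only on `"error"`, mirroring the suggested CI fix.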
```yaml
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 2 # needed for turbo to detect changes

      - uses: pnpm/action-setup@v4
        with:
          version: 9

      - uses: actions/setup-node@v4
        with:
          node-version: 22
          cache: pnpm

      - run: pnpm install --frozen-lockfile

      - name: Build, lint, test (with remote cache)
        run: pnpm turbo run build lint test typecheck
        env:
          TURBO_TOKEN: ${{ secrets.TURBO_TOKEN }}
          TURBO_TEAM: ${{ vars.TURBO_TEAM }}

  deploy-web:
    needs: build
    if: github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - uses: pnpm/action-setup@v4
        with:
          version: 9

      - uses: actions/setup-node@v4
        with:
          node-version: 22
          cache: pnpm

      - run: pnpm install --frozen-lockfile

      - name: Build only web app
        run: pnpm turbo run build --filter=web
        env:
          TURBO_TOKEN: ${{ secrets.TURBO_TOKEN }}
          TURBO_TEAM: ${{ vars.TURBO_TEAM }}
```
`fetch-depth: 2` is misleading without a Turbo filter.
The comment on line 358 says `fetch-depth: 2` is "needed for turbo to detect changes," but the build step runs `pnpm turbo run build lint test typecheck` without any `--filter` flag — so Turbo builds all packages regardless of history depth. The comment creates a false impression that affected-only builds are happening.
For a true affected-only CI run, you'd need either:
- `--filter=...[HEAD^1]` (compare against previous commit — works with `fetch-depth: 2`)
- `--filter=...[origin/main]` (compare against base branch — requires more history or `fetch-depth: 0`)
♻️ Suggested correction

```diff
-      - uses: actions/checkout@v4
-        with:
-          fetch-depth: 2 # needed for turbo to detect changes
+      - uses: actions/checkout@v4
+        with:
+          fetch-depth: 0 # full history needed for branch-based filtering

       - uses: pnpm/action-setup@v4
         with:
           version: 9

       - uses: actions/setup-node@v4
         with:
           node-version: 22
           cache: pnpm

       - run: pnpm install --frozen-lockfile

       - name: Build, lint, test (with remote cache)
-        run: pnpm turbo run build lint test typecheck
+        run: pnpm turbo run build lint test typecheck --filter=...[origin/main]
         env:
           TURBO_TOKEN: ${{ secrets.TURBO_TOKEN }}
           TURBO_TEAM: ${{ vars.TURBO_TEAM }}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@skills/monorepo-management/SKILL.md` around lines 354 - 400, The comment
claiming "fetch-depth: 2" is needed for Turbo to detect changes is misleading
because the workflow runs pnpm turbo run build lint test typecheck with no
--filter, so Turbo runs everything; either update the comment to state that a
full run happens when no --filter is used, or change the build invocation to an
affected-only command (e.g., add a Turbo filter like --filter=...[HEAD^1] if you
keep fetch-depth: 2, or --filter=...[origin/main] and set fetch-depth: 0) and
ensure the fetch-depth value matches the chosen filter; adjust the lines around
fetch-depth and the pnpm turbo run build lint test typecheck step accordingly.
```markdown
---
name: observability-patterns
description: Production observability patterns including structured logging, error tracking, health checks, metrics, and monitoring for Node.js and Next.js applications.
---

# Observability Patterns

Production-grade observability patterns for reliable, debuggable applications.
```
Skill won't be discovered by Claude Code — wrong file structure.
Claude Code skills use a folder structure where each skill is a folder with a SKILL.md file (plus optional scripts/resources); SKILL.md is the canonical file that Claude Code's skill tool discovers and loads. This file is at skills/observability-patterns.md (flat) and will be ignored by the skill discovery mechanism. It should be skills/observability-patterns/SKILL.md to match the other skills in this PR and the platform's expected structure.
Additionally, the file is missing the "When to Activate", "How It Works", and "Examples" sections required by the skills/**/*.md coding guideline. All other skills in this PR (i18n, monorepo, resilience) include "When to Activate" at minimum.
🗂️ Proposed fix: rename and add required sections
- Move the file: `skills/observability-patterns.md` → `skills/observability-patterns/SKILL.md`
- Add required sections after the title:

```diff
 # Observability Patterns

 Production-grade observability patterns for reliable, debuggable applications.

+## When to Activate
+
+- Setting up logging/monitoring for a new Node.js or Next.js service
+- Adding error tracking (Sentry) to an existing application
+- Implementing health check endpoints for Kubernetes/ECS probes
+- Adding OpenTelemetry metrics and tracing
+- Debugging production incidents with missing context
+- Reviewing an app for observability gaps before a production launch
+
+## How It Works
+
+This skill applies structured observability patterns in four layers:
+1. **Logging** — Pino with redaction, child loggers, and async-local-storage correlation
+2. **Error tracking** — Sentry with custom error classes, boundaries, and PII scrubbing
+3. **Health checks** — Liveness/readiness probes with dependency status aggregation
+4. **Metrics** — OpenTelemetry counters, histograms, and alert thresholds
+
+## Examples
+
+- "Add request ID correlation to all my Express logs"
+- "My health check returns 200 even when the DB is down — fix it"
+- "Set up Sentry in Next.js and scrub PII before events are sent"
+- "Add OpenTelemetry tracing to my API handlers"
+
 ## Structured Logging
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@skills/observability-patterns.md` around lines 1 - 8, The skill file is in
the wrong place and missing required sections; move the markdown into a skill
folder named observability-patterns and rename the file to SKILL.md (i.e.,
create observability-patterns/SKILL.md with the current content), then add the
required sections "When to Activate", "How It Works", and "Examples" beneath the
title (populate each with brief, actionable content consistent with the other
skills in this PR) so the skill discovery tool recognizes and loads it.
```typescript
// ✅ Next.js middleware for request annotation (measures middleware overhead only)
// Note: For actual API handler latency, use OpenTelemetry spans within route handlers
import { NextResponse } from 'next/server'
import type { NextRequest } from 'next/server'

export function middleware(request: NextRequest) {
  const start = Date.now()

  const response = NextResponse.next()

  // Add timing header
  response.headers.set('Server-Timing', `total;dur=${Date.now() - start}`)

  // Log slow requests
  const duration = Date.now() - start
  if (duration > 1000) {
    console.warn(JSON.stringify({
      level: 'warn',
      message: 'Slow request detected',
      path: request.nextUrl.pathname,
      method: request.method,
      duration,
    }))
  }

  return response
}

export const config = {
  matcher: '/api/:path*',
}
```
console.warn in production middleware contradicts the file's own anti-patterns section without explanation.
The API latency middleware is marked ✅ but uses console.warn — which the Anti-Patterns section (line 601+) explicitly flags as bad practice with ❌. This will confuse readers.
The reason console.warn is used here is that Next.js middleware runs in the Edge Runtime, where Node.js modules like Pino are not available and console is the only logging option. The example needs a comment explaining this constraint; otherwise it actively undermines the structured-logging guidance elsewhere in the document.
📝 Suggested clarification

```diff
 // ✅ Next.js middleware for request annotation (measures middleware overhead only)
 // Note: For actual API handler latency, use OpenTelemetry spans within route handlers
+// Note: Next.js Edge Runtime does not support Node.js modules (Pino, Winston, etc.).
+// console.warn with JSON.stringify is the correct logging approach here;
+// structured logging libraries are only available in Node.js runtime route handlers.
 import { NextResponse } from 'next/server'
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@skills/observability-patterns.md` around lines 519 - 549, Add a brief comment
in the middleware function (around the console.warn usage in middleware)
explaining that this Next.js middleware runs in the Edge Runtime where typical
Node.js structured loggers (e.g., Pino) are unavailable, so console is used as a
pragmatic fallback for the example; mention that for server handlers (not edge
middleware) readers should prefer structured logging and OpenTelemetry spans,
and reference the anti-patterns guidance to avoid confusion.
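The Edge Runtime constraint above can be made concrete with a tiny helper: a structured log line is just a JSON object written through `console`, so no Node-only logger is required. This is an illustrative sketch, not code from the skill file; `edgeLog`, `formatLogLine`, and their field names are hypothetical.

```typescript
// Sketch: structured logging with only `console`, suitable for edge-style
// runtimes where Node-only loggers (Pino, Winston) are unavailable.
// `edgeLog` and its fields are hypothetical, for illustration only.
type Level = "info" | "warn" | "error";

export function formatLogLine(
  level: Level,
  message: string,
  fields: Record<string, unknown> = {},
): string {
  // One JSON object per line keeps logs machine-parseable downstream.
  return JSON.stringify({ level, message, ...fields, ts: new Date().toISOString() });
}

export function edgeLog(level: Level, message: string, fields?: Record<string, unknown>): void {
  const line = formatLogLine(level, message, fields);
  if (level === "error") console.error(line);
  else if (level === "warn") console.warn(line);
  else console.log(line);
}
```

In a Node.js route handler the same call sites would instead go through a real structured logger, as the rest of the skill recommends.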
```typescript
  private onSuccess() {
    if (this.state === CircuitState.HALF_OPEN) {
      this.successCount++;
      if (this.successCount >= this.options.halfOpenMax) {
        this.state = CircuitState.CLOSED;
        this.failureCount = 0;
      }
    } else {
      this.failureCount = 0;
    }
  }

  private onFailure() {
    this.failureCount++;
    this.lastFailureTime = Date.now();
    if (this.failureCount >= this.options.failureThreshold) {
      this.state = CircuitState.OPEN;
    }
  }
```
Circuit breaker: HALF_OPEN failure behavior is coincidentally correct but fragile.
When transitioning to HALF_OPEN (line 201), only successCount is reset — failureCount remains at or above failureThreshold. So onFailure() happens to reopen the circuit because the threshold check on line 235 is already satisfied. However, this is non-obvious and would break if someone adds a this.failureCount = 0 in the HALF_OPEN transition (a natural refactor). The standard pattern explicitly transitions back to OPEN on any HALF_OPEN failure.
📝 Suggested clarification

```diff
   private onFailure() {
     this.failureCount++;
     this.lastFailureTime = Date.now();
-    if (this.failureCount >= this.options.failureThreshold) {
+    if (
+      this.state === CircuitState.HALF_OPEN ||
+      this.failureCount >= this.options.failureThreshold
+    ) {
       this.state = CircuitState.OPEN;
     }
   }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@skills/resilience-patterns/SKILL.md` around lines 220 - 238, The HALF_OPEN
handling is fragile: when entering CircuitState.HALF_OPEN only successCount is
reset, leaving failureCount >= failureThreshold to implicitly trigger reopening;
update onFailure to explicitly treat any failure while in HALF_OPEN as a
transition to OPEN. Concretely, inside the onFailure() method check if
this.state === CircuitState.HALF_OPEN and if so set this.state =
CircuitState.OPEN, reset/adjust counters as appropriate (e.g., set failureCount
= 1 and lastFailureTime = Date.now()), otherwise keep the existing
failureThreshold logic using this.options.failureThreshold; reference onFailure,
onSuccess, CircuitState.HALF_OPEN, failureCount, successCount,
options.failureThreshold, and options.halfOpenMax.
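To see why the explicit HALF_OPEN check matters, here is a minimal self-contained breaker, a sketch rather than the skill's actual class (the name `MiniBreaker` is invented). It deliberately resets `failureCount` on the HALF_OPEN transition, the "natural refactor" the review warns about, yet still reopens on a failed probe because the transition is explicit:

```typescript
// Minimal circuit-breaker sketch showing the explicit HALF_OPEN -> OPEN
// transition. `MiniBreaker` is a hypothetical name for illustration.
type State = "CLOSED" | "OPEN" | "HALF_OPEN";

export class MiniBreaker {
  state: State = "CLOSED";
  private failureCount = 0;
  constructor(private failureThreshold: number) {}

  recordSuccess(): void {
    this.state = "CLOSED";
    this.failureCount = 0;
  }

  recordFailure(): void {
    this.failureCount++;
    // Any failure while HALF_OPEN reopens the circuit immediately,
    // independent of the counter, so a counter reset elsewhere cannot
    // silently break the state machine.
    if (this.state === "HALF_OPEN" || this.failureCount >= this.failureThreshold) {
      this.state = "OPEN";
    }
  }

  // Simulates the open-timeout elapsing: allow one trial request.
  halfOpen(): void {
    this.state = "HALF_OPEN";
    this.failureCount = 0; // the refactor that breaks the implicit version
  }
}
```

With the implicit version (threshold check only), the `halfOpen()` counter reset would leave a failed probe unable to reopen the circuit until the threshold was reached again.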
…/mq skills, rust reviewer

Add 8 new features:
- contexts/debug.md: Systematic debugging workflow context
- skills/rust-patterns: Ownership, error handling, async, testing patterns
- agents/rust-reviewer.md: Rust code reviewer (safety, lifetimes, clippy)
- skills/fastapi-patterns: Pydantic v2, DI, async SQLAlchemy, JWT auth
- skills/express-patterns: TypeScript Express, Zod validation, middleware chains
- skills/kubernetes-patterns: Deployments, Helm, Kustomize, HPA, CI/CD
- skills/load-testing: k6, Artillery, Locust with CI integration
- skills/message-queue-patterns: BullMQ, RabbitMQ, SQS/SNS, DLQ patterns

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
affaan-m
left a comment
Automated review: waiting for checks to complete.
Actionable comments posted: 16
🧹 Nitpick comments (4)
agents/rust-reviewer.md (2)
38-38: `Unbounded collections` is miscategorized under Ownership/Lifetimes. Capacity hinting for `Vec`/`HashMap` is a performance concern, not an ownership or lifetime issue. Moving it to MEDIUM – Performance keeps the priority taxonomy accurate and avoids elevating it to HIGH.

♻️ Proposed fix

```diff
 ### HIGH -- Ownership and Lifetimes
 - **Unnecessary cloning**: `.clone()` where a borrow would suffice
 - **Lifetime over-constraint**: Overly restrictive lifetime bounds
 - **Leaked resources**: Missing `Drop` implementation for owned resources
-- **Unbounded collections**: Growing `Vec`/`HashMap` without capacity hints

 ### MEDIUM -- Performance
 - **Unnecessary allocations**: `String` where `&str` suffices, `Vec` in hot loops
+- **Unbounded collections**: Growing `Vec`/`HashMap` without capacity hints
 - **Missing `#[inline]`**: Small functions called across crate boundaries
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@agents/rust-reviewer.md` at line 38, Locate the "Unbounded collections" entry in agents/rust-reviewer.md (the header/text titled "Unbounded collections") and change its category from the Ownership/Lifetimes section to the Performance section by updating its taxonomy label to "MEDIUM – Performance"; ensure the priority tag/description is updated accordingly and move or re-place the whole entry under the Performance heading so capacity-hinting for Vec/HashMap is documented as a performance item rather than an ownership/lifetime issue.
67-67: `cargo build --release` is slow and redundant in a review checklist. A full release build is expensive and adds nothing over `cargo clippy` + `cargo test` for a code-review workflow. `cargo check` gives fast compile-error feedback without the cost.

♻️ Proposed fix

```diff
-cargo build --release 2>&1 | head -50
+cargo check 2>&1 | head -50
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@agents/rust-reviewer.md` at line 67, Replace the slow, redundant "cargo build --release 2>&1 | head -50" step in the review checklist with faster checks: run "cargo check" for quick compile-error feedback and include "cargo clippy" and "cargo test" for linting and tests; update the rust-reviewer.md entry that references the "cargo build --release" command to list these commands instead and remove the release build to avoid unnecessary CI/workstation cost.

skills/load-testing/SKILL.md (1)
1-8: Add a "How It Works" section to satisfy the skill format guideline. The file has "When to Activate" and code examples but is missing the required How It Works section.

♻️ Suggested addition after the intro paragraph

```diff
 Design and execute load tests to validate performance, identify bottlenecks, and prevent regressions.
+
+## How It Works
+
+Load testing tools (k6, Artillery, Locust) simulate concurrent virtual users (VUs) or arrival-rate-based traffic against a target service. Each tool runs a user-defined script that models realistic flows, collects latency/error metrics, and compares them against configurable thresholds. Tests run in stages (ramp-up → steady-state → ramp-down) and integrate into CI/CD pipelines as performance gates that fail a build when thresholds are breached.
```

Based on learnings, "Skills should be formatted as Markdown with clear sections for When to Use, How It Works, and Examples."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@skills/load-testing/SKILL.md` around lines 1 - 8, The SKILL.md for "load-testing" is missing the required "How It Works" section; add a new "## How It Works" section immediately after the intro paragraph (after "Design and execute load tests...") that briefly explains core concepts and workflow: describe scenario modeling and virtual user behavior, script structure and reusable modules, threshold configuration and pass/fail criteria, key metrics to monitor (latency, throughput, error rate), tool-specific notes for k6/Artillery/Locust, and CI/CD integration for automated regression testing; keep it concise and aligned with existing "When to Activate" and examples so readers know what happens under the hood and how to apply the patterns in practice.

skills/fastapi-patterns/SKILL.md (1)
10-17: Missing "How It Works" section required by coding guidelines. The skill guidelines require three sections: "When to Use", "How It Works", and "Examples". This file uses "When to Activate" (minor rename, acceptable) but omits "How It Works" entirely — there's no section explaining the core design decisions (lifespan management, DI chain, async session scoping, etc.) separate from the code examples.

As per coding guidelines: `skills/**/*.md` — "Skills should be formatted as Markdown with clear sections for When to Use, How It Works, and Examples."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@skills/fastapi-patterns/SKILL.md` around lines 10 - 17, Add a "How It Works" section to SKILL.md that succinctly explains the core design decisions separate from examples: summarize FastAPI lifespan management (startup/shutdown events), dependency injection chain and request-scoped dependencies, async SQLAlchemy session scoping and transaction handling, Pydantic v2 validation/model usage, authentication/authorization flow (token/session management), and testing patterns (TestClient async setup and fixtures); place this section between "When to Activate" and "Examples", use short bullet points or brief paragraphs for each concept, and reference these concepts so readers can map them to the examples in the file.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@agents/rust-reviewer.md`:
- Line 55: Rename the rubric entry titled "Non-exhaustive matching" in
agents/rust-reviewer.md because it conflicts with Rust's #[non_exhaustive]
attribute; change the heading to a clearer phrase such as "Wildcard match arm
hiding new variants" or "Overly broad catch-all arm" and update the short
description text under that heading to reflect that the problem is an
overly-broad `_` arm that can hide future enum variants (not the attribute),
ensuring any examples or references use the new heading consistently.
- Line 11: Update the git diff invocation in the rust reviewer instruction that
currently uses "git diff -- '*.rs' 'Cargo.toml' 'Cargo.lock'" to include HEAD so
staged changes are shown; replace the command string with one that adds HEAD
(e.g., "git diff HEAD -- '*.rs' 'Cargo.toml' 'Cargo.lock'") so the agent
compares both staged and unstaged edits against the last commit.
In `@skills/fastapi-patterns/SKILL.md`:
- Around line 62-86: The example's lifespan function references
Base.metadata.create_all but Base is not imported or defined; add a proper
import for the declarative base (e.g., import Base from your models module) so
Base is available in the entry point where lifespan, engine, and the app are
defined; ensure the import (for example, from app.models import Base) appears
alongside existing imports so Base can be used in lifespan and during startup.
- Around line 229-232: The code treats payload.get("sub") as an int but JWT
"sub" is a string; change handling in the jwt.decode block so you parse/cast the
subject to int before calling db.get(User, user_id): get the raw_sub =
payload.get("sub"), validate it's present, convert raw_sub to int (catch
ValueError) and include that conversion error alongside JWTError when raising
HTTPException, then pass the integer user_id into db.get(User, user_id) to avoid
silent misses.
- Around line 140-154: The snippet's type hint AsyncGenerator is missing from
imports; update the import block to include AsyncGenerator (e.g., add "from
typing import AsyncGenerator") so the signature async def get_db() ->
AsyncGenerator[AsyncSession, None] resolves correctly; locate the import lines
where create_async_engine/async_sessionmaker/AsyncSession are imported and add
AsyncGenerator there.
- Around line 352-369: The SQLAlchemy model snippet references datetime in the
type annotation for created_at (Mapped[datetime]) but datetime is not imported;
add an import for datetime (e.g., from datetime import datetime) at the top of
the snippet so the User class (and its created_at mapped_column) compiles
correctly.
- Around line 218-234: The code uses python-jose imports and jwt.decode in
get_current_user (and references JWTError and oauth2_scheme); replace
python-jose with a maintained JWT library (e.g., PyJWT or Authlib): remove from
jose imports, import the selected library's decode function and its exception
types, call jwt.decode(token, settings.secret_key, algorithms=["HS256"],
options={"verify_signature": True}) according to the library's API, catch the
library's token exceptions (e.g., PyJWT's PyJWTError or InvalidTokenError)
instead of JWTError, and update dependency files (requirements/pyproject) to
remove python-jose and add the chosen library so signature verification and
error handling behave correctly.
- Around line 128-134: The current generic class declaration "class
PaginatedResponse[T](BaseModel)" uses PEP 695 (Python 3.12+) and must either be
documented as requiring Python 3.12+ or changed to a 3.9+ compatible form:
define a TypeVar T and make PaginatedResponse inherit Generic[T] (e.g., add
"from typing import TypeVar, Generic", "T = TypeVar('T')" and declare "class
PaginatedResponse(BaseModel, Generic[T])"), ensuring imports are updated and the
existing fields (items: list[T], total, page, size, pages) remain unchanged.
In `@skills/load-testing/SKILL.md`:
- Around line 123-141: The snippet is missing the k6 HTTP import so http.post in
the default export will throw a ReferenceError; add the import line for k6 HTTP
(import http from "k6/http") at the top alongside the existing imports (where
Trend, Counter, Rate are imported) so the call to http.post in the export
default function works correctly.
- Around line 107-118: The purchaseFlow function blindly assumes
http.get(...).json() returns a non-empty array so product may be undefined and
product.id will crash; fix by first capturing the response (e.g., res =
http.get(...)), check res.status (or res.error) and only call res.json() when
successful, then validate that products is an Array and has length > 0 before
selecting product; if the check fails, short-circuit (return) or log a warning
and skip the POST/checkout steps to avoid dereferencing product.id in
purchaseFlow.
- Around line 229-241: The CI step uses the archived grafana/k6-action@v0.3.1
and sets K6_TARGET but scripts don’t consume it; update the workflow to use
grafana/setup-k6-action@v1 to install k6 and grafana/run-k6-action@v1 to run
tests, passing the same K6_TARGET env; then modify the k6 script
(tests/load/smoke.js) to read the endpoint from __ENV.K6_TARGET (replace
hardcoded "http://localhost:3000" with the env value) so the runner targets
http://app:3000 when running in Actions.
In `@skills/message-queue-patterns/SKILL.md`:
- Around line 1-16: The markdown is missing a dedicated "How It Works" section
required by the skills guideline; add a concise "How It Works" section to
SKILL.md (place it above or after the existing "Core Concepts" section and below
"When to Activate") that briefly explains the runtime workflow of message queues
(message production, delivery, consumption, retries, dead-letter handling,
idempotency) and include one or two short bullets/examples to satisfy the "When
to Use / How It Works / Examples" structure expected by the skills linter;
ensure the new section title is exactly "How It Works" so automated checks can
detect it.
- Around line 116-135: The current idempotency check using
db.processedEvents.findUnique (existing) is a TOCTOU race because it's outside
db.$transaction; either make the DB UNIQUE constraint on processedEvents.eventId
explicit in the docs/comments and show handling of the unique-violation error,
or move the idempotency insert inside db.$transaction as the first operation
(use tx.processedEvents.create for the idempotency key before updating
tx.orders) and catch the unique constraint error to skip processing; reference
processedEvents, findUnique, db.$transaction, tx.processedEvents.create and
ensure the example shows relying on the UNIQUE constraint or
transactions+error-handling clearly.
- Around line 162-169: The consumer currently calls
channel.queue_bind(queue="user-events", exchange="user-events",
routing_key="user.*") without declaring the exchange first, which causes an AMQP
404 if the producer hasn't created it; update the consumer to call
channel.exchange_declare(exchange="user-events", exchange_type="topic",
durable=True) (idempotent) before channel.queue_bind, keeping the existing
queue_declare, basic_qos, basic_consume (process_message) and start_consuming
calls.
- Around line 82-92: The EventPublisher currently calls channel.exchange_declare
on every publish causing a broker round-trip and also misstates thread-safety;
move the exchange declaration out of publish into the constructor or a dedicated
setup() (e.g., perform self.channel.exchange_declare(exchange=...,
exchange_type="topic", durable=True) in __init__/setup so publish only calls
basic_publish), and update the class docstring (near EventPublisher / __init__)
to explicitly state that pika.BlockingConnection and its channels are NOT
thread-safe and either restrict this class to single-threaded use or show
alternatives (pika.SelectConnection or one connection/channel per thread) for
concurrent workloads.
- Around line 231-238: The sendMessage function currently uses
crypto.randomUUID() for MessageDeduplicationId which prevents FIFO
deduplication; change sendMessage to compute a deterministic deduplication id
(e.g., SHA-256 or other hash of JSON.stringify(payload) or use a business key
from payload) and pass that as MessageDeduplicationId only when the target queue
is FIFO (detect via queueUrl ending with .fifo or a config flag); also only
include MessageGroupId for FIFO queues. Update references in sendMessage and any
helpers that build the SQS SendMessageCommand to conditionally add
MessageDeduplicationId and MessageGroupId and to compute the hash from the
payload instead of using crypto.randomUUID().
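The SQS FIFO point above (a random `MessageDeduplicationId` defeats deduplication) can be sketched as content-based deduplication: hash the payload so a retried publish of the same message yields the same id. This is a hedged sketch under stated assumptions; `buildFifoParams` and the queue URLs are hypothetical, and the real fix belongs in the skill's `sendMessage` helper that builds an AWS SDK `SendMessageCommand`.

```typescript
// Sketch: deterministic MessageDeduplicationId for SQS FIFO queues.
// `buildFifoParams` is a hypothetical helper for illustration only.
import { createHash } from "node:crypto";

export function dedupId(payload: unknown): string {
  // Same payload -> same hash -> SQS deduplicates retries within its window.
  return createHash("sha256").update(JSON.stringify(payload)).digest("hex");
}

export function buildFifoParams(queueUrl: string, payload: unknown) {
  const isFifo = queueUrl.endsWith(".fifo");
  return {
    QueueUrl: queueUrl,
    MessageBody: JSON.stringify(payload),
    // Only FIFO queues accept these attributes; omit them for standard queues.
    ...(isFifo && {
      MessageGroupId: "default",
      MessageDeduplicationId: dedupId(payload),
    }),
  };
}
```

A business key from the payload (e.g. an order id) works equally well as the hash input when one exists.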
```markdown
You are a senior Rust code reviewer ensuring memory safety, idiomatic patterns, and production readiness.

When invoked:
1. Run `git diff -- '*.rs' 'Cargo.toml' 'Cargo.lock'` to see recent Rust file changes
```
git diff without HEAD silently misses staged changes.
git diff -- '*.rs' ... compares the working tree against the index; once a developer runs git add, all those changes vanish from the output, and the agent reviews nothing. Use git diff HEAD -- to capture both staged and unstaged changes against the last commit.
🐛 Proposed fix
```diff
-1. Run `git diff -- '*.rs' 'Cargo.toml' 'Cargo.lock'` to see recent Rust file changes
+1. Run `git diff HEAD -- '*.rs' 'Cargo.toml' 'Cargo.lock'` to see recent Rust file changes
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@agents/rust-reviewer.md` at line 11, Update the git diff invocation in the
rust reviewer instruction that currently uses "git diff -- '*.rs' 'Cargo.toml'
'Cargo.lock'" to include HEAD so staged changes are shown; replace the command
string with one that adds HEAD (e.g., "git diff HEAD -- '*.rs' 'Cargo.toml'
'Cargo.lock'") so the agent compares both staged and unstaged edits against the
last commit.
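The difference is easy to verify in a throwaway repo (illustrative sketch; the file name and repo location are made up):

```shell
# Staged changes vanish from plain `git diff` but remain visible to `git diff HEAD`.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
echo 'fn main() {}' > lib.rs
git add lib.rs
git commit -q -m 'add lib.rs'
echo 'fn main() { println!("hi"); }' > lib.rs
git add lib.rs                          # stage the edit, as a developer would
test -z "$(git diff -- '*.rs')"         # worktree vs index: empty -> agent reviews nothing
test -n "$(git diff HEAD -- '*.rs')"    # worktree+index vs HEAD: staged edit is visible
echo ok
```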
```markdown
### MEDIUM -- Best Practices
- **Missing `#[must_use]`**: Functions where ignoring return is likely a bug
- **Stringly-typed APIs**: Using `String` where an enum or newtype fits
- **Non-exhaustive matching**: Catch-all `_` hiding new variants
```
Non-exhaustive matching clashes with Rust's own `#[non_exhaustive]` attribute.
In Rust, `#[non_exhaustive]` marks a type as intentionally open to future variants. Describing a catch-all `_` arm as "non-exhaustive matching" inverts the meaning and will confuse reviewers. Rename the entry to something like "Wildcard match arm hiding new variants" or "Overly broad catch-all arm".
🐛 Proposed fix
```diff
-- **Non-exhaustive matching**: Catch-all `_` hiding new variants
+- **Wildcard catch-all arm**: Catch-all `_` hiding new variants
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@agents/rust-reviewer.md` at line 55, Rename the rubric entry titled
"Non-exhaustive matching" in agents/rust-reviewer.md because it conflicts with
Rust's #[non_exhaustive] attribute; change the heading to a clearer phrase such
as "Wildcard match arm hiding new variants" or "Overly broad catch-all arm" and
update the short description text under that heading to reflect that the problem
is an overly-broad `_` arm that can hide future enum variants (not the
attribute), ensuring any examples or references use the new heading
consistently.
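To see why the rename matters, a hypothetical enum (not from the reviewed agent) shows the actual hazard — an exhaustive match breaks loudly when a variant is added, while a `_` arm would absorb it silently:

```rust
#[derive(Debug, PartialEq)]
enum Status {
    Active,
    Suspended,
    Deleted,
}

// Exhaustive match: if `Status::Archived` is added later, this stops compiling,
// forcing a deliberate decision. A `_` catch-all would instead hide the new
// variant in the default branch.
fn is_visible(s: &Status) -> bool {
    match s {
        Status::Active => true,
        Status::Suspended | Status::Deleted => false,
    }
}

fn main() {
    assert!(is_visible(&Status::Active));
    assert!(!is_visible(&Status::Suspended));
    println!("ok");
}
```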
```python
from contextlib import asynccontextmanager
from fastapi import FastAPI
from app.routers import users, items, auth
from app.db.session import engine


@asynccontextmanager
async def lifespan(app: FastAPI):
    # Startup
    async with engine.begin() as conn:
        await conn.run_sync(Base.metadata.create_all)
    yield
    # Shutdown
    await engine.dispose()


app = FastAPI(
    title="My API",
    version="1.0.0",
    lifespan=lifespan,
)


app.include_router(auth.router, prefix="/auth", tags=["auth"])
app.include_router(users.router, prefix="/users", tags=["users"])
app.include_router(items.router, prefix="/items", tags=["items"])
```
Base is not defined or imported in the entry point example.
Line 72 calls Base.metadata.create_all, but Base is neither imported nor defined in this code block. Readers will encounter a NameError when copying the snippet as-is. Add an import such as from app.models import Base (consistent with the project layout above).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@skills/fastapi-patterns/SKILL.md` around lines 62 - 86, The example's
lifespan function references Base.metadata.create_all but Base is not imported
or defined; add a proper import for the declarative base (e.g., import Base from
your models module) so Base is available in the entry point where lifespan,
engine, and the app are defined; ensure the import (for example, from app.models
import Base) appears alongside existing imports so Base can be used in lifespan
and during startup.
```python
class PaginatedResponse[T](BaseModel):
    items: list[T]
    total: int
    page: int
    size: int
    pages: int
```
🧩 Analysis chain
🏁 Scripts executed:
```shell
# First, let's find and examine the file
find . -name "SKILL.md" -path "*/fastapi-patterns/*" -type f

# Also check if the fastapi-patterns directory exists
find . -type d -name "fastapi-patterns"

# Let's search for the file more broadly
fd -t f "SKILL.md" | head -20

# Read the file to check the content around lines 128-134
cat -n ./skills/fastapi-patterns/SKILL.md | sed -n '120,140p'

# Also check the beginning of the file for any Python version constraints mentioned
head -30 ./skills/fastapi-patterns/SKILL.md

# Get the full line count to understand the file size
wc -l ./skills/fastapi-patterns/SKILL.md

# Check the entire file for any mentions of Python version requirements or compatibility
grep -i "python" ./skills/fastapi-patterns/SKILL.md

# Verify the PEP 695 claim by checking for documentation about type syntax
grep -i "3.12\|pep 695\|generic\|typevar" ./skills/fastapi-patterns/SKILL.md

# Check if there's a README or requirements file that specifies Python version
find ./skills/fastapi-patterns -type f \( -name "README*" -o -name "requirements*" -o -name "pyproject.toml" \)

# Look at more context around the PaginatedResponse class definition
sed -n '100,160p' ./skills/fastapi-patterns/SKILL.md

# Check whether the repo documents target Python versions
find . -name "*.md" -o -name "*.toml" -o -name "*.yml" | xargs grep -l "python.*version\|3.9\|3.10\|3.11\|3.12" 2>/dev/null | head -10

# Check for a pyproject.toml or setup.py at the repo root
ls -la | grep -E "pyproject|setup|requirements"
```
🌐 Web query:
PEP 695 Python type parameters syntax Python 3.12 requirement
💡 Result:
PEP 695 introduces new syntax for declaring type parameters (generics) and for declaring type aliases with the new `type` statement, e.g.:
```python
def max[T](args: Iterable[T]) -> T: ...
class list[T]: ...
type Point[T] = tuple[T, T]
```
That syntax is a Python 3.12 language feature (the PEP's Python-Version is 3.12), so it requires Python 3.12+ to parse/run; on Python 3.11 and earlier it's a syntax error. If your package uses this syntax in .py files, you should set `python_requires >= "3.12"`. [1][2]
References:
[1] PEP 695 – Type Parameter Syntax (peps.python.org) (peps.python.org)
[2] Python docs: What’s New In Python 3.12 → “PEP 695: Type Parameter Syntax” (docs.python.org)
[3] Python 3.12 typing docs noting type-parameter syntax added/changed in 3.12 (docs.python.org)
Citations:
- 1: https://peps.python.org/pep-0695/?utm_source=openai
- 2: https://docs.python.org/3/whatsnew/3.12.html?utm_source=openai
- 3: https://docs.python.org/3.12/library/typing.html?utm_source=openai
🌐 Web query:
Pydantic Generic TypeVar compatibility Python 3.9 3.10 3.11
💡 Result:
What works across Python 3.9 / 3.10 / 3.11
1) Pydantic v1 (≤ 1.10.x): use GenericModel
In Pydantic v1, generic models must inherit from pydantic.generics.GenericModel (plus typing.Generic[T]) for TypeVar parameterization to work correctly. Example pattern from the v1 docs: class Response(GenericModel, Generic[DataT]): ... and then Response[int](...). [1]
2) Pydantic v2 (≥ 2.0): use BaseModel + Generic[T] (and TypeAdapter for “types”)
Pydantic v2 supports generics in its “types” system (e.g., Annotated[list[T], ...]) and you can validate a concrete specialization like ShortList[int] via TypeAdapter. [2]
Also, v2 migration notes highlight that TypeAdapter is the replacement for validating arbitrary (including generic) types outside models. [3]
3) Python-version-specific typing gotchas (3.9 vs 3.10 vs 3.11)
- `ParamSpec` / parameter expressions in `Generic[...]`: the standard library typing behavior expanded in Python 3.10 ("`Generic` can now be parameterized over parameter expressions"; see `ParamSpec`). If you need that feature while supporting 3.9, you'll typically rely on `typing_extensions` for the newer typing constructs on older interpreters. [4]
- `typing.Self` exists in Python 3.11 (PEP 673); support in Pydantic depended on version and historically produced errors in some cases (see the Pydantic issue discussing lack of support at the time). If you need to run on 3.9/3.10 too, prefer `typing_extensions.Self` and verify against your Pydantic version. [5]

Compatibility recipe (practical)
- If you must support Pydantic v1 and v2: write two implementations (or a small shim) because v1 generic models want `GenericModel`, while v2 generally uses `BaseModel` generics and/or `TypeAdapter`. [1] [2] [3]
- If you must support Python 3.9–3.11: use `typing_extensions` for any typing features newer than 3.9 (e.g., `Self`, some newer forms of `TypeAlias`, etc.), and keep your generics in the classic `TypeVar` + `Generic[T]` form. [4] [5]
Sources:
[1] Pydantic v1 generic models docs (GenericModel) https://docs.pydantic.dev/1.10/usage/models/
[2] Pydantic “Types → Generics” (TypeVar inside Annotated; TypeAdapter over generic aliases) https://docs.pydantic.dev/latest/concepts/types/
[3] Pydantic v2 migration guide (TypeAdapter notes) https://docs.pydantic.dev/2.0/migration/
[4] Python 3.11 typing docs (notes “Changed in version 3.10” re ParamSpec/parameter expressions) https://docs.python.org/3.11/library/typing.html
[5] Pydantic issue re typing.Self support on Python 3.11 pydantic/pydantic#5992
Document the Python 3.12+ requirement or provide a Python 3.9+ compatible alternative.
The class PaginatedResponse[T](BaseModel) syntax (PEP 695) requires Python 3.12+. For Pydantic v2 on Python 3.9–3.11, use Generic[T] instead.
♻️ Portable alternative (Python 3.9+)
```diff
+from typing import Generic, TypeVar
+T = TypeVar("T")
+
-class PaginatedResponse[T](BaseModel):
+class PaginatedResponse(BaseModel, Generic[T]):
     items: list[T]
     total: int
     page: int
     size: int
     pages: int
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@skills/fastapi-patterns/SKILL.md` around lines 128 - 134, The current generic
class declaration "class PaginatedResponse[T](BaseModel)" uses PEP 695 (Python
3.12+) and must either be documented as requiring Python 3.12+ or changed to a
3.9+ compatible form: define a TypeVar T and make PaginatedResponse inherit
Generic[T] (e.g., add "from typing import TypeVar, Generic", "T = TypeVar('T')"
and declare "class PaginatedResponse(BaseModel, Generic[T])"), ensuring imports
are updated and the existing fields (items: list[T], total, page, size, pages)
remain unchanged.
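A runnable sketch of the portable form (a plain dataclass stands in for Pydantic's `BaseModel`, which is assumed unavailable here; the field layout matches the skill's example):

```python
# Classic TypeVar/Generic form — works on Python 3.9+, unlike PEP 695 syntax.
from dataclasses import dataclass
from typing import Generic, List, TypeVar

T = TypeVar("T")

@dataclass
class PaginatedResponse(Generic[T]):
    items: List[T]
    total: int
    page: int
    size: int
    pages: int

# Parameterization reads the same as with the 3.12-only `class ...[T]` syntax
page = PaginatedResponse[int](items=[1, 2, 3], total=3, page=1, size=10, pages=1)
```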
```python
from sqlalchemy.ext.asyncio import create_async_engine, async_sessionmaker, AsyncSession


engine = create_async_engine(settings.database_url)
async_session = async_sessionmaker(engine, expire_on_commit=False)


async def get_db() -> AsyncGenerator[AsyncSession, None]:
    async with async_session() as session:
        try:
            yield session
            await session.commit()
        except Exception:
            await session.rollback()
            raise
```
AsyncGenerator is not imported in the get_db example.
async def get_db() -> AsyncGenerator[AsyncSession, None] references AsyncGenerator which is absent from the import block shown. Readers copying this snippet will get a NameError.
🐛 Proposed fix
```diff
-from sqlalchemy.ext.asyncio import create_async_engine, async_sessionmaker, AsyncSession
+from collections.abc import AsyncGenerator
+from sqlalchemy.ext.asyncio import create_async_engine, async_sessionmaker, AsyncSession
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@skills/fastapi-patterns/SKILL.md` around lines 140 - 154, The snippet's type
hint AsyncGenerator is missing from imports; update the import block to include
AsyncGenerator (e.g., add "from typing import AsyncGenerator") so the signature
async def get_db() -> AsyncGenerator[AsyncSession, None] resolves correctly;
locate the import lines where
create_async_engine/async_sessionmaker/AsyncSession are imported and add
AsyncGenerator there.
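The shape of the fix can be exercised without SQLAlchemy — `FakeSession` below is a stand-in for `AsyncSession`, and the import is the `collections.abc` one from the proposed diff:

```python
import asyncio
from collections.abc import AsyncGenerator

class FakeSession:
    """Stand-in for SQLAlchemy's AsyncSession."""
    def __init__(self):
        self.committed = False
    async def commit(self):
        self.committed = True
    async def rollback(self):
        pass

async def get_db() -> AsyncGenerator[FakeSession, None]:
    session = FakeSession()
    try:
        yield session           # dependency hands the session to the endpoint
        await session.commit()  # commit once the endpoint finishes cleanly
    except Exception:
        await session.rollback()
        raise

async def main() -> bool:
    last = None
    async for session in get_db():  # drives the generator to completion
        last = session
    return last.committed

committed = asyncio.run(main())
```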
```markdown
---
name: message-queue-patterns
description: Message queue patterns for event-driven architectures — producer/consumer, pub/sub, dead letter queues, idempotent consumers, with RabbitMQ, Redis Streams, BullMQ, and AWS SQS/SNS examples.
---

# Message Queue Patterns

Build reliable event-driven systems with message queues for decoupled, scalable communication.

## When to Activate

- Designing event-driven or message-based architectures
- Implementing producer/consumer or pub/sub patterns
- Setting up RabbitMQ, Redis Streams, BullMQ, or AWS SQS/SNS
- Building idempotent consumers and dead letter queues
- Handling async workflows (order processing, notifications, ETL)
```
Missing "How It Works" section per skills guideline.
The skill has "Core Concepts" but is missing a dedicated "How It Works" section. As per coding guidelines, skills/**/*.md files must contain clear sections for When to Use, How It Works, and Examples. "Core Concepts" can be repurposed or a short "How It Works" section should be added above or after it to satisfy this requirement.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@skills/message-queue-patterns/SKILL.md` around lines 1 - 16, The markdown is
missing a dedicated "How It Works" section required by the skills guideline; add
a concise "How It Works" section to SKILL.md (place it above or after the
existing "Core Concepts" section and below "When to Activate") that briefly
explains the runtime workflow of message queues (message production, delivery,
consumption, retries, dead-letter handling, idempotency) and include one or two
short bullets/examples to satisfy the "When to Use / How It Works / Examples"
structure expected by the skills linter; ensure the new section title is exactly
"How It Works" so automated checks can detect it.
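As a sketch of what such a "How It Works" section would cover, here is the consume → retry → dead-letter loop as a toy in-memory model (no broker; function and message names are illustrative):

```python
from collections import deque

MAX_ATTEMPTS = 3

def drain(messages, handler):
    """Deliver each message, redelivering failures until retries are exhausted."""
    queue = deque((m, 1) for m in messages)   # (payload, attempt)
    processed, dead_letters = [], []
    while queue:
        payload, attempt = queue.popleft()
        try:
            handler(payload)
            processed.append(payload)                 # ack
        except Exception:
            if attempt >= MAX_ATTEMPTS:
                dead_letters.append(payload)          # route to DLQ
            else:
                queue.append((payload, attempt + 1))  # nack -> redeliver
    return processed, dead_letters

def handler(payload):
    if payload == "poison":
        raise ValueError("cannot process")

ok, dlq = drain(["order-1", "poison", "order-2"], handler)
```

An idempotent consumer is what makes the redelivery step safe: processing "order-1" twice must have the same effect as processing it once.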
```python
    def publish(self, exchange: str, routing_key: str, payload: dict):
        self.channel.exchange_declare(exchange=exchange, exchange_type="topic", durable=True)
        self.channel.basic_publish(
            exchange=exchange,
            routing_key=routing_key,
            body=json.dumps(payload),
            properties=pika.BasicProperties(
                delivery_mode=2,  # Persistent
                content_type="application/json",
            ),
        )
```
`exchange_declare` on every `publish()` invocation wastes a broker round-trip; missing thread-safety caveat.
Two issues with the EventPublisher class:

1. `channel.exchange_declare` (line 83) is called on every `publish()` call. While RabbitMQ makes this call idempotent, it still incurs a synchronous server round-trip for each message. Move the declaration to `__init__` (or a dedicated `setup()` method).
2. The docstring on line 74 says "Reuse connections and channels for production use" but `pika.BlockingConnection` is not thread-safe. Re-using a single instance across threads will corrupt the connection state. The docstring should warn that this class is single-threaded only, or the example should show `pika.SelectConnection` / a connection-per-thread pattern for concurrent workloads.
♻️ Suggested fix
```diff
 class EventPublisher:
-    """Reuse connections and channels for production use."""
+    """Single-threaded publisher. Not thread-safe — use one instance per thread."""
     def __init__(self, host: str = "localhost"):
         self.connection = pika.BlockingConnection(
             pika.ConnectionParameters(host)
         )
         self.channel = self.connection.channel()
+        # Declare exchange once at construction time
+        self.channel.exchange_declare(exchange="user-events", exchange_type="topic", durable=True)

-    def publish(self, exchange: str, routing_key: str, payload: dict):
-        self.channel.exchange_declare(exchange=exchange, exchange_type="topic", durable=True)
+    def publish(self, exchange: str, routing_key: str, payload: dict):
         self.channel.basic_publish(
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@skills/message-queue-patterns/SKILL.md` around lines 82 - 92, The
EventPublisher currently calls channel.exchange_declare on every publish causing
a broker round-trip and also misstates thread-safety; move the exchange
declaration out of publish into the constructor or a dedicated setup() (e.g.,
perform self.channel.exchange_declare(exchange=..., exchange_type="topic",
durable=True) in __init__/setup so publish only calls basic_publish), and update
the class docstring (near EventPublisher / __init__) to explicitly state that
pika.BlockingConnection and its channels are NOT thread-safe and either restrict
this class to single-threaded use or show alternatives (pika.SelectConnection or
one connection/channel per thread) for concurrent workloads.
```typescript
// Idempotency check — skip if already processed
const existing = await db.processedEvents.findUnique({
  where: { eventId: job.id },
});
if (existing) {
  console.log(`Order ${orderId} already processed, skipping`);
  return;
}

// Process order
await db.$transaction(async (tx) => {
  await tx.orders.update({
    where: { id: orderId },
    data: { status: "confirmed" },
  });
  // Record idempotency key
  await tx.processedEvents.create({
    data: { eventId: job.id!, processedAt: new Date() },
  });
});
```
TOCTOU race in idempotency check — DB unique constraint is a required but unstated dependency.
The guard at lines 117–123 (findUnique → early return) is outside the transaction. Two concurrent workers processing the same job.id can both pass this check before either one writes the processedEvents record, causing double-processing.
The transaction on line 126 provides safety only if there is a UNIQUE constraint on processedEvents.eventId at the database level (which would cause the second create to throw and roll back). This constraint is a silent dependency — the example should make it explicit, either with a comment or by moving the idempotency insert inside the transaction as the first operation and relying on the constraint to reject duplicates.
💡 Suggested note/fix
```diff
+// Requires: UNIQUE constraint on processedEvents(eventId) to handle concurrent workers safely
 // Idempotency check — skip if already processed
 const existing = await db.processedEvents.findUnique({
   where: { eventId: job.id },
 });
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@skills/message-queue-patterns/SKILL.md` around lines 116 - 135, The current
idempotency check using db.processedEvents.findUnique (existing) is a TOCTOU
race because it's outside db.$transaction; either make the DB UNIQUE constraint
on processedEvents.eventId explicit in the docs/comments and show handling of
the unique-violation error, or move the idempotency insert inside
db.$transaction as the first operation (use tx.processedEvents.create for the
idempotency key before updating tx.orders) and catch the unique constraint error
to skip processing; reference processedEvents, findUnique, db.$transaction,
tx.processedEvents.create and ensure the example shows relying on the UNIQUE
constraint or transactions+error-handling clearly.
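The insert-first variant is easy to demonstrate with SQLite standing in for the Prisma/Postgres setup in the example (table and event names are illustrative):

```python
import sqlite3

db = sqlite3.connect(":memory:")
# PRIMARY KEY provides the UNIQUE constraint the example silently depends on
db.execute("CREATE TABLE processed_events (event_id TEXT PRIMARY KEY)")

def process_once(event_id: str) -> bool:
    """Insert the idempotency key first; the UNIQUE constraint rejects duplicates."""
    try:
        with db:  # transaction: key insert and business update commit together
            db.execute(
                "INSERT INTO processed_events (event_id) VALUES (?)", (event_id,)
            )
            # ... business update (e.g. mark the order confirmed) goes here ...
        return True
    except sqlite3.IntegrityError:
        return False  # another worker already claimed this event — skip

first = process_once("evt-1")
second = process_once("evt-1")  # duplicate delivery
```

Because the key insert happens inside the transaction, two concurrent workers cannot both pass the check: the database, not a racy read, arbitrates.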
```python
connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

channel.queue_declare(queue="user-events", durable=True)
channel.queue_bind(queue="user-events", exchange="user-events", routing_key="user.*")
channel.basic_qos(prefetch_count=10)
channel.basic_consume(queue="user-events", on_message_callback=process_message)
channel.start_consuming()
```
queue_bind will fail if the consumer starts before the producer — missing exchange_declare in consumer.
Line 166 binds the queue to the user-events exchange without first declaring it. In RabbitMQ, queue_bind on a non-existent exchange raises an AMQP 404 channel error and closes the channel. Because exchange_declare in the producer is called inside publish() (not at startup), any consumer-first startup order will crash here.
Add an idempotent exchange_declare in the consumer before the bind, matching the producer's declaration:
🛠️ Proposed fix
```diff
 channel.queue_declare(queue="user-events", durable=True)
+channel.exchange_declare(exchange="user-events", exchange_type="topic", durable=True)
 channel.queue_bind(queue="user-events", exchange="user-events", routing_key="user.*")
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@skills/message-queue-patterns/SKILL.md` around lines 162 - 169, The consumer
currently calls channel.queue_bind(queue="user-events", exchange="user-events",
routing_key="user.*") without declaring the exchange first, which causes an AMQP
404 if the producer hasn't created it; update the consumer to call
channel.exchange_declare(exchange="user-events", exchange_type="topic",
durable=True) (idempotent) before channel.queue_bind, keeping the existing
queue_declare, basic_qos, basic_consume (process_message) and start_consuming
calls.
```typescript
async function sendMessage(payload: Record<string, unknown>) {
  await sqs.send(new SendMessageCommand({
    QueueUrl: queueUrl,
    MessageBody: JSON.stringify(payload),
    MessageGroupId: "default", // For FIFO queues
    MessageDeduplicationId: crypto.randomUUID(),
  }));
}
```
🧩 Analysis chain
🌐 Web query:
AWS SQS FIFO MessageDeduplicationId random UUID deduplication window
💡 Result:
For Amazon SQS FIFO queues, deduplication is based on MessageDeduplicationId within a fixed 5‑minute deduplication interval/window. If SQS has already accepted a message with a given deduplication ID, then any later sends with the same MessageDeduplicationId within that 5‑minute window are accepted but not delivered (even if the first message was received and deleted). [1], [2], [3], [4]
So if you set MessageDeduplicationId to a random UUID on every send, you effectively disable deduplication, because every message appears unique to SQS—duplicates (retries) won’t be recognized unless you reuse the same ID for the retry. [1], [2], [3]
If you don’t want to manage IDs yourself, you can enable ContentBasedDeduplication, where SQS computes a SHA‑256 hash of the message body to use as the deduplication ID (still with the same 5‑minute window). [2]
Sources:
[1] AWS SQS Developer Guide – Using the message deduplication ID in Amazon SQS
[2] AWS SQS Developer Guide – Exactly-once processing in Amazon SQS
[3] AWS Developer Tools Blog – How the Amazon SQS FIFO API Works
[4] AWS SQS FAQs – FIFO queues deduplication interval
🌐 Web query:
AWS SQS MessageGroupId MessageDeduplicationId standard queue vs FIFO queue parameters
💡 Result:
Which queue types support these parameters?
| Parameter | Standard queue | FIFO queue |
|---|---|---|
| MessageGroupId | Optional. Can be used to enable fair queues (tenant isolation / fair message dwell time). It does not provide FIFO ordering; messages with the same group can still be processed in parallel. (docs.aws.amazon.com) | Required. Defines the message group that is processed in strict order, one at a time per group. If you omit it, the send fails. (docs.aws.amazon.com) |
| MessageDeduplicationId | Not applicable (deduplication ID is a FIFO-only concept). (docs.aws.amazon.com) | Applies only to FIFO. Used for deduplication within a 5‑minute window; duplicates are accepted but not delivered. (docs.aws.amazon.com) |
FIFO-specific “gotchas” (SendMessage / SendMessageBatch)
- MessageGroupId is mandatory for FIFO sends. (docs.aws.amazon.com)
- For deduplication, either:
- Provide MessageDeduplicationId, or
- Enable ContentBasedDeduplication on the queue so SQS generates one from the message body (and if you do provide a dedup ID, it overrides the generated one). (docs.aws.amazon.com)
Core behavior difference (context)
- Standard: at-least-once delivery, best-effort ordering. (docs.aws.amazon.com)
- FIFO: message-group ordering + deduplication/exactly-once processing semantics. (docs.aws.amazon.com)
Citations:
- 1: https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_SendMessageBatchRequestEntry.html?utm_source=openai
- 2: https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_SendMessageBatchRequestEntry.html?utm_source=openai
- 3: https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_SendMessageBatchRequestEntry.html?utm_source=openai
- 4: https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_SendMessageBatchRequestEntry.html?utm_source=openai
- 5: https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_SendMessageBatchRequestEntry.html?utm_source=openai
- 6: https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_SendMessageBatchRequestEntry.html?utm_source=openai
- 7: https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-queue-types.html?utm_source=openai
- 8: https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/using-messagededuplicationid-property.html?utm_source=openai
🌐 Web query:
AWS SQS SendMessage MessageDeduplicationId standard queue error InvalidParameterValue
💡 Result:
MessageDeduplicationId is only valid for SQS FIFO queues. If you pass it to a Standard queue, SQS rejects the request with InvalidParameterValue because deduplication IDs aren’t supported/used for Standard queues. [1]
What to do:
- If your queue is Standard: remove `MessageDeduplicationId` from `SendMessage` (and don't try to use FIFO-only features).
- If you need deduplication: use a FIFO queue (queue name must end with `.fifo`) and then either:
  - provide a `MessageDeduplicationId` on each send, or
  - enable the FIFO queue attribute `ContentBasedDeduplication` so SQS can generate it from the message body. [1], [5]
Sources:
[1] Amazon SQS SendMessage API (MessageDeduplicationId applies only to FIFO)
[5] Amazon SQS SetQueueAttributes API (ContentBasedDeduplication applies only to FIFO)
MessageDeduplicationId: crypto.randomUUID() defeats FIFO deduplication — derive the ID from message content instead.
Using crypto.randomUUID() generates a new unique ID on every call. SQS FIFO deduplication works by discarding messages with the same MessageDeduplicationId sent within a 5-minute window. A random UUID means every invocation is unique, so no deduplication ever occurs—the queue effectively becomes at-least-once despite using FIFO.
For actual deduplication to work, MessageDeduplicationId must be deterministic and derived from the message content or a business key (e.g., a transaction ID or SHA-256 of the payload). Additionally, MessageDeduplicationId is only valid for FIFO queues; including it on a standard queue returns an InvalidParameterValue error. MessageGroupId is required for FIFO ordering but optional on standard queues (for fair queue processing).
🐛 Proposed fix
-import { SQSClient, SendMessageCommand, ReceiveMessageCommand, DeleteMessageCommand } from "@aws-sdk/client-sqs";
+import { SQSClient, SendMessageCommand, ReceiveMessageCommand, DeleteMessageCommand } from "@aws-sdk/client-sqs";
+import { createHash } from "crypto";
async function sendMessage(payload: Record<string, unknown>) {
+ // For FIFO queues: derive a stable deduplication ID from message content
+ const deduplicationId = createHash("sha256")
+ .update(JSON.stringify(payload))
+ .digest("hex");
await sqs.send(new SendMessageCommand({
QueueUrl: queueUrl,
MessageBody: JSON.stringify(payload),
- MessageGroupId: "default", // For FIFO queues
- MessageDeduplicationId: crypto.randomUUID(),
+ MessageGroupId: "default", // Required for FIFO queues
+ MessageDeduplicationId: deduplicationId, // FIFO queues only; omit for standard queues
}));
}

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```typescript
import { SQSClient, SendMessageCommand, ReceiveMessageCommand, DeleteMessageCommand } from "@aws-sdk/client-sqs";
import { createHash } from "crypto";

async function sendMessage(payload: Record<string, unknown>) {
  // For FIFO queues: derive a stable deduplication ID from message content
  const deduplicationId = createHash("sha256")
    .update(JSON.stringify(payload))
    .digest("hex");
  await sqs.send(new SendMessageCommand({
    QueueUrl: queueUrl,
    MessageBody: JSON.stringify(payload),
    MessageGroupId: "default", // Required for FIFO queues
    MessageDeduplicationId: deduplicationId, // FIFO queues only; omit for standard queues
  }));
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@skills/message-queue-patterns/SKILL.md` around lines 231 - 238, The
sendMessage function currently uses crypto.randomUUID() for
MessageDeduplicationId which prevents FIFO deduplication; change sendMessage to
compute a deterministic deduplication id (e.g., SHA-256 or other hash of
JSON.stringify(payload) or use a business key from payload) and pass that as
MessageDeduplicationId only when the target queue is FIFO (detect via queueUrl
ending with .fifo or a config flag); also only include MessageGroupId for FIFO
queues. Update references in sendMessage and any helpers that build the SQS
SendMessageCommand to conditionally add MessageDeduplicationId and
MessageGroupId and to compute the hash from the payload instead of using
crypto.randomUUID().
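The prompt's suggestion to gate FIFO-only parameters on the queue type can be sketched framework-free. `buildSendParams` is a hypothetical helper (not part of the skill or the AWS SDK) that derives a deterministic deduplication ID and omits FIFO-only fields for standard queues:

```typescript
import { createHash } from "crypto";

interface SendParams {
  QueueUrl: string;
  MessageBody: string;
  MessageGroupId?: string;          // FIFO only
  MessageDeduplicationId?: string;  // FIFO only
}

// Hypothetical helper: detects FIFO queues by the ".fifo" URL suffix and
// only then attaches MessageGroupId and a content-derived deduplication ID.
function buildSendParams(queueUrl: string, payload: Record<string, unknown>): SendParams {
  const params: SendParams = {
    QueueUrl: queueUrl,
    MessageBody: JSON.stringify(payload),
  };
  if (queueUrl.endsWith(".fifo")) {
    params.MessageGroupId = "default"; // required for FIFO ordering
    // Deterministic: identical payloads dedupe within SQS's 5-minute window
    params.MessageDeduplicationId = createHash("sha256")
      .update(params.MessageBody)
      .digest("hex");
  }
  return params;
}
```

The returned object can be spread into a `SendMessageCommand`; standard queues never see the FIFO-only fields, so the `InvalidParameterValue` error cannot occur.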
… nestjs, dotnet skills and 2 reviewers

Add 8 new skills and 2 new reviewer agents covering major ecosystem gaps:
- auth-patterns: JWT, OAuth 2.0/OIDC, sessions, RBAC, API keys, MFA
- graphql-patterns: schema design, DataLoader, pagination, subscriptions
- mongodb-patterns: schema design, CRUD, indexing, aggregation, transactions
- redis-patterns: caching, rate limiting, distributed locks, pub/sub, streams
- vue-patterns: Composition API, Pinia, Vue Router, TypeScript, Vitest
- react-native-patterns: Expo, navigation, Zustand, FlashList, Reanimated
- nestjs-patterns: modules, DI, guards, interceptors, TypeORM, testing
- dotnet-patterns: Minimal APIs, EF Core, Result pattern, xUnit
- graphql-reviewer: N+1 detection, auth bypass, schema design review
- nestjs-reviewer: validation, auth, architecture, TypeORM review

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Add missing `import os` in mongodb-patterns Python example
- Add missing `import os` in redis-patterns Python example
- Replace console.log with comment in vue-patterns onMounted example
- Fix invalid base32 encoding to hex in auth-patterns TOTP example
- Remove unbounded recursion in redis-patterns getWithMutex function

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Actionable comments posted: 19
🧹 Nitpick comments (5)
skills/mongodb-patterns/SKILL.md (1)
6-18: Missing required "How It Works" section

The skill guidelines require three sections: When to Use, How It Works, and Examples. This document has "When to Activate" (close equivalent to "When to Use") and inline examples throughout, but the "How It Works" section is absent. Consider adding a brief `## How It Works` section summarising MongoDB's document model, driver lifecycle, or the key internals a reader needs to contextualise the patterns below.

Based on learnings: "Applies to `skills/**/*.md`: Skills should be formatted as Markdown with clear sections for When to Use, How It Works, and Examples."

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@skills/mongodb-patterns/SKILL.md` around lines 6 - 18, Add a new "## How It Works" section to SKILL.md (next to the existing "## When to Activate") that briefly summarizes MongoDB fundamentals needed to contextualize the patterns: the document data model (BSON, embedding vs referencing), driver/client lifecycle and connection pooling, how indexes and query selection work, aggregation pipeline basics, and transaction semantics across multiple documents/collections; keep it concise (3–6 short paragraphs) and reference these concepts so the later examples (indexes, aggregation, transactions) map back to the internals.

skills/nestjs-patterns/SKILL.md (1)
1-20: Add the required "How It Works" and "Examples" sections per skill guidelines.

The coding guidelines require `skills/**/*.md` to include clear sections for When to Use, How It Works, and Examples. This file has `When to Activate` (an acceptable equivalent) but is missing both `How It Works` and a dedicated `Examples` section — code snippets are embedded under feature headings but not grouped under a canonical `## Examples` heading.

♻️ Suggested structural addition

```diff
 ## When to Activate
 ...
+
+## How It Works
+
+NestJS uses a module-based dependency injection container. Each feature module declares its controllers, providers (services), and optionally exports providers for use by other modules. The request lifecycle flows through: middleware → guards → interceptors → pipes → controllers → services → interceptors (response) → exception filters (on error).
+
 ## Core Principles
 ...
```

And add at the bottom (or after each major section):

```diff
+## Examples
+
+See the annotated snippets in each section above, or refer to the official NestJS documentation for complete runnable examples.
```

As per coding guidelines: `skills/**/*.md` — "Skills should be formatted as Markdown with clear sections for When to Use, How It Works, and Examples."

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@skills/nestjs-patterns/SKILL.md` around lines 1 - 20, The markdown is missing the required "How It Works" and dedicated "Examples" sections; add a "## How It Works" paragraph explaining the core NestJS patterns covered (modules, DI, controllers, validation, guards, interceptors, exception filters, DB integration, testing, microservices) and how they fit together, and add a "## Examples" section that collects and formats the representative code snippets currently scattered under feature headings into grouped examples (e.g., Modules & DI, Validation & Pipes, Guards & Interceptors, DB integration, Testing, Microservices) so each example references the relevant concept; keep the existing "When to Activate" heading as-is.

skills/vue-patterns/SKILL.md (3)
61-63: Remove `console.log` from the "Script Setup (Recommended)" template — it's a production anti-pattern.

Readers copy scaffolding templates verbatim; a bare `console.log` in a recommended pattern normalises leaking internal state to the browser console in production.

♻️ Suggested fix

```diff
 onMounted(() => {
-  console.log("Component mounted:", props.title);
+  // perform any DOM-dependent initialisation here
 });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@skills/vue-patterns/SKILL.md` around lines 61 - 63, Remove the bare console.log in the "Script Setup (Recommended)" template: locate the onMounted hook that calls console.log("Component mounted:", props.title) and either delete that console.log or replace it with a dev-only/debug utility (e.g., an environment-gated logger) so no internal state is leaked in production; ensure the onMounted(() => { ... }) hook remains but contains no production console output.
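The "environment-gated logger" alternative mentioned in the prompt can be sketched framework-free. `makeDebugLogger` is a hypothetical name, not part of the skill or Vue:

```typescript
type LogFn = (...args: unknown[]) => void;

// Hypothetical dev-only logger: outside development it degrades to a no-op,
// so scaffolding templates never leak internal state in production consoles.
function makeDebugLogger(isDev: boolean, sink: LogFn = console.log): LogFn {
  return isDev ? sink : () => {};
}
```

In a Vite-based Vue app one might write `const debug = makeDebugLogger(import.meta.env.DEV)` and call `debug("Component mounted:", props.title)` inside `onMounted`.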
296-307: `fetchProducts` has no error handling — a failed request silently corrupts `items`.

The `finally` only resets `loading`; `fetch` rejections are swallowed and a non-2xx response will call `resp.json()` on an error body, writing it to `items.value`. The store also exposes no `error` ref, unlike `useFetch` earlier in the same document.

♻️ Suggested fix

```diff
+const error = ref<Error | null>(null);
+
 async function fetchProducts() {
   loading.value = true;
+  error.value = null;
   try {
     const resp = await fetch("/api/products");
+    if (!resp.ok) throw new Error(`HTTP ${resp.status}`);
     items.value = await resp.json();
+  } catch (e) {
+    error.value = e as Error;
   } finally {
     loading.value = false;
   }
 }

-return { items, loading, filter, filtered, fetchProducts };
+return { items, loading, error, filter, filtered, fetchProducts };
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@skills/vue-patterns/SKILL.md` around lines 296 - 307, fetchProducts currently swallows network/HTTP errors and can overwrite items with error bodies; add an error ref (e.g., error) and update fetchProducts to catch exceptions and set error.value, check resp.ok before calling resp.json() (throw or set error on non-2xx), only assign items.value on successful JSON parse, and ensure loading.value is still reset in finally; also return/expose the error ref alongside items, loading, filter, filtered, fetchProducts so callers can react to failures.
337-352: `to.meta` properties are typed `unknown` without a `RouteMeta` augmentation — add it.

Vue Router types `RouteMeta` as `Record<string, unknown>` by default, so `to.meta.requiresAuth`, `to.meta.requiresRole`, and `to.meta.guest` are all `unknown`. The guard comparisons (`auth.user?.role !== to.meta.requiresRole`) will either raise TypeScript errors or require unsafe casting without the standard module augmentation.

♻️ Suggested addition (e.g., in a `router/types.d.ts` or at the top of the router file)

```diff
+import "vue-router";
+
+declare module "vue-router" {
+  interface RouteMeta {
+    requiresAuth?: boolean;
+    requiresRole?: string;
+    guest?: boolean;
+  }
+}
+
 import { createRouter, createWebHistory } from "vue-router";
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@skills/vue-patterns/SKILL.md` around lines 337 - 352, Add a RouteMeta module augmentation so TypeScript knows the shape of to.meta used in router.beforeEach: declare the RouteMeta interface (or augment 'vue-router' RouteMeta) to include requiresAuth?: boolean, requiresRole?: string, and guest?: boolean; place this augmentation in a global d.ts (e.g., router/types.d.ts) or at the top of the router file so uses like to.meta.requiresAuth, to.meta.requiresRole, and to.meta.guest are properly typed and no longer treated as unknown.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@skills/mongodb-patterns/SKILL.md`:
- Around line 157-165: The code uses os.environ in the MongoClient call but
never imports os; add an import os at the top of the snippet so that
MongoClient(os.environ["MONGODB_URI"]) (and the variables client, db, users)
work without raising NameError: ensure the example includes "import os"
alongside the existing imports (MongoClient, ObjectId, datetime, timezone).
- Around line 328-367: Import the argon2 module and handle missing password in
the instance method: add an import for argon2 (used in the pre("save") hook and
in userSchema.methods.verifyPassword), and update verifyPassword
(userSchema.methods.verifyPassword) to guard against this.password being
undefined (return false if no password is present) or document/ensure callers
fetch the password by using .select('+password') when retrieving User (e.g.,
User.findById(...).select('+password')) so argon2.verify is never called with
undefined.
In `@skills/nestjs-patterns/SKILL.md`:
- Around line 391-408: The test suite's shared mockRepo accumulates jest.fn()
call history across tests; update the beforeEach block that sets up the
Test.createTestingModule and assigns service to first call jest.clearAllMocks()
so each test starts with fresh mock state; specifically, add a
jest.clearAllMocks() or jest.resetAllMocks() call at the start of the beforeEach
that initializes mockRepo/UsersService to prevent flaky, order-dependent tests.
- Around line 283-287: The current HttpException handling casts the response
message to a string which breaks when ValidationPipe returns message as
string[]; update the branch that sets message (inside the HttpException check)
to inspect the response shape from exception.getResponse(): if body is a string
use it, otherwise read (body as Record<string, unknown>).message and if that
value is an array (Array.isArray) join it into a single string (e.g. .join(',
')), else coerce it to a string; keep using exception.getStatus()/HttpStatus as
before so code paths using exception, HttpException, getResponse(), and the
message variable are fixed without losing other behavior.
- Around line 309-311: The plain `@Index()` on the email property is redundant and
creates a second non-unique index alongside `@Column({ unique: true })`; remove
the `@Index()` decorator from the email field and rely on `@Column({ unique: true })`
to create the unique index, or if you need an explicit index
configuration/name replace the plain `@Index()` with an `@Index({...})` call that
sets unique: true (and optionally a name) for the email property.
- Around line 343-349: The create method is persisting dto.password as plaintext
(see CreateUserDto and the create/save flow in async create(dto: CreateUserDto):
Promise<User>), so update the implementation to hash the password (using bcrypt
or argon2) before calling usersRepo.create/usersRepo.save — perform an async
hash (respecting a configurable salt/ops setting from env), replace dto.password
with the hashed value (or build a new user object from dto with the hashed
password), and ensure error handling is preserved; additionally add a checklist
item in SKILL.md explicitly requiring password hashing (and mention preferred
algorithms like bcrypt/argon2 and configurable rounds).
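The hashing step described in the item above can be sketched without third-party dependencies. The prompt recommends bcrypt or argon2; this sketch substitutes Node's built-in scrypt so it runs stand-alone, and `hashPassword`, `verifyPassword`, and the `SCRYPT_COST` env var are all hypothetical names:

```typescript
import { randomBytes, scryptSync, timingSafeEqual } from "crypto";

// Sketch only: the review recommends bcrypt or argon2; Node's built-in
// scrypt is used here purely to keep the example dependency-free.
// SCRYPT_COST is a hypothetical env var for the configurable work factor.
const COST = Number(process.env.SCRYPT_COST ?? 16384);

function hashPassword(plain: string): string {
  const salt = randomBytes(16);
  const hash = scryptSync(plain, salt, 32, { N: COST });
  // store salt alongside the derived key
  return `${salt.toString("hex")}:${hash.toString("hex")}`;
}

function verifyPassword(plain: string, stored: string): boolean {
  const [saltHex, hashHex] = stored.split(":");
  const candidate = scryptSync(plain, Buffer.from(saltHex, "hex"), 32, { N: COST });
  // constant-time comparison to avoid timing side channels
  return timingSafeEqual(candidate, Buffer.from(hashHex, "hex"));
}
```

In the NestJS service this would run inside `create()` before `usersRepo.create`/`save`, replacing `dto.password` with the hashed value.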
In `@skills/redis-patterns/SKILL.md`:
- Around line 106-122: The snippet uses os.environ in the Redis client creation
(r = redis.Redis.from_url(os.environ["REDIS_URL"], ...)) but never imports os,
causing a NameError; add "import os" to the top of the file (next to imports
like json and redis) so get_user and the Redis client initialization can access
os.environ correctly.
- Around line 10-19: Add a new "How It Works" Markdown section in SKILL.md
between the existing "Core Principles" and "Data Structures Quick Reference"
headings that briefly explains Redis internals: mention the single-threaded
event loop and atomic command execution, persistence modes (RDB snapshots and
AOF append-only logs and tradeoffs), replication and failover basics
(master-replica model and replica promotion), and cluster topology/sharding
(hash slots and slot migration) so the file meets the required "When to Use /
How It Works / Examples" structure.
- Line 319: The blanket .catch on the redis.xgroup call swallows every error;
update the call to catch the error and inspect it (e.g., check err.message or
err.code for the Redis BUSYGROUP response) and only ignore the BUSYGROUP case
for the "CREATE" on stream "orders" with group "order-processors" and option
"MKSTREAM"; for any other error rethrow or surface it (or log and throw) so
connection/configuration issues are not silently dropped.
- Around line 77-101: The getWithMutex function uses unbounded recursion causing
stack overflows and also misses a second cache check after acquiring the lock;
change the implementation to use an iterative retry loop with a configurable
maxAttempts (or timeout) instead of recursive calls, add a short sleep between
attempts, and after successfully setting the lock re-check redis.get(key) before
calling fetcher() to avoid redundant fetches; reference getWithMutex, lockKey,
acquired, and the recursive return getWithMutex(...) when updating the logic.
- Around line 136-154: The session HSET and EXPIRE are issued separately in
create() and get(), which can leave keys without TTL if a crash occurs; change
both places (create() where this.redis.hset and this.redis.expire are called,
and get() where this.redis.expire and this.redis.hset are called) to use a
single atomic pipeline/MULTI (e.g., this.redis.multi() or pipeline()) that
queues the HSET and EXPIRE together and then EXEC/exec the pipeline, and await
the exec result to ensure both commands are applied atomically.
- Around line 303-308: The subscription call on the Redis client uses
sub.subscribe(...) which returns a Promise but is not awaited, so subscription
or connection errors are swallowed; update the code where the Redis instance
`sub` is created and `sub.subscribe("notifications")` is called (and any similar
subscriptions) to await the subscribe Promise and handle errors (e.g., try/catch
or .catch) before registering the `sub.on("message", ...)` handler that calls
`handleNotification`, ensuring the subscription is confirmed or failures are
logged/propagated.
- Around line 218-247: Replace the deprecated HMSET usage inside
TOKEN_BUCKET_SCRIPT with the variadic HSET (e.g., use redis.call('HSET', key,
'tokens', tokens, 'last_refill', now)) and keep the EXPIRE logic as-is; also
move the local import time out of check_rate_limit into a module-level import so
check_rate_limit only references time and does not perform the import. Ensure
changes touch the TOKEN_BUCKET_SCRIPT constant and the check_rate_limit function
(remove the inline "import time" and add "import time" at the top of the
module).
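The iterative `getWithMutex` rewrite described above can be sketched against a minimal in-memory stub. The `KV` interface here is hypothetical and stands in for a Redis client; the point is the bounded loop, the `nx` lock, and the second cache check after the lock is won:

```typescript
// Minimal async KV interface standing in for a Redis client (hypothetical).
interface KV {
  get(key: string): Promise<string | null>;
  set(key: string, value: string, opts?: { nx?: boolean; ttlMs?: number }): Promise<boolean>;
  del(key: string): Promise<void>;
}

const sleep = (ms: number) => new Promise((r) => setTimeout(r, ms));

// Bounded, iterative cache-aside with a mutex: no recursion, and the cache
// is re-checked after acquiring the lock to avoid redundant fetches.
async function getWithMutex(
  kv: KV,
  key: string,
  fetcher: () => Promise<string>,
  maxAttempts = 10,
  retryDelayMs = 50,
): Promise<string> {
  const lockKey = `lock:${key}`;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const cached = await kv.get(key);
    if (cached !== null) return cached;
    const acquired = await kv.set(lockKey, "1", { nx: true, ttlMs: 5000 });
    if (acquired) {
      try {
        const again = await kv.get(key); // double-check after winning the lock
        if (again !== null) return again;
        const value = await fetcher();
        await kv.set(key, value);
        return value;
      } finally {
        await kv.del(lockKey);
      }
    }
    await sleep(retryDelayMs); // lost the race: wait, then re-check the cache
  }
  throw new Error(`getWithMutex: gave up after ${maxAttempts} attempts for ${key}`);
}
```

With a real client, `set` with `nx`/`ttlMs` maps to `SET key value NX PX ttl`, and the lock delete would normally be guarded by a token check (e.g. a Lua script) so one caller cannot release another's lock.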
In `@skills/vue-patterns/SKILL.md`:
- Around line 196-205: The useLocalStorage composable is not SSR-safe and can
crash on malformed JSON; change useLocalStorage to initialize the ref with
defaultValue, then only read from localStorage inside a client-only lifecycle
hook (e.g., onMounted) or after checking typeof window !== "undefined", wrapping
the getItem + JSON.parse in try/catch to fall back to defaultValue on any error;
also guard the watch callback so it only calls localStorage.setItem when
window/localStorage is available and JSON.stringify succeeds (try/catch) —
locate and update the useLocalStorage function to implement these checks and
error handling.
- Around line 422-440: The snippet passes an undefined variable items to
useVirtualList, causing a compile error; declare and initialize items in the
<script setup> before calling useVirtualList (e.g., a reactive array or const
items = ref([...]) / const items = [{ name: '...' }, ...]) so
useVirtualList(items, { itemHeight: 60, overscan: 10 }) has a real source array;
ensure the items variable shape matches how list elements read data.name and
import/ref it appropriately so list, containerProps, and wrapperProps work at
runtime.
- Around line 161-184: The useFetch composable has a race where prior fetches
can overwrite state and aren’t aborted; modify watchEffect to create an
AbortController for each run, pass controller.signal into fetch inside fetchData
(or make fetchData accept an optional AbortSignal), and use the watchEffect
cleanup callback to call controller.abort() so in-flight requests are cancelled
when url changes or the component unmounts; also guard state assignment
(data.value/error.value/loading.value) so you don’t write after an abort and
ensure the exported refresh uses the same abort-signal-aware fetchData
signature.
- Around line 1-19: This skill markdown is missing the required top-level "How
It Works" and "Examples" sections and uses "When to Activate" instead of the
required "When to Use"; update the document for "vue-patterns" by renaming the
"When to Activate" header to "When to Use", add a "How It Works" section that
explains the mechanics (Composition API patterns, composable structure, Pinia
usage, Router setup, TypeScript integration, and testing approach) in concise
steps, and add an "Examples" section with a few short, focused examples
illustrating a composable, a Pinia store snippet, a router guard, and a small
component using Composition API to demonstrate the patterns; keep content
concise and aligned with existing headers so "Core Principles" remains
opinionated guidance but not a substitute for the required sections.
- Around line 217-228: The useDebouncedRef function leaks its timeout because it
never clears pending timers or stops the watcher on component unmount; update
useDebouncedRef to (1) declare timeout as possibly undefined (e.g., let timeout:
ReturnType<typeof setTimeout> | undefined), (2) capture the watcher stop handle
by assigning stop = watch(source, ...) and keep the existing
clearTimeout(timeout) before scheduling, and (3) register an onUnmounted
callback that calls clearTimeout(timeout) and stop() so the debounced callback
cannot run after unmount; reference symbols: useDebouncedRef, source, debounced,
timeout, and the watch return value/stop and onUnmounted.
- Around line 86-88: The examples using defineModel (symbols: defineModel,
model, filter) need a version note: state that defineModel graduated to stable
in Vue 3.4 and that in Vue 3.3 it existed only as an experimental opt-in feature
(requires enabling the experimental model compiler option), so update the
SKILL.md text around the code block to explicitly require Vue ≥ 3.4 for stable
use (or document the 3.3 experimental opt-in) and add a brief inline note above
or below the example indicating which Vue versions require the opt-in compiler
flag.
---
Nitpick comments:
In `@skills/mongodb-patterns/SKILL.md`:
- Around line 6-18: Add a new "## How It Works" section to SKILL.md (next to the
existing "## When to Activate") that briefly summarizes MongoDB fundamentals
needed to contextualize the patterns: the document data model (BSON, embedding
vs referencing), driver/client lifecycle and connection pooling, how indexes and
query selection work, aggregation pipeline basics, and transaction semantics
across multiple documents/collections; keep it concise (3–6 short paragraphs)
and reference these concepts so the later examples (indexes, aggregation,
transactions) map back to the internals.
In `@skills/nestjs-patterns/SKILL.md`:
- Around line 1-20: The markdown is missing the required "How It Works" and
dedicated "Examples" sections; add a "## How It Works" paragraph explaining the
core NestJS patterns covered (modules, DI, controllers, validation, guards,
interceptors, exception filters, DB integration, testing, microservices) and how
they fit together, and add a "## Examples" section that collects and formats the
representative code snippets currently scattered under feature headings into
grouped examples (e.g., Modules & DI, Validation & Pipes, Guards & Interceptors,
DB integration, Testing, Microservices) so each example references the relevant
concept; keep the existing "When to Activate" heading as-is.
In `@skills/vue-patterns/SKILL.md`:
- Around line 61-63: Remove the bare console.log in the "Script Setup
(Recommended)" template: locate the onMounted hook that calls
console.log("Component mounted:", props.title) and either delete that
console.log or replace it with a dev-only/debug utility (e.g., an
environment-gated logger) so no internal state is leaked in production; ensure
the onMounted(() => { ... }) hook remains but contains no production console
output.
- Around line 296-307: fetchProducts currently swallows network/HTTP errors and
can overwrite items with error bodies; add an error ref (e.g., error) and update
fetchProducts to catch exceptions and set error.value, check resp.ok before
calling resp.json() (throw or set error on non-2xx), only assign items.value on
successful JSON parse, and ensure loading.value is still reset in finally; also
return/expose the error ref alongside items, loading, filter, filtered,
fetchProducts so callers can react to failures.
- Around line 337-352: Add a RouteMeta module augmentation so TypeScript knows
the shape of to.meta used in router.beforeEach: declare the RouteMeta interface
(or augment 'vue-router' RouteMeta) to include requiresAuth?: boolean,
requiresRole?: string, and guest?: boolean; place this augmentation in a global
d.ts (e.g., router/types.d.ts) or at the top of the router file so uses like
to.meta.requiresAuth, to.meta.requiresRole, and to.meta.guest are properly typed
and no longer treated as unknown.
```typescript
import mongoose, { Schema, model } from "mongoose";

const userSchema = new Schema(
  {
    email: { type: String, required: true, unique: true, lowercase: true, trim: true },
    name: { type: String, required: true, maxlength: 100 },
    role: { type: String, enum: ["admin", "editor", "viewer"], default: "viewer" },
    password: { type: String, required: true, select: false },
  },
  { timestamps: true },
);

// Middleware — hash password before save
userSchema.pre("save", async function (next) {
  if (!this.isModified("password")) return next();
  this.password = await argon2.hash(this.password);
  next();
});

// Instance method
userSchema.methods.verifyPassword = async function (plain: string): Promise<boolean> {
  return argon2.verify(this.password, plain);
};

// Static method
userSchema.statics.findByEmail = function (email: string) {
  return this.findOne({ email: email.toLowerCase() });
};

// Virtual
userSchema.virtual("displayName").get(function () {
  return this.name || this.email.split("@")[0];
});

const User = model("User", userSchema);

// Populate (reference resolution)
const order = await Order.findById(orderId).populate("userId", "name email");
```
Two issues in the Mongoose ODM snippet: missing argon2 import and a silent failure in verifyPassword when select: false is active
- Missing import: `argon2` is referenced on Lines 344 and 350 but never imported. The snippet only imports `mongoose, { Schema, model }`. Copying this example as-is will throw `ReferenceError: argon2 is not defined`.
- `verifyPassword` silently breaks with `select: false`: The `password` field is marked `select: false` (Line 336), so any `User` document fetched without `.select('+password')` will have `this.password === undefined`. Calling `argon2.verify(undefined, plain)` will throw rather than return `false`. The example should either show the correct fetch call or add an inline guard.
🐛 Proposed fixes
```diff
+import argon2 from "argon2";
 import mongoose, { Schema, model } from "mongoose";
```

```diff
 userSchema.methods.verifyPassword = async function (plain: string): Promise<boolean> {
+  if (!this.password) throw new Error("Password not selected — fetch with .select('+password')");
   return argon2.verify(this.password, plain);
 };
```

And when calling the method, select the field explicitly:

```typescript
const user = await User.findById(id).select('+password');
await user.verifyPassword(plain);
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```typescript
import argon2 from "argon2";
import mongoose, { Schema, model } from "mongoose";

const userSchema = new Schema(
  {
    email: { type: String, required: true, unique: true, lowercase: true, trim: true },
    name: { type: String, required: true, maxlength: 100 },
    role: { type: String, enum: ["admin", "editor", "viewer"], default: "viewer" },
    password: { type: String, required: true, select: false },
  },
  { timestamps: true },
);

// Middleware — hash password before save
userSchema.pre("save", async function (next) {
  if (!this.isModified("password")) return next();
  this.password = await argon2.hash(this.password);
  next();
});

// Instance method
userSchema.methods.verifyPassword = async function (plain: string): Promise<boolean> {
  if (!this.password) throw new Error("Password not selected — fetch with .select('+password')");
  return argon2.verify(this.password, plain);
};

// Static method
userSchema.statics.findByEmail = function (email: string) {
  return this.findOne({ email: email.toLowerCase() });
};

// Virtual
userSchema.virtual("displayName").get(function () {
  return this.name || this.email.split("@")[0];
});

const User = model("User", userSchema);

// Populate (reference resolution)
const order = await Order.findById(orderId).populate("userId", "name email");
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@skills/mongodb-patterns/SKILL.md` around lines 328 - 367, Import the argon2
module and handle missing password in the instance method: add an import for
argon2 (used in the pre("save") hook and in userSchema.methods.verifyPassword),
and update verifyPassword (userSchema.methods.verifyPassword) to guard against
this.password being undefined (return false if no password is present) or
document/ensure callers fetch the password by using .select('+password') when
retrieving User (e.g., User.findById(...).select('+password')) so argon2.verify
is never called with undefined.
```typescript
if (exception instanceof HttpException) {
  status = exception.getStatus();
  const body = exception.getResponse();
  message = typeof body === "string" ? body : (body as Record<string, unknown>).message as string;
  code = HttpStatus[status] ?? "ERROR";
```
`message` cast to string is incorrect when ValidationPipe returns a `string[]`.

ValidationPipe populates `exception.getResponse()` as `{ statusCode, message: string[], error }`. The cast `(body as Record<string, unknown>).message as string` only satisfies the compiler — at runtime the value is still an array (e.g. `["email must be an email"]`) or `undefined`, not a usable string. The doc example should handle both shapes:
🐛 Proposed fix

```diff
-    message = typeof body === "string" ? body : (body as Record<string, unknown>).message as string;
+    const bodyObj = body as Record<string, unknown>;
+    const raw = bodyObj.message;
+    message = typeof raw === "string" ? raw : Array.isArray(raw) ? raw.join(", ") : "Bad request";
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
if (exception instanceof HttpException) {
  status = exception.getStatus();
  const body = exception.getResponse();
  const bodyObj = body as Record<string, unknown>;
  const raw = bodyObj.message;
  message = typeof raw === "string" ? raw : Array.isArray(raw) ? raw.join(", ") : "Bad request";
  code = HttpStatus[status] ?? "ERROR";
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@skills/nestjs-patterns/SKILL.md` around lines 283 - 287, The current
HttpException handling casts the response message to a string which breaks when
ValidationPipe returns message as string[]; update the branch that sets message
(inside the HttpException check) to inspect the response shape from
exception.getResponse(): if body is a string use it, otherwise read (body as
Record<string, unknown>).message and if that value is an array (Array.isArray)
join it into a single string (e.g. .join(', ')), else coerce it to a string;
keep using exception.getStatus()/HttpStatus as before so code paths using
exception, HttpException, getResponse(), and the message variable are fixed
without losing other behavior.
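The shape handling this fix describes can also be isolated into a small pure function, which makes the string-vs-string[] behavior easy to unit test. The helper name `normalizeMessage` is hypothetical and not part of the skill file:

```typescript
// Hypothetical helper mirroring the suggested fix: collapse Nest's
// string | { message: string | string[] } response shapes into one
// display string, with a fallback when `message` is missing.
function normalizeMessage(body: unknown): string {
  if (typeof body === "string") return body;
  const raw = (body as Record<string, unknown> | null)?.message;
  if (typeof raw === "string") return raw;
  if (Array.isArray(raw)) return raw.join(", ");
  return "Bad request";
}
```

With this in place, the exception filter branch reduces to `message = normalizeMessage(exception.getResponse())`.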
```ts
@Column({ unique: true })
@Index()
email: string;
```
@Index() is redundant alongside @Column({ unique: true }).
@Column({ unique: true }) already creates a unique index on the column; the bare @Index() on line 310 adds a second, regular non-unique index on the same column — wasteful and misleading as a doc pattern. Drop the @Index() decorator here; use @Index({ unique: true }) on the @Column instead if an explicit index name is needed.
🐛 Proposed fix

```diff
-  @Column({ unique: true })
-  @Index()
+  @Column()
+  @Index({ unique: true })
   email: string;
```

📝 Committable suggestion
```ts
@Column()
@Index({ unique: true })
email: string;
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@skills/nestjs-patterns/SKILL.md` around lines 309 - 311, The plain `@Index()`
on the email property is redundant and creates a second non-unique index
alongside `@Column({ unique: true })`; remove the `@Index()` decorator from the
email field and rely on `@Column({ unique: true })` to create the unique index, or
if you need an explicit index configuration/name replace the plain `@Index()` with
an `@Index({...})` call that sets unique: true (and optionally a name) for the
email property.
```ts
async create(dto: CreateUserDto): Promise<User> {
  const existing = await this.usersRepo.findOne({ where: { email: dto.email } });
  if (existing) throw new ConflictException("Email already registered");

  const user = this.usersRepo.create(dto);
  return this.usersRepo.save(user);
}
```
Password stored as plaintext — add hashing or at minimum a prominent note.
usersRepo.create(dto) passes dto.password directly to the ORM. The file bills itself as "Production-grade NestJS patterns", so omitting password hashing here (and in the checklist) will directly mislead readers. At minimum, the example should use bcrypt/argon2 before persisting, and the checklist should include a hashing item.
🐛 Proposed fix

```diff
+import * as bcrypt from "bcrypt";
+
 @Injectable()
 export class UsersService {
   ...
   async create(dto: CreateUserDto): Promise<User> {
     const existing = await this.usersRepo.findOne({ where: { email: dto.email } });
     if (existing) throw new ConflictException("Email already registered");
+    const hashed = await bcrypt.hash(dto.password, 12);
-    const user = this.usersRepo.create(dto);
+    const user = this.usersRepo.create({ ...dto, password: hashed });
     return this.usersRepo.save(user);
   }
```

And in the checklist:

```diff
+- [ ] Passwords hashed with bcrypt or argon2 before persistence (never store plaintext)
```

📝 Committable suggestion
```ts
async create(dto: CreateUserDto): Promise<User> {
  const existing = await this.usersRepo.findOne({ where: { email: dto.email } });
  if (existing) throw new ConflictException("Email already registered");
  const hashed = await bcrypt.hash(dto.password, 12);
  const user = this.usersRepo.create({ ...dto, password: hashed });
  return this.usersRepo.save(user);
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@skills/nestjs-patterns/SKILL.md` around lines 343 - 349, The create method is
persisting dto.password as plaintext (see CreateUserDto and the create/save flow
in async create(dto: CreateUserDto): Promise<User>), so update the
implementation to hash the password (using bcrypt or argon2) before calling
usersRepo.create/usersRepo.save — perform an async hash (respecting a
configurable salt/ops setting from env), replace dto.password with the hashed
value (or build a new user object from dto with the hashed password), and ensure
error handling is preserved; additionally add a checklist item in SKILL.md
explicitly requiring password hashing (and mention preferred algorithms like
bcrypt/argon2 and configurable rounds).
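For readers who want a dependency-free sketch of the hash-before-persist step, Node's built-in scrypt can stand in for the bcrypt/argon2 the review actually recommends. The salt size and key length below are illustrative defaults, not a tuning recommendation:

```typescript
// Framework-free sketch of hash-then-verify using node:crypto scrypt
// as a stand-in for bcrypt/argon2. Stored format: "saltHex:hashHex".
import { scryptSync, randomBytes, timingSafeEqual } from "node:crypto";

function hashPassword(plain: string): string {
  const salt = randomBytes(16);           // per-password random salt
  const hash = scryptSync(plain, salt, 64);
  return `${salt.toString("hex")}:${hash.toString("hex")}`;
}

function verifyPassword(plain: string, stored: string): boolean {
  const [saltHex, hashHex] = stored.split(":");
  if (!saltHex || !hashHex) return false; // malformed record fails closed
  const hash = scryptSync(plain, Buffer.from(saltHex, "hex"), 64);
  // Constant-time comparison avoids leaking match length via timing.
  return timingSafeEqual(hash, Buffer.from(hashHex, "hex"));
}
```

In the NestJS service this logic would run inside `create()` before `usersRepo.save`, exactly where the proposed diff puts `bcrypt.hash`.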
```vue
const model = defineModel<string>({ required: true });
const filter = defineModel<string>("filter", { default: "all" });
</script>
```
🧩 Analysis chain

🏁 Script executed:

```shell
# First, let's verify the file exists and check the context around lines 86-88
fd "SKILL.md" skills/
```

Repository: affaan-m/everything-claude-code

Length of output: 2184

🏁 Script executed:

```shell
# Read the file to see the context around lines 86-88
cat -n skills/vue-patterns/SKILL.md | sed -n '75,100p'
```

Repository: affaan-m/everything-claude-code

Length of output: 940
🌐 Web query:

```
Vue 3.4 defineModel version requirements release notes
```

💡 Result:

- Stable / recommended: `defineModel` requires Vue 3.4+ — the official `<script setup>` docs state `defineModel()` is "Only available in 3.4+" [1], and the Vue 3.4 announcement highlights the stabilization of `defineModel` [2].
- Earlier availability (experimental): Vue 3.3 (beta/alpha era) with explicit opt-in — the core-team RFC describes `defineModel` as experimental, disabled by default, requiring a compiler option (e.g. Vite `vue({ script: { defineModel: true } })`) [3]. A Vue 3.3 feature summary likewise notes it's experimental and opt-in [4].

Release notes / references to cite:

- Vue 3.4 announcement (includes `defineModel` stabilization) [2]
- Vue 3.4+ API reference for `<script setup>` / `defineModel()` [1]
- Core team RFC for `defineModel` (experimental + opt-in details for 3.3 betas) [3]
- Vue 3.3 "what's new" summary showing experimental + opt-in usage [4]

Sources: [1] (vuejs.org) [2] (blog.vuejs.org) [3] (github.com) [4] (blog.ninja-squad.com)
Citations:
- 1: https://vuejs.org/api/sfc-script-setup.html?utm_source=openai
- 2: https://blog.vuejs.org/?utm_source=openai
- 3: [Core Team RFC] New SFC macro: defineModel vuejs/rfcs#503
- 4: https://blog.ninja-squad.com/2023/05/15/what-is-new-vue-3.3?utm_source=openai
defineModel requires Vue ≥ 3.4 — add a version note.
defineModel was shipped in 3.3 as an experimental feature (opt-in via compiler configuration) and graduated to stable status in 3.4. The document currently presents it without any version requirement, which could mislead teams on earlier Vue versions.
📝 Suggested addition
```diff
 ### v-model on Components
+> **Requires Vue ≥ 3.4** (`defineModel` was experimental in 3.3 and stable from 3.4 onward).
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@skills/vue-patterns/SKILL.md` around lines 86 - 88, The examples using
defineModel (symbols: defineModel, model, filter) need a version note: state
that defineModel graduated to stable in Vue 3.4 and that in Vue 3.3 it existed
only as an experimental opt-in feature (requires enabling the experimental model
compiler option), so update the SKILL.md text around the code block to
explicitly require Vue ≥ 3.4 for stable use (or document the 3.3 experimental
opt-in) and add a brief inline note above or below the example indicating which
Vue versions require the opt-in compiler flag.
```ts
export function useFetch<T>(url: string | Ref<string>): UseFetchReturn<T> {
  const data = ref<T | null>(null) as Ref<T | null>;
  const error = ref<Error | null>(null);
  const loading = ref(false);

  async function fetchData() {
    loading.value = true;
    error.value = null;
    try {
      const resolvedUrl = typeof url === "string" ? url : url.value;
      const resp = await fetch(resolvedUrl);
      if (!resp.ok) throw new Error(`HTTP ${resp.status}`);
      data.value = await resp.json();
    } catch (e) {
      error.value = e as Error;
    } finally {
      loading.value = false;
    }
  }

  watchEffect(() => { fetchData(); });

  return { data, error, loading, refresh: fetchData };
}
```
Race condition in useFetch: previous in-flight requests are never aborted.
When url changes (or the component re-renders the composable), watchEffect fires a new fetchData call while the old fetch is still in flight. If the old request resolves later it will overwrite data.value with stale results. The composable also has no cleanup on unmount, leaving dangling callbacks that set reactive state after the component is destroyed.
Use AbortController inside the watcher and clean up via onCleanup (Vue's built-in watchEffect cleanup param):
🐛 Proposed fix
- watchEffect(() => { fetchData(); });
+ watchEffect((onCleanup) => {
+ const controller = new AbortController();
+ onCleanup(() => controller.abort());
+
+ loading.value = true;
+ error.value = null;
+ const resolvedUrl = typeof url === "string" ? url : url.value;
+ fetch(resolvedUrl, { signal: controller.signal })
+ .then((resp) => {
+ if (!resp.ok) throw new Error(`HTTP ${resp.status}`);
+ return resp.json();
+ })
+ .then((json) => { data.value = json; })
+ .catch((e) => { if (e.name !== "AbortError") error.value = e as Error; })
+ .finally(() => { loading.value = false; });
+ });The outer fetchData helper can then be kept only for the refresh export, or call it with the same abort signal pattern.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@skills/vue-patterns/SKILL.md` around lines 161 - 184, The useFetch composable
has a race where prior fetches can overwrite state and aren’t aborted; modify
watchEffect to create an AbortController for each run, pass controller.signal
into fetch inside fetchData (or make fetchData accept an optional AbortSignal),
and use the watchEffect cleanup callback to call controller.abort() so in-flight
requests are cancelled when url changes or the component unmounts; also guard
state assignment (data.value/error.value/loading.value) so you don’t write after
an abort and ensure the exported refresh uses the same abort-signal-aware
fetchData signature.
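The stale-write half of this race can be illustrated without Vue or the network: a monotonically increasing request token lets only the most recent caller commit state. (An AbortController additionally cancels the underlying fetch, which this sketch deliberately omits.) The helper name is hypothetical:

```typescript
// Latest-wins guard: each new request invalidates all earlier ones.
// A response handler calls its guard before writing reactive state;
// stale handlers see `false` and drop their result.
let latest = 0;

function startRequest(): () => boolean {
  const token = ++latest;
  return () => token === latest; // true only for the most recent request
}
```

In `useFetch`, `fetchData` would call `startRequest()` at the top and check the returned guard before assigning `data.value`, achieving the same "old request must not overwrite new data" property the abort-based fix provides.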
```ts
export function useLocalStorage<T>(key: string, defaultValue: T): Ref<T> {
  const stored = localStorage.getItem(key);
  const data = ref<T>(stored ? JSON.parse(stored) : defaultValue) as Ref<T>;

  watch(data, (value) => {
    localStorage.setItem(key, JSON.stringify(value));
  }, { deep: true });

  return data;
}
```
useLocalStorage is not SSR-safe and crashes on malformed JSON.
Two issues:
1. `localStorage.getItem(key)` is called synchronously during composable instantiation — in an SSR/Nuxt context this throws `ReferenceError: localStorage is not defined`.
2. `JSON.parse(stored)` has no error guard; a corrupted stored value crashes the composable instead of falling back to `defaultValue`.
🐛 Proposed fix

```diff
 export function useLocalStorage<T>(key: string, defaultValue: T): Ref<T> {
-  const stored = localStorage.getItem(key);
-  const data = ref<T>(stored ? JSON.parse(stored) : defaultValue) as Ref<T>;
+  function readStorage(): T {
+    if (typeof window === "undefined") return defaultValue;
+    try {
+      const stored = localStorage.getItem(key);
+      return stored ? (JSON.parse(stored) as T) : defaultValue;
+    } catch {
+      return defaultValue;
+    }
+  }
+  const data = ref<T>(readStorage()) as Ref<T>;

   watch(data, (value) => {
-    localStorage.setItem(key, JSON.stringify(value));
+    if (typeof window !== "undefined") {
+      localStorage.setItem(key, JSON.stringify(value));
+    }
   }, { deep: true });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@skills/vue-patterns/SKILL.md` around lines 196 - 205, The useLocalStorage
composable is not SSR-safe and can crash on malformed JSON; change
useLocalStorage to initialize the ref with defaultValue, then only read from
localStorage inside a client-only lifecycle hook (e.g., onMounted) or after
checking typeof window !== "undefined", wrapping the getItem + JSON.parse in
try/catch to fall back to defaultValue on any error; also guard the watch
callback so it only calls localStorage.setItem when window/localStorage is
available and JSON.stringify succeeds (try/catch) — locate and update the
useLocalStorage function to implement these checks and error handling.
```ts
export function useDebouncedRef<T>(value: T, delay: number = 300): { source: Ref<T>; debounced: Ref<T> } {
  const source = ref(value) as Ref<T>;
  const debounced = ref(value) as Ref<T>;
  let timeout: ReturnType<typeof setTimeout>;

  watch(source, (val) => {
    clearTimeout(timeout);
    timeout = setTimeout(() => { debounced.value = val; }, delay);
  });

  return { source, debounced };
}
```
useDebouncedRef leaks its timer: pending timeouts are never cleared on unmount.
clearTimeout(timeout) is correctly called before each re-schedule (line 223), but there is no onUnmounted guard. If the component is destroyed while a debounce is pending, the callback fires and mutates debounced.value on an unmounted component.
🐛 Proposed fix

```diff
+import { ref, watch, onUnmounted, type Ref } from "vue";
+
 export function useDebouncedRef<T>(value: T, delay: number = 300): { source: Ref<T>; debounced: Ref<T> } {
   const source = ref(value) as Ref<T>;
   const debounced = ref(value) as Ref<T>;
   let timeout: ReturnType<typeof setTimeout>;

   watch(source, (val) => {
     clearTimeout(timeout);
     timeout = setTimeout(() => { debounced.value = val; }, delay);
   });

+  onUnmounted(() => clearTimeout(timeout));
+
   return { source, debounced };
 }
```

📝 Committable suggestion
```ts
import { ref, watch, onUnmounted, type Ref } from "vue";

export function useDebouncedRef<T>(value: T, delay: number = 300): { source: Ref<T>; debounced: Ref<T> } {
  const source = ref(value) as Ref<T>;
  const debounced = ref(value) as Ref<T>;
  let timeout: ReturnType<typeof setTimeout>;

  watch(source, (val) => {
    clearTimeout(timeout);
    timeout = setTimeout(() => { debounced.value = val; }, delay);
  });

  onUnmounted(() => clearTimeout(timeout));

  return { source, debounced };
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@skills/vue-patterns/SKILL.md` around lines 217 - 228, The useDebouncedRef
function leaks its timeout because it never clears pending timers or stops the
watcher on component unmount; update useDebouncedRef to (1) declare timeout as
possibly undefined (e.g., let timeout: ReturnType<typeof setTimeout> |
undefined), (2) capture the watcher stop handle by assigning stop =
watch(source, ...) and keep the existing clearTimeout(timeout) before
scheduling, and (3) register an onUnmounted callback that calls
clearTimeout(timeout) and stop() so the debounced callback cannot run after
unmount; reference symbols: useDebouncedRef, source, debounced, timeout, and the
watch return value/stop and onUnmounted.
```vue
<script setup lang="ts">
import { useVirtualList } from "@vueuse/core";

const { list, containerProps, wrapperProps } = useVirtualList(items, {
  itemHeight: 60,
  overscan: 10,
});
</script>

<template>
  <div v-bind="containerProps" style="height: 400px; overflow: auto">
    <div v-bind="wrapperProps">
      <div v-for="{ data, index } in list" :key="index" style="height: 60px">
        {{ data.name }}
      </div>
    </div>
  </div>
</template>
```
items is used but never declared in the virtual scrolling example.
useVirtualList(items, ...) at line 425 references items which is not defined anywhere in the <script setup> block, causing a compile-time error when readers copy this snippet.
🐛 Proposed fix

```diff
 <script setup lang="ts">
+import { ref } from "vue";
 import { useVirtualList } from "@vueuse/core";

+interface ListItem { name: string; [key: string]: unknown }
+const items = ref<ListItem[]>([]); // populate from props or a store
+
 const { list, containerProps, wrapperProps } = useVirtualList(items, {
```

📝 Committable suggestion
```vue
<script setup lang="ts">
import { ref } from "vue";
import { useVirtualList } from "@vueuse/core";

interface ListItem { name: string; [key: string]: unknown }
const items = ref<ListItem[]>([]); // populate from props or a store

const { list, containerProps, wrapperProps } = useVirtualList(items, {
  itemHeight: 60,
  overscan: 10,
});
</script>

<template>
  <div v-bind="containerProps" style="height: 400px; overflow: auto">
    <div v-bind="wrapperProps">
      <div v-for="{ data, index } in list" :key="index" style="height: 60px">
        {{ data.name }}
      </div>
    </div>
  </div>
</template>
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@skills/vue-patterns/SKILL.md` around lines 422 - 440, The snippet passes an
undefined variable items to useVirtualList, causing a compile error; declare and
initialize items in the <script setup> before calling useVirtualList (e.g., a
reactive array or const items = ref([...]) / const items = [{ name: '...' },
...]) so useVirtualList(items, { itemHeight: 60, overscan: 10 }) has a real
source array; ensure the items variable shape matches how list elements read
data.name and import/ref it appropriately so list, containerProps, and
wrapperProps work at runtime.
…utter, microservices, github-actions skills and 2 reviewers Add 10 new features covering major technology gaps: - 8 skills: angular-patterns, nextjs-patterns, svelte-patterns, mysql-patterns, elasticsearch-patterns, flutter-patterns, microservices-patterns, github-actions-patterns - 2 agents: django-reviewer, springboot-reviewer Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Actionable comments posted: 16
🧹 Nitpick comments (6)
agents/springboot-reviewer.md (1)
80-80: `grep -rL` will flag test and abstract controllers as unsecured.

`grep -rL` lists files that do not contain `@PreAuthorize`/`@Secured`/`@RolesAllowed`. Without scoping to `src/main`, this will include test controllers and abstract base controllers that are legitimately unannotated, producing noisy false positives.

♻️ Suggested fix

```diff
-grep -rL "PreAuthorize\|Secured\|RolesAllowed" --include="*Controller.java"
+grep -rL "PreAuthorize\|Secured\|RolesAllowed" --include="*Controller.java" src/main
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@agents/springboot-reviewer.md` at line 80, the `grep -rL "PreAuthorize\|Secured\|RolesAllowed" --include="*Controller.java"` invocation will produce false positives by listing test and abstract controllers that legitimately lack `@PreAuthorize`/`@Secured`/`@RolesAllowed`; restrict the search to production code (e.g., only under src/main) and/or exclude test directories and abstract base controllers so only concrete production controllers are flagged — update the invocation that references the controller file pattern to scope the path to src/main (or add an --exclude-dir for tests) and ensure abstract controller classes (base/Abstract*Controller) are ignored when checking for `@PreAuthorize`/`@Secured`/`@RolesAllowed`.

skills/github-actions-patterns/SKILL.md (1)
127-133: `secrets: inherit` contradicts the "Least privilege" Core Principle

`secrets: inherit` forwards the entire secret namespace of the calling workflow into the reusable workflow — the secret equivalent of leaving `permissions: write-all`. This directly contradicts Core Principle #2 (Least privilege — set permissions explicitly). Since the shared workflow only consumes `NPM_TOKEN` (and marks it `required: false`), an explicit mapping is safer and self-documenting:

🔒 Proposed fix

```diff
-    secrets: inherit
+    secrets:
+      NPM_TOKEN: ${{ secrets.NPM_TOKEN }}
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@skills/github-actions-patterns/SKILL.md` around lines 127 - 133, the job using the reusable workflow currently uses "secrets: inherit", which violates least privilege; change the ci job configuration that calls ./.github/workflows/ci-shared.yml to stop inheriting all secrets and instead pass only the specific secret(s) the shared workflow needs (e.g., map NPM_TOKEN explicitly to ${{ secrets.NPM_TOKEN }}), and update the reusable workflow's inputs/required flags if needed so the shared workflow continues to accept NPM_TOKEN as optional.

skills/nextjs-patterns/SKILL.md (1)
249-252: `handleToggle` swallows server action failures silently.

If `toggleTodoAction` rejects, the error propagates as an uncaught promise rejection and the optimistic state reverts without any user feedback. Production patterns should show at minimum a try/catch with an error state.

♻️ Suggested improvement

```diff
+  const [error, setError] = useState<string | null>(null);
+
   async function handleToggle(id: string) {
+    setError(null);
     setOptimistic(id);
-    await toggleTodoAction(id);
+    try {
+      await toggleTodoAction(id);
+    } catch {
+      setError("Failed to update todo. Please try again.");
+    }
   }
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@skills/nextjs-patterns/SKILL.md` around lines 249 - 252, the handleToggle function currently calls setOptimistic(id) then awaits toggleTodoAction(id) and thus will let rejections become uncaught and provide no user feedback; wrap the await in a try/catch inside handleToggle (catching errors from toggleTodoAction), set an error state (or call a provided setError handler) on failure, and only revert the optimistic UI (clear setOptimistic or update state) inside the catch block; also log the error for debugging. Ensure you update references to handleToggle, toggleTodoAction, setOptimistic and the component's error state so failures are surfaced to the user.

skills/svelte-patterns/SKILL.md (1)
153-153: `children` typed as `any` contradicts the file's own type-safety principle.

`children` in a Svelte 5 layout is a `Snippet` (from `'svelte'`), not `any`. This contradicts Core Principle #4.

♻️ Proposed fix

```diff
-import type { LayoutData } from "./$types";
+import type { LayoutData } from "./$types";
+import type { Snippet } from "svelte";

-  let { data, children }: { data: LayoutData; children: any } = $props();
+  let { data, children }: { data: LayoutData; children: Snippet } = $props();
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@skills/svelte-patterns/SKILL.md` at line 153, the children prop is currently typed as any in the destructuring of $props() — change its type to the correct Svelte Snippet type and add the appropriate import; specifically import type { Snippet } from 'svelte' (or ensure Snippet is available) and update the destructuring annotation from children: any to children: Snippet (leave data as LayoutData) so the line using $props() becomes typed with LayoutData and Snippet.

skills/elasticsearch-patterns/SKILL.md (2)
402-403: Prefer SDK types over `any[]` for `must` and `filter`.

`@elastic/elasticsearch` exports `QueryDslQueryContainer` for this exact purpose, which would also eliminate the `(h: any)` cast at the hit-mapping step.

♻️ Proposed refactor

```diff
+import type { QueryDslQueryContainer } from '@elastic/elasticsearch/lib/api/types';

-  const must: any[] = [];
-  const filter: any[] = [];
+  const must: QueryDslQueryContainer[] = [];
+  const filter: QueryDslQueryContainer[] = [];
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@skills/elasticsearch-patterns/SKILL.md` around lines 402 - 403, replace the loose any[] types for must and filter with the Elasticsearch SDK type QueryDslQueryContainer[] (imported from `@elastic/elasticsearch`) and update the hit mapping to accept typed query results so you can remove the (h: any) cast; specifically change the declarations `const must: any[] = [];` and `const filter: any[] = [];` to use QueryDslQueryContainer[], import QueryDslQueryContainer, and update the hit-mapping callback (the function that maps hits/results) to take the correct typed hit object instead of casting.
10-18: Add the missing required sections per skill guidelines.

The skill guidelines require three sections: When to Use, How It Works, and Examples. This file has:

- `## When to Activate` instead of `## When to Use`
- No `## How It Works` section (Core Principles partially fills this role but is not labeled correctly)
- No `## Examples` section (the content is there but under domain-specific headings)

♻️ Suggested structure alignment

```diff
-## When to Activate
+## When to Use

 - Designing Elasticsearch index mappings and analyzers
 ...

 ## Core Principles
 ...
+
+## How It Works
+
+Elasticsearch stores documents in inverted indexes across shards. Queries execute in two phases: scatter (fan-out to shards) and gather (merge results). Understanding mapping types, query context vs. filter context, and the segment lifecycle is essential for predictable performance.

 ## Index Design
 ...

+## Examples
+
 ### Explicit Mapping
 ...
```

As per coding guidelines: "Skills should be formatted as Markdown with clear sections for When to Use, How It Works, and Examples."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@skills/elasticsearch-patterns/SKILL.md` around lines 10 - 18, Rename the top-level heading "## When to Activate" to "## When to Use", add a new "## How It Works" section (moving or relabeling the existing "Core Principles" content under it), and add a new "## Examples" section that collects the domain-specific example snippets currently scattered under other headings into one Examples block; ensure headings are exact ("## When to Use", "## How It Works", "## Examples") and move/adjust the existing text under SKILL.md (e.g., Core Principles and the domain-specific examples) so each required section contains the appropriate content and ordering.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (10)
- agents/django-reviewer.md
- agents/springboot-reviewer.md
- skills/angular-patterns/SKILL.md
- skills/elasticsearch-patterns/SKILL.md
- skills/flutter-patterns/SKILL.md
- skills/github-actions-patterns/SKILL.md
- skills/microservices-patterns/SKILL.md
- skills/mysql-patterns/SKILL.md
- skills/nextjs-patterns/SKILL.md
- skills/svelte-patterns/SKILL.md
✅ Files skipped from review due to trivial changes (5)
- skills/flutter-patterns/SKILL.md
- skills/angular-patterns/SKILL.md
- skills/microservices-patterns/SKILL.md
- agents/django-reviewer.md
- skills/mysql-patterns/SKILL.md
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@agents/springboot-reviewer.md`:
- Line 11: The git diff invocation in the reviewer uses "git diff -- '*.java'
'*.kt' '*.yml' '*.properties'" which only shows unstaged changes; update the
command so it compares against HEAD (for example replace with "git diff HEAD --
'*.java' '*.kt' '*.yml' '*.properties'" or use "git diff HEAD~1..HEAD" when you
want the last commit) so the agent sees staged and committed changes; locate the
string in agents/springboot-reviewer.md and modify the command there.
In `@skills/elasticsearch-patterns/SKILL.md`:
- Around line 288-291: The snippet uses os.environ but never imports os; add an
import for the os module at the top of the snippet so the call to
os.environ["ELASTICSEARCH_URL"] works. Locate the code block that defines
Elasticsearch and helpers (the lines referencing Elasticsearch, helpers and
os.environ) and insert "import os" before those lines to ensure os is defined
for use.
- Around line 253-283: The bulkIndex example uses the v8 client incorrectly and
uses an anti-pattern for refresh: update the client.bulk call in bulkIndex to
pass the flattened bulk payload as operations (not body) and remove or change
the refresh parameter (omit refresh for better throughput or use refresh:
"wait_for" only if immediate visibility is required); locate the client.bulk
invocation (client.bulk({ refresh: true, body })) and replace with client.bulk({
operations }) or client.bulk({ operations, refresh: "wait_for" }) as appropriate
for your use case.
- Around line 416-424: The search call uses the deprecated v7-style body
wrapper; update the client.search invocation (the call using this.client.search
with index "products") to flatten parameters to the root: pass query: { bool: {
must: must.length ? must : [{ match_all: {} }], filter } }, from: (page - 1) *
size, size, and highlight: { fields: { name: {}, description: {} } } directly on
the request object instead of inside a body property so it conforms to
`@elastic/elasticsearch` v8 API.
- Around line 1-4: Update the skill file to use Elasticsearch v8 APIs and
conform to the required section structure: rename the "When to Activate" header
to "When to Use" and add missing "How It Works" and "Examples" sections; change
the TypeScript bulk snippet to call client.bulk with the operations parameter
(use products.flatMap(...) to emit alternating action/doc entries for index with
_index and _id) and remove the refresh: true option (or use refresh: "wait_for"
if necessary); add missing import os to the Python bulk snippet; update the
TypeScript search snippet (this.client.search) to lift query, from, size, and
highlight out of a body wrapper to top-level parameters; and improve typings by
replacing any[] for must/filter arrays with proper Elasticsearch query types and
casting (h: any) to a typed result type for safer result handling.
In `@skills/github-actions-patterns/SKILL.md`:
- Around line 244-268: The CI uses the dorny/paths-filter labels (web, api,
shared) directly as Turborepo --filter values in the ci job (see the npx turbo
run lint test build --filter=${{ matrix.package }} step), but Turborepo matches
workspace package names (package.json "name") and will silently no-op if labels
are directory names or different (e.g., `@myapp/web`). Fix by emitting actual
workspace names from the detect-changes step or convert the labels to Turborepo
directory filters before invoking turbo (e.g., map "web" → "./packages/web/..."
or map to "@scope/web") so the matrix.package value passed to --filter matches
real workspace identifiers used by Turborepo.
- Around line 426-433: The example labeled "GOOD" with step id "sanitize" and
the actions/github-script usage shows a narrow regex replace(/[`$]/g,'') that is
unsafe as sanitization; change this example to either (A) rename the step (e.g.,
id "extract_expression" or "sanitize_simple_chars") and document it as only
extracting or normalizing a simple expression (not general sanitization), or (B)
replace the one-off replace approach with an explicit allowlist pattern (e.g.,
only permit alphanumerics and a limited set of safe punctuation) and update the
step id and comment accordingly so the snippet no longer claims to be a general
sanitization solution.
- Around line 10-19: The file uses the wrong header and is missing the required
mechanics section: rename the existing "## When to Activate" header to "## When
to Use" and add a new top-level "## How It Works" section (placed after the Core
Principles section) that briefly explains how GitHub Actions executes workflows
(YAML under .github/workflows, trigger evaluation, runner/job isolation), how
reusable workflows invoked via "workflow_call" inherit context/secrets, how
composite actions run in the caller's runner, and how OIDC federation issues
short-lived JWTs for cloud provider validation; update SKILL.md to include these
exact headers ("## When to Use" and "## How It Works") so the file conforms to
the skills format.
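The allowlist approach option (B) describes can be sketched as follows. This is an illustrative stand-alone function, not code from the skill file; the permitted character set is an assumption and should be tightened to the actual expected inputs.

```typescript
// Hypothetical allowlist-based extraction, replacing the unsafe
// blocklist-style replace(/[`$]/g, "") shown in the "GOOD" example.
// Rejecting disallowed input outright is safer than stripping characters,
// because nothing hostile ever reaches the shell or expression context.
function extractSafeValue(input: string): string {
  // Allow only alphanumerics, space, dot, underscore, slash, and dash.
  const allowed = /^[A-Za-z0-9 ._\/-]+$/;
  if (!allowed.test(input)) {
    throw new Error("input contains characters outside the allowlist");
  }
  return input;
}
```

A step named `extract_expression` could call a helper like this and fail the job on rejection, rather than silently "cleaning" attacker-controlled text.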
In `@skills/nextjs-patterns/SKILL.md`:
- Around line 10-19: Rename the top-level heading "## When to Activate" to "##
When to Use", add a new top-level "## How It Works" section (summarize Next.js
15+ App Router rendering model, Server vs Client Components, and
caching/streaming behavior) and create a "## Examples" section to wrap the
existing code blocks and usage bullets; update headings in SKILL.md so the file
contains clear "## When to Use", "## How It Works", and "## Examples" sections
and move the current bullets under "When to Use" and any code snippets under
"Examples".
- Around line 275-279: createProduct currently passes raw
Object.fromEntries(formData) into db.products.create causing mass-assignment;
replace this by defining and using a Zod schema (e.g., ProductCreateSchema) to
parse and validate the form values, map/convert the FormData entries into a
plain object, call ProductCreateSchema.parse or safeParse to get a validated
payload (and handle validation errors), then pass only the validated/allowed
fields to db.products.create instead of the raw Object.fromEntries output; keep
the revalidateTag("products") and revalidatePath("/products") calls after
successful creation.
- Around line 315-337: The middleware example uses Next.js Edge Runtime
(middleware function handling NextRequest/NextResponse) but calls getSession
from ./lib/auth which may rely on Node-only APIs (jsonwebtoken, bcrypt, fs) and
will fail at runtime; update the doc: add a clear ⚠️ Edge Runtime compatibility
warning near the middleware example stating that middleware runs on the Edge and
cannot use Node-specific libs, and either replace or show an Edge-compatible
alternative—e.g., call NextAuth's getToken or JWT verification using the jose
library within middleware—mention the functions/members to change (middleware,
getSession) and note supportedLocales/protectedPaths logic can remain the same
but must use Edge-safe auth calls.
- Around line 151-157: The comment is wrong for Next.js 15 and the fetch in
getCategories() is uncached unless you set an explicit cache; update
getCategories() to add the explicit caching option (e.g., include cache:
"force-cache" in the fetch options alongside next: { tags: ["categories"] }) or
alternatively change the comment to state that fetch is uncached by default and
that an explicit cache option is required; locate the getCategories function to
apply the fix.
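The mass-assignment fix described above can be sketched without pulling in Zod; in the actual fix this validator would be a Zod `ProductCreateSchema`, and the field names used here (`name`, `price`) are illustrative rather than taken from the skill file.

```typescript
// Dependency-free sketch of schema validation before db.products.create.
// With Zod this would be ProductCreateSchema.parse(...) instead.
interface ProductCreate {
  name: string;
  price: number;
}

function parseProductCreate(raw: Record<string, unknown>): ProductCreate {
  const name = raw["name"];
  const price = Number(raw["price"]);
  if (typeof name !== "string" || name.trim() === "") {
    throw new Error("name is required");
  }
  if (!Number.isFinite(price) || price < 0) {
    throw new Error("price must be a non-negative number");
  }
  // Return only the allowed fields: unexpected keys (e.g. role, id) are
  // dropped, so the create call never sees attacker-controlled extras.
  return { name, price };
}
```

The key property is that the return value is built from an explicit field list, which is exactly what `Object.fromEntries(formData)` fails to guarantee.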
In `@skills/svelte-patterns/SKILL.md`:
- Around line 182-202: Remove the unused error import from the top of the first
+page.server.ts snippet: delete the imported symbol "error" since the exported
load function (export const load) does not call it; keep the rest of the imports
and logic intact so only the unused import is removed.
- Line 10: Rename the top-level heading "When to Activate" to "When to Use", add
a new top-level "How It Works" section between "When to Use" and "Examples", and
ensure there is a top-level "Examples" section; update the SKILL.md content so
the three required headings are present exactly as "When to Use", "How It
Works", and "Examples" and move or summarize existing content under the
appropriate headings.
- Around line 461-464: The test expects initials "AS" but the UserCard.svelte
derived initials produce "A" for a single-word name; update the test to either
expect "A" when rendering UserCard with name "Admin" or change the test input
name to a two-word string (e.g., "Alice Smith") so the derived initials match
"AS"; locate the assertion in the test that calls render(UserCard, { props: {
name: "Admin", ... } }) and adjust the props.name or the expected getByText
value accordingly.
- Around line 409-411: The comment incorrectly states that
data-sveltekit-preload-data="tap" is viewport-based; update the comment to
clarify that data-sveltekit-preload-data accepts "tap" which begins preloading
on user tap (touchstart/mousedown) and is not viewport-based, and note that
"viewport" is a valid value only for data-sveltekit-preload-code; adjust the
explanatory text around the <a href="/dashboard"
data-sveltekit-preload-data="tap">Dashboard</a> example to reflect this
distinction and, if viewport preloading for code is intended, change to or
mention data-sveltekit-preload-code="viewport" instead.
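The initials mismatch flagged above comes down to how word-splitting behaves on a single-word name. A plausible reconstruction of the derived logic (assumed shape, not the exact UserCard.svelte code):

```typescript
// Plausible reconstruction of the derived-initials logic: first letter of
// each whitespace-separated word, uppercased and joined.
function initials(name: string): string {
  return name
    .trim()
    .split(/\s+/)
    .filter(word => word.length > 0)
    .map(word => word.charAt(0).toUpperCase())
    .join("");
}

initials("Admin");       // "A": a single-word name yields one letter
initials("Alice Smith"); // "AS": a two-word name matches the test's expectation
```

Under this logic the test must either expect "A" for "Admin" or switch the input to a two-word name, as the comment recommends.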
---
Nitpick comments:
In `@agents/springboot-reviewer.md`:
- Line 80: The grep command `grep -rL "PreAuthorize\|Secured\|RolesAllowed" --include="*Controller.java"` will produce false positives by listing test and abstract controllers that legitimately lack @PreAuthorize/@Secured/@RolesAllowed; restrict the search to production code (e.g., only under src/main) and/or exclude test directories and abstract base controllers so only concrete production controllers are flagged. Update the invocation that references the controller file pattern to scope the path to src/main (or add an --exclude-dir for tests) and ensure abstract controller classes (base/Abstract*Controller) are ignored when checking for @PreAuthorize/@Secured/@RolesAllowed.
In `@skills/elasticsearch-patterns/SKILL.md`:
- Around line 402-403: Replace the loose any[] types for must and filter with
the Elasticsearch SDK type QueryDslQueryContainer[] (imported from
`@elastic/elasticsearch`) and update the hit mapping to accept
QueryDslQueryContainer-typed query results so you can remove the (h: any) cast;
specifically change the declarations const must: any[] = []; const filter: any[]
= []; to use const must: QueryDslQueryContainer[] = []; const filter:
QueryDslQueryContainer[] = []; import QueryDslQueryContainer and update the
hit-mapping callback (the function that maps hits/results) to take the correct
typed hit object instead of casting.
- Around line 10-18: Rename the top-level heading "## When to Activate" to "##
When to Use", add a new "## How It Works" section (moving or relabeling the
existing "Core Principles" content under it), and add a new "## Examples"
section that collects the domain-specific example snippets currently scattered
under other headings into one Examples block; ensure headings are exact ("##
When to Use", "## How It Works", "## Examples") and move/adjust the existing
text under SKILL.md (e.g., Core Principles and the domain-specific examples) so
each required section contains the appropriate content and ordering.
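The typing fix can be sketched in isolation. `QueryDslQueryContainer` is declared locally here so the snippet stays self-contained; in a real project it would be imported from `@elastic/elasticsearch`, and the `ProductSource`/`Hit` shapes are illustrative.

```typescript
// Local stand-in for the SDK's QueryDslQueryContainer (import the real type
// from @elastic/elasticsearch in an actual project).
type QueryDslQueryContainer = Record<string, unknown>;

interface ProductSource {
  name: string;
  price: number;
}

interface Hit<T> {
  _id: string;
  _source: T;
}

// Typed query clause arrays instead of any[]:
const must: QueryDslQueryContainer[] = [{ match: { name: "desk" } }];
const filter: QueryDslQueryContainer[] = [{ term: { in_stock: true } }];

// Typed hit mapping instead of casting each hit to (h: any):
function mapHits(hits: Hit<ProductSource>[]): Array<{ id: string } & ProductSource> {
  return hits.map(h => ({ id: h._id, ...h._source }));
}
```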
In `@skills/github-actions-patterns/SKILL.md`:
- Around line 127-133: The job using the reusable workflow currently uses
"secrets: inherit" which violates least-privilege; change the ci job
configuration that calls ./.github/workflows/ci-shared.yml to stop inheriting
all secrets and instead pass only the specific secret(s) the shared workflow
needs (e.g., map NPM_TOKEN explicitly to ${{ secrets.NPM_TOKEN }}), and update
the reusable workflow's inputs/required flags if needed so the shared workflow
continues to accept NPM_TOKEN as optional.
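The explicit mapping described above would look roughly like this; the workflow path comes from the finding, while the job and secret names are illustrative:

```yaml
# Sketch: replace `secrets: inherit` with an explicit mapping so the shared
# workflow receives only what it needs. Secret names are illustrative.
jobs:
  ci:
    uses: ./.github/workflows/ci-shared.yml
    secrets:
      NPM_TOKEN: ${{ secrets.NPM_TOKEN }}
```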
In `@skills/nextjs-patterns/SKILL.md`:
- Around line 249-252: The handleToggle function currently calls
setOptimistic(id) then awaits toggleTodoAction(id) and thus will let rejections
become uncaught and provide no user feedback; wrap the await in a try/catch
inside handleToggle (catching errors from toggleTodoAction), set an error state
(or call a provided setError handler) on failure, and only revert the optimistic
UI (clear setOptimistic or update state) inside the catch block; also log the
error for debugging. Ensure you update references to handleToggle,
toggleTodoAction, setOptimistic and the component's error state so failures are
surfaced to the user.
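The try/catch fix can be sketched as below. The dependencies are injected so the sketch is testable in isolation; in the component these would be the real `setOptimistic`, `toggleTodoAction`, and error-state setter, and the names here are illustrative.

```typescript
// Sketch of surfacing server-action failures from an optimistic toggle.
interface ToggleDeps {
  setOptimistic: (id: string) => void;
  revertOptimistic: (id: string) => void;
  toggleTodoAction: (id: string) => Promise<void>;
  setError: (message: string) => void;
}

async function handleToggle(id: string, deps: ToggleDeps): Promise<void> {
  deps.setOptimistic(id); // apply the optimistic UI update first
  try {
    await deps.toggleTodoAction(id);
  } catch (err) {
    deps.revertOptimistic(id);                  // revert only on failure
    deps.setError("Could not update the todo"); // surface failure to the user
    console.error("toggleTodoAction failed:", err); // log for debugging
  }
}
```

Reverting inside the catch block (rather than unconditionally) keeps the optimistic state in place on success while still giving users feedback on failure.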
In `@skills/svelte-patterns/SKILL.md`:
- Line 153: The children prop is currently typed as any in the destructuring of
$props() — change its type to the correct Svelte Snippet type and add the
appropriate import; specifically import type { Snippet } from 'svelte' (or
ensure Snippet is available) and update the destructuring annotation from
children: any to children: Snippet (leave data as LayoutData) so the line using
$props() becomes typed with LayoutData and Snippet.
You are a senior Spring Boot code reviewer ensuring high standards of security, data access, and architectural patterns.

When invoked:
1. Run `git diff -- '*.java' '*.kt' '*.yml' '*.properties'` to see recent changes
git diff without a base ref silently returns nothing when changes are staged or committed.
git diff without arguments compares the working directory to the staging area, so it shows only unstaged changes. As soon as changes are staged with git add, git diff shows nothing further for them, and if everything is already committed the output is empty.
Since this reviewer is typically invoked after code has been written and staged/committed, the command will produce empty output and the agent will have no context about recent changes. Use git diff HEAD to include both staged and unstaged changes, or git diff HEAD~1 to see the last commit's changes.
🐛 Proposed fix
-1. Run `git diff -- '*.java' '*.kt' '*.yml' '*.properties'` to see recent changes
+1. Run `git diff HEAD -- '*.java' '*.kt' '*.yml' '*.properties'` to see recent changes (staged + unstaged vs last commit); if already committed, use `git diff HEAD~1 -- '*.java' '*.kt' '*.yml' '*.properties'`
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
-1. Run `git diff -- '*.java' '*.kt' '*.yml' '*.properties'` to see recent changes
+1. Run `git diff HEAD -- '*.java' '*.kt' '*.yml' '*.properties'` to see recent changes (staged + unstaged vs last commit); if already committed, use `git diff HEAD~1 -- '*.java' '*.kt' '*.yml' '*.properties'`
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@agents/springboot-reviewer.md` at line 11, The git diff invocation in the
reviewer uses "git diff -- '*.java' '*.kt' '*.yml' '*.properties'" which only
shows unstaged changes; update the command so it compares against HEAD (for
example replace with "git diff HEAD -- '*.java' '*.kt' '*.yml' '*.properties'"
or use "git diff HEAD~1..HEAD" when you want the last commit) so the agent sees
staged and committed changes; locate the string in agents/springboot-reviewer.md
and modify the command there.
---
name: elasticsearch-patterns
description: Elasticsearch patterns for index design, query DSL, aggregations, bulk indexing, index lifecycle management, and integration with TypeScript and Python.
---
🧩 Analysis chain
🏁 Script executed:
fd -t f -e md --full-path 'skills/' | head -40
Repository: affaan-m/everything-claude-code
Length of output: 1870
🏁 Script executed:
wc -l skills/elasticsearch-patterns/SKILL.md
Repository: affaan-m/everything-claude-code
Length of output: 117
🏁 Script executed:
head -50 skills/elasticsearch-patterns/SKILL.md
Repository: affaan-m/everything-claude-code
Length of output: 1820
🏁 Script executed:
sed -n '100,150p' skills/elasticsearch-patterns/SKILL.md
Repository: affaan-m/everything-claude-code
Length of output: 1077
🏁 Script executed:
sed -n '240,290p' skills/elasticsearch-patterns/SKILL.md
Repository: affaan-m/everything-claude-code
Length of output: 1337
🏁 Script executed:
sed -n '285,320p' skills/elasticsearch-patterns/SKILL.md
Repository: affaan-m/everything-claude-code
Length of output: 1008
🏁 Script executed:
sed -n '395,435p' skills/elasticsearch-patterns/SKILL.md
Repository: affaan-m/everything-claude-code
Length of output: 1349
🏁 Script executed:
sed -n '1,15p' skills/elasticsearch-patterns/SKILL.md && echo "---" && sed -n '420,480p' skills/elasticsearch-patterns/SKILL.md
Repository: affaan-m/everything-claude-code
Length of output: 2293
🏁 Script executed:
web_search
Search for Elasticsearch v8 JavaScript client bulk API body parameter vs operations
Repository: affaan-m/everything-claude-code
Length of output: 170
🌐 Web query:
Elasticsearch JavaScript v8 client bulk API body parameter operations
💡 Result:
In the Elasticsearch JavaScript client v8, client.bulk() takes an operations parameter (an array) that represents the Bulk API’s NDJSON pairs: an action/meta object, followed (when required) by the source / partial doc object. The supported actions are index, create, delete, and update. [1], [2]
Shape of operations
- index / create: { index|create: { _index, _id?, ... } }, { ...document... }
- delete: { delete: { _index, _id, ... } } (no following document)
- update: { update: { _index, _id, ... } }, { doc: {...}, doc_as_upsert?, scripted_upsert?, script?, upsert? }
Example
await client.bulk({
operations: [
{ index: { _index: 'tweets', _id: '1' } },
{ id: 1, text: 'hello' },
{ update: { _index: 'tweets', _id: '1' } },
{ doc: { text: 'hello world' } },
{ delete: { _index: 'tweets', _id: '2' } }
]
})
The client serializes this operations array into the newline-delimited format required by the _bulk endpoint (and the request must end with a trailing newline). [1], [2]
Sources: Elastic JS client API reference (client.bulk) [1], Elasticsearch Bulk API format/actions [2].
🌐 Web query:
Elasticsearch JavaScript v8 client search API body parameter top-level
💡 Result:
In the Elasticsearch JavaScript client v8 (@elastic/elasticsearch), the Search API request-body fields are passed as top-level properties of client.search({ ... }) (i.e., alongside index), not under body. For example, use query, aggs, sort, from, size, etc. directly. [1][2]
const resp = await client.search({
index: 'test',
query: { match_all: {} },
size: 10,
from: 0,
aggs: {
by_status: { terms: { field: 'status.keyword' } }
}
})
This is the documented v7→v8 migration pattern: remove body and move its contents to the root request object. [2] In the API docs, those request-body parameters (e.g., query, size, sort, _source, etc.) are listed as part of the client.search request object. [1]
Sources: [1] [2]
🌐 Web query:
Elasticsearch refresh true bulk indexing performance anti-pattern production
💡 Result:
Using refresh=true on every Bulk request in production is a classic indexing-performance anti-pattern because it forces Elasticsearch to refresh the affected shards immediately after each bulk so documents become searchable right away [2]. Refresh is costly, and doing it frequently during heavy ingest hurts indexing speed [1].
Why it's bad (what actually happens)
- refresh=true tends to create many tiny Lucene segments, which are inefficient and later require extra merge work; you "pay" for it at index time, search time, and merge time [2].
- That extra segment/merge churn competes for CPU/disk with ongoing ingest and queries, lowering throughput and often increasing latency [1][2].
What to do instead (production-safe defaults)
- Do nothing: omit refresh (equivalent to refresh=false, the default) for best throughput; changes become visible on the normal refresh cycle [2].
- If you must have "read-after-write" semantics for a workflow, prefer refresh=wait_for (waits for the next scheduled refresh rather than forcing one) [2].
- For large backfills/bulk loads: temporarily set index.refresh_interval to -1 (disable refreshes during the load), then restore it afterward [1][3]. Consider temporarily setting replicas to 0 during the initial load as well (then restore) [1].
Sources: Elastic indexing-speed guidance [1], Bulk API docs (refresh behavior) [4], and Elastic’s “refresh parameter” deep-dive (why true is expensive) [2].
Verify Elasticsearch v8 client API compatibility and update section structure per guidelines.
This skill file uses v7-style APIs that are incompatible with v8 client libraries and deviates from the required skill format:
Section Structure: File has "When to Activate" but guidelines require "When to Use". Add missing "How It Works" and "Examples" section headers.
TypeScript Bulk API (line 276): v8 JavaScript client renamed body parameter to operations and changed the structure. Replace with:
const result = await client.bulk({
operations: products.flatMap(product => [
{ index: { _index: "products", _id: product.id } },
{ name: product.name, ... }
])
});
Also remove refresh: true; this forces synchronous Lucene refreshes on each bulk call, a production anti-pattern. Omit it (default refresh=false) or use refresh: "wait_for" if read-after-write is required.
Python Bulk API (line 290): Missing import os at the top of the snippet.
TypeScript Search API (line 427): v8 removes the body wrapper. Move query, from, size, highlight to top-level:
const result = await this.client.search({
index: "products",
query: { bool: { ... } },
from: (page - 1) * size,
size,
highlight: { ... }
});
Type Safety: Replace any[] types for must/filter arrays with proper query types, and cast (h: any) to a typed result object.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@skills/elasticsearch-patterns/SKILL.md` around lines 1 - 4, Update the skill
file to use Elasticsearch v8 APIs and conform to the required section structure:
rename the "When to Activate" header to "When to Use" and add missing "How It
Works" and "Examples" sections; change the TypeScript bulk snippet to call
client.bulk with the operations parameter (use products.flatMap(...) to emit
alternating action/doc entries for index with _index and _id) and remove the
refresh: true option (or use refresh: "wait_for" if necessary); add missing
import os to the Python bulk snippet; update the TypeScript search snippet
(this.client.search) to lift query, from, size, and highlight out of a body
wrapper to top-level parameters; and improve typings by replacing any[] for
must/filter arrays with proper Elasticsearch query types and casting (h: any) to
a typed result type for safer result handling.
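The alternating action/document shape of the v8 `operations` array can be exercised without a live cluster. A self-contained sketch (the `Product` fields here are illustrative, not the full skill-file mapping):

```typescript
// Sketch: build the v8 bulk `operations` array in isolation so the
// alternating action/document shape can be verified without a cluster.
interface Product {
  id: string;
  name: string;
  price: number;
}

type BulkOperation =
  | { index: { _index: string; _id: string } }
  | { name: string; price: number };

function buildBulkOperations(products: Product[]): BulkOperation[] {
  return products.flatMap(p => [
    { index: { _index: "products", _id: p.id } }, // action line
    { name: p.name, price: p.price },             // document line
  ]);
}

// Usage sketch: await client.bulk({ operations: buildBulkOperations(products) })
// Omitting refresh keeps the default refresh=false for ingest throughput.
```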
```typescript
import { Client } from "@elastic/elasticsearch";

const client = new Client({ node: process.env.ELASTICSEARCH_URL });

async function bulkIndex(products: Product[]) {
  const body = products.flatMap(product => [
    { index: { _index: "products", _id: product.id } },
    {
      name: product.name,
      description: product.description,
      category: product.category,
      price: product.price,
      in_stock: product.inStock,
      tags: product.tags,
      created_at: product.createdAt,
      suggest: {
        input: [product.name, ...product.tags],
        contexts: { category: [product.category] },
      },
    },
  ]);

  const result = await client.bulk({ refresh: true, body });

  if (result.errors) {
    const errors = result.items.filter((item: any) => item.index?.error);
    console.error(`Bulk indexing errors: ${errors.length}/${products.length}`);
  }
}
```
🧩 Analysis chain
🏁 Script executed:
cat -n skills/elasticsearch-patterns/SKILL.md | sed -n '250,290p'
Repository: affaan-m/everything-claude-code
Length of output: 1434
🌐 Web query:
@elastic/elasticsearch v8 bulk API body operations parameter change
💡 Result:
In the @elastic/elasticsearch (JavaScript) v8 client, the Bulk API no longer takes the payload under body. Instead, you pass the NDJSON “lines” as an array via the operations parameter.
Before (older client usage):
await client.bulk({
refresh: true,
body: [
{ index: { _index: 'tweets', _id: '1' } },
{ field1: 'value1' },
// ...
]
})
Now (v8):
const operations = [
{ index: { _index: 'tweets', _id: '1' } },
{ field1: 'value1' },
// ...
]
await client.bulk({ refresh: true, operations })
This matches the official JS client Bulk example (uses client.bulk({ …, operations })) [1] and the Elasticsearch API docs’ typed examples (show operations=[ ... ] for bulk) [2]. Also, more broadly, the v8 JS client migration removed the old generic body wrapper pattern and expects request-body fields as first-class parameters defined by the API (like operations for _bulk) [3].
Sources: [1] [2] [3]
🌐 Web query:
Elasticsearch refresh true performance impact bulk indexing
💡 Result:
Setting refresh=true on bulk indexing hurts throughput because it forces Elasticsearch to refresh the affected shards immediately after the request, making the newly indexed docs visible to search right away—but refresh is a costly operation, and doing it frequently during heavy indexing slows indexing down. [1][2]
What to do instead (typical best practice for bulk loads):
- Leave refresh at the default (false) for bulk ingestion, and rely on the normal periodic refresh. [1]
- For maximum bulk indexing speed, temporarily set index.refresh_interval to -1 (disable refreshes), run your bulk load, then restore the refresh interval afterward. [2]
- If you must ensure search visibility after a request, prefer refresh=wait_for (waits for the next refresh) rather than forcing an immediate refresh with true. [1]
Two issues in the TypeScript bulk example: deprecated body param (v8) and refresh: true anti-pattern.
1. v8 API: body → operations
@elastic/elasticsearch v8 changed the bulk API to use operations instead of body. In the current code at line 276, client.bulk({ refresh: true, body }) will not work with v8; it should be client.bulk({ refresh: true, operations: body }). Using body will cause type errors or deprecation warnings with the v8 client.
2. refresh: true destroys bulk indexing throughput
Setting refresh: true forces Elasticsearch to immediately refresh affected shards after the bulk request completes. Refresh is a costly operation; using it during bulk indexing significantly hurts throughput. Best practice for bulk loads is to omit the refresh parameter (defaults to false) or use refresh: "wait_for" only if the caller must read its own writes immediately afterward.
🐛 Proposed fix
- const result = await client.bulk({ refresh: true, body });
+ const result = await client.bulk({ refresh: false, operations: body });
Or, to optimize for bulk ingestion:
- const result = await client.bulk({ refresh: true, body });
+ const result = await client.bulk({ operations: body });🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@skills/elasticsearch-patterns/SKILL.md` around lines 253 - 283, The bulkIndex
example uses the v8 client incorrectly and uses an anti-pattern for refresh:
update the client.bulk call in bulkIndex to pass the flattened bulk payload as
operations (not body) and remove or change the refresh parameter (omit refresh
for better throughput or use refresh: "wait_for" only if immediate visibility is
required); locate the client.bulk invocation (client.bulk({ refresh: true, body
})) and replace with client.bulk({ operations }) or client.bulk({ operations,
refresh: "wait_for" }) as appropriate for your use case.
```python
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch(os.environ["ELASTICSEARCH_URL"])
```
Missing import os in the Python snippet.
os.environ is used at line 290, but os is never imported. The snippet will raise a NameError as written.
🐛 Proposed fix
+import os
from elasticsearch import Elasticsearch, helpers
es = Elasticsearch(os.environ["ELASTICSEARCH_URL"])
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
+import os
 from elasticsearch import Elasticsearch, helpers
 es = Elasticsearch(os.environ["ELASTICSEARCH_URL"])
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@skills/elasticsearch-patterns/SKILL.md` around lines 288 - 291, The snippet
uses os.environ but never imports os; add an import for the os module at the top
of the snippet so the call to os.environ["ELASTICSEARCH_URL"] works. Locate the
code block that defines Elasticsearch and helpers (the lines referencing
Elasticsearch, helpers and os.environ) and insert "import os" before those lines
to ensure os is defined for use.
```typescript
const result = await this.client.search({
  index: "products",
  body: {
    query: { bool: { must: must.length ? must : [{ match_all: {} }], filter } },
    from: (page - 1) * size,
    size,
    highlight: { fields: { name: {}, description: {} } },
  },
});
```
🧩 Analysis chain
🏁 Script executed:
# First, check if the file exists and read lines 416-424
fd -t f "SKILL.md" | grep -E "elasticsearch-patterns"
Repository: affaan-m/everything-claude-code
Length of output: 113
🏁 Script executed:
# Get the file size to ensure we can read it
wc -l skills/elasticsearch-patterns/SKILL.md 2>/dev/null || echo "File not found"
Repository: affaan-m/everything-claude-code
Length of output: 117
🏁 Script executed:
# Read lines around 416-424
sed -n '410,430p' skills/elasticsearch-patterns/SKILL.md 2>/dev/null || echo "Cannot read file"
Repository: affaan-m/everything-claude-code
Length of output: 891
🌐 Web query:
Elasticsearch JavaScript TypeScript client v8 API search method body parameter
💡 Result:
In the Elasticsearch JavaScript/TypeScript client v8, client.search() does not use a body object the way older clients (and many REST examples) do. The request-body fields are “flattened” into the single request object you pass to search, e.g. query, aggregations, sort, size, etc. (elastic.co)
TypeScript/JS example (v8)
const res = await client.search({
index: 'my-index',
query: { match: { title: 'hello' } },
size: 10,
aggregations: {
tags: { terms: { field: 'tags.keyword' } }
}
})
What about body?
In 8.x, the JS client historically exposed an optional body property broadly (as a catch-all), but the documented/typed search API surface is the flattened form above. Elastic also calls out that body handling changes further in 9.0 (stricter typing / moving fields to the root). (elastic.co)
Citations:
- 1: https://www.elastic.co/docs/reference/elasticsearch/clients/javascript/api-reference
- 2: https://www.elastic.co/docs/release-notes/elasticsearch/clients/javascript/breaking-changes?utm_source=openai
client.search({ body: {...} }) is the deprecated v7-style API; v8 uses flattened parameters at the root level.
The @elastic/elasticsearch v8 client removed the body wrapper in favor of flattening all search parameters to the root of the request object. Current code should use client.search({ index: 'products', query: {...}, from: ..., size: ..., highlight: ... }) instead of wrapping them in a body property. While v8 may still accept body as a catch-all for backward compatibility, this pattern will become stricter and break in v9.0.
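The flattened request shape can be checked without a cluster by assembling it as plain data. The parameter names below mirror the snippet under review, while the builder function itself is illustrative:

```typescript
// Sketch: assemble the v8-style flattened search request as plain data so
// the shape (no body wrapper) can be verified in isolation.
function buildSearchRequest(
  must: object[],
  filter: object[],
  page: number,
  size: number
) {
  return {
    index: "products",
    // body wrapper removed: request-body fields sit at the root in v8
    query: { bool: { must: must.length ? must : [{ match_all: {} }], filter } },
    from: (page - 1) * size,
    size,
    highlight: { fields: { name: {}, description: {} } },
  };
}
```

The resulting object would be passed straight to `this.client.search(...)` in the skill snippet.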
Proposed fix
- const result = await this.client.search({
- index: "products",
- body: {
- query: { bool: { must: must.length ? must : [{ match_all: {} }], filter } },
- from: (page - 1) * size,
- size,
- highlight: { fields: { name: {}, description: {} } },
- },
- });
+ const result = await this.client.search({
+ index: "products",
+ query: { bool: { must: must.length ? must : [{ match_all: {} }], filter } },
+ from: (page - 1) * size,
+ size,
+ highlight: { fields: { name: {}, description: {} } },
+ });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@skills/elasticsearch-patterns/SKILL.md` around lines 416 - 424, The search
call uses the deprecated v7-style body wrapper; update the client.search
invocation (the call using this.client.search with index "products") to flatten
parameters to the root: pass query: { bool: { must: must.length ? must : [{
match_all: {} }], filter } }, from: (page - 1) * size, size, and highlight: {
fields: { name: {}, description: {} } } directly on the request object instead
of inside a body property so it conforms to `@elastic/elasticsearch` v8 API.
```typescript
export async function middleware(request: NextRequest) {
  const { pathname } = request.nextUrl;

  // i18n: redirect to preferred locale
  if (pathname === "/") {
    const locale = request.headers.get("accept-language")?.split(",")[0]?.split("-")[0] ?? "en";
    if (supportedLocales.includes(locale) && locale !== "en") {
      return NextResponse.redirect(new URL(`/${locale}`, request.url));
    }
  }

  // Auth: protect routes
  if (protectedPaths.some(p => pathname.startsWith(p))) {
    const session = await getSession(request);
    if (!session) {
      const loginUrl = new URL("/login", request.url);
      loginUrl.searchParams.set("callbackUrl", pathname);
      return NextResponse.redirect(loginUrl);
    }
  }

  return NextResponse.next();
}
```
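The Accept-Language one-liner in the middleware above only inspects the first language tag. As a hedged sketch (the `pickLocale` name and fall-through behavior are illustrative, not from the skill file), the negotiation can be factored into a pure helper that is easy to unit-test:

```typescript
// Sketch: pick the first supported locale from an Accept-Language header.
// Ignores q-weights for simplicity, like the original one-liner; a real
// implementation may want full RFC 9110 quality-value parsing.
function pickLocale(
  header: string | null,
  supported: readonly string[],
  fallback: string,
): string {
  if (!header) return fallback;
  for (const part of header.split(",")) {
    // "fr-FR;q=0.9" -> "fr"
    const tag = part.trim().split(";")[0].split("-")[0].toLowerCase();
    if (supported.includes(tag)) return tag;
  }
  return fallback;
}
```

Unlike the inline expression, this falls through to later header entries when the first language is unsupported.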
🧩 Analysis chain
🏁 Scripts executed (repository: affaan-m/everything-claude-code):

```shell
# Find the skill file and check if it exists
fd "SKILL.md" -p "skills/nextjs-patterns"

# List files in the lib directory to find auth file
fd . -p "lib" -t f | head -20

# Search for getSession in the codebase
rg -n "getSession" --type ts --type js -A 3

# View the SKILL.md file at lines 315-337
sed -n '315,337p' skills/nextjs-patterns/SKILL.md

# Search for getSession import in the skill file
grep -n "getSession\|lib/auth\|import.*auth" skills/nextjs-patterns/SKILL.md | head -20

# Find auth-related files in the repository
fd -e ts -e js "auth" | grep -v node_modules | grep -v test | head -20

# View broader context around the middleware example
sed -n '300,345p' skills/nextjs-patterns/SKILL.md

# Check if there are any Edge Runtime mentions in the skill file
grep -n "Edge\|edge\|Node.js\|node-specific\|bcrypt\|jsonwebtoken\|jose" skills/nextjs-patterns/SKILL.md

# Check the full structure/sections of the skill file
grep -n "^##\|^###" skills/nextjs-patterns/SKILL.md | head -30

# Look for any notes or caveats sections near the middleware example
sed -n '280,350p' skills/nextjs-patterns/SKILL.md | grep -i "note\|caveat\|warning\|edge\|runtime"

# View lines right after the middleware example to see if there are notes
sed -n '337,360p' skills/nextjs-patterns/SKILL.md

# Check the entire Middleware section from start to next section
sed -n '303,343p' skills/nextjs-patterns/SKILL.md

# Check if there are Edge Runtime or middleware runtime considerations elsewhere in the file
grep -n -i "runtime\|edge\|node\|bcrypt\|jwt\|jose\|constraint" skills/nextjs-patterns/SKILL.md

# View the "When to Activate" section to understand the skill's scope/constraints
sed -n '10,30p' skills/nextjs-patterns/SKILL.md
```
Add Edge Runtime compatibility guidance to the middleware example.
The skill documentation should note that Next.js middleware runs on the Edge Runtime, which does not support Node.js-specific APIs. If `getSession` (from `./lib/auth`) uses libraries like `jsonwebtoken`, `bcrypt`, or native `fs`, the middleware will fail at runtime. Either:
- document this constraint explicitly with a ⚠️ warning, or
- show an Edge-compatible approach (e.g., using `jose` for JWT verification or NextAuth's `getToken`).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@skills/nextjs-patterns/SKILL.md` around lines 315 - 337, The middleware
example uses Next.js Edge Runtime (middleware function handling
NextRequest/NextResponse) but calls getSession from ./lib/auth which may rely on
Node-only APIs (jsonwebtoken, bcrypt, fs) and will fail at runtime; update the
doc: add a clear ⚠️ Edge Runtime compatibility warning near the middleware
example stating that middleware runs on the Edge and cannot use Node-specific
libs, and either replace or show an Edge-compatible alternative—e.g., call
NextAuth's getToken or JWT verification using the jose library within
middleware—mention the functions/members to change (middleware, getSession) and
note supportedLocales/protectedPaths logic can remain the same but must use
Edge-safe auth calls.
```markdown
Production-grade Svelte 5 patterns with runes, SvelteKit routing, and modern tooling.

## When to Activate
```
Missing "How It Works" section; section title mismatch.
The file omits the required "How It Works" section, and the "When to Use" section is titled "When to Activate" instead. Per the contributing guidelines, all skills must include "When to Use", "How It Works", and "Examples" as top-level sections.
♻️ Suggested structural additions

```diff
 ## When to Activate
+<!-- rename to "When to Use" per guidelines -->
 ...
+## How It Works
+
+SvelteKit compiles Svelte 5 components to vanilla JS at build time (zero runtime overhead).
+Runes (`$state`, `$derived`, `$effect`) replace the legacy reactive-declaration model and are
+processed at compile time. Routing is file-based under `src/routes/`; each route segment can
+export a `load` function (server or universal) and `actions` for form handling. Hooks in
+`src/hooks.server.ts` run on every server request and are composed with `sequence()`.
```

Based on learnings: "Skills should be formatted as Markdown with clear sections for When to Use, How It Works, and Examples."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@skills/svelte-patterns/SKILL.md` at line 10, Rename the top-level heading
"When to Activate" to "When to Use", add a new top-level "How It Works" section
between "When to Use" and "Examples", and ensure there is a top-level "Examples"
section; update the SKILL.md content so the three required headings are present
exactly as "When to Use", "How It Works", and "Examples" and move or summarize
existing content under the appropriate headings.
```typescript
// src/routes/blog/+page.server.ts
import type { PageServerLoad } from "./$types";
import { error } from "@sveltejs/kit";

export const load: PageServerLoad = async ({ url, locals }) => {
  const page = Number(url.searchParams.get("page")) || 1;
  const perPage = 10;

  const [posts, total] = await Promise.all([
    db.posts.findMany({
      skip: (page - 1) * perPage,
      take: perPage,
      orderBy: { createdAt: "desc" },
    }),
    db.posts.count(),
  ]);

  return { posts, total, page, perPage };
};
```
Unused `error` import in the first load-function snippet.
`error` is imported at line 185 but never called in this snippet — it is only used in the second snippet (`+page.server.ts` for `[slug]`). Readers copying this example will have a dead import.
♻️ Proposed fix

```diff
 // src/routes/blog/+page.server.ts
 import type { PageServerLoad } from "./$types";
-import { error } from "@sveltejs/kit";
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@skills/svelte-patterns/SKILL.md` around lines 182 - 202, Remove the unused
error import from the top of the first +page.server.ts snippet: delete the
imported symbol "error" since the exported load function (export const load)
does not call it; keep the rest of the imports and logic intact so only the
unused import is removed.
```html
<!-- Preload on viewport (for important links) -->
<a href="/dashboard" data-sveltekit-preload-data="tap">Dashboard</a>
```
🧩 Analysis chain
🏁 Script executed:

```shell
# Check file exists and examine naming compliance
find . -name "*svelte-patterns*" -type d
ls -la skills/svelte-patterns/ 2>/dev/null || echo "Directory check failed"
```
🌐 Web query:
SvelteKit data-sveltekit-preload-data attribute values "tap" "viewport" documentation
💡 Result:
- `data-sveltekit-preload-data="tap"`: starts preloading the target route's data as soon as a `touchstart` (mobile) or `mousedown` (desktop) is registered (i.e., on tap/click-down, not hover). [1]
- `data-sveltekit-preload-data="hover"` (for comparison): starts preloading when the mouse rests over a link (and on mobile it begins on `touchstart`). [1]
- `viewport` is not a value for `data-sveltekit-preload-data`; it's a value for `data-sveltekit-preload-code="viewport"`, meaning the link's code is preloaded once the link enters the viewport. [1]

Docs: SvelteKit "Link options" → data-sveltekit-preload-data / data-sveltekit-preload-code. [1]
Citations:
- 1: https://svelte.net.cn/docs/kit/link-options?utm_source=openai
🌐 Web query:
SvelteKit preload-data vs preload-code difference documentation
💡 Result:
- `data-sveltekit-preload-data`: preloads route code plus the route's `load` data (i.e., it will run `load` ahead of navigation). Values: `"hover"` or `"tap"`. This is what makes navigation potentially instantaneous, because the `load` result is already cached. [1][2]
- `data-sveltekit-preload-code`: preloads only the route's code (JS/CSS/modules) and does not run `load`. Values (most → least eager): `"eager"`, `"viewport"`, `"hover"`, `"tap"`. It only has an effect if it's more eager than any `data-sveltekit-preload-data` present (because code preloading is already implied by data preloading). [1][2]

Docs: Link options (covers both attributes) [1]; $app/navigation reference for preloadCode vs preloadData [2].
Sources: [1] (svelte.dev), [2] (svelte.dev)
🏁 Script executed:

```shell
# Check the file content at lines 409-411
sed -n '405,415p' skills/svelte-patterns/SKILL.md
```
Misleading comment: "tap" is not viewport-based preloading.
"tap" for data-sveltekit-preload-data means "only begin preloading on tap" (touchstart/mousedown) — it has nothing to do with the viewport. "viewport" is only a valid value for data-sveltekit-preload-code, not for data-sveltekit-preload-data.
♻️ Proposed fix

```diff
-<!-- Preload on viewport (for important links) -->
+<!-- Preload on tap/click only (avoids false positives for frequently-changing data) -->
 <a href="/dashboard" data-sveltekit-preload-data="tap">Dashboard</a>
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```html
<!-- Preload on tap/click only (avoids false positives for frequently-changing data) -->
<a href="/dashboard" data-sveltekit-preload-data="tap">Dashboard</a>
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@skills/svelte-patterns/SKILL.md` around lines 409 - 411, The comment
incorrectly states that data-sveltekit-preload-data="tap" is viewport-based;
update the comment to clarify that data-sveltekit-preload-data accepts "tap"
which begins preloading on user tap (touchstart/mousedown) and is not
viewport-based, and note that "viewport" is a valid value only for
data-sveltekit-preload-code; adjust the explanatory text around the <a
href="/dashboard" data-sveltekit-preload-data="tap">Dashboard</a> example to
reflect this distinction and, if viewport preloading for code is intended,
change to or mention data-sveltekit-preload-code="viewport" instead.
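Per the link-options semantics quoted above, here is a contrasting sketch of the two attributes (the routes are illustrative, not from the skill file): `preload-data` runs `load` eagerly, while `preload-code` only fetches the route's modules.

```html
<!-- Preloads code AND runs load() as soon as the pointer hovers the link -->
<a href="/pricing" data-sveltekit-preload-data="hover">Pricing</a>

<!-- Preloads code only (load() is not run) once the link enters the viewport -->
<a href="/docs" data-sveltekit-preload-code="viewport">Docs</a>
```

Reserve `preload-data` for links whose data is cheap and safe to fetch speculatively; use `preload-code` for heavy routes whose data changes often.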
```typescript
it("shows admin badge for admin role", () => {
  render(UserCard, { props: { name: "Admin", email: "admin@test.com", role: "admin" } });
  expect(screen.getByText("AS").closest(".card")).toHaveClass("admin");
});
```
Test will fail: incorrect initials "AS" for single-word name "Admin".
The `$derived` expression in UserCard.svelte produces "A" for a single-word name like "Admin" (`"Admin".split(" ")` → `["Admin"]` → `["A"]` → joined `"A"`). `getByText("AS")` will not find any element, so this test example fails.
Fix either the expected text or the test's name input:
🐛 Option A — fix the expected initials

```diff
- expect(screen.getByText("AS").closest(".card")).toHaveClass("admin");
+ expect(screen.getByText("A").closest(".card")).toHaveClass("admin");
```

🐛 Option B — use a two-word name so "AS" is correct

```diff
- render(UserCard, { props: { name: "Admin", email: "admin@test.com", role: "admin" } });
- expect(screen.getByText("AS").closest(".card")).toHaveClass("admin");
+ render(UserCard, { props: { name: "Admin Smith", email: "admin@test.com", role: "admin" } });
+ expect(screen.getByText("AS").closest(".card")).toHaveClass("admin");
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Option A:

```typescript
it("shows admin badge for admin role", () => {
  render(UserCard, { props: { name: "Admin", email: "admin@test.com", role: "admin" } });
  expect(screen.getByText("A").closest(".card")).toHaveClass("admin");
});
```

Option B:

```typescript
it("shows admin badge for admin role", () => {
  render(UserCard, { props: { name: "Admin Smith", email: "admin@test.com", role: "admin" } });
  expect(screen.getByText("AS").closest(".card")).toHaveClass("admin");
});
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@skills/svelte-patterns/SKILL.md` around lines 461 - 464, The test expects
initials "AS" but the UserCard.svelte derived initials produce "A" for a
single-word name; update the test to either expect "A" when rendering UserCard
with name "Admin" or change the test input name to a two-word string (e.g.,
"Alice Smith") so the derived initials match "AS"; locate the assertion in the
test that calls render(UserCard, { props: { name: "Admin", ... } }) and adjust
the props.name or the expected getByText value accordingly.
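The initials derivation this finding walks through can be reproduced with a small pure function (a sketch; the skill's actual `$derived` expression may differ in details such as casing):

```typescript
// Sketch: first letter of each whitespace-separated word, matching the
// "Admin" -> "A", "Admin Smith" -> "AS" behavior described in the finding.
function initials(name: string): string {
  return name
    .split(" ")
    .filter(Boolean) // guard against repeated or leading/trailing spaces
    .map((word) => word[0].toUpperCase())
    .join("");
}
```

This confirms Option A's expectation for a single-word name and Option B's for a two-word name.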
…task-queue, java-enterprise skills and 3 reviewers Add 10 new features: - 7 skills: typescript-strict-patterns, jest-vitest-testing, vite-build-patterns, aws-cloud-patterns, grpc-patterns, task-queue-patterns, java-enterprise-patterns - 3 agents: express-reviewer, kubernetes-reviewer, typescript-reviewer Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
… serverless skills and 3 reviewers Add 7 new pattern skills covering Kotlin (coroutines, Flow, sealed classes), Ruby on Rails (ActiveRecord, Hotwire, RSpec), Remix (loaders, actions, progressive enhancement), OpenTelemetry (tracing, metrics, collector config), GCP (Cloud Run, Firestore, Pub/Sub, IAM), GitOps (ArgoCD, Flux, Kustomize), and Serverless (event-driven, SST, Step Functions). Add 3 new reviewer agents: vue-reviewer (reactivity, Composition API), dotnet-reviewer (async/await, EF Core, IDisposable), and aws-reviewer (IAM, S3 security, Lambda optimization, networking). Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…r, tRPC skills and rust rules Add 10 new features (8 skills + 1 agent + 1 rule set): - skills: react-patterns, ai-rag-patterns, azure-cloud-patterns, php-laravel-patterns, websocket-realtime-patterns, ddd-patterns, elixir-phoenix-patterns, trpc-patterns - agents: react-reviewer (React-specific code review) - rules: rust/ (coding-style, testing, patterns, hooks, security) Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
CRITICAL fixes: - websocket: fix for-of loop syntax error (}); -> }) - react: add response.ok check to fetch() in React Query examples - azure: add error logging to empty catch block in Service Bus - websocket: add JSON.parse try/catch for untrusted input (3 locations) - ai-rag: protect JSON.parse on LLM output, strip markdown fences HIGH fixes: - react: use instanceof Error check on unknown error types - azure: replace non-null assertions with runtime validation - websocket: differentiate JWT errors from unexpected errors in catch - websocket: add connection_lost event on max reconnect attempts - azure: add per-document error handling in change feed - ddd: add try/catch in outbox publisher to prevent cascade failure - trpc: replace deprecated inferAsyncReturnType with Awaited<ReturnType> - react: fix ErrorBoundary wording (ancestor, not sibling) - react: fix undefined items reference in useMemo - react-reviewer: update jest -> vitest in diagnostic commands Code quality (via code-simplifier): - websocket/elixir: add blank lines before --- separators - ddd: remove stray trailing code fence - react: expand useTheme to multi-line function - elixir: add missing @impl true annotation Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
… command Add 10 new features (18 files): - Java rules (5 files): coding-style, patterns, security, testing, hooks - C#/.NET rules (5 files): coding-style, patterns, security, testing, hooks - PHP/Laravel reviewer agent - Kotlin reviewer agent - Flutter reviewer agent - Event sourcing/CQRS skill (event stores, projections, snapshots) - Contract testing skill (Pact, async contracts, schema-based) - Data pipeline patterns skill (dbt, Airflow, data quality) - Kafka streaming patterns skill (producers, consumers, Streams, ksqlDB) - /release command (version bump, changelog, GitHub release) Inventory: 96 skills, 31 agents, 35 commands, 7 rule dirs Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Closing - too large to review as a single PR (27K+ lines). Please split into individual focused PRs for each feature.
Summary
Massive expansion of the everything-claude-code plugin ecosystem with 81 new features across batches 2-9.
Batch 9 (latest — 10 features, 18 files)
Previous Batches (2-8, 71 features)
Updated Inventory
Test plan
- `node tests/run-all.js` — 976/978 passed (2 pre-existing hook format failures)

🤖 Generated with Claude Code