Unaddressed review bot suggestions
PR #821 was merged with unaddressed review bot feedback. Each comment
below includes its file path, line number, a direct link to the inline
review comment, and a diff fence with the code context the bot was
flagging. Resolved and outdated threads are filtered out via GitHub's
GraphQL review-thread state. Read the relevant lines, decide whether
the suggestion is correct, and either apply the fix or close this issue
with a wontfix rationale.
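The filtering step above can be sketched in PHP. `isResolved` and `isOutdated` are real fields on GitHub's GraphQL `PullRequestReviewThread` type; the thread records below are illustrative stand-ins, not data from PR #821:

```php
<?php
// Illustrative review-thread nodes shaped like GitHub's GraphQL
// PullRequestReviewThread type (isResolved / isOutdated are real fields;
// the paths, lines, and states here are made up for the sketch).
$threads = [
    ['path' => 'tests/WP_Ultimo/Dashboard_Widgets_Test.php', 'line' => 131,
     'isResolved' => false, 'isOutdated' => false],
    ['path' => 'inc/functions/admin.php', 'line' => 42,
     'isResolved' => true,  'isOutdated' => false],
    ['path' => 'inc/class-scripts.php', 'line' => 9,
     'isResolved' => false, 'isOutdated' => true],
];

// Keep only threads that are neither resolved nor outdated — these are
// the comments that still need triage.
$unaddressed = array_values(array_filter(
    $threads,
    fn ($t) => ! $t['isResolved'] && ! $t['isOutdated']
));

foreach ($unaddressed as $t) {
    echo "{$t['path']}:{$t['line']}\n";
}
```

With the sample data, only `tests/WP_Ultimo/Dashboard_Widgets_Test.php:131` survives the filter — matching the single inline comment listed in this issue.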
Source PR: #821
You are the triager (worker-is-triager rule)
This issue is auto-created from review bot output and dispatched
directly to you. Review bots can be wrong: hallucinated line refs, false
premises about codebase structure, template-driven sweeps without
measurements (see GH#17832-17835 for prior art and prompts/build.txt
section 6a). Do not assume the bot is correct. Verify before acting.
You must end in exactly one of three outcomes — no fourth "hand it back
to the human" path exists. Humans approve decisions; they do not re-do
analysis.
Outcome A — Premise falsified → close the issue
- Read the cited file:line (listed under Files to modify below).
- If the bot's claim is factually wrong (file doesn't exist at that
line, function doesn't behave as described, "auto-generated" section
isn't actually auto-generated, etc.), close the issue with a
comment in this shape:
Premise falsified. <what the bot claimed>. <what the code
actually shows, with a file:line citation or one-line quote>.
Not acting.
No PR. No further dispatch. The closing comment trains the next
session reading this thread and the noise filter.
Outcome B — Premise correct + fix is obvious → implement and PR
- Verify the bot's premise as above.
- Read the Worker Guidance section below, open a worktree, implement.
- Open a PR with `Resolves #<this-issue-number>` in the body (use THIS
  issue's number, not the source PR's) so merge auto-closes it.
- Follow the normal Lifecycle Gate (brief, tests, review-bot-gate,
merge, postflight).
Outcome C — Premise correct but approach is a genuine judgment call
Only use this path if you reach it after Outcomes A and B don't apply:
the bot's finding is real, but the fix requires a decision that is
architectural, policy, breaking-change, or otherwise genuinely outside
what you can resolve autonomously. In that case, post a decision
comment with exactly these fields:
- Premise check: one line, confirming the finding is real.
- Analysis: 2-4 bullets on the trade-offs.
- Recommended path: the option you would take if the decision were
yours, with rationale.
- Specific question: the single decision the human needs to make
(yes/no or pick-one, not open-ended).
Then apply needs-maintainer-review and stop. The human wakes up to a
ready-to-approve recommendation, not a blank task.
Ambiguity about scope or style is not Outcome C. Per
prompts/build.txt "Reasoning responsibility", the model does the
thinking and delivers a recommendation. Only escalate what is genuinely
a maintainer-only decision.
Worker Guidance
Files to modify:
tests/WP_Ultimo/Dashboard_Widgets_Test.php:131
Implementation steps (Outcome B path):
- Read the diff block under each inline comment below — it shows the
  exact code the bot was flagging. Open the file only if you need
  surrounding context beyond what the diff tail shows.
- Read the bot's full comment below the diff — it contains the rationale
and any suggested change.
- Verify the premise before implementing (see Outcome A). If the premise
is wrong, switch to Outcome A instead of burning iterations trying to
satisfy a wrong suggestion.
- If multiple comments target the same file, group your edits into one
logical commit.
- Run shellcheck, markdownlint-cli2, or project tests as appropriate.
Verification:
- Open the new PR with `Resolves #<this-issue>` in the body so this
  followup is auto-closed on merge.
- If the bot's suggestion was incorrect, close this issue with an Outcome A comment — do not open a no-op PR.
Inline comments
coderabbitai on tests/WP_Ultimo/Dashboard_Widgets_Test.php:131
View inline comment
```diff
- $script = $wp_scripts->registered['wu-activity-stream'] ?? null;
- $this->assertNotNull($script, 'wu-activity-stream should be registered');
- $this->assertContains('wu-functions', $script->deps);
- $this->assertContains('moment', $script->deps);
-
- if (isset($wp_scripts)) {
-     $wp_scripts->queue = $original_queue;
+ $original = $pagenow;
+ $original_queue = isset($wp_scripts) ? $wp_scripts->queue : [];
+ $original_done = isset($wp_scripts) ? $wp_scripts->done : [];
+ $original_screen = function_exists('get_current_screen') ? get_current_screen() : null;
+ $original_screen_id = $original_screen ? $original_screen->id : null;
```
⚠️ Potential issue | 🟠 Major
🧩 Analysis chain
🌐 Web query:
In WordPress test environments, after calling set_current_screen('dashboard') or set_current_screen('dashboard-network'), what is the recommended way to restore a previously-null screen state: unset($GLOBALS['current_screen']) or set_current_screen('')?
💡 Result:
In WordPress test environments, the recommended way to restore a previously-null screen state after calling set_current_screen('dashboard') or set_current_screen('dashboard-network') is unset($GLOBALS['current_screen']) (along with related globals $taxnow and $typenow).
🏁 Script executed:

```shell
cat -n tests/WP_Ultimo/Dashboard_Widgets_Test.php | sed -n '125,220p'
```

Repository: Ultimate-Multisite/ultimate-multisite
Unset $GLOBALS['current_screen'] when restoring to null state.
Both finally blocks skip restoration when $original_screen_id is null/falsy, allowing forced screen states to leak into subsequent tests. The finally blocks must explicitly unset the screen global when the original screen did not exist.
Proposed fix:

```diff
 } finally {
     if (isset($wp_scripts)) {
         $wp_scripts->queue = $original_queue;
         $wp_scripts->done = $original_done;
     }
-    if ($original_screen_id) {
-        set_current_screen($original_screen_id);
-    }
+    if (null !== $original_screen_id) {
+        set_current_screen($original_screen_id);
+    } else {
+        unset($GLOBALS['current_screen']);
+    }
     $pagenow = $original;
 }
```
Applies to both test methods (lines 165–167 and 212–214).
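A minimal, WordPress-free sketch of why the truthiness check leaks. The screen ids and direct global writes here are stand-ins; in the real tests, `set_current_screen()` and `unset($GLOBALS['current_screen'])` play these roles:

```php
<?php
// Case under discussion: the test started with no screen set
// ($original_screen_id = null), then forced one for the assertion.
$GLOBALS['current_screen'] = 'dashboard-network';
$original_screen_id        = null;

// Buggy restore: a plain truthiness check skips the null case entirely,
// so 'dashboard-network' leaks into subsequent tests.
if ($original_screen_id) {
    $GLOBALS['current_screen'] = $original_screen_id;
}
$leaked = isset($GLOBALS['current_screen']); // still set — state leaked

// Correct restore: strict null comparison, with an explicit unset for
// the previously-null case, as the proposed fix does.
if (null !== $original_screen_id) {
    $GLOBALS['current_screen'] = $original_screen_id;
} else {
    unset($GLOBALS['current_screen']);
}
$restored = ! isset($GLOBALS['current_screen']); // back to the null state

var_dump($leaked, $restored);
```

The same strict comparison also protects against legitimate-but-falsy ids, which a bare `if ($original_screen_id)` would silently drop.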
PR review summaries
(none)
aidevops.sh v3.8.22 with claude-sonnet-4-6 spent 1h 38m and 8 tokens on this as a headless worker.