feat: Move features from issues list to work items list and remove issue/PR specific pages in favor of work items, add global cache for repo details and work items #254
Conversation
No actionable comments were generated in the recent review. 🎉
ℹ️ Recent review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID:
📒 Files selected for processing (2)
🚧 Files skipped from review as they are similar to previous changes (2)
📝 Walkthrough
This PR removes the global issues surface and related issue-specific components/stores, replaces repo-level issue/PR lists with unified work-items, and adds a server-side shared repo cache/sync service, background tasks, and shared GitHub token/rate-limit support. (49 words)
Changes
Sequence Diagram(s)
sequenceDiagram
autonumber
participant Client
participant API as Server API
participant Sync as RepoSyncService
participant Cache as RepoCache
participant GitHub
Client->>API: GET /api/work-items?repo=owner/repo&...
API->>Cache: readWorkItems(owner,repo)
alt cache hit and not stale
Cache-->>API: cached work-items + meta
API-->>Client: paginated work-items + sync info
par background refresh if stale
API->>Sync: trigger background sync(owner,repo)
Sync->>GitHub: fetch /repos/... & pulls/issues (ETag/conditional)
GitHub-->>Sync: responses (+rate-limit headers)
Sync->>Cache: writeRepoDetail / writeWorkItems
end
else cache miss or cold
API->>Sync: getRepoWorkItemsForRequest(owner,repo)
Sync->>GitHub: fetch issues/pulls, enrich pulls
GitHub-->>Sync: data + headers
Sync->>Cache: writeWorkItems
Sync-->>API: work-items + sync snapshot
API-->>Client: work-items + sync info
end
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
Possibly related PRs
🚥 Pre-merge checks | ✅ 3 | ❌ 2
❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (3 passed)
Actionable comments posted: 10
🧹 Nitpick comments (6)
server/api/issues/state.patch.ts (1)
33-36: Consider wrapping cache invalidation in try-catch to avoid masking mutation success.
If `invalidateIssueDetailCache` or `invalidateWorkItemDetailCache` throws (e.g., network error to storage), the successful GitHub state change won't be returned to the client, even though the mutation completed. Wrapping these in a try-catch with logging would ensure the response reflects the actual mutation outcome.
♻️ Proposed defensive wrapper
+ try {
    await invalidateIssueDetailCache(login, repo, number)
    if (workItemId) {
      await invalidateWorkItemDetailCache(login, owner, repoName, workItemId)
    }
+ }
+ catch (err) {
+   console.warn('[state.patch] Cache invalidation failed, continuing:', err)
+ }

  return { ok: true, state: res.data.state, stateReason: res.data.state_reason }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@server/api/issues/state.patch.ts` around lines 33 - 36, Wrap the calls to invalidateIssueDetailCache(login, repo, number) and invalidateWorkItemDetailCache(login, owner, repoName, workItemId) in individual try-catch blocks so any thrown errors are logged but not rethrown; use the project logger (or console.error) to record the exception and continue, ensuring the mutation returns success even if cache invalidation fails—keep the awaits inside the try blocks and do not change the returned mutation result.
server/tasks/shared-repo/warm-cache.ts (1)
8-19: Consider adding error handling to prevent task crashes from impacting subsequent runs.
If `runSharedRepoMaintenanceCycle()` throws an unhandled error, the task will fail. Wrapping in try-catch ensures errors are logged without crashing the scheduled task runner, and allows partial progress reporting.
♻️ Proposed error handling
  async run() {
    console.log('[task:shared-repo:warm-cache] Starting maintenance cycle')
    const startedAt = Date.now()
+   try {
      const result = await runSharedRepoMaintenanceCycle()
      const elapsedMs = Date.now() - startedAt
      console.log('[task:shared-repo:warm-cache] Completed', {
        elapsedMs,
        inspected: result.inspected,
        warmed: result.warmed,
        skipped: result.skipped,
      })
      return { result }
+   }
+   catch (err) {
+     const elapsedMs = Date.now() - startedAt
+     console.error('[task:shared-repo:warm-cache] Failed', { elapsedMs, error: err })
+     return { result: { inspected: 0, warmed: [], skipped: [], error: String(err) } }
+   }
  },
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@server/tasks/shared-repo/warm-cache.ts` around lines 8 - 19, The run() method in warm-cache.ts should guard against runSharedRepoMaintenanceCycle() throwing by wrapping the call and subsequent logging in a try-catch; inside the try keep the existing timing and success logging, and in catch log the error (console.error or existing logger) with context including elapsedMs and any partial result if runSharedRepoMaintenanceCycle() can return partial progress, then return a non-throwing result object (e.g., { error: true, message: err.message, inspected, warmed, skipped, elapsedMs }) so the scheduled task runner does not crash and subsequent runs continue; update references to runSharedRepoMaintenanceCycle and the run() function to implement this behavior.
server/utils/repo-cache.ts (1)
266-279: Sequential metadata reads could be parallelized.
The `for` loop at lines 273-276 reads each repo's metadata sequentially. For repositories with many cached entries, this could be slow. Consider batching with `Promise.all` if the storage backend supports it:
const result = await Promise.all(
  entries.map(async (entry) => {
    const meta = await readMeta(entry.owner, entry.repo)
    return { ...entry, meta }
  }),
)
This is optional since the maintenance cycle already limits processing.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@server/utils/repo-cache.ts` around lines 266 - 279, The listKnownRepoMetaEntries function currently reads each repo's metadata sequentially in the for loop; change it to run reads in parallel by replacing the loop with a Promise.all over entries.map so each readMeta(entry.owner, entry.repo) is awaited concurrently (keep parseOwnerRepoFromMetaKey and storage().getKeys logic intact and return the array of RepoMetaEntry objects produced by the Promise.all).
server/utils/repo-sync-service.ts (3)
805-811: Extra metadata read inside log statement.
Line 810 performs `await readMeta(owner, repo)` solely for logging the sync status. This duplicates I/O since `readMeta` was already called earlier (line 773) for stale cache checks, but the cold-cache path doesn't have that reference in scope. Consider capturing the status from the `queueWorkItemsSyncIfNeeded` return value or omitting it from the log.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@server/utils/repo-sync-service.ts` around lines 805 - 811, The log is doing an extra await(readMeta(owner, repo)) just to include workItemsSyncStatus, causing duplicate I/O; change the cold-cache path so it captures and reuses the status returned by queueWorkItemsSyncIfNeeded (or have queueWorkItemsSyncIfNeeded return the status) and use that value in the console.log instead of calling readMeta again, or simply omit workItemsSyncStatus from the log if the status isn't available—update references around console.log('[repo-sync] cold work-items fetch complete'...) and the queueWorkItemsSyncIfNeeded/readMeta interaction to avoid the redundant read.
844-852: Activity classification logic may benefit from clarification.
Line 850 classifies repos as 'warm' if `ageSinceRequest <= COLD_REPO_WINDOW_MS` (24h), regardless of request count. This differs from line 849's `warm` condition requiring `requestCount >= 2`. Is the intent to keep any recently-accessed repo from going completely cold? If so, consider adding a code comment to clarify this graduated fallback behavior.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@server/utils/repo-sync-service.ts` around lines 844 - 852, The classification in classifyRepoActivity currently treats any repo with ageSinceRequest <= COLD_REPO_WINDOW_MS as 'warm' regardless of requestCount; update the function by adding a concise comment above the conditional block (or above the specific line using ageSinceRequest <= COLD_REPO_WINDOW_MS) that explains the intended graduated fallback behavior: recently-accessed repos within the COLD_REPO_WINDOW_MS should remain 'warm' even if requestCount < 2 to avoid immediately marking them 'cold'; reference the variables HOT_REPO_WINDOW_MS, WARM_REPO_WINDOW_MS, COLD_REPO_WINDOW_MS, ageSinceRequest, and requestCount in the comment so future readers understand the rationale.
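The graduated fallback described in this finding can be sketched as follows. This is a hypothetical reconstruction for illustration, not the code from repo-sync-service.ts: the hot branch, the requestCount thresholds, and the HOT/WARM window values are assumptions; only the 24h cold window and the two warm conditions come from the review comment.

```typescript
// Hypothetical reconstruction of classifyRepoActivity for illustration only.
// Window values other than the 24h cold window are assumptions.
const HOT_REPO_WINDOW_MS = 60 * 60 * 1000 // assumed: 1h
const WARM_REPO_WINDOW_MS = 6 * 60 * 60 * 1000 // assumed: 6h
const COLD_REPO_WINDOW_MS = 24 * 60 * 60 * 1000 // 24h, per the review comment

type RepoActivity = 'hot' | 'warm' | 'cold'

function classifyRepoActivity(ageSinceRequest: number, requestCount: number): RepoActivity {
  if (ageSinceRequest <= HOT_REPO_WINDOW_MS && requestCount >= 3) return 'hot'
  if (ageSinceRequest <= WARM_REPO_WINDOW_MS && requestCount >= 2) return 'warm'
  // Graduated fallback: any repo requested within the last 24h stays 'warm'
  // even when requestCount < 2, so a single recent request does not
  // immediately demote the repo to 'cold'.
  if (ageSinceRequest <= COLD_REPO_WINDOW_MS) return 'warm'
  return 'cold'
}
```

The comment above the fallback branch is exactly the kind of clarification the finding asks for.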
236-254: Fast work-items fetch also skips PR detail enrichment.
Same as `fetchAllWorkItemsFromGitHub` — the empty `pullDetails` map means no issue-PR linking. This may be acceptable for the fast preview use case (cold cache response), but note that the initial data shown to users will lack linked issue/PR relationships until the background sync completes.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@server/utils/repo-sync-service.ts` around lines 236 - 254, fetchTopWorkItemsFromGitHub currently passes an empty pullDetails map into buildWorkItemsFromRaw, so PR/issue linking is omitted for the fast path; update this function to populate pullDetails before calling buildWorkItemsFromRaw (for example, call the same PR enrichment helper used by fetchAllWorkItemsFromGitHub or implement a lightweight fetchPullDetailsForIssues(token, owner, repo, issues, limit) that fetches PR metadata for the returned issues and fills the Map<number, GitHubPullRequest & { body?: string }>) and then pass that populated map into buildWorkItemsFromRaw so the fast-preview includes linked PR relationships while keeping performance constraints (e.g., limit or parallelism) as appropriate.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@package.json`:
- Around line 81-82: Remove the unused devDependency "vue-tsc" from
package.json: delete the "vue-tsc" entry under devDependencies since type
checking is already performed via the existing "test:types": "nuxi typecheck"
script; ensure no scripts, CI steps, or tooling reference the "vue-tsc" symbol
(e.g., any npm/yarn scripts or workflow steps) before committing so the
dependency removal won’t break the pipeline.
In `@server/api/github/shared-rate-limit.get.ts`:
- Around line 10-35: Replace the single check of info.limit === 0 with a check
that detects missing or expired per-bucket entries (e.g., check fields for the
REST and GraphQL buckets on the object returned by getSharedRateLimit(), or that
their reset timestamps are in the past) and when either bucket is absent or
expired call githubFetchWithToken(sharedToken, '/rate_limit') and call
updateRateLimitFromHeaders for both the core (rest) and graphql buckets to
reseed the shared cache, then return getSharedRateLimit(); keep the same header
keys and the existing last-parameter true to indicate seeding.
In `@server/utils/repo-sync-service.ts`:
- Around line 639-658: The meta flag workItemsSyncStatus is set to 'running'
inside queueWorkItemsSyncIfNeeded before the background task actually acquires
the lock, which can leave the repo stuck; change this by deferring the status
flip into the background worker (syncWorkItems) immediately after successful
lock acquisition (or, alternatively, add a TTL-based/reset path in
queueWorkItemsSyncIfNeeded that schedules a meta reset if the worker never
starts). Specifically, remove the preemptive meta.workItemsSyncStatus =
'running' and meta write from queueWorkItemsSyncIfNeeded (which calls
triggerBackgroundWorkItemsSync), and instead set meta.workItemsSyncStatus =
'running' and persist via writeMeta inside syncWorkItems right after obtaining
the lock (and ensure syncWorkItems clears status and sets workItemsLastError on
completion/failure); if you prefer a shorter change, add a timeout/reset
mechanism in queueWorkItemsSyncIfNeeded that clears the 'running' status after a
reasonable TTL unless a heartbeat/confirmation from syncWorkItems is received.
- Around line 206-234: fetchAllWorkItemsFromGitHub builds an empty pullDetails
Map and passes it to buildWorkItemsFromRaw, so PR body text isn’t available for
collectIssueLinksFromText and issue-PR linking is missed during full sync; fix
by calling the same PR detail enrichment used in buildIncrementalWorkItems
(invoke fetchPullDetailsForIssues or equivalent) inside
fetchAllWorkItemsFromGitHub before buildWorkItemsFromRaw so pullDetails is
populated with PR bodies (optionally gating this behind a flag or limit to avoid
excessive API calls).
In `@server/utils/work-item-builder.ts`:
- Line 149: The pullInsights Map is created but never populated, so calls like
pullInsights.get(primaryLinkedPull.number) yield undefined and
reviewDecision/ciStatus remain null; to fix, call the existing
fetchPullInsights(...) function and assign its returned Map to pullInsights
before it is used (ensure you pass the required token through the caller chain),
or if PR insights are not needed remove fetchPullInsights and all references to
pullInsights and the reviewDecision/ciStatus fallbacks; locate the Map
declaration pullInsights, the fetchPullInsights function, and usages (e.g.,
primaryLinkedPull.number, reviewDecision, ciStatus in WorkItem) and either wire
the fetch call to populate the Map or delete the dead code.
- Around line 41-109: buildWorkItemsFromRaw currently leaves pullInsights empty
so reviewDecision and ciStatus are always null; call the existing
fetchPullInsights(token, owner, repo, pullNumbers) from inside
buildWorkItemsFromRaw before you read pullInsights (collect the pull request
numbers from raw items, await fetchPullInsights, and assign the returned Map to
pullInsights), handle errors the same way as other fetches (fall back to nulls
for missing PRs), and ensure you pass the correct GitHub token, owner and repo
variables and preserve the Map shape so subsequent calls to
pullInsights.get(pullNumber) return the fetched data.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: ee162e15-5629-4853-805b-3fa6c93e29f6
⛔ Files ignored due to path filters (1)
pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (51)
- app/components/issue/IssueRepoSelect.vue
- app/components/issue/IssueRow.vue
- app/components/issue/IssueRowSkeleton.vue
- app/components/issue/IssueToolbar.vue
- app/components/repo/RepoCard.vue
- app/components/repo/RepoIssueList.vue
- app/components/repo/RepoIssueRow.vue
- app/components/repo/RepoPrList.vue
- app/components/repo/RepoPrRow.vue
- app/components/repo/RepoWorkItemList.vue
- app/components/ui/RateLimitIndicator.vue
- app/components/ui/SideBar.vue
- app/composables/useWorkItemBadges.ts
- app/pages/issues/index.vue
- app/pages/repos/[owner]/[repo]/index.vue
- app/pages/repos/[owner]/[repo]/issues/index.vue
- app/pages/repos/[owner]/[repo]/work-items/index.vue
- app/pages/repos/[owner]/[repo]/work-items/new.vue
- app/pages/user/[id].vue
- app/stores/issues.ts
- i18n/locales/de.json
- i18n/locales/en.json
- i18n/schema.json
- nuxt.config.ts
- package.json
- server/api/github/shared-rate-limit.get.ts
- server/api/issues/assignees.post.ts
- server/api/issues/index.get.ts
- server/api/issues/labels.post.ts
- server/api/issues/search.get.ts
- server/api/issues/state.patch.ts
- server/api/pull-requests/review-comments.delete.ts
- server/api/pull-requests/review-comments.patch.ts
- server/api/pull-requests/review-comments.post.ts
- server/api/repository/[owner]/[repo]/index.get.ts
- server/api/repository/[owner]/[repo]/pulls.get.ts
- server/api/repository/[owner]/[repo]/pulls/[number]/reviews.post.ts
- server/api/repository/[owner]/[repo]/pulls/[number]/reviews/[reviewId]/dismiss.post.ts
- server/api/repository/[owner]/[repo]/work-items.get.ts
- server/api/repository/[owner]/[repo]/work-items/[id]/invalidate.post.ts
- server/api/work-items/index.get.ts
- server/tasks/shared-repo/warm-cache.ts
- server/utils/github-app.ts
- server/utils/github-graphql.ts
- server/utils/github.ts
- server/utils/repo-cache.ts
- server/utils/repo-sync-service.ts
- server/utils/work-item-builder.ts
- server/utils/work-items-query.ts
- shared/types/work-item.ts
- test/nuxt/issueStore.test.ts
💤 Files with no reviewable changes (15)
- app/components/issue/IssueRowSkeleton.vue
- app/components/repo/RepoIssueRow.vue
- server/api/issues/search.get.ts
- app/components/repo/RepoPrRow.vue
- app/components/repo/RepoPrList.vue
- server/api/issues/index.get.ts
- app/pages/repos/[owner]/[repo]/issues/index.vue
- app/components/issue/IssueRow.vue
- server/api/repository/[owner]/[repo]/pulls.get.ts
- app/pages/issues/index.vue
- test/nuxt/issueStore.test.ts
- app/stores/issues.ts
- app/components/repo/RepoIssueList.vue
- app/components/issue/IssueRepoSelect.vue
- app/components/issue/IssueToolbar.vue
const STATE_COLOR: Record<string, string> = {
  OPEN: 'success',
  CLOSED: 'neutral',
  MERGED: 'primary',
  DRAFT: 'neutral',
}

export function useWorkItemBadges() {
  const { t } = useI18n()

  function stateBadgeColor(itemState: string) {
    return STATE_COLOR[itemState] ?? 'neutral'
  }
Draft PRs never hit the neutral badge color.
stateBadgeColor() only looks at the raw state, so the DRAFT entry above is unreachable for the same items that stateBadgeLabel() renders as draft. That leaves a "Draft" label with the open/success tone.
♻️ Suggested fix
- function stateBadgeColor(itemState: string) {
- return STATE_COLOR[itemState] ?? 'neutral'
+ function stateBadgeColor(item: WorkItem) {
+ if (item.type === 'pull' && item.isDraft) return STATE_COLOR.DRAFT
+ return STATE_COLOR[item.state] ?? 'neutral'
  }
Update the call sites to pass the whole item.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/composables/useWorkItemBadges.ts` around lines 3 - 15, stateBadgeColor
currently accepts only a raw itemState and thus never hits the DRAFT mapping for
items whose state is derived differently (so Draft items get the OPEN color);
update useWorkItemBadges by changing stateBadgeColor to accept the full work
item (or compute the canonical state the same way stateBadgeLabel does) and
derive the state internally (using the same logic as stateBadgeLabel) before
looking up STATE_COLOR; then update all call sites to pass the whole item into
stateBadgeColor rather than a raw state string so the DRAFT entry in STATE_COLOR
can be reached.
() => ({
  repo: repoFullName.value,
  state: state.value,
  search: search.value.trim(),
  sort: sortKey.value,
  filters: allFilters.value.join('|'),
}),
Don't serialize raw filter values into a delimited string.
filters is built from raw label names, but the server parser splits on | and ,. Labels containing either character will be broken into multiple filters and stop matching. Repeated query params or JSON-encoding the array would avoid the lossy join here.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/pages/repos/`[owner]/[repo]/work-items/index.vue around lines 50 - 56,
The current serialization of filters via allFilters.value.join('|') loses and
corrupts labels that contain '|' or ','; replace the lossy join with a safe
array encoding when building the route/query (e.g., emit filters as a
JSON-encoded string using JSON.stringify(allFilters.value) or as repeated query
params rather than a single delimited string) so the server parser receives the
original array intact; update any consumer that reads the
repo/state/search/sort/filters payload accordingly (look for the filters key and
the allFilters.value reference).
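The JSON-encoding route this comment suggests can be sketched like this; `encodeFilters` and `decodeFilters` are hypothetical helper names, not identifiers from the PR:

```typescript
// Hypothetical helpers: JSON-encode the filter array so label names that
// contain '|' or ',' survive the round trip through the query string.
function encodeFilters(filters: string[]): string {
  return JSON.stringify(filters)
}

function decodeFilters(raw: string): string[] {
  try {
    const parsed: unknown = JSON.parse(raw)
    return Array.isArray(parsed) ? parsed.map(String) : []
  }
  catch {
    return [] // malformed query input falls back to "no filters"
  }
}

// A label containing the old delimiters is no longer split apart:
const roundTripped = decodeFilters(encodeFilters(['status: blocked|waiting', 'a,b']))
```

Repeated query params (`?filters=a&filters=b`) achieve the same lossless transport without JSON, at the cost of slightly different parsing on the server.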
const knownAuthors = computed(() => {
  const authors = new Set(workItems.value.map(i => i.author.login))
  return [...authors].sort()
})
Author/assignee filter options are page-scoped right now.
knownAuthors is built from section.data, which is only the current page, and the assignee menu reuses that same list. On paginated repos this hides valid author/assignee values, and the assignee dropdown misses users who are only assignees on matching items. Returning availableAuthors / availableAssignees alongside availableLabels would keep these filters complete.
Also applies to: 340-359
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/pages/repos/`[owner]/[repo]/work-items/index.vue around lines 188 - 190,
knownAuthors is being built only from the current page (section.data /
workItems.value) so author/assignee filter lists are incomplete for paginated
repos; update the data provider to compute and return availableAuthors and
availableAssignees (in addition to availableLabels) from the full result set
(not just section.data) and then use those new lists in the author and assignee
dropdowns instead of knownAuthors/workItems.value so the filter menus include
users who appear only on other pages; update references to knownAuthors and any
reuse at lines ~340-359 to consume availableAuthors/availableAssignees.
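A minimal sketch of the provider-side computation this prompt asks for, assuming the reduced work-item shape below; `WorkItemLite` and `collectPeople` are hypothetical names, not identifiers from the PR:

```typescript
// Reduced, hypothetical work-item shape; only the fields used here.
interface WorkItemLite {
  author: { login: string }
  assignees: { login: string }[]
}

// Compute dropdown options from the FULL result set (before pagination),
// so the menus also list users who only appear on other pages.
function collectPeople(allItems: WorkItemLite[]) {
  const authors = new Set<string>()
  const assignees = new Set<string>()
  for (const item of allItems) {
    authors.add(item.author.login)
    for (const a of item.assignees) assignees.add(a.login)
  }
  return {
    availableAuthors: [...authors].sort(),
    availableAssignees: [...assignees].sort(),
  }
}
```

Returning these lists alongside `availableLabels` keeps all three filter menus complete regardless of which page is currently loaded.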
watchEffect(async () => {
  if (!repo.value) {
    step.value = 'form'
    return
  }

  const currentRepo = repoFullName.value
  try {
    const data = await apiFetch<{ repositoryId: string, templates: IssueTemplate[] }>(
      '/api/issues/templates',
-     { params: { repo: repo.value } },
+     { params: { repo: repoFullName.value } },
    )
    if (repoFullName.value !== currentRepo) return
    repositoryId.value = data.repositoryId
    templates.value = data.templates
    hasTemplates.value = data.templates.length > 0
    step.value = data.templates.length ? 'templates' : 'form'
  }
  catch {
    if (repoFullName.value !== currentRepo) return
    step.value = 'form'
  }
🧩 Analysis chain
🌐 Web query:
In Nuxt 3 / Vue Router 4, when navigating between URLs handled by the same page component but with different dynamic params, is the existing component instance reused by default, and should page-local state be reset manually on param changes?
💡 Result:
Yes—by default the same component instance is reused when only dynamic params change (e.g. /users/johnny → /users/jolyne), so page-local state is not automatically reset; you should react to param changes (e.g. refetch/reset in onBeforeRouteUpdate or a watch on route.params). [1]
In Nuxt 3, pages are rendered via <NuxtPage> which wraps Vue Router’s <RouterView>, so this same reuse behavior applies unless you opt into forcing a re-render/remount by changing the page key (via <NuxtPage page-key="…"> or definePageMeta({ key: … })). [2]
🏁 Scripts executed against repository flumen-dev/flumen.dev:
# First, find the file and check its size
find . -path "*/app/pages/repos/*/[repo]/work-items/new.vue" -o -path "*/app/pages/repos/**/new.vue" | head -20
# Broader search for the new.vue file in work-items
fd -t f "new.vue" | grep -i "work-items"
# Search for the file more broadly
git ls-files | grep "work-items.*new.vue"
# Check repository structure to understand the path
git ls-files | grep "pages.*repos" | head -20
wc -l app/pages/repos/[owner]/[repo]/work-items/new.vue
cat -n app/pages/repos/[owner]/[repo]/work-items/new.vue
# Verify watchEffect dependencies and reactivity
rg -A 5 "watchEffect\(async" app/pages/repos/[owner]/[repo]/work-items/new.vue
# Check if there are any other param watchers or reset mechanisms
rg -B 2 -A 8 "onBeforeRouteUpdate|watch.*route\.params|watch.*repoFullName" app/pages/repos/[owner]/[repo]/work-items/new.vue
Reset repo-scoped creation state before the next fetch resolves.
If this page instance is reused across [owner]/[repo] changes, the previous repo's repositoryId and template state stay live until apiFetch() finishes. A fast submit—or a failed fetch—can then post repo B's URL with repo A's repositoryId, which is an unsafe cross-repo write.
🛡️ Suggested fix
watchEffect(async () => {
const currentRepo = repoFullName.value
+ repositoryId.value = ''
+ templates.value = []
+ hasTemplates.value = false
+ selectedFormTemplate.value = null
+ step.value = 'loading'
try {
const data = await apiFetch<{ repositoryId: string, templates: IssueTemplate[] }>(
'/api/issues/templates',
{ params: { repo: repoFullName.value } },
)
if (repoFullName.value !== currentRepo) return
repositoryId.value = data.repositoryId
templates.value = data.templates
hasTemplates.value = data.templates.length > 0
step.value = data.templates.length ? 'templates' : 'form'
}
catch {
if (repoFullName.value !== currentRepo) return
+ repositoryId.value = ''
+ templates.value = []
+ hasTemplates.value = false
step.value = 'form'
}
})

Also applies to: 99-105
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/pages/repos/[owner]/[repo]/work-items/new.vue` around lines 47 - 63, The
watchEffect using repoFullName and apiFetch must reset repo-scoped state before
awaiting the fetch so stale values from a previous repo cannot be used; inside
watchEffect (around the existing currentRepo/local capture before calling
apiFetch) clear repositoryId.value, templates.value, and hasTemplates.value and
set step.value to an intermediate/loading state, then perform the apiFetch and
only assign repositoryId.value/templates.value/hasTemplates.value/step.value
when repoFullName.value === currentRepo; ensure the catch also checks the same
currentRepo before touching step.value so you never leave stale repo-specific
identifiers live while a new fetch is pending.
    "vue-i18n-extract": "2.0.7",
    "vue-tsc": "3.2.6"
Remove unused vue-tsc devDependency.
The pipeline reports that vue-tsc is unused. The project already has type checking via "test:types": "nuxi typecheck" (line 17), which uses Nuxt's built-in type checker. Adding vue-tsc without invoking it anywhere creates maintenance overhead and causes the CI pipeline to fail.
🧹 Proposed fix
-    "vue-i18n-extract": "2.0.7",
-    "vue-tsc": "3.2.6"
+    "vue-i18n-extract": "2.0.7"

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

    "vue-i18n-extract": "2.0.7"
🧰 Tools
🪛 GitHub Actions: ci
[error] 82-82: knip found unused devDependency: "vue-tsc" (Unused devDependencies: 1).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@package.json` around lines 81 - 82, Remove the unused devDependency "vue-tsc"
from package.json: delete the "vue-tsc" entry under devDependencies since type
checking is already performed via the existing "test:types": "nuxi typecheck"
script; ensure no scripts, CI steps, or tooling reference the "vue-tsc" symbol
(e.g., any npm/yarn scripts or workflow steps) before committing so the
dependency removal won’t break the pipeline.
const info = getSharedRateLimit()

// Seed cache if nothing tracked yet.
if (info.limit === 0) {
  const { data } = await githubFetchWithToken<{
    resources: {
      core: { limit: number, remaining: number, reset: number }
      graphql: { limit: number, remaining: number, reset: number }
    }
  }>(sharedToken, '/rate_limit')

  const core = data.resources.core
  const graphql = data.resources.graphql
  updateRateLimitFromHeaders(new Headers({
    'x-ratelimit-limit': String(core.limit),
    'x-ratelimit-remaining': String(core.remaining),
    'x-ratelimit-reset': String(core.reset),
  }), 'rest', undefined, true)
  updateRateLimitFromHeaders(new Headers({
    'x-ratelimit-limit': String(graphql.limit),
    'x-ratelimit-remaining': String(graphql.remaining),
    'x-ratelimit-reset': String(graphql.reset),
  }), 'graphql', undefined, true)
}

return getSharedRateLimit()
Refresh the shared cache when a bucket is missing or expired.
getSharedRateLimit() only gives you the aggregate sum. Once the process has seen just REST or just GraphQL traffic, info.limit is already non-zero and this branch stops calling /rate_limit, so the shared PAT indicator can underreport the missing bucket indefinitely. The same short-circuit also leaves stale values in place after the cached reset has passed.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@server/api/github/shared-rate-limit.get.ts` around lines 10 - 35, Replace the
single check of info.limit === 0 with a check that detects missing or expired
per-bucket entries (e.g., check fields for the REST and GraphQL buckets on the
object returned by getSharedRateLimit(), or that their reset timestamps are in
the past) and when either bucket is absent or expired call
githubFetchWithToken(sharedToken, '/rate_limit') and call
updateRateLimitFromHeaders for both the core (rest) and graphql buckets to
reseed the shared cache, then return getSharedRateLimit(); keep the same header
keys and the existing last-parameter true to indicate seeding.
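The prompt above asks for a per-bucket freshness check instead of the single `info.limit === 0` short-circuit. A hedged sketch of such a predicate (the `RateLimitBucket` shape and `shouldReseed` name are illustrative; the real return shape of `getSharedRateLimit()` may differ):

```typescript
// Illustrative per-bucket rate-limit snapshot, mirroring GitHub's
// x-ratelimit-* header fields (reset is in unix seconds).
interface RateLimitBucket {
  limit: number
  remaining: number
  reset: number
}

// Reseed from /rate_limit when either bucket was never tracked
// (limit still 0 / entry missing) or its cached window has expired.
function shouldReseed(
  buckets: { rest?: RateLimitBucket, graphql?: RateLimitBucket },
  nowMs: number = Date.now(),
): boolean {
  const nowSec = Math.floor(nowMs / 1000)
  for (const bucket of [buckets.rest, buckets.graphql]) {
    if (!bucket || bucket.limit === 0) return true // bucket never seeded
    if (bucket.reset <= nowSec) return true // cached reset has passed
  }
  return false
}
```

Gating the `/rate_limit` call on `shouldReseed(...)` rather than the aggregate sum keeps both buckets fresh independently.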
async function fetchAllWorkItemsFromGitHub(
  token: string,
  owner: string,
  repo: string,
): Promise<WorkItem[]> {
  const startedAt = Date.now()
  console.log('[repo-sync] full work-items build started', { owner, repo })

  const issues = await fetchCanonicalIssues(token, owner, repo)
  console.log('[repo-sync] full work-items build issues fetched', {
    owner,
    repo,
    issues: issues.length,
    elapsedMs: Date.now() - startedAt,
  })

  const pullDetails = new Map<number, GitHubPullRequest & { body?: string }>()
  const workItems = await buildWorkItemsFromRaw(owner, repo, issues, pullDetails)

  console.log('[repo-sync] full work-items build completed', {
    owner,
    repo,
    issues: issues.length,
    workItems: workItems.length,
    elapsedMs: Date.now() - startedAt,
  })

  return workItems
}
Full sync skips PR detail enrichment — issue-PR linking will be incomplete.
fetchAllWorkItemsFromGitHub creates an empty pullDetails map (line 222) and passes it to buildWorkItemsFromRaw. Without PR body content, collectIssueLinksFromText cannot extract linked issue numbers from PR descriptions.
This contrasts with buildIncrementalWorkItems (line 419), which does call fetchPullDetailsForIssues. The inconsistency means:
- Full syncs: No issue-PR linking via body text
- Incremental syncs: Proper linking
Consider fetching PR details for full syncs as well, or document this as intentional for performance reasons.
🔧 Proposed fix to fetch PR details during full sync
async function fetchAllWorkItemsFromGitHub(
token: string,
owner: string,
repo: string,
): Promise<WorkItem[]> {
const startedAt = Date.now()
console.log('[repo-sync] full work-items build started', { owner, repo })
const issues = await fetchCanonicalIssues(token, owner, repo)
console.log('[repo-sync] full work-items build issues fetched', {
owner,
repo,
issues: issues.length,
elapsedMs: Date.now() - startedAt,
})
- const pullDetails = new Map<number, GitHubPullRequest & { body?: string }>()
+ const pullDetails = await fetchPullDetailsForIssues(token, owner, repo, issues)
const workItems = await buildWorkItemsFromRaw(owner, repo, issues, pullDetails)

Note: This will increase API calls during full sync. If this is a concern, consider making it optional or limiting to the first N PRs.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
async function fetchAllWorkItemsFromGitHub(
  token: string,
  owner: string,
  repo: string,
): Promise<WorkItem[]> {
  const startedAt = Date.now()
  console.log('[repo-sync] full work-items build started', { owner, repo })

  const issues = await fetchCanonicalIssues(token, owner, repo)
  console.log('[repo-sync] full work-items build issues fetched', {
    owner,
    repo,
    issues: issues.length,
    elapsedMs: Date.now() - startedAt,
  })

  const pullDetails = await fetchPullDetailsForIssues(token, owner, repo, issues)
  const workItems = await buildWorkItemsFromRaw(owner, repo, issues, pullDetails)

  console.log('[repo-sync] full work-items build completed', {
    owner,
    repo,
    issues: issues.length,
    workItems: workItems.length,
    elapsedMs: Date.now() - startedAt,
  })

  return workItems
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@server/utils/repo-sync-service.ts` around lines 206 - 234,
fetchAllWorkItemsFromGitHub builds an empty pullDetails Map and passes it to
buildWorkItemsFromRaw, so PR body text isn’t available for
collectIssueLinksFromText and issue-PR linking is missed during full sync; fix
by calling the same PR detail enrichment used in buildIncrementalWorkItems
(invoke fetchPullDetailsForIssues or equivalent) inside
fetchAllWorkItemsFromGitHub before buildWorkItemsFromRaw so pullDetails is
populated with PR bodies (optionally gating this behind a flag or limit to avoid
excessive API calls).
export async function fetchPullInsights(
  token: string,
  owner: string,
  repo: string,
  pullNumbers: number[],
): Promise<Map<number, { reviewDecision: WorkItem['reviewDecision'], ciStatus: WorkItem['ciStatus'] }>> {
  const result = new Map<number, { reviewDecision: WorkItem['reviewDecision'], ciStatus: WorkItem['ciStatus'] }>()
  if (!pullNumbers.length) return result

  for (const batch of chunk(pullNumbers, 40)) {
    const fields = batch
      .map((number, index) => `
        pr${index}: pullRequest(number: ${number}) {
          number
          reviewDecision
          commits(last: 1) {
            nodes {
              commit {
                statusCheckRollup { state }
              }
            }
          }
        }
      `)
      .join('\n')

    const query = `
      query($owner: String!, $repo: String!) {
        repository(owner: $owner, name: $repo) {
          ${fields}
        }
      }
    `

    try {
      const data = await githubGraphQL<Record<string, Record<string, {
        number: number
        reviewDecision: WorkItem['reviewDecision']
        commits?: { nodes: Array<{ commit: { statusCheckRollup: { state: string } | null } }> }
      } | null>>>(token, query, { owner, repo })

      const repository = data.repository ?? {}
      for (const pull of Object.values(repository)) {
        if (!pull) continue
        const ciRaw = pull.commits?.nodes?.[0]?.commit?.statusCheckRollup?.state
        result.set(pull.number, {
          reviewDecision: pull.reviewDecision ?? null,
          ciStatus: mapCiStatus(ciRaw),
        })
      }

      for (const pullNumber of batch) {
        if (!result.has(pullNumber)) {
          result.set(pullNumber, { reviewDecision: null, ciStatus: null })
        }
      }
    }
    catch (error) {
      console.error('[work-item-builder] Failed to fetch pull insights batch', { owner, repo, batch, error })
      for (const pullNumber of batch) {
        if (!result.has(pullNumber)) {
          result.set(pullNumber, { reviewDecision: null, ciStatus: null })
        }
      }
    }
  }

  return result
}
🧩 Analysis chain
🏁 Script executed:

#!/bin/bash
# Description: Verify if fetchPullInsights is called anywhere in the codebase
# Expected: Should find at least one call site, otherwise it's dead code
rg -n 'fetchPullInsights' --type=ts

Repository: flumen-dev/flumen.dev
Length of output: 142

🏁 Script executed:

#!/bin/bash
# Search for buildWorkItemsFromRaw function and pullInsights usage
rg -n 'buildWorkItemsFromRaw|pullInsights' server/utils/work-item-builder.ts -A 2 -B 2

Repository: flumen-dev/flumen.dev
Length of output: 963

🏁 Script executed:

#!/bin/bash
# Search for fetchPullInsights calls across entire codebase to be thorough
rg -n 'fetchPullInsights' --type=ts --type=js

Repository: flumen-dev/flumen.dev
Length of output: 142
fetchPullInsights is exported but never called, leaving PR review decisions and CI status always null.
The function fetches reviewDecision and ciStatus via GraphQL (lines 41-109) but isn't invoked anywhere in the codebase. In buildWorkItemsFromRaw, the pullInsights Map is initialized empty at line 149 and remains unpopulated. Calls to pullInsights.get() at lines 170 and 207 always return undefined, causing these fields to remain null in the generated work items.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@server/utils/work-item-builder.ts` around lines 41 - 109,
buildWorkItemsFromRaw currently leaves pullInsights empty so reviewDecision and
ciStatus are always null; call the existing fetchPullInsights(token, owner,
repo, pullNumbers) from inside buildWorkItemsFromRaw before you read
pullInsights (collect the pull request numbers from raw items, await
fetchPullInsights, and assign the returned Map to pullInsights), handle errors
the same way as other fetches (fall back to nulls for missing PRs), and ensure
you pass the correct GitHub token, owner and repo variables and preserve the Map
shape so subsequent calls to pullInsights.get(pullNumber) return the fetched
data.
  } satisfies RepoPullRequest
})

const pullInsights = new Map<number, { reviewDecision: WorkItem['reviewDecision'], ciStatus: WorkItem['ciStatus'] }>()
pullInsights is never populated — reviewDecision and ciStatus will always be null.
The Map is created empty and no code populates it. The fetchPullInsights function (lines 41-109) appears designed for this purpose but is never called. As a result:
- Line 170: `pullInsights.get(primaryLinkedPull.number)` returns `undefined`
- Lines 185-186, 222-223: `reviewDecision` and `ciStatus` default to `null` for all work items
If these fields should be populated, call fetchPullInsights and pass the result.
🐛 Proposed fix to populate pullInsights
export async function buildWorkItemsFromRaw(
owner: string,
repo: string,
issues: GitHubIssueWithPull[],
pullDetailsByNumber: Map<number, GitHubPullRequest & { body?: string }>,
+ token?: string,
): Promise<WorkItem[]> {
const mappedIssues = issues
.filter(i => !('pull_request' in i))
.map(i => toRepoIssue(i))
const mappedPulls = issues
.filter(i => 'pull_request' in i)
.map((issue) => {
// ... existing mapping logic
})
- const pullInsights = new Map<number, { reviewDecision: WorkItem['reviewDecision'], ciStatus: WorkItem['ciStatus'] }>()
+ const pullNumbers = mappedPulls.map(pr => pr.number)
+ const pullInsights = token
+ ? await fetchPullInsights(token, owner, repo, pullNumbers)
+ : new Map<number, { reviewDecision: WorkItem['reviewDecision'], ciStatus: WorkItem['ciStatus'] }>()Note: This requires passing the token through callers. Alternatively, if PR insights aren't needed yet, consider removing the dead fetchPullInsights function and the unused pullInsights references to avoid confusion.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@server/utils/work-item-builder.ts` at line 149, The pullInsights Map is
created but never populated, so calls like
pullInsights.get(primaryLinkedPull.number) yield undefined and
reviewDecision/ciStatus remain null; to fix, call the existing
fetchPullInsights(...) function and assign its returned Map to pullInsights
before it is used (ensure you pass the required token through the caller chain),
or if PR insights are not needed remove fetchPullInsights and all references to
pullInsights and the reviewDecision/ciStatus fallbacks; locate the Map
declaration pullInsights, the fetchPullInsights function, and usages (e.g.,
primaryLinkedPull.number, reviewDecision, ciStatus in WorkItem) and either wire
the fetch call to populate the Map or delete the dead code.
The latest updates on your projects. Learn more about Vercel for GitHub.
Actionable comments posted: 4
🧹 Nitpick comments (2)
server/tasks/shared-repo/sync-repo.ts (1)

23-27: Add structured failure logging with elapsed time.

Only success is logged today. If Line 25 throws, there's no task-level error log including `owner`, `repo`, `reason`, and elapsed time, which makes incident triage harder.

Proposed refactor (success + failure observability)

  console.log('[task:shared-repo:sync-repo] Starting', { owner, repo, reason })
  const startedAt = Date.now()
- await runSharedRepoSyncForRepo(owner, repo, reason)
- const elapsedMs = Date.now() - startedAt
- console.log('[task:shared-repo:sync-repo] Completed', { owner, repo, reason, elapsedMs })
+ try {
+   await runSharedRepoSyncForRepo(owner, repo, reason)
+   const elapsedMs = Date.now() - startedAt
+   console.log('[task:shared-repo:sync-repo] Completed', { owner, repo, reason, elapsedMs })
+ } catch (error) {
+   const elapsedMs = Date.now() - startedAt
+   console.error('[task:shared-repo:sync-repo] Failed', {
+     owner,
+     repo,
+     reason,
+     elapsedMs,
+     error: error instanceof Error ? error.message : String(error),
+   })
+   throw error
+ }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@server/tasks/shared-repo/sync-repo.ts` around lines 23 - 27, Wrap the call to runSharedRepoSyncForRepo(owner, repo, reason) in a try/catch (or try/finally) that computes elapsedMs = Date.now() - startedAt whether it succeeds or fails, and on failure log a structured task-level error including owner, repo, reason, elapsedMs and the error (e.g., error.message / error.stack) before rethrowing; update the console.log success line to remain in the try block and add a console.error (or processLogger.error) in the catch that mirrors the success fields plus the error details so failures are observable.

server/api/work-items/index.get.ts (1)
34-37: Consider edge case in `hasNextPage` when sync is partial.

When `sync.isPartial` is true and the client reaches the last page of currently available items, `hasNextPage` will still return `true` (due to `items.length > 0`). This causes the client to request one more page that returns empty results before `hasNextPage` finally becomes `false`.

This is self-correcting but may cause a brief "loading more" state that yields nothing. If the client UI handles empty pages gracefully, this is acceptable. Otherwise, consider adding a check like `end < queryResult.items.length || (sync.isPartial && end < queryResult.items.length + pageSize)` to avoid the empty fetch.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@server/api/work-items/index.get.ts` around lines 34 - 37, hasNextPage currently uses items.length > 0 when sync.isPartial which causes a true value on the last partial page and triggers an extra empty fetch; update the hasNextPage calculation in the block that computes start, end, items and hasNextPage so that when sync.isPartial is true you check the end against the total available window (e.g., end < queryResult.items.length || (sync.isPartial && end < queryResult.items.length + pageSize)) instead of items.length > 0; adjust the hasNextPage expression near the variables start, end, items, pageSize, queryResult and sync.isPartial to prevent emitting a next page when only an empty page would follow.
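The expression proposed in the nitpick can be isolated as a pure function, which makes the boundary behavior easy to check. A sketch under the assumption that `end` is the exclusive index of the last item served and `totalItems` is `queryResult.items.length` (the handler's exact locals may differ):

```typescript
// hasNextPage per the review's proposed expression:
// - more cached items remain, or
// - the sync is partial and we are still within one pageSize past the
//   currently-cached total (more items may arrive from the background sync).
function hasNextPage(
  end: number,
  totalItems: number,
  pageSize: number,
  isPartial: boolean,
): boolean {
  return end < totalItems || (isPartial && end < totalItems + pageSize)
}
```

Note this still permits at most one fetch past the cached total during a partial sync, rather than signaling a next page indefinitely while `items.length > 0`.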
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@server/tasks/shared-repo/sync-repo.ts`:
- Around line 23-35: The task currently logs success and returns a
success-shaped result even when runSharedRepoSyncForRepo no-ops; update
runSharedRepoSyncForRepo to return a structured status (e.g., 'completed' |
'skipped' | 'failed') and then change the task in sync-repo.ts to inspect that
return value, logging "[task:shared-repo:sync-repo] Skipped" when status ===
'skipped' (include owner/repo/reason), return a result object that includes the
status field (owner, repo, reason, status, elapsedMs when applicable), and
propagate failures (status === 'failed') so callers/UI can distinguish real runs
from skips; reference runSharedRepoSyncForRepo and the task handler in
sync-repo.ts when making these changes.
In `@server/utils/repo-cache.ts`:
- Around line 246-250: The touchActivity function currently reads the entire
RepoSyncMeta (via readMeta) and writes it back (via writeMeta), causing lost
updates and race conditions for high-frequency requests; change this to perform
a targeted atomic update instead—either add a new updateMeta(owner, repo, patch)
helper that performs a merge/CAS-style partial update (only touching
lastRequestedAt and requestCount) or change writeMeta to accept a partial/merge
option and use it in touchActivity so only requestCount and lastRequestedAt are
modified without overwriting other fields like sync status, error, or etag; keep
references to touchActivity, readMeta, writeMeta and the RepoSyncMeta fields
when implementing the partial update.
- Around line 49-63: The key generators (metaKey, detailKey, workItemsKey,
syncLockKey) use owner/repo as-is, causing casing-sensitive duplicate entries;
normalize the inputs (e.g., trim and toLowerCase) before building the key
strings so Owner/Repo and owner/repo map to the same cache/lock keys, and update
these four functions (metaKey, detailKey, workItemsKey, syncLockKey) to use the
normalized owner/repo values.
- Around line 217-240: acquireSyncLock currently writes a timestamp only and
releaseSyncLock blindly removes the key, allowing stale workers to delete a
newer lock; change acquireSyncLock to generate a unique ownership token (e.g.,
crypto.randomUUID()), store that token together with the timestamp under
syncLockKey (via storage().setItem) and return the token (or null on failure),
and change releaseSyncLock to accept the token and only remove the key if the
stored token matches (use storage().getItem to compare before calling
storage().removeItem). Update the function signatures for acquireSyncLock and
releaseSyncLock and use the existing symbols syncLockKey, SYNC_LOCK_TTL_MS,
storage().getItem, storage().setItem, and storage().removeItem when implementing
the check-and-delete.
---
Nitpick comments:
In `@server/api/work-items/index.get.ts`:
- Around line 34-37: hasNextPage currently uses items.length > 0 when
sync.isPartial which causes a true value on the last partial page and triggers
an extra empty fetch; update the hasNextPage calculation in the block that
computes start, end, items and hasNextPage so that when sync.isPartial is true
you check the end against the total available window (e.g., end <
queryResult.items.length || (sync.isPartial && end < queryResult.items.length +
pageSize)) instead of items.length > 0; adjust the hasNextPage expression near
the variables start, end, items, pageSize, queryResult and sync.isPartial to
prevent emitting a next page when only an empty page would follow.
In `@server/tasks/shared-repo/sync-repo.ts`:
- Around line 23-27: Wrap the call to runSharedRepoSyncForRepo(owner, repo,
reason) in a try/catch (or try/finally) that computes elapsedMs = Date.now() -
startedAt whether it succeeds or fails, and on failure log a structured
task-level error including owner, repo, reason, elapsedMs and the error (e.g.,
error.message / error.stack) before rethrowing; update the console.log success
line to remain in the try block and add a console.error (or processLogger.error)
in the catch that mirrors the success fields plus the error details so failures
are observable.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 9b7e684b-3747-42eb-a2d7-d31dd24d8195
📒 Files selected for processing (6)
- nuxt.config.ts
- server/api/repository/[owner]/[repo]/work-items.get.ts
- server/api/work-items/index.get.ts
- server/tasks/shared-repo/sync-repo.ts
- server/utils/repo-cache.ts
- server/utils/repo-sync-service.ts
🚧 Files skipped from review as they are similar to previous changes (3)
- nuxt.config.ts
- server/api/repository/[owner]/[repo]/work-items.get.ts
- server/utils/repo-sync-service.ts
console.log('[task:shared-repo:sync-repo] Starting', { owner, repo, reason })
const startedAt = Date.now()
await runSharedRepoSyncForRepo(owner, repo, reason)
const elapsedMs = Date.now() - startedAt
console.log('[task:shared-repo:sync-repo] Completed', { owner, repo, reason, elapsedMs })

return {
  result: {
    owner,
    repo,
    reason,
  },
}
Task can report success even when no sync actually runs.
At Line 25, runSharedRepoSyncForRepo may no-op and return when the shared token is missing (server/utils/repo-sync-service.ts, Lines 660-664), but this task still logs “Completed” and returns a success-shaped result. That can mislead callers and UI flows that expect an actual sync attempt/result.
Proposed fix (propagate sync outcome)
- await runSharedRepoSyncForRepo(owner, repo, reason)
+ const sync = await runSharedRepoSyncForRepo(owner, repo, reason)
const elapsedMs = Date.now() - startedAt
- console.log('[task:shared-repo:sync-repo] Completed', { owner, repo, reason, elapsedMs })
+ console.log('[task:shared-repo:sync-repo] Completed', {
+ owner,
+ repo,
+ reason,
+ elapsedMs,
+ status: sync.status,
+ })
return {
result: {
owner,
repo,
reason,
+ status: sync.status,
},
}(Requires runSharedRepoSyncForRepo to return a structured status like completed | skipped.)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@server/tasks/shared-repo/sync-repo.ts` around lines 23 - 35, The task
currently logs success and returns a success-shaped result even when
runSharedRepoSyncForRepo no-ops; update runSharedRepoSyncForRepo to return a
structured status (e.g., 'completed' | 'skipped' | 'failed') and then change the
task in sync-repo.ts to inspect that return value, logging
"[task:shared-repo:sync-repo] Skipped" when status === 'skipped' (include
owner/repo/reason), return a result object that includes the status field
(owner, repo, reason, status, elapsedMs when applicable), and propagate failures
(status === 'failed') so callers/UI can distinguish real runs from skips;
reference runSharedRepoSyncForRepo and the task handler in sync-repo.ts when
making these changes.
function metaKey(owner: string, repo: string): string {
  return `shared-repo:${owner}~${repo}:meta`
}

export function detailKey(owner: string, repo: string): string {
  return `shared-repo:${owner}~${repo}:detail`
}

export function workItemsKey(owner: string, repo: string): string {
  return `shared-repo:${owner}~${repo}:work-items`
}

function syncLockKey(owner: string, repo: string, scope: string): string {
  return `shared-repo:${owner}~${repo}:lock:${scope}`
}
Normalize owner and repo before generating shared keys.
These keys currently depend on the caller’s casing, so the same GitHub repo can end up with separate meta/cache/lock entries for Owner/Repo and owner/repo. That breaks the shared dedupe/invalidation behavior this module is introducing and can also make listKnownRepoMetaEntries() report duplicates.
♻️ Proposed fix
+function normalizeRepoKeyPart(value: string): string {
+ return value.toLowerCase()
+}
+
function metaKey(owner: string, repo: string): string {
- return `shared-repo:${owner}~${repo}:meta`
+ return `shared-repo:${normalizeRepoKeyPart(owner)}~${normalizeRepoKeyPart(repo)}:meta`
}
export function detailKey(owner: string, repo: string): string {
- return `shared-repo:${owner}~${repo}:detail`
+ return `shared-repo:${normalizeRepoKeyPart(owner)}~${normalizeRepoKeyPart(repo)}:detail`
}
export function workItemsKey(owner: string, repo: string): string {
- return `shared-repo:${owner}~${repo}:work-items`
+ return `shared-repo:${normalizeRepoKeyPart(owner)}~${normalizeRepoKeyPart(repo)}:work-items`
}
function syncLockKey(owner: string, repo: string, scope: string): string {
- return `shared-repo:${owner}~${repo}:lock:${scope}`
+ return `shared-repo:${normalizeRepoKeyPart(owner)}~${normalizeRepoKeyPart(repo)}:lock:${scope}`
}🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@server/utils/repo-cache.ts` around lines 49 - 63, The key generators
(metaKey, detailKey, workItemsKey, syncLockKey) use owner/repo as-is, causing
casing-sensitive duplicate entries; normalize the inputs (e.g., trim and
toLowerCase) before building the key strings so Owner/Repo and owner/repo map to
the same cache/lock keys, and update these four functions (metaKey, detailKey,
workItemsKey, syncLockKey) to use the normalized owner/repo values.
export async function acquireSyncLock(owner: string, repo: string, scope: string): Promise<boolean> {
  const key = syncLockKey(owner, repo, scope)

  const existing = await storage().getItem<number>(key)
  if (existing && existing + SYNC_LOCK_TTL_MS > Date.now()) {
    console.warn('[repo-cache] sync lock blocked', {
      owner,
      repo,
      scope,
      storageLockActive: true,
      existingLockAge: Date.now() - existing,
    })
    return false
  }

  // Storage lock is intentionally the only dedup source for serverless runtimes.
  // There is still a small TOCTOU race between read and set that can duplicate work.
  await storage().setItem(key, Date.now(), { ttl: Math.ceil(SYNC_LOCK_TTL_MS / 1000) })
  return true
}

export async function releaseSyncLock(owner: string, repo: string, scope: string): Promise<void> {
  const key = syncLockKey(owner, repo, scope)
  await storage().removeItem(key)
Make lock release ownership-safe.
The lock value has no owner token, and releaseSyncLock() deletes whatever is stored under the key. If a slow sync outlives the TTL and another worker acquires a fresh lock, the first worker will still remove that newer lock when it finishes, reopening the door for overlapping syncs.
🔒 Proposed fix
-export async function acquireSyncLock(owner: string, repo: string, scope: string): Promise<boolean> {
+export async function acquireSyncLock(owner: string, repo: string, scope: string): Promise<string | null> {
const key = syncLockKey(owner, repo, scope)
+ const token = crypto.randomUUID()
+ const now = Date.now()
- const existing = await storage().getItem<number>(key)
- if (existing && existing + SYNC_LOCK_TTL_MS > Date.now()) {
+ const existing = await storage().getItem<{ token: string, acquiredAt: number }>(key)
+ if (existing && existing.acquiredAt + SYNC_LOCK_TTL_MS > now) {
console.warn('[repo-cache] sync lock blocked', {
owner,
repo,
scope,
storageLockActive: true,
- existingLockAge: Date.now() - existing,
+ existingLockAge: now - existing.acquiredAt,
})
- return false
+ return null
}
// Storage lock is intentionally the only dedup source for serverless runtimes.
// There is still a small TOCTOU race between read and set that can duplicate work.
- await storage().setItem(key, Date.now(), { ttl: Math.ceil(SYNC_LOCK_TTL_MS / 1000) })
- return true
+ await storage().setItem(key, { token, acquiredAt: now }, { ttl: Math.ceil(SYNC_LOCK_TTL_MS / 1000) })
+ return token
}
-export async function releaseSyncLock(owner: string, repo: string, scope: string): Promise<void> {
+export async function releaseSyncLock(owner: string, repo: string, scope: string, token: string): Promise<void> {
const key = syncLockKey(owner, repo, scope)
- await storage().removeItem(key)
+ const existing = await storage().getItem<{ token: string }>(key)
+ if (existing?.token === token) {
+ await storage().removeItem(key)
+ }
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@server/utils/repo-cache.ts` around lines 217 - 240, acquireSyncLock currently
writes a timestamp only and releaseSyncLock blindly removes the key, allowing
stale workers to delete a newer lock; change acquireSyncLock to generate a
unique ownership token (e.g., crypto.randomUUID()), store that token together
with the timestamp under syncLockKey (via storage().setItem) and return the
token (or null on failure), and change releaseSyncLock to accept the token and
only remove the key if the stored token matches (use storage().getItem to
compare before calling storage().removeItem). Update the function signatures for
acquireSyncLock and releaseSyncLock and use the existing symbols syncLockKey,
SYNC_LOCK_TTL_MS, storage().getItem, storage().setItem, and storage().removeItem
when implementing the check-and-delete.
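The token-guarded acquire/release pattern the review proposes can be sketched end to end. This is a minimal stand-alone model, not the PR's actual code: the `Map` stands in for the real unstorage-backed `storage()` (which is async and TTL-aware), and the key format is assumed.

```typescript
import { randomUUID } from 'node:crypto'

// In-memory stand-in for the PR's unstorage-backed storage(); the real store is async.
type LockRecord = { token: string, acquiredAt: number }
const store = new Map<string, LockRecord>()
const SYNC_LOCK_TTL_MS = 60_000

function syncLockKey(owner: string, repo: string, scope: string): string {
  return `repo-cache:lock:${owner}/${repo}:${scope}`
}

// Returns an ownership token on success, null while another live lock exists.
function acquireSyncLock(owner: string, repo: string, scope: string): string | null {
  const key = syncLockKey(owner, repo, scope)
  const now = Date.now()
  const existing = store.get(key)
  if (existing && existing.acquiredAt + SYNC_LOCK_TTL_MS > now) {
    return null
  }
  const token = randomUUID()
  store.set(key, { token, acquiredAt: now })
  return token
}

// Only the holder of the matching token may delete the lock, so a worker
// whose lock already expired cannot remove a newer worker's lock.
function releaseSyncLock(owner: string, repo: string, scope: string, token: string): void {
  const key = syncLockKey(owner, repo, scope)
  if (store.get(key)?.token === token) {
    store.delete(key)
  }
}

// The race from the review: A's lock expires mid-sync, B acquires, A finishes late.
const a = acquireSyncLock('owner', 'repo', 'work-items')!
store.get(syncLockKey('owner', 'repo', 'work-items'))!.acquiredAt -= SYNC_LOCK_TTL_MS + 1 // simulate TTL expiry
const b = acquireSyncLock('owner', 'repo', 'work-items')!
releaseSyncLock('owner', 'repo', 'work-items', a) // stale release is a no-op
console.log(store.has(syncLockKey('owner', 'repo', 'work-items'))) // → true: B's lock survives
releaseSyncLock('owner', 'repo', 'work-items', b)
console.log(store.has(syncLockKey('owner', 'repo', 'work-items'))) // → false
```

Note this still carries the TOCTOU window the code comment admits; closing it fully requires an atomic set-if-absent at the driver level (e.g. Redis `SET NX`), which the token check does not provide.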
export async function touchActivity(owner: string, repo: string): Promise<void> {
  const meta = await readMeta(owner, repo)
  meta.lastRequestedAt = Date.now()
  meta.requestCount += 1
  await writeMeta(owner, repo, meta)
Avoid whole-record writes for per-request activity updates.
This path runs on every request, but it overwrites the full RepoSyncMeta document. Concurrent requests will lose requestCount increments, and a late touchActivity() write can also revert sync status / error / ETag fields written by another code path in the meantime.
Split request activity into its own key, or add a merge/CAS-style updateMeta() helper so this only updates lastRequestedAt and requestCount.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@server/utils/repo-cache.ts` around lines 246 - 250, the touchActivity
function currently reads the entire RepoSyncMeta (via readMeta) and writes it
back (via writeMeta), causing lost updates and race conditions for
high-frequency requests; change this to perform a targeted atomic update
instead—either add a new updateMeta(owner, repo, patch) helper that performs a
merge/CAS-style partial update (only touching lastRequestedAt and requestCount)
or change writeMeta to accept a partial/merge option and use it in touchActivity
so only requestCount and lastRequestedAt are modified without overwriting other
fields like sync status, error, or etag; keep references to touchActivity,
readMeta, writeMeta and the RepoSyncMeta fields when implementing the partial
update.
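A merge-style `updateMeta()` along the lines the prompt describes can be sketched as follows. This is illustrative only: the in-memory `Map` replaces the real async storage driver, the `RepoSyncMeta` shape is assumed, and the read-merge-write is atomic here solely because JavaScript is single-threaded.

```typescript
// In-memory stand-in for the async storage() driver; RepoSyncMeta shape is assumed.
type RepoSyncMeta = {
  lastRequestedAt: number
  requestCount: number
  syncStatus: string
  etag?: string
}

const metaStore = new Map<string, RepoSyncMeta>()
const defaults: RepoSyncMeta = { lastRequestedAt: 0, requestCount: 0, syncStatus: 'idle' }

function metaKey(owner: string, repo: string): string {
  return `repo-cache:meta:${owner}/${repo}`
}

// Merge-style update: only fields present in `patch` change, so a late
// activity write can no longer revert syncStatus / etag fields written
// by another code path in the meantime.
function updateMeta(owner: string, repo: string, patch: Partial<RepoSyncMeta>): RepoSyncMeta {
  const key = metaKey(owner, repo)
  const next = { ...(metaStore.get(key) ?? defaults), ...patch }
  metaStore.set(key, next)
  return next
}

function touchActivity(owner: string, repo: string): void {
  const current = metaStore.get(metaKey(owner, repo)) ?? defaults
  updateMeta(owner, repo, {
    lastRequestedAt: Date.now(),
    requestCount: current.requestCount + 1,
  })
}

updateMeta('owner', 'repo', { syncStatus: 'syncing', etag: 'W/"abc"' })
touchActivity('owner', 'repo')
const meta = metaStore.get(metaKey('owner', 'repo'))!
console.log(meta.syncStatus, meta.requestCount, meta.etag) // → syncing 1 W/"abc"
```

On a real multi-instance deployment the read-merge-write inside `updateMeta` still races; there, either split `lastRequestedAt`/`requestCount` into their own keys or use a driver-level compare-and-swap, as the review suggests.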
Related issue(s)
Closes #212