Conversation
⚠️ Caution: Review failed. The pull request is closed.

ℹ️ Recent review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID:
📒 Files selected for processing (2)
📝 Walkthrough

Adds GitHub API rate-limit support: a client RateLimitIndicator component with 10s polling, a server per-user rate-limit cache and endpoint, header-parsing hooks in the GitHub utilities, i18n keys/schema updates, unit tests, and small polling adjustments elsewhere.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client as RateLimitIndicator (Client)
    participant Endpoint as /api/github/rate-limit (Server)
    participant Cache as In-Memory Cache
    participant GitHub as GitHub API
    Client->>Endpoint: GET /api/github/rate-limit
    activate Endpoint
    Endpoint->>Cache: check cached rate limit for user
    alt Cache missing or low
        Endpoint->>GitHub: GET REST /rate_limit
        activate GitHub
        GitHub-->>Endpoint: REST response + headers
        deactivate GitHub
        Endpoint->>Cache: updateRateLimitFromHeaders(headers, 'rest', userId)
        Endpoint->>GitHub: GraphQL query for rate limit
        activate GitHub
        GitHub-->>Endpoint: GraphQL response + headers
        deactivate GitHub
        Endpoint->>Cache: updateRateLimitFromHeaders(headers, 'graphql', userId)
    end
    Endpoint->>Cache: getRateLimit(userId)
    Cache-->>Endpoint: {limit, remaining, reset}
    Endpoint-->>Client: {limit, remaining, reset}
    deactivate Endpoint
    Note over Client,Endpoint: Repeats every 10s
```
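The "Cache missing or low" branch of the diagram can be sketched as a small predicate. This is a hedged illustration only: the `RateLimitInfo` shape follows the `{limit, remaining, reset}` payload in the diagram, but the function name `shouldRefresh` and the 5000 threshold (borrowed from the review discussion of `info.limit <= 5000`) are assumptions, not the endpoint's actual code.

```typescript
// Hedged sketch of the "Cache missing or low" decision from the diagram above.
// shouldRefresh and the threshold value are illustrative assumptions.
interface RateLimitInfo {
  limit: number
  remaining: number
  reset: number
}

function shouldRefresh(info: RateLimitInfo | undefined, threshold = 5000): boolean {
  // Refresh when nothing is cached yet, or when the aggregated limit is so
  // low that it has likely only been seeded from one of the two APIs.
  return !info || info.limit <= threshold
}
```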
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly related issues
Possibly related PRs
🚥 Pre-merge checks | ✅ 2 | ❌ 3
❌ Failed checks (3 warnings)
✅ Passed checks (2 passed)
Comment
Actionable comments posted: 2
🧹 Nitpick comments (4)
server/api/github/rate-limit.get.ts (2)
7-7: Simplify the redundant condition.

The check `info.limit === 0` is already covered by `info.limit <= 5000`, making the first condition redundant.

Suggested fix

```diff
- if (info.limit === 0 || info.limit <= 5000) {
+ if (info.limit <= 5000) {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@server/api/github/rate-limit.get.ts` at line 7, The conditional in server/api/github/rate-limit.get.ts is redundant: replace the combined check "info.limit === 0 || info.limit <= 5000" with the simpler "info.limit <= 5000" (or just remove the "info.limit === 0" part) inside the function/method that evaluates the GitHub rate limit (the code using the local variable `info`), so only the <= 5000 comparison remains.
15-21: REST rate limit update is redundant.

`githubFetchWithToken` already calls `updateRateLimitFromHeaders(response.headers)` internally (line 134 in `github.ts`), so the REST rate limit is updated automatically after the `/rate_limit` fetch. Only the GraphQL rate limit needs to be manually seeded from the response body.

Suggested fix

```diff
  const core = data.resources.core
  const graphql = data.resources.graphql
- updateRateLimitFromHeaders(new Headers({
-   'x-ratelimit-limit': String(core.limit),
-   'x-ratelimit-remaining': String(core.remaining),
-   'x-ratelimit-reset': String(core.reset),
- }), 'rest')
+ // REST is already updated by githubFetchWithToken; only GraphQL needs manual seeding
  updateRateLimitFromHeaders(new Headers({
    'x-ratelimit-limit': String(graphql.limit),
    'x-ratelimit-remaining': String(graphql.remaining),
    'x-ratelimit-reset': String(graphql.reset),
  }), 'graphql')
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@server/api/github/rate-limit.get.ts` around lines 15 - 21, Remove the redundant REST update call that reconstructs headers from data.resources.core because githubFetchWithToken already updates REST limits; instead seed only the GraphQL rate limit using the response body. Replace/remove the updateRateLimitFromHeaders invocation that uses core and call updateRateLimitFromHeaders with headers built from data.resources.graphql (use the existing graphql variable) and the 'graphql' source, leaving githubFetchWithToken to handle REST updates.

server/utils/github.ts (2)
209-259: Same inconsistency applies to `githubCachedFetchAllWithToken`.

This function also makes paginated requests without updating the rate limit cache. Apply the same fix as `githubFetchAllWithToken` for consistency.

Suggested fix

```diff
  if (etag) {
    await storage.setItem(cacheKey, { etag, data: items, pageCount } satisfies CacheEntry<T[]>)
  }
+ updateRateLimitFromHeaders(firstResponse.headers)
+
  return { data: items, status: 200, headers: firstResponse.headers }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@server/utils/github.ts` around lines 209 - 259, githubCachedFetchAllWithToken is not updating the GitHub rate-limit cache when making requests (firstResponse and the paginated fetches via fetchGitHub), causing inconsistent rate-limit state; after each network call (after receiving firstResponse and after each res in remainingPages.map) call the same rate-limit update helper used in githubFetchAllWithToken (e.g. updateRateLimitCache or the project’s rate-limit updater) with the response.headers and userId so the rate-limit store is kept in sync. Ensure you add the call both right after processing firstResponse and inside the map/loop that handles each page response.
140-164: Rate limit tracking is inconsistent across fetch functions.

`githubFetchAllWithToken` makes multiple paginated requests but doesn't call `updateRateLimitFromHeaders`. This creates inconsistency with `githubFetchWithToken` and `githubCachedFetchWithToken`, which do track rate limits.

Consider updating rate limits at least once after pagination completes:

Suggested fix

```diff
  const remainingPages = parseRemainingPages(firstResponse.headers.get('link'))
  if (remainingPages.length) {
    const pages = await Promise.all(
      remainingPages.map(async (pageUrl) => {
        const res = await fetchGitHub(pageUrl, headers, endpoint)
        return res.json() as Promise<T[]>
      }),
    )
    for (const page of pages) items.push(...page)
  }
+ updateRateLimitFromHeaders(firstResponse.headers)
+
  return { data: items, status: 200, headers: firstResponse.headers }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@server/utils/github.ts` around lines 140 - 164, githubFetchAllWithToken currently fetches multiple paginated responses but never updates rate limit tracking; call updateRateLimitFromHeaders with the response headers to keep behavior consistent with githubFetchWithToken/githubCachedFetchWithToken. After getting firstResponse, invoke updateRateLimitFromHeaders(firstResponse.headers) and also call updateRateLimitFromHeaders(res.headers) inside the remainingPages.map for each fetchGitHub(pageUrl, headers, endpoint) response (or after Promise.all once you have all page responses iterate their headers and update), ensuring githubFetchAllWithToken updates rate limits from each response header it receives.
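Both comments above reduce to one pattern: update the rate-limit cache from every response you receive, including paginated ones. A minimal, self-contained sketch of that pattern follows; `fetchAllPages`, `updateRateLimit`, and the `PageResponse` shape are hypothetical stand-ins for the project's `fetchGitHub` / `updateRateLimitFromHeaders` helpers (and the real code is async, which this sketch omits for brevity):

```typescript
// Hypothetical stand-ins, for illustration only; the real helpers are async.
interface PageResponse<T> {
  headers: Map<string, string>
  data: T[]
}

// Records every remaining-count we see, mimicking a rate-limit cache update.
const seenRemaining: number[] = []

function updateRateLimit(headers: Map<string, string>): void {
  const remaining = Number(headers.get('x-ratelimit-remaining') ?? Number.NaN)
  if (!Number.isNaN(remaining)) seenRemaining.push(remaining)
}

function fetchAllPages<T>(
  fetchPage: (page: number) => PageResponse<T>,
  pageCount: number,
): T[] {
  const items: T[] = []
  for (let page = 1; page <= pageCount; page++) {
    const res = fetchPage(page)
    updateRateLimit(res.headers) // keep the cache in sync after *every* request
    items.push(...res.data)
  }
  return items
}
```

The design point the review is making: if only the first request updates the cache, the indicator lags behind the real quota by however many pages the pagination fetched.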
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@server/utils/github.ts`:
- Line 10: The module-level rateLimits singleton (rateLimits) currently mixes
rate limit data across all users; update it to be keyed by userId (e.g., change
its type to a map from userId to per-route RateLimitInfo maps) and update all
reads/writes to index by the current user's id (use the authenticated user's id
when storing/retrieving rate limit entries) so each user has isolated rate limit
state; if multi-user support is not required, instead add a clear comment above
the rateLimits declaration documenting that it is intentionally
global/single-user for dev use.
In `@test/unit/rateLimit.test.ts`:
- Around line 12-40: Tests share module-scoped rateLimits in
server/utils/github.ts causing order-dependent failures; add and export a
test-only reset function (e.g., resetRateLimitForTest) in that module which
clears keys on the rateLimits object, then update the test file to call
resetRateLimitForTest in a beforeEach (or at start of each it) so
updateRateLimitFromHeaders/getRateLimit run against a clean state; reference the
functions updateRateLimitFromHeaders, getRateLimit, and the new
resetRateLimitForTest when making the changes.
---
Nitpick comments:
In `@server/api/github/rate-limit.get.ts`:
- Line 7: The conditional in server/api/github/rate-limit.get.ts is redundant:
replace the combined check "info.limit === 0 || info.limit <= 5000" with the
simpler "info.limit <= 5000" (or just remove the "info.limit === 0" part) inside
the function/method that evaluates the GitHub rate limit (the code using the
local variable `info`), so only the <= 5000 comparison remains.
- Around line 15-21: Remove the redundant REST update call that reconstructs
headers from data.resources.core because githubFetchWithToken already updates
REST limits; instead seed only the GraphQL rate limit using the response body.
Replace/remove the updateRateLimitFromHeaders invocation that uses core and call
updateRateLimitFromHeaders with headers built from data.resources.graphql (use
the existing graphql variable) and the 'graphql' source, leaving
githubFetchWithToken to handle REST updates.
In `@server/utils/github.ts`:
- Around line 209-259: githubCachedFetchAllWithToken is not updating the GitHub
rate-limit cache when making requests (firstResponse and the paginated fetches
via fetchGitHub), causing inconsistent rate-limit state; after each network call
(after receiving firstResponse and after each res in remainingPages.map) call
the same rate-limit update helper used in githubFetchAllWithToken (e.g.
updateRateLimitCache or the project’s rate-limit updater) with the
response.headers and userId so the rate-limit store is kept in sync. Ensure you
add the call both right after processing firstResponse and inside the map/loop
that handles each page response.
- Around line 140-164: githubFetchAllWithToken currently fetches multiple
paginated responses but never updates rate limit tracking; call
updateRateLimitFromHeaders with the response headers to keep behavior consistent
with githubFetchWithToken/githubCachedFetchWithToken. After getting
firstResponse, invoke updateRateLimitFromHeaders(firstResponse.headers) and also
call updateRateLimitFromHeaders(res.headers) inside the remainingPages.map for
each fetchGitHub(pageUrl, headers, endpoint) response (or after Promise.all once
you have all page responses iterate their headers and update), ensuring
githubFetchAllWithToken updates rate limits from each response header it
receives.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 63eea6fc-a9cb-454d-b11b-353007146a4a
📒 Files selected for processing (11)
- app/components/ui/RateLimitIndicator.vue
- app/components/ui/SideBar.vue
- app/composables/useCheckRuns.ts
- app/composables/useWorkItemPolling.ts
- i18n/locales/de.json
- i18n/locales/en.json
- i18n/schema.json
- server/api/github/rate-limit.get.ts
- server/utils/github-graphql.ts
- server/utils/github.ts
- test/unit/rateLimit.test.ts
```ts
describe('rate limit tracking', () => {
  it('aggregates REST and GraphQL limits', () => {
    updateRateLimitFromHeaders(fakeHeaders(5000, 4900, 1000), 'rest')
    updateRateLimitFromHeaders(fakeHeaders(5000, 4800, 1100), 'graphql')

    const info = getRateLimit()
    expect(info.limit).toBe(10000)
    expect(info.remaining).toBe(9700)
    expect(info.reset).toBe(1100)
  })

  it('updates values on subsequent calls', () => {
    updateRateLimitFromHeaders(fakeHeaders(5000, 4900, 1000), 'rest')
    updateRateLimitFromHeaders(fakeHeaders(5000, 4850, 1000), 'rest')

    const info = getRateLimit()
    // REST remaining should be 4850 (latest), graphql still 4800 from previous test
    expect(info.remaining).toBe(4850 + 4800)
  })

  it('ignores headers without valid limit', () => {
    updateRateLimitFromHeaders(fakeHeaders(5000, 4000, 2000), 'rest')
    updateRateLimitFromHeaders(new Headers(), 'rest')

    // Empty headers have limit=0, so they're ignored — previous values stay
    const info = getRateLimit()
    expect(info.limit).toBeGreaterThan(0)
  })
})
```
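For context, the semantics these tests exercise can be sketched as a pair of pure-ish functions over a per-source store. This is a minimal sketch that mirrors, but is not, the real `server/utils/github.ts` implementation; the internal `rateLimits` store and the `fakeHeaders` helper are assumed shapes:

```typescript
// Hedged sketch of the rate-limit aggregation the tests above appear to check.
type Source = 'rest' | 'graphql'

interface RateLimitInfo {
  limit: number
  remaining: number
  reset: number
}

const rateLimits: Partial<Record<Source, RateLimitInfo>> = {}

function updateRateLimitFromHeaders(headers: Headers, source: Source): void {
  const limit = Number(headers.get('x-ratelimit-limit') ?? 0)
  if (limit <= 0) return // headers without a valid limit are ignored
  rateLimits[source] = {
    limit,
    remaining: Number(headers.get('x-ratelimit-remaining') ?? 0),
    reset: Number(headers.get('x-ratelimit-reset') ?? 0),
  }
}

function getRateLimit(): RateLimitInfo {
  const entries = Object.values(rateLimits) as RateLimitInfo[]
  return {
    limit: entries.reduce((sum, e) => sum + e.limit, 0),
    remaining: entries.reduce((sum, e) => sum + e.remaining, 0),
    reset: entries.reduce((max, e) => Math.max(max, e.reset), 0),
  }
}

function fakeHeaders(limit: number, remaining: number, reset: number): Headers {
  return new Headers({
    'x-ratelimit-limit': String(limit),
    'x-ratelimit-remaining': String(remaining),
    'x-ratelimit-reset': String(reset),
  })
}
```

Note the sketch reproduces the coupling the review objects to: the store is module-scoped, so whatever the previous call wrote is still visible to the next read.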
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
set -euo pipefail
echo "== Inspect module-scoped rate-limit state =="
rg -n -C3 "rateLimits|getRateLimit|updateRateLimitFromHeaders|resetRateLimit" server/utils/github.ts
echo
echo "== Inspect test coupling to previous test state =="
rg -n -C2 "previous test|updates values on subsequent calls|aggregates REST and GraphQL limits" test/unit/rateLimit.test.ts
```

Repository: flumen-dev/flumen.dev
Length of output: 2061
Make these tests state-isolated; they currently depend on execution order.
Line 28 explicitly relies on data from a previous test ("graphql still 4800 from previous test"). The `rateLimits` object in `server/utils/github.ts` is module-scoped and shared across tests, making the suite brittle and non-deterministic.
✅ Suggested direction (isolate module state per test)
```diff
-import { describe, expect, it } from 'vitest'
+import { beforeEach, describe, expect, it } from 'vitest'
 import { getRateLimit, updateRateLimitFromHeaders } from '../../server/utils/github'
+import { resetRateLimitForTest } from '../../server/utils/github'
@@
 describe('rate limit tracking', () => {
+  beforeEach(() => {
+    resetRateLimitForTest()
+  })
+
   it('aggregates REST and GraphQL limits', () => {
@@
   it('updates values on subsequent calls', () => {
     updateRateLimitFromHeaders(fakeHeaders(5000, 4900, 1000), 'rest')
+    updateRateLimitFromHeaders(fakeHeaders(5000, 4800, 1000), 'graphql')
     updateRateLimitFromHeaders(fakeHeaders(5000, 4850, 1000), 'rest')
@@
-    // REST remaining should be 4850 (latest), graphql still 4800 from previous test
+    // REST remaining should be 4850 (latest), graphql is 4800
     expect(info.remaining).toBe(4850 + 4800)
   })
```

And in server/utils/github.ts, expose a test-only reset helper:
```ts
export function resetRateLimitForTest() {
  for (const key of Object.keys(rateLimits) as Array<'rest' | 'graphql'>) {
    delete rateLimits[key]
  }
}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test/unit/rateLimit.test.ts` around lines 12 - 40, Tests share module-scoped
rateLimits in server/utils/github.ts causing order-dependent failures; add and
export a test-only reset function (e.g., resetRateLimitForTest) in that module
which clears keys on the rateLimits object, then update the test file to call
resetRateLimitForTest in a beforeEach (or at start of each it) so
updateRateLimitFromHeaders/getRateLimit run against a clean state; reference the
functions updateRateLimitFromHeaders, getRateLimit, and the new
resetRateLimitForTest when making the changes.
Actionable comments posted: 1
♻️ Duplicate comments (1)
test/unit/rateLimit.test.ts (1)
15-49: ⚠️ Potential issue | 🟠 Major

Make these tests state-independent; they currently rely on execution order.

Line 33 and Line 47 assert `USER_A` values seeded by an earlier test. That makes the suite flaky when test order changes.

Suggested fix

```diff
-import { describe, expect, it } from 'vitest'
-import { getRateLimit, updateRateLimitFromHeaders } from '../../server/utils/github'
+import { beforeEach, describe, expect, it } from 'vitest'
+import { getRateLimit, resetRateLimitForTest, updateRateLimitFromHeaders } from '../../server/utils/github'
@@
 describe('rate limit tracking', () => {
+  beforeEach(() => {
+    resetRateLimitForTest()
+  })
+
   it('aggregates REST and GraphQL limits per user', () => {
@@
   it('isolates rate limits between users', () => {
+    updateRateLimitFromHeaders(fakeHeaders(5000, 4900, 1000), 'rest', USER_A)
+    updateRateLimitFromHeaders(fakeHeaders(5000, 4800, 1100), 'graphql', USER_A)
     updateRateLimitFromHeaders(fakeHeaders(5000, 100, 2000), 'rest', USER_B)
     updateRateLimitFromHeaders(fakeHeaders(5000, 200, 2000), 'graphql', USER_B)
@@
-    expect(infoA.remaining).toBe(9700) // from previous test
+    expect(infoA.remaining).toBe(9700)
@@
   it('ignores headers without userId', () => {
+    updateRateLimitFromHeaders(fakeHeaders(5000, 4900, 1000), 'rest', USER_A)
+    updateRateLimitFromHeaders(fakeHeaders(5000, 4800, 1100), 'graphql', USER_A)
     updateRateLimitFromHeaders(fakeHeaders(5000, 0, 3000), 'rest')
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/unit/rateLimit.test.ts` around lines 15 - 49, Tests rely on shared rate-limit state (asserting USER_A values seeded by previous tests); make them state-independent by either clearing the rate-limit store before each test (add a beforeEach that calls a reset/clear function on the rate-limit store, e.g., resetRateLimits() or clearRateLimits()) or by using fresh unique user IDs per test instead of reusing USER_A/USER_B; update the tests that call updateRateLimitFromHeaders, getRateLimit and fakeHeaders to use one of these approaches so each it-block starts with a known empty state.
🧹 Nitpick comments (1)
server/utils/github.ts (1)
10-10: Add an eviction strategy for `rateLimitsPerUser` to avoid unbounded growth.

The in-memory map keeps entries forever per seen user. On long-lived instances, this can grow without bound. Consider TTL-based cleanup or size-bounded eviction.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@server/utils/github.ts` at line 10, The in-memory Map rateLimitsPerUser currently retains entries forever causing unbounded growth; update it to use an eviction strategy (e.g., TTL or size-bounded LRU) by replacing or wrapping rateLimitsPerUser with a cache that evicts stale users: implement timestamps on RateLimitInfo entries and run a periodic cleanup task to remove entries older than a configured TTL, or swap the Map for an LRU cache with a max size and eviction policy; ensure all reads/writes go through the new cached accessors (where you update/get entries) so eviction is enforced and add tests/config for TTL/max size.
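One way to act on the eviction nitpick above is a lazily-evicting TTL wrapper around the per-user map. This is a sketch under stated assumptions: the class name `TtlMap`, the injectable clock, and lazy eviction on read are illustrative choices, not the project's code, and a real deployment might prefer an off-the-shelf LRU instead.

```typescript
// Hedged sketch: a Map wrapper that drops entries older than ttlMs.
// The clock is injectable so eviction is deterministic in tests.
interface Entry<V> {
  value: V
  storedAt: number
}

class TtlMap<K, V> {
  private entries = new Map<K, Entry<V>>()

  constructor(private ttlMs: number, private now: () => number = Date.now) {}

  set(key: K, value: V): void {
    this.entries.set(key, { value, storedAt: this.now() })
  }

  get(key: K): V | undefined {
    const entry = this.entries.get(key)
    if (!entry) return undefined
    if (this.now() - entry.storedAt > this.ttlMs) {
      this.entries.delete(key) // evict stale entry lazily on read
      return undefined
    }
    return entry.value
  }

  get size(): number {
    return this.entries.size
  }
}
```

Lazy eviction keeps the change small, but entries for users who never return still linger until read; a periodic sweep or a size cap would close that gap.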
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@server/utils/github.ts`:
- Line 137: The call to updateRateLimitFromHeaders in githubFetchWithToken
currently omits the userId so the guard inside updateRateLimitFromHeaders
prevents updating rateLimitsPerUser; update the githubFetchWithToken
implementation to pass the appropriate userId (from the token context or
function argument) into updateRateLimitFromHeaders so that rateLimitsPerUser is
refreshed after each response; locate the githubFetchWithToken function and
ensure it forwards the same userId shape used elsewhere (the one checked by the
guard), and verify githubFetch paths that call githubFetchWithToken likewise
pass or infer the userId.
---
Duplicate comments:
In `@test/unit/rateLimit.test.ts`:
- Around line 15-49: Tests rely on shared rate-limit state (asserting USER_A
values seeded by previous tests); make them state-independent by either clearing
the rate-limit store before each test (add a beforeEach that calls a reset/clear
function on the rate-limit store, e.g., resetRateLimits() or clearRateLimits())
or by using fresh unique user IDs per test instead of reusing USER_A/USER_B;
update the tests that call updateRateLimitFromHeaders, getRateLimit and
fakeHeaders to use one of these approaches so each it-block starts with a known
empty state.
---
Nitpick comments:
In `@server/utils/github.ts`:
- Line 10: The in-memory Map rateLimitsPerUser currently retains entries forever
causing unbounded growth; update it to use an eviction strategy (e.g., TTL or
size-bounded LRU) by replacing or wrapping rateLimitsPerUser with a cache that
evicts stale users: implement timestamps on RateLimitInfo entries and run a
periodic cleanup task to remove entries older than a configured TTL, or swap the
Map for an LRU cache with a max size and eviction policy; ensure all
reads/writes go through the new cached accessors (where you update/get entries)
so eviction is enforced and add tests/config for TTL/max size.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 910d3c28-b44b-4854-b9ee-ffda4dc08b9e
📒 Files selected for processing (3)
- server/api/github/rate-limit.get.ts
- server/utils/github.ts
- test/unit/rateLimit.test.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- server/api/github/rate-limit.get.ts
Summary
Related issue(s)
Closes #123
Type of change
Checklist
Summary by CodeRabbit
New Features
Refactor
Tests