
feat: GitHub API rate limit counter #237

Merged
Flo0806 merged 7 commits into main from feat/api-usage on Mar 15, 2026

Conversation

Flo0806 (Contributor) commented Mar 15, 2026

Summary

  • Implement a counter that shows the current GitHub API rate limit state

Related issue(s)

Closes #123

Type of change

  • Bug fix
  • Feature
  • Refactor
  • Docs
  • CI

Checklist

  • Tests added/updated
  • i18n keys added/updated (if needed)
  • No breaking changes

Summary by CodeRabbit

  • New Features

    • Live GitHub API rate-limit indicator in the sidebar (shown when logged in and sidebar expanded), with tooltip, reset timing, and auto-refresh (~10s)
    • New server endpoint exposes current rate-limit data; localized UI strings added (English + German)
  • Refactor

    • Polling cadence shortened (20s → 15s); check-run polling simplified and manual refetch exposed; CI checks refresh when items update
  • Tests

    • Unit tests for rate-limit aggregation and per-user isolation

coderabbitai bot (Contributor) commented Mar 15, 2026

Caution

Review failed

The pull request is closed.

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: e9540a5d-7fd8-4348-b20a-6e305f76852f

📥 Commits

Reviewing files that changed from the base of the PR and between 0a01ce4 and f455a7e.

📒 Files selected for processing (2)
  • server/utils/github-graphql.ts
  • server/utils/github.ts

📝 Walkthrough

Walkthrough

Adds GitHub API rate-limit support: client RateLimitIndicator component with 10s polling, server per-user rate-limit cache and endpoint, header-parsing hooks in GitHub utilities, i18n keys/schema updates, unit tests, and small polling adjustments elsewhere.

Changes

Cohort / File(s) Summary
Rate Limit UI
app/components/ui/RateLimitIndicator.vue, app/components/ui/SideBar.vue
New Vue 3 component polling /api/github/rate-limit every 10s showing usage, progress bar, and reset info; integrated into sidebar when logged in and expanded.
Server: GitHub utils & API
server/utils/github.ts, server/utils/github-graphql.ts, server/api/github/rate-limit.get.ts
Adds per-user in-memory rate-limit cache and RateLimitInfo; updateRateLimitFromHeaders updates cache from REST/GraphQL responses; new endpoint seeds cache by calling GitHub when needed.
Composables / Polling
app/composables/useCheckRuns.ts, app/composables/useWorkItemPolling.ts
Expose refetch from useCheckRuns; simplify polling watcher to watch pending only. Reduce work-item polling interval from 20_000ms to 15_000ms.
Internationalization
i18n/locales/en.json, i18n/locales/de.json, i18n/schema.json
Adds rateLimit localization keys and schema entries (label, resetsIn, resetsNow, tooltip).
Tests
test/unit/rateLimit.test.ts
New unit tests for rate-limit aggregation, per-user isolation, unknown-user behavior, and ignoring headers without user id.
Work Item UI
app/components/work-item/WorkItemHeader.vue
Uses refetch from useCheckRuns and watches workItem.updatedAt to trigger CI check refresh.
Misc / Manifest
package.json
Minor one-line manifest change.
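Based on the walkthrough above, the per-user in-memory cache could be sketched roughly as follows. This is a hedged reconstruction: the names `updateRateLimitFromHeaders` and `getRateLimit` come from the PR, but the exact signatures, the aggregation rules, and the `Map` shape are assumptions, not the actual implementation.

```typescript
type Source = 'rest' | 'graphql'

interface RateLimitInfo {
  limit: number
  remaining: number
  reset: number // epoch seconds from x-ratelimit-reset
}

// Assumption: one entry per user, one sub-entry per API surface.
const rateLimitsPerUser = new Map<string, Partial<Record<Source, RateLimitInfo>>>()

function updateRateLimitFromHeaders(headers: Headers, source: Source, userId?: string): void {
  if (!userId) return // ignore responses we cannot attribute to a user
  const limit = Number(headers.get('x-ratelimit-limit') ?? 0)
  if (!limit) return // empty or malformed headers are ignored
  const entry = rateLimitsPerUser.get(userId) ?? {}
  entry[source] = {
    limit,
    remaining: Number(headers.get('x-ratelimit-remaining') ?? 0),
    reset: Number(headers.get('x-ratelimit-reset') ?? 0),
  }
  rateLimitsPerUser.set(userId, entry)
}

function getRateLimit(userId: string): RateLimitInfo {
  const entry = rateLimitsPerUser.get(userId) ?? {}
  const infos = Object.values(entry).filter((i): i is RateLimitInfo => !!i)
  return {
    limit: infos.reduce((sum, i) => sum + i.limit, 0),
    remaining: infos.reduce((sum, i) => sum + i.remaining, 0),
    reset: Math.max(0, ...infos.map(i => i.reset)), // latest reset wins
  }
}
```

Summing REST and GraphQL into one `limit`/`remaining` pair matches the "rate-limit aggregation" the unit tests describe; keying the map by user id matches the "per-user isolation" tests.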

Sequence Diagram(s)

sequenceDiagram
    participant Client as RateLimitIndicator (Client)
    participant Endpoint as /api/github/rate-limit (Server)
    participant Cache as In-Memory Cache
    participant GitHub as GitHub API

    Client->>Endpoint: GET /api/github/rate-limit
    activate Endpoint
    Endpoint->>Cache: check cached rate limit for user
    alt Cache missing or low
        Endpoint->>GitHub: GET REST /rate_limit
        activate GitHub
        GitHub-->>Endpoint: REST response + headers
        deactivate GitHub
        Endpoint->>Cache: updateRateLimitFromHeaders(headers, 'rest', userId)
        Endpoint->>GitHub: GraphQL query for rate limit
        activate GitHub
        GitHub-->>Endpoint: GraphQL response + headers
        deactivate GitHub
        Endpoint->>Cache: updateRateLimitFromHeaders(headers, 'graphql', userId)
    end
    Endpoint->>Cache: getRateLimit(userId)
    Cache-->>Endpoint: {limit, remaining, reset}
    Endpoint-->>Client: {limit, remaining, reset}
    deactivate Endpoint
    Note over Client,Endpoint: Repeats every 10s
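The "reset timing" shown in the indicator's tooltip (keyed by the `rateLimit.resetsIn` / `rateLimit.resetsNow` i18n strings mentioned above) could be computed client-side with a helper along these lines. The function name and exact wording are assumptions for illustration, not the component's actual code:

```typescript
// Hypothetical helper: turn the x-ratelimit-reset epoch (seconds) into a
// human-readable label. The i18n keys exist in the PR; this logic is a sketch.
function formatReset(resetEpochSeconds: number, nowMs: number = Date.now()): string {
  const deltaSec = resetEpochSeconds - Math.floor(nowMs / 1000)
  if (deltaSec <= 0) return 'resets now' // would use t('rateLimit.resetsNow')
  const minutes = Math.ceil(deltaSec / 60)
  return `resets in ${minutes} min` // would use t('rateLimit.resetsIn', { minutes })
}
```

Since the component re-polls every ~10s, recomputing this label on each fetch keeps it fresh without a separate countdown timer.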

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes


"I nibble bytes beneath the screen's glow,
Counting headers where the rate limits show.
Cache in my burrow, polls hop on cue,
Sidebar lights up — the numbers true.
Hop, refresh, repeat — a rabbit's view." 🐇✨

🚥 Pre-merge checks | ✅ 2 | ❌ 3

❌ Failed checks (3 warnings)

  • Linked Issues check — ⚠️ Warning
    Explanation: The PR implements a rate limit counter, but the linked issue #123 requires star rating functionality. The code changes do not address the star rating requirements.
    Resolution: The PR should implement GitHub star rating functionality as specified in issue #123, including star/unstar actions, state persistence, and comprehensive tests.
  • Out of Scope Changes check — ⚠️ Warning
    Explanation: The PR adds rate limit tracking infrastructure (components, composables, API routes, utilities) that is not mentioned in the linked issue #123 about star rating functionality.
    Resolution: Clarify whether the rate limit counter is required for star rating, or remove these changes if unrelated to issue #123. The current implementation appears to be out of scope for the linked issue.
  • Docstring Coverage — ⚠️ Warning
    Explanation: Docstring coverage is 12.50%, which is below the required threshold of 80.00%.
    Resolution: Write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (2 passed)
  • Title check — ✅ Passed: The PR title accurately describes the main feature: implementing a GitHub API rate limit counter.
  • Description check — ✅ Passed: The PR description follows the template structure with all required sections (Summary, Related issues, Type of change, Checklist) properly filled out.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
  • 📝 Generate docstrings (stacked PR)
  • 📝 Generate docstrings (commit on current branch)
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch feat/api-usage
📝 Coding Plan
  • Generate coding plan for human review comments

Comment @coderabbitai help to get the list of available commands and usage tips.

coderabbitai bot (Contributor) left a comment

Actionable comments posted: 2

🧹 Nitpick comments (4)
server/api/github/rate-limit.get.ts (2)

7-7: Simplify the redundant condition.

The check info.limit === 0 is already covered by info.limit <= 5000, making the first condition redundant.

Suggested fix
-  if (info.limit === 0 || info.limit <= 5000) {
+  if (info.limit <= 5000) {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/api/github/rate-limit.get.ts` at line 7, The conditional in
server/api/github/rate-limit.get.ts is redundant: replace the combined check
"info.limit === 0 || info.limit <= 5000" with the simpler "info.limit <= 5000"
(or just remove the "info.limit === 0" part) inside the function/method that
evaluates the GitHub rate limit (the code using the local variable `info`), so
only the <= 5000 comparison remains.

15-21: REST rate limit update is redundant.

githubFetchWithToken already calls updateRateLimitFromHeaders(response.headers) internally (line 134 in github.ts), so the REST rate limit is updated automatically after the /rate_limit fetch. Only the GraphQL rate limit needs to be manually seeded from the response body.

Suggested fix
     const core = data.resources.core
     const graphql = data.resources.graphql
-    updateRateLimitFromHeaders(new Headers({
-      'x-ratelimit-limit': String(core.limit),
-      'x-ratelimit-remaining': String(core.remaining),
-      'x-ratelimit-reset': String(core.reset),
-    }), 'rest')
+    // REST is already updated by githubFetchWithToken; only GraphQL needs manual seeding
     updateRateLimitFromHeaders(new Headers({
       'x-ratelimit-limit': String(graphql.limit),
       'x-ratelimit-remaining': String(graphql.remaining),
       'x-ratelimit-reset': String(graphql.reset),
     }), 'graphql')
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/api/github/rate-limit.get.ts` around lines 15 - 21, Remove the
redundant REST update call that reconstructs headers from data.resources.core
because githubFetchWithToken already updates REST limits; instead seed only the
GraphQL rate limit using the response body. Replace/remove the
updateRateLimitFromHeaders invocation that uses core and call
updateRateLimitFromHeaders with headers built from data.resources.graphql (use
the existing graphql variable) and the 'graphql' source, leaving
githubFetchWithToken to handle REST updates.
server/utils/github.ts (2)

209-259: Same inconsistency applies to githubCachedFetchAllWithToken.

This function also makes paginated requests without updating the rate limit cache. Apply the same fix as githubFetchAllWithToken for consistency.

Suggested fix
   if (etag) {
     await storage.setItem(cacheKey, { etag, data: items, pageCount } satisfies CacheEntry<T[]>)
   }

+  updateRateLimitFromHeaders(firstResponse.headers)
+
   return { data: items, status: 200, headers: firstResponse.headers }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/utils/github.ts` around lines 209 - 259, githubCachedFetchAllWithToken
is not updating the GitHub rate-limit cache when making requests (firstResponse
and the paginated fetches via fetchGitHub), causing inconsistent rate-limit
state; after each network call (after receiving firstResponse and after each res
in remainingPages.map) call the same rate-limit update helper used in
githubFetchAllWithToken (e.g. updateRateLimitCache or the project’s rate-limit
updater) with the response.headers and userId so the rate-limit store is kept in
sync. Ensure you add the call both right after processing firstResponse and
inside the map/loop that handles each page response.

140-164: Rate limit tracking is inconsistent across fetch functions.

githubFetchAllWithToken makes multiple paginated requests but doesn't call updateRateLimitFromHeaders. This creates inconsistency with githubFetchWithToken and githubCachedFetchWithToken, which do track rate limits.

Consider updating rate limits at least once after pagination completes:

Suggested fix
   const remainingPages = parseRemainingPages(firstResponse.headers.get('link'))
   if (remainingPages.length) {
     const pages = await Promise.all(
       remainingPages.map(async (pageUrl) => {
         const res = await fetchGitHub(pageUrl, headers, endpoint)
         return res.json() as Promise<T[]>
       }),
     )
     for (const page of pages) items.push(...page)
   }

+  updateRateLimitFromHeaders(firstResponse.headers)
+
   return { data: items, status: 200, headers: firstResponse.headers }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/utils/github.ts` around lines 140 - 164, githubFetchAllWithToken
currently fetches multiple paginated responses but never updates rate limit
tracking; call updateRateLimitFromHeaders with the response headers to keep
behavior consistent with githubFetchWithToken/githubCachedFetchWithToken. After
getting firstResponse, invoke updateRateLimitFromHeaders(firstResponse.headers)
and also call updateRateLimitFromHeaders(res.headers) inside the
remainingPages.map for each fetchGitHub(pageUrl, headers, endpoint) response (or
after Promise.all once you have all page responses iterate their headers and
update), ensuring githubFetchAllWithToken updates rate limits from each response
header it receives.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@server/utils/github.ts`:
- Line 10: The module-level rateLimits singleton (rateLimits) currently mixes
rate limit data across all users; update it to be keyed by userId (e.g., change
its type to a map from userId to per-route RateLimitInfo maps) and update all
reads/writes to index by the current user's id (use the authenticated user's id
when storing/retrieving rate limit entries) so each user has isolated rate limit
state; if multi-user support is not required, instead add a clear comment above
the rateLimits declaration documenting that it is intentionally
global/single-user for dev use.

In `@test/unit/rateLimit.test.ts`:
- Around line 12-40: Tests share module-scoped rateLimits in
server/utils/github.ts causing order-dependent failures; add and export a
test-only reset function (e.g., resetRateLimitForTest) in that module which
clears keys on the rateLimits object, then update the test file to call
resetRateLimitForTest in a beforeEach (or at start of each it) so
updateRateLimitFromHeaders/getRateLimit run against a clean state; reference the
functions updateRateLimitFromHeaders, getRateLimit, and the new
resetRateLimitForTest when making the changes.

---

Nitpick comments:
In `@server/api/github/rate-limit.get.ts`:
- Line 7: The conditional in server/api/github/rate-limit.get.ts is redundant:
replace the combined check "info.limit === 0 || info.limit <= 5000" with the
simpler "info.limit <= 5000" (or just remove the "info.limit === 0" part) inside
the function/method that evaluates the GitHub rate limit (the code using the
local variable `info`), so only the <= 5000 comparison remains.
- Around line 15-21: Remove the redundant REST update call that reconstructs
headers from data.resources.core because githubFetchWithToken already updates
REST limits; instead seed only the GraphQL rate limit using the response body.
Replace/remove the updateRateLimitFromHeaders invocation that uses core and call
updateRateLimitFromHeaders with headers built from data.resources.graphql (use
the existing graphql variable) and the 'graphql' source, leaving
githubFetchWithToken to handle REST updates.

In `@server/utils/github.ts`:
- Around line 209-259: githubCachedFetchAllWithToken is not updating the GitHub
rate-limit cache when making requests (firstResponse and the paginated fetches
via fetchGitHub), causing inconsistent rate-limit state; after each network call
(after receiving firstResponse and after each res in remainingPages.map) call
the same rate-limit update helper used in githubFetchAllWithToken (e.g.
updateRateLimitCache or the project’s rate-limit updater) with the
response.headers and userId so the rate-limit store is kept in sync. Ensure you
add the call both right after processing firstResponse and inside the map/loop
that handles each page response.
- Around line 140-164: githubFetchAllWithToken currently fetches multiple
paginated responses but never updates rate limit tracking; call
updateRateLimitFromHeaders with the response headers to keep behavior consistent
with githubFetchWithToken/githubCachedFetchWithToken. After getting
firstResponse, invoke updateRateLimitFromHeaders(firstResponse.headers) and also
call updateRateLimitFromHeaders(res.headers) inside the remainingPages.map for
each fetchGitHub(pageUrl, headers, endpoint) response (or after Promise.all once
you have all page responses iterate their headers and update), ensuring
githubFetchAllWithToken updates rate limits from each response header it
receives.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 63eea6fc-a9cb-454d-b11b-353007146a4a

📥 Commits

Reviewing files that changed from the base of the PR and between 6d05f8a and d19ac69.

📒 Files selected for processing (11)
  • app/components/ui/RateLimitIndicator.vue
  • app/components/ui/SideBar.vue
  • app/composables/useCheckRuns.ts
  • app/composables/useWorkItemPolling.ts
  • i18n/locales/de.json
  • i18n/locales/en.json
  • i18n/schema.json
  • server/api/github/rate-limit.get.ts
  • server/utils/github-graphql.ts
  • server/utils/github.ts
  • test/unit/rateLimit.test.ts

Comment on lines +12 to +40
describe('rate limit tracking', () => {
  it('aggregates REST and GraphQL limits', () => {
    updateRateLimitFromHeaders(fakeHeaders(5000, 4900, 1000), 'rest')
    updateRateLimitFromHeaders(fakeHeaders(5000, 4800, 1100), 'graphql')

    const info = getRateLimit()
    expect(info.limit).toBe(10000)
    expect(info.remaining).toBe(9700)
    expect(info.reset).toBe(1100)
  })

  it('updates values on subsequent calls', () => {
    updateRateLimitFromHeaders(fakeHeaders(5000, 4900, 1000), 'rest')
    updateRateLimitFromHeaders(fakeHeaders(5000, 4850, 1000), 'rest')

    const info = getRateLimit()
    // REST remaining should be 4850 (latest), graphql still 4800 from previous test
    expect(info.remaining).toBe(4850 + 4800)
  })

  it('ignores headers without valid limit', () => {
    updateRateLimitFromHeaders(fakeHeaders(5000, 4000, 2000), 'rest')
    updateRateLimitFromHeaders(new Headers(), 'rest')

    // Empty headers have limit=0, so they're ignored — previous values stay
    const info = getRateLimit()
    expect(info.limit).toBeGreaterThan(0)
  })
})

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "== Inspect module-scoped rate-limit state =="
rg -n -C3 "rateLimits|getRateLimit|updateRateLimitFromHeaders|resetRateLimit" server/utils/github.ts

echo
echo "== Inspect test coupling to previous test state =="
rg -n -C2 "previous test|updates values on subsequent calls|aggregates REST and GraphQL limits" test/unit/rateLimit.test.ts

Repository: flumen-dev/flumen.dev

Length of output: 2061


Make these tests state-isolated; they currently depend on execution order.

Line 28 explicitly relies on data from a previous test (graphql still 4800 from previous test). The rateLimits object in server/utils/github.ts is module-scoped and shared across tests, making the suite brittle and non-deterministic.

✅ Suggested direction (isolate module state per test)
-import { describe, expect, it } from 'vitest'
+import { beforeEach, describe, expect, it } from 'vitest'
 import { getRateLimit, updateRateLimitFromHeaders } from '../../server/utils/github'
+import { resetRateLimitForTest } from '../../server/utils/github'
@@
 describe('rate limit tracking', () => {
+  beforeEach(() => {
+    resetRateLimitForTest()
+  })
+
   it('aggregates REST and GraphQL limits', () => {
@@
   it('updates values on subsequent calls', () => {
     updateRateLimitFromHeaders(fakeHeaders(5000, 4900, 1000), 'rest')
+    updateRateLimitFromHeaders(fakeHeaders(5000, 4800, 1000), 'graphql')
     updateRateLimitFromHeaders(fakeHeaders(5000, 4850, 1000), 'rest')
@@
-    // REST remaining should be 4850 (latest), graphql still 4800 from previous test
+    // REST remaining should be 4850 (latest), graphql is 4800
     expect(info.remaining).toBe(4850 + 4800)
   })

And in server/utils/github.ts, expose a test-only reset helper:

export function resetRateLimitForTest() {
  for (const key of Object.keys(rateLimits) as Array<'rest' | 'graphql'>) {
    delete rateLimits[key]
  }
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/unit/rateLimit.test.ts` around lines 12 - 40, Tests share module-scoped
rateLimits in server/utils/github.ts causing order-dependent failures; add and
export a test-only reset function (e.g., resetRateLimitForTest) in that module
which clears keys on the rateLimits object, then update the test file to call
resetRateLimitForTest in a beforeEach (or at start of each it) so
updateRateLimitFromHeaders/getRateLimit run against a clean state; reference the
functions updateRateLimitFromHeaders, getRateLimit, and the new
resetRateLimitForTest when making the changes.

coderabbitai bot (Contributor) left a comment

Actionable comments posted: 1

♻️ Duplicate comments (1)
test/unit/rateLimit.test.ts (1)

15-49: ⚠️ Potential issue | 🟠 Major

Make these tests state-independent; they currently rely on execution order.

Line 33 and Line 47 assert USER_A values seeded by an earlier test. That makes the suite flaky when test order changes.

Suggested fix
-import { describe, expect, it } from 'vitest'
-import { getRateLimit, updateRateLimitFromHeaders } from '../../server/utils/github'
+import { beforeEach, describe, expect, it } from 'vitest'
+import { getRateLimit, resetRateLimitForTest, updateRateLimitFromHeaders } from '../../server/utils/github'
@@
 describe('rate limit tracking', () => {
+  beforeEach(() => {
+    resetRateLimitForTest()
+  })
+
   it('aggregates REST and GraphQL limits per user', () => {
@@
   it('isolates rate limits between users', () => {
+    updateRateLimitFromHeaders(fakeHeaders(5000, 4900, 1000), 'rest', USER_A)
+    updateRateLimitFromHeaders(fakeHeaders(5000, 4800, 1100), 'graphql', USER_A)
     updateRateLimitFromHeaders(fakeHeaders(5000, 100, 2000), 'rest', USER_B)
     updateRateLimitFromHeaders(fakeHeaders(5000, 200, 2000), 'graphql', USER_B)
@@
-    expect(infoA.remaining).toBe(9700) // from previous test
+    expect(infoA.remaining).toBe(9700)
@@
   it('ignores headers without userId', () => {
+    updateRateLimitFromHeaders(fakeHeaders(5000, 4900, 1000), 'rest', USER_A)
+    updateRateLimitFromHeaders(fakeHeaders(5000, 4800, 1100), 'graphql', USER_A)
     updateRateLimitFromHeaders(fakeHeaders(5000, 0, 3000), 'rest')
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/unit/rateLimit.test.ts` around lines 15 - 49, Tests rely on shared
rate-limit state (asserting USER_A values seeded by previous tests); make them
state-independent by either clearing the rate-limit store before each test (add
a beforeEach that calls a reset/clear function on the rate-limit store, e.g.,
resetRateLimits() or clearRateLimits()) or by using fresh unique user IDs per
test instead of reusing USER_A/USER_B; update the tests that call
updateRateLimitFromHeaders, getRateLimit and fakeHeaders to use one of these
approaches so each it-block starts with a known empty state.
🧹 Nitpick comments (1)
server/utils/github.ts (1)

10-10: Add an eviction strategy for rateLimitsPerUser to avoid unbounded growth.

The in-memory map keeps entries forever per seen user. On long-lived instances, this can grow without bound. Consider TTL-based cleanup or size-bounded eviction.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/utils/github.ts` at line 10, The in-memory Map rateLimitsPerUser
currently retains entries forever causing unbounded growth; update it to use an
eviction strategy (e.g., TTL or size-bounded LRU) by replacing or wrapping
rateLimitsPerUser with a cache that evicts stale users: implement timestamps on
RateLimitInfo entries and run a periodic cleanup task to remove entries older
than a configured TTL, or swap the Map for an LRU cache with a max size and
eviction policy; ensure all reads/writes go through the new cached accessors
(where you update/get entries) so eviction is enforced and add tests/config for
TTL/max size.
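A TTL-based variant along the lines of the eviction suggestion above could look like this. The `TtlMap` name, the lazy-eviction-on-read design, and the 30-minute TTL are all assumptions for illustration, not part of the PR:

```typescript
// Sketch of a TTL-evicting wrapper the rateLimitsPerUser map could use.
interface Stamped<T> {
  value: T
  updatedAt: number // ms timestamp of the last write
}

class TtlMap<K, V> {
  private map = new Map<K, Stamped<V>>()

  constructor(private ttlMs: number) {}

  set(key: K, value: V, now: number = Date.now()): void {
    this.map.set(key, { value, updatedAt: now })
  }

  get(key: K, now: number = Date.now()): V | undefined {
    const entry = this.map.get(key)
    if (!entry) return undefined
    if (now - entry.updatedAt > this.ttlMs) {
      this.map.delete(key) // lazy eviction on read; no background timer needed
      return undefined
    }
    return entry.value
  }
}

// Hypothetical usage: evict users not seen for 30 minutes.
const rateLimitCache = new TtlMap<string, { remaining: number }>(30 * 60 * 1000)
```

Lazy eviction on read avoids a periodic cleanup task, at the cost of stale entries lingering until the next lookup; a size-bounded LRU would be the alternative if memory pressure matters more than simplicity.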
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@server/utils/github.ts`:
- Line 137: The call to updateRateLimitFromHeaders in githubFetchWithToken
currently omits the userId so the guard inside updateRateLimitFromHeaders
prevents updating rateLimitsPerUser; update the githubFetchWithToken
implementation to pass the appropriate userId (from the token context or
function argument) into updateRateLimitFromHeaders so that rateLimitsPerUser is
refreshed after each response; locate the githubFetchWithToken function and
ensure it forwards the same userId shape used elsewhere (the one checked by the
guard), and verify githubFetch paths that call githubFetchWithToken likewise
pass or infer the userId.

---

Duplicate comments:
In `@test/unit/rateLimit.test.ts`:
- Around line 15-49: Tests rely on shared rate-limit state (asserting USER_A
values seeded by previous tests); make them state-independent by either clearing
the rate-limit store before each test (add a beforeEach that calls a reset/clear
function on the rate-limit store, e.g., resetRateLimits() or clearRateLimits())
or by using fresh unique user IDs per test instead of reusing USER_A/USER_B;
update the tests that call updateRateLimitFromHeaders, getRateLimit and
fakeHeaders to use one of these approaches so each it-block starts with a known
empty state.

---

Nitpick comments:
In `@server/utils/github.ts`:
- Line 10: The in-memory Map rateLimitsPerUser currently retains entries forever
causing unbounded growth; update it to use an eviction strategy (e.g., TTL or
size-bounded LRU) by replacing or wrapping rateLimitsPerUser with a cache that
evicts stale users: implement timestamps on RateLimitInfo entries and run a
periodic cleanup task to remove entries older than a configured TTL, or swap the
Map for an LRU cache with a max size and eviction policy; ensure all
reads/writes go through the new cached accessors (where you update/get entries)
so eviction is enforced and add tests/config for TTL/max size.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 910d3c28-b44b-4854-b9ee-ffda4dc08b9e

📥 Commits

Reviewing files that changed from the base of the PR and between d19ac69 and ed59966.

📒 Files selected for processing (3)
  • server/api/github/rate-limit.get.ts
  • server/utils/github.ts
  • test/unit/rateLimit.test.ts
🚧 Files skipped from review as they are similar to previous changes (1)
  • server/api/github/rate-limit.get.ts

@Flo0806 Flo0806 merged commit 747efc2 into main Mar 15, 2026
8 of 9 checks passed
@Flo0806 Flo0806 deleted the feat/api-usage branch March 15, 2026 10:08