fix: reduce excessive file writes from account state updates #460
mrm007 wants to merge 2 commits into NoeFabris:main
Conversation
Walkthrough
The PR debounces and reduces account-file writes by introducing snapshot-based persistence in AccountManager: a private …
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks: ✅ Passed checks (5 passed)
Greptile Summary
Eliminated excessive file writes by removing the unconditional save on every API request, implementing snapshot-based deduplication, and increasing the debounce from 1s to 5s. The changes directly address issue #436 by reducing unnecessary disk I/O, file-watcher thrashing, and lockfile contention during normal operation.
Confidence Score: 5/5
Important Files Changed
Flowchart

```mermaid
flowchart TD
  A[API Request Success] --> B[markAccountUsed updates lastUsed]
  B --> C[requestSaveToDisk called]
  C --> D{savePending?}
  D -->|Yes| E[Early return - debounce active]
  D -->|No| F[Set savePending=true]
  F --> G[Start 5s timer]
  G --> H[executeSave after 5s]
  H --> I[Build storage state]
  I --> J[JSON.stringify snapshot]
  J --> K{snapshot != lastSavedSnapshot?}
  K -->|Yes| L[saveAccounts to disk]
  L --> M[Update lastSavedSnapshot]
  K -->|No| N[Skip write - no changes]
  M --> O[Resolve pending promises]
  N --> O
```
Last reviewed commit: f7d1145
Actionable comments posted: 1
📜 Review details
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge base: Disabled due to data retention organization setting
📒 Files selected for processing (3)
- src/plugin.ts
- src/plugin/accounts.test.ts
- src/plugin/accounts.ts
🧰 Additional context used
🧬 Code graph analysis (1)
src/plugin/accounts.ts (1)
src/plugin/storage.ts (2)
AccountStorageV4 (217-225), saveAccounts (693-719)
🔇 Additional comments (6)
src/plugin.ts (1)
2331-2332: Success-path persistence placement looks right.
Ties lastUsed updates to successful responses without pre-emptive saves.
src/plugin/accounts.ts (3)
312-316: Snapshot state tracking is a good addition.
Sets the stage for write deduplication.
977-1010: Nice extraction of storage state construction.
Centralizing this avoids drift between snapshot comparison and actual persistence.
1016-1045: Debounce + snapshot compare look solid.
The 5s delay and snapshot guard should significantly cut redundant writes.
src/plugin/accounts.test.ts (2)
3-5: Test setup now mirrors the public storage API.
Mocking `saveAccounts` matches the new persistence path.
1069-1170: Debounce timing + no-change write test are well aligned.
The 5.5s window and snapshot-skip test cover the new persistence behavior.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/plugin/accounts.ts`:
- Around line 1012-1014: saveToDisk() writes the storage state but doesn't
update the in-memory lastSavedSnapshot, causing the debounced saver to think the
state changed; fix by capturing the snapshot from this.buildStorageState(),
await saveAccounts(snapshot), and then set this.lastSavedSnapshot = snapshot (or
a deep-cloned/serialized form used by your comparator) so subsequent debounced
saves correctly detect no-op changes; update the saveToDisk method (and any
equivalent direct-save paths) to perform this assignment only after the await
completes successfully.
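The shape this prompt asks for can be sketched as follows. `saveToDisk`, `buildStorageState`, `saveAccounts`, and `lastSavedSnapshot` are the names used in the review; the `SaverSketch` wrapper and its bodies are illustrative assumptions, not the plugin's real code:

```typescript
// Illustrative shape of the suggested saveToDisk fix: update the in-memory
// snapshot only after the write succeeds, so the debounced saver later sees
// an unchanged snapshot and correctly treats the save as a no-op.
type Storage = Record<string, unknown>;

class SaverSketch {
  lastSavedSnapshot: string | null = null;

  constructor(
    private buildStorageState: () => Storage,
    private saveAccounts: (s: Storage) => Promise<void>,
  ) {}

  async saveToDisk(): Promise<void> {
    const state = this.buildStorageState();
    await this.saveAccounts(state);                 // may throw -> snapshot untouched
    this.lastSavedSnapshot = JSON.stringify(state); // assigned only after success
  }
}
```

Placing the assignment after the `await` means a failed write never records a snapshot it did not actually persist.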
- Remove unconditional requestSaveToDisk() on every API request (line 1702)
- Add snapshot deduplication to skip no-op writes
- Increase debounce from 1s to 5s for rate-limit storm resilience
- Move save trigger to after markAccountUsed() where lastUsed is persisted

Co-authored-by: Amp <amp@ampcode.com>
Amp-Thread-ID: https://ampcode.com/threads/T-019c6dae-36aa-7524-849b-d8c515ba0cd1
35d5149 to a4ac1e4
🧹 Nitpick comments (2)
src/plugin/accounts.test.ts (1)
1146-1170: Good coverage for the snapshot dedup path.
The test correctly validates the core optimization: a second `requestSaveToDisk()` with no intervening state mutation is elided. Consider adding a complementary test that verifies a write does happen when state changes between two debounced saves (e.g., call `manager.markAccountUsed(0)` between the two requests) to guard against over-aggressive dedup.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/plugin/accounts.test.ts` around lines 1146-1170, add a complementary test to ensure a save occurs when account state changes between two debounced requests: create an AccountManager with a stored AccountStorageV4, mock saveAccounts, call manager.requestSaveToDisk(), advance timers to trigger the first save, then call manager.requestSaveToDisk() again but mutate state with manager.markAccountUsed(0) before advancing timers the second time, and assert saveAccounts was called again; reference AccountManager, requestSaveToDisk, markAccountUsed, saveAccounts, vi.useFakeTimers and vi.advanceTimersByTimeAsync to locate the relevant test scaffolding.
src/plugin/accounts.ts (1)
977-1010: `buildStorageState()` passes mutable sub-objects by reference.
Line 992 embeds the live `a.rateLimitResetTimes` object into the returned state. Between `JSON.stringify(state)` (snapshot) and the async `await saveAccounts(state)` in `executeSave`, another code path could mutate those rate-limit maps, causing the on-disk write to differ from the captured snapshot string. In practice this is low-risk because: (a) the merge inside `saveAccounts` already means the snapshot won't exactly reflect on-disk state, and (b) mutations during the narrow async window are unlikely. Still, a shallow copy (`{ ...a.rateLimitResetTimes }`) would make the snapshot strictly consistent with the persisted data.
Shallow-copy mutable sub-objects:

```diff
- rateLimitResetTimes: Object.keys(a.rateLimitResetTimes).length > 0 ? a.rateLimitResetTimes : undefined,
+ rateLimitResetTimes: Object.keys(a.rateLimitResetTimes).length > 0 ? { ...a.rateLimitResetTimes } : undefined,
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/plugin/accounts.ts` around lines 977 - 1010, buildStorageState() currently returns references to mutable sub-objects (e.g., a.rateLimitResetTimes, a.cachedQuota, a.fingerprintHistory) which can be mutated after JSON.stringify(state) but before awaiting saveAccounts(state); shallow-copy these objects when building the returned AccountStorageV4 (for each account map/record like rateLimitResetTimes and cachedQuota and arrays like fingerprintHistory) so the snapshot passed to executeSave/saveAccounts is stable and won't reflect concurrent mutations — update the mapping in buildStorageState to spread or slice these fields into new objects/arrays before returning the state.
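A tiny self-contained example of the hazard described here; it assumes nothing about the real code beyond the spread-copy idea:

```typescript
// Demonstrates the reference-vs-copy hazard: if the built state embeds a live
// object, a mutation after JSON.stringify (the snapshot) but before the async
// write changes what actually lands on disk. The key names are made up.
const live: Record<string, number> = { modelA: 111 };

// By reference: snapshot and later-serialized state can diverge.
const byRef = { rateLimitResetTimes: live };
const refSnapshot = JSON.stringify(byRef);
live.modelB = 222; // concurrent mutation during the async window
const refWritten = JSON.stringify(byRef); // now differs from refSnapshot

// Shallow copy: the captured state is immune to later mutations.
const byCopy = { rateLimitResetTimes: { ...live } };
const copySnapshot = JSON.stringify(byCopy);
live.modelC = 333; // further mutation has no effect on byCopy
const copyWritten = JSON.stringify(byCopy); // identical to copySnapshot
```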
📒 Files selected for processing (3)
- src/plugin.ts
- src/plugin/accounts.test.ts
- src/plugin/accounts.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- src/plugin.ts
🧰 Additional context used
🧬 Code graph analysis (2)
src/plugin/accounts.test.ts (1)
src/plugin/storage.ts (2)
saveAccounts (693-719), AccountStorageV4 (217-225)
src/plugin/accounts.ts (1)
src/plugin/storage.ts (2)
AccountStorageV4 (217-225), saveAccounts (693-719)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Greptile Review
🔇 Additional comments (3)
src/plugin/accounts.ts (2)
1037-1057: Snapshot dedup in `executeSave` looks correct.
Building the state once, stringifying for comparison, and reusing the same `state` object for `saveAccounts` avoids the stale-snapshot race that would occur if state were captured at two different times. The `lastSavedSnapshot` update happens only after a successful write, so a failed save won't suppress the retry — good.
1018-1026: Debounce increase from 1s → 5s is reasonable for the stated goal.
One thing to note: `requestSaveToDisk` guards with `savePending` but never clears `saveTimeout` on early return. Since `savePending` is set to `true` before scheduling and reset to `false` at the start of `executeSave`, calling `requestSaveToDisk` while an execution is in flight (after `executeSave` starts but before it resolves) would schedule a second timer. This is pre-existing behavior and unlikely to cause problems in practice, just something to be aware of.
src/plugin/accounts.test.ts (1)
1069-1144: Mock migration and timing adjustments look good.
Switching from spying on `saveToDisk` to mocking the public `saveAccounts` function is a better approach — it tests the actual persistence behavior without coupling to internal method structure. The 5500ms timers correctly exceed the new 5000ms debounce window.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@src/plugin/accounts.ts`:
- Around line 1012-1016: The saveToDisk change correctly updates
lastSavedSnapshot after writing, but ensure it only updates on successful save
and captures the serialized state consistently: in saveToDisk() call
buildStorageState() to get state, await saveAccounts(state) inside a try/catch,
and on success set this.lastSavedSnapshot = JSON.stringify(state); on error
rethrow or log and do not update lastSavedSnapshot; reference the saveToDisk,
buildStorageState, saveAccounts, and lastSavedSnapshot symbols when making this
change.
…, add test for state-change write

Amp-Thread-ID: https://ampcode.com/threads/T-019c6dae-36aa-7524-849b-d8c515ba0cd1
Co-authored-by: Amp <amp@ampcode.com>
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/plugin/accounts.ts (1)
1037-1057: ⚠️ Potential issue | 🟡 Minor
`flushSaveToDisk` can return before an in-progress write completes.
`savePending` is cleared synchronously before `await saveAccounts(state)`. Once `executeSave` suspends at that `await`, any concurrent call to `flushSaveToDisk` sees `!savePending === true` and returns immediately — so a shutdown handler awaiting `flushSaveToDisk()` may proceed before the disk write finishes. The atomic temp-file rename in `saveAccounts` limits corruption risk, but data written since the previous save would be silently dropped.
🛡️ Proposed fix: add a `saveInFlight` flag

```diff
 private savePending = false;
 private saveTimeout: ReturnType<typeof setTimeout> | null = null;
 private savePromiseResolvers: Array<() => void> = [];
 private lastSavedSnapshot: string | null = null;
+private saveInFlight = false;

 async flushSaveToDisk(): Promise<void> {
-  if (!this.savePending) {
+  if (!this.savePending && !this.saveInFlight) {
     return;
   }
   return new Promise<void>((resolve) => {
     this.savePromiseResolvers.push(resolve);
   });
 }

 private async executeSave(): Promise<void> {
   this.savePending = false;
   this.saveTimeout = null;
+  this.saveInFlight = true;
   try {
     const state = this.buildStorageState();
     const snapshot = JSON.stringify(state);
     if (snapshot !== this.lastSavedSnapshot) {
       await saveAccounts(state);
       this.lastSavedSnapshot = snapshot;
     }
   } catch {
     // best-effort persistence; avoid unhandled rejection from timer-driven saves
   } finally {
+    this.saveInFlight = false;
     const resolvers = this.savePromiseResolvers;
     this.savePromiseResolvers = [];
     for (const resolve of resolvers) {
       resolve();
     }
   }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/plugin/accounts.ts` around lines 1037 - 1057, The executeSave() clears savePending before awaiting saveAccounts(...) which allows flushSaveToDisk() to return while a write is still in flight; add a saveInFlight boolean (set true before awaiting saveAccounts and false in finally) and update flushSaveToDisk() to also wait while saveInFlight is true (by queuing on savePromiseResolvers as it already does) so callers only return after the disk write completes; adjust executeSave() to set/clear saveInFlight around the await, keep savePending semantics for scheduling, and ensure lastSavedSnapshot and resolve logic remain in the finally block.
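A minimal sketch of the proposed fix in isolation. The `FlushSketch` class and `write` callback are hypothetical; only the `savePending`/`saveInFlight`/resolver names mirror the review:

```typescript
// Sketch of the saveInFlight idea: flushSaveToDisk resolves only after any
// in-flight write completes, because saveInFlight spans the awaited write.
class FlushSketch {
  private savePending = false;
  private saveInFlight = false;
  private resolvers: Array<() => void> = [];

  constructor(private write: () => Promise<void>) {}

  async flushSaveToDisk(): Promise<void> {
    if (!this.savePending && !this.saveInFlight) return; // nothing to wait for
    return new Promise<void>((resolve) => this.resolvers.push(resolve));
  }

  async executeSave(): Promise<void> {
    this.savePending = false;
    this.saveInFlight = true; // guards the await window below
    try {
      await this.write();
    } finally {
      this.saveInFlight = false;
      const rs = this.resolvers;
      this.resolvers = [];
      for (const r of rs) r(); // wake all waiting flush callers
    }
  }
}
```

Without the `saveInFlight` guard, a flush issued while `write()` is suspended would see `savePending === false` and return early; with it, the flush parks on the resolver queue until the write's `finally` block runs.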
🧹 Nitpick comments (2)
src/plugin/accounts.ts (1)
992-992: Key-order sensitivity in snapshot comparison may cause spurious writes.
`{ ...a.rateLimitResetTimes }` preserves JS insertion order. When `clearExpiredRateLimits` deletes a key and `markRateLimited` later re-inserts it, the entry moves to the end of the object. Two consecutive calls to `buildStorageState` with the same logical content can then produce different `JSON.stringify` outputs, triggering a false-positive dirty detection and an unnecessary write.
♻️ Proposed fix: sort keys during serialization

```diff
- rateLimitResetTimes: Object.keys(a.rateLimitResetTimes).length > 0 ? { ...a.rateLimitResetTimes } : undefined,
+ rateLimitResetTimes: (() => {
+   const entries = Object.entries(a.rateLimitResetTimes);
+   if (!entries.length) return undefined;
+   return Object.fromEntries(entries.sort(([a], [b]) => a.localeCompare(b)));
+ })(),
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/plugin/accounts.ts` at line 992, the current spread {...a.rateLimitResetTimes} preserves insertion order causing snapshot diffs when keys are removed and re-added; update the serialization in buildStorageState (the line building rateLimitResetTimes) to produce a deterministically ordered object by iterating Object.keys(a.rateLimitResetTimes).sort() and reducing into a new object so keys are always in sorted order (return undefined when empty as before); replace the existing conditional expression that uses the spread with this sorted-key construction to avoid spurious dirty detections.
src/plugin/accounts.test.ts (1)
1069-1197: New throttling tests are well-structured and cover the key behavioral contracts.
Migrating to `vi.mocked(saveAccounts)` over spying on the internal `saveToDisk` is the right abstraction boundary. The two new tests correctly exercise state-change detection (`markAccountUsed` mutates `lastUsed`) and the no-op dedup path; fake-timer setup and 5500 ms advances are consistent with the 5000 ms debounce.
One minor ergonomics issue: all five tests manually call `saveMock.mockClear()` on lines 1073, 1099, 1124, 1149, and 1175. A nested `beforeEach` within the describe block would be more robust — any future test in this block that forgets `mockClear()` would inherit stale call counts.
♻️ Proposed refactor: centralize `mockClear`

```diff
 describe("Issue #174: saveToDisk throttling", () => {
+  beforeEach(() => {
+    vi.mocked(saveAccounts).mockClear();
+  });
+
   it("requestSaveToDisk coalesces multiple calls into one write", async () => {
     vi.useFakeTimers();
-    const saveMock = vi.mocked(saveAccounts);
-    saveMock.mockClear();
+    const saveMock = vi.mocked(saveAccounts);
     ...
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/plugin/accounts.test.ts` around lines 1069 - 1197, The tests in the "Issue `#174`: saveToDisk throttling" describe block repeatedly call saveMock.mockClear() in each it block; add a nested beforeEach inside that describe which calls vi.mocked(saveAccounts).mockClear() (and any shared setup like vi.useFakeTimers() if desired) so all tests start with a cleared mock and you don't rely on per-test manual clearing; update references to the mocked function via saveMock or vi.mocked(saveAccounts) in the beforeEach to locate the mock clean-up.
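The key-order effect flagged above, and the sorted-key remedy, can be demonstrated in isolation. `sortedCopy` is an illustrative helper with made-up key names, not code from the PR:

```typescript
// Key-order hazard: deleting and re-inserting a key changes a JS object's
// insertion order, so JSON.stringify differs even for equal content.
// sortedCopy normalizes key order so equal content serializes identically.
function sortedCopy(
  obj: Record<string, number>,
): Record<string, number> | undefined {
  const entries = Object.entries(obj);
  if (entries.length === 0) return undefined; // mirror the "empty -> undefined" rule
  return Object.fromEntries(entries.sort(([a], [b]) => a.localeCompare(b)));
}

const original: Record<string, number> = { modelA: 1, modelB: 2 };
const churned: Record<string, number> = { modelA: 1, modelB: 2 };
delete churned.modelA; // simulate clearExpiredRateLimits removing the entry
churned.modelA = 1;    // simulate markRateLimited re-inserting it -> moves to end
```

Here `JSON.stringify(original)` and `JSON.stringify(churned)` differ despite identical content, while the sorted copies serialize identically, so the snapshot comparison sees no change.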
📒 Files selected for processing (2)
- src/plugin/accounts.test.ts
- src/plugin/accounts.ts
🧰 Additional context used
🧬 Code graph analysis (2)
src/plugin/accounts.test.ts (2)
src/plugin/storage.ts (2)
saveAccounts (693-719), AccountStorageV4 (217-225)
src/plugin/accounts.ts (1)
AccountManager (298-1239)
src/plugin/accounts.ts (1)
src/plugin/storage.ts (2)
AccountStorageV4 (217-225), saveAccounts (693-719)
🔇 Additional comments (1)
src/plugin/accounts.ts (1)
977-1016: Snapshot deduplication implementation LGTM — past feedback correctly addressed.
`buildStorageState` is cleanly extracted, uses a defensive shallow copy for `rateLimitResetTimes`, and `saveToDisk` now correctly updates `lastSavedSnapshot` after a successful write, addressing the prior review comment that identified the stale-snapshot risk.
Reviewed the latest suggestions — none require changes.
…ss review issues

- PR NoeFabris#476: guard isOAuthAuth against undefined/null (auth.ts)
- PR NoeFabris#465: use ensureProjectContext for google_search project‑ID resolution
- PR NoeFabris#460: throttle excessive file writes from account state updates
- Fix abort‑during‑cooldown error‑escape (wrap all calls)
- Remove stale THINKING_RECOVERY_NEEDED sentinel branch
- Change abort status from HTTP 499 to standard 408 (Request Timeout)
- All tests pass, type‑check clean
Fixes #436
Replaces #459, which I accidentally closed. This PR incorporates the review feedback from that PR (stale-snapshot fix, semicolon consistency).
Problem
`saveAccounts()` writes `antigravity-accounts.json` far too frequently during normal operation. Each write acquires a `proper-lockfile` lock (mkdir/rmdir cycle), writes a temp file, and renames it — producing 4+ FS events per save. During active use with rate limit cycling, this fires dozens of times per minute, thrashing the config directory and overwhelming file watchers.
Changes
Remove unconditional `requestSaveToDisk()` on every API request — the call at line 1702 fired on every request iteration even when no persistent state changed (only in-memory toast state was updated). Moved the save trigger to after `markAccountUsed()` on success, where `lastUsed` is actually persisted.

Add snapshot deduplication — `executeSave()` now compares a JSON snapshot of the current storage state against the last saved snapshot, skipping the write entirely when nothing changed. The same `buildStorageState()` result is used for both the snapshot and the write to avoid stale-snapshot issues during async saves.

Increase debounce from 1s → 5s — rate limit reset times and quota cache are ephemeral state; a 5s window dramatically reduces writes during rate-limit storms while still being timely enough for persistence.