This repository was archived by the owner on Mar 30, 2026. It is now read-only.

fix: reduce excessive file writes from account state updates#460

Open
mrm007 wants to merge 2 commits into NoeFabris:main from mrm007:fix/excessive-file-writes

Conversation


@mrm007 mrm007 commented Feb 17, 2026

Fixes #436

Replaces #459 which I accidentally closed. This PR incorporates the review feedback from that PR (stale snapshot fix, semicolon consistency).

Problem

saveAccounts() writes antigravity-accounts.json far too frequently during normal operation. Each write acquires a proper-lockfile lock (mkdir/rmdir cycle), writes a temp file, and renames it — producing 4+ FS events per save. During active use with rate limit cycling, this fires dozens of times per minute, thrashing the config directory and overwhelming file watchers.
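The per-save write sequence described above can be sketched as follows. This is an illustrative model only — the file and lock names are stand-ins, and the real plugin delegates locking to proper-lockfile rather than doing the mkdir/rmdir itself:

```typescript
import { promises as fs } from "node:fs";
import { join } from "node:path";

// Hypothetical sketch of one save: each call produces 4+ filesystem events
// (lock acquire, temp write, atomic rename, lock release), which is why
// dozens of saves per minute thrash watchers on the config directory.
async function saveWithLock(dir: string, data: string): Promise<void> {
  const lockDir = join(dir, "accounts.json.lock");
  await fs.mkdir(lockDir); // FS event 1: acquire lock (mkdir)
  try {
    const tmp = join(dir, "accounts.json.tmp");
    await fs.writeFile(tmp, data); // FS event 2: write temp file
    await fs.rename(tmp, join(dir, "accounts.json")); // FS event 3: atomic rename
  } finally {
    await fs.rmdir(lockDir); // FS event 4: release lock (rmdir)
  }
}
```

Since the write path itself is already atomic and correct, the PR attacks save *frequency* instead of changing how each save is performed.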

Changes

  1. Remove unconditional requestSaveToDisk() on every API request — the call at line 1702 fired on every request iteration even when no persistent state changed (only in-memory toast state was updated). Moved the save trigger to after markAccountUsed() on success, where lastUsed is actually persisted.

  2. Add snapshot deduplication — executeSave() now compares a JSON snapshot of the current storage state against the last saved snapshot, skipping the write entirely when nothing changed. The same buildStorageState() result is used for both the snapshot and the write to avoid stale-snapshot issues during async saves.

  3. Increase debounce from 1s → 5s — rate limit reset times and quota cache are ephemeral state; a 5s window dramatically reduces writes during rate-limit storms while still being timely enough for persistence.
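Changes 2 and 3 can be sketched together as follows. This is a simplified model, not the PR's code: the storage shape is invented, and the real saveAccounts() disk write is replaced by a counter so the dedup behavior is observable:

```typescript
// Hypothetical model of the debounce + snapshot-dedup pattern. Method names
// (requestSaveToDisk, executeSave, buildStorageState) mirror the PR; the
// StorageState shape and the write counter are stand-ins.
type StorageState = { accounts: Array<{ email: string; lastUsed?: number }> };

class SnapshotSaver {
  private savePending = false;
  private lastSavedSnapshot: string | null = null;
  public writes = 0; // stand-in for actual saveAccounts() calls

  constructor(
    private buildStorageState: () => StorageState,
    private debounceMs = 5000, // the PR raises this from 1000 to 5000
  ) {}

  requestSaveToDisk(): void {
    if (this.savePending) return; // coalesce bursts into one pending save
    this.savePending = true;
    setTimeout(() => void this.executeSave(), this.debounceMs);
  }

  private async executeSave(): Promise<void> {
    this.savePending = false;
    const state = this.buildStorageState();
    const snapshot = JSON.stringify(state);
    if (snapshot === this.lastSavedSnapshot) return; // no-op: skip the write
    // the real code does `await saveAccounts(state)` here, with the SAME
    // state object used for the snapshot, avoiding the stale-snapshot race
    this.writes++;
    this.lastSavedSnapshot = snapshot; // only updated after a successful write
  }
}
```

Reusing one buildStorageState() result for both comparison and persistence is the fix carried over from the review feedback on the closed #459.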

Contributor

coderabbitai bot commented Feb 17, 2026

Walkthrough

The PR debounces and reduces account-file writes by introducing snapshot-based persistence in AccountManager: a private lastSavedSnapshot field and a buildStorageState() method. saveToDisk/requestSaveToDisk now compute a storage snapshot, compare it to the last saved snapshot, and only persist when they differ; the debounce window was increased to ~5000 ms. The save request in src/plugin.ts was moved to occur after a successful antigravity response. Tests were updated to use the public saveAccounts mock and extended timer advances, and now cover both the unchanged-state skip and the subsequent-state write.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

🚥 Pre-merge checks: ✅ 5 passed
  • Title check (✅ Passed): The title clearly and concisely summarizes the main objective of the PR: reducing excessive file writes from account state updates.
  • Description check (✅ Passed): The description is well-detailed, explains the problem, lists the specific changes made, and references the linked issue #436.
  • Linked Issues check (✅ Passed): The PR implements all coding objectives from issue #436: removes unconditional disk writes, adds snapshot deduplication, and increases the debounce window from 1s to 5s.
  • Out of Scope Changes check (✅ Passed): All changes are directly related to reducing file write frequency and implementing debouncing logic as specified in issue #436; no out-of-scope modifications detected.
  • Docstring Coverage (✅ Passed): No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

Contributor

greptile-apps bot commented Feb 17, 2026

Greptile Summary

Eliminated excessive file writes by removing unconditional save on every API request, implementing snapshot-based deduplication, and increasing debounce from 1s to 5s.

Key improvements:

  • Moved requestSaveToDisk() call from request loop (line 1702) to after markAccountUsed() on success (line 2331) - ensures saves only trigger when persistent state (lastUsed) actually changes, not on every ephemeral toast update
  • Added snapshot deduplication in executeSave() - compares JSON snapshot of current state against lastSavedSnapshot, skipping write entirely when nothing changed
  • Shallow-copied rateLimitResetTimes in buildStorageState() to prevent stale snapshot issues during async saves
  • Increased debounce from 1s to 5s - dramatically reduces writes during rate-limit storms while maintaining timely persistence
  • Comprehensive test coverage for debounce behavior and state-change detection

The changes directly address issue #436 by reducing unnecessary disk I/O, file watcher thrashing, and lockfile contention during normal operation.

Confidence Score: 5/5

  • This PR is safe to merge with minimal risk
  • The implementation is clean, well-tested, and directly addresses the stated problem. The shallow-copy fix for rateLimitResetTimes prevents stale snapshot issues, the debounce increase is conservative (5s is still responsive), and comprehensive tests validate both the state-change detection and debounce behavior. The moved requestSaveToDisk call is precisely placed after the actual persistent state change.
  • No files require special attention

Important Files Changed

  • src/plugin/accounts.ts: Added snapshot deduplication, increased debounce to 5s, and refactored saveToDisk into buildStorageState + executeSave with a shallow-copy fix for rateLimitResetTimes
  • src/plugin.ts: Moved the requestSaveToDisk call from the request loop (line 1702) to after markAccountUsed on success (line 2331), ensuring saves only trigger on actual state changes
  • src/plugin/accounts.test.ts: Updated tests for the 5s debounce, added tests for state-change detection (write when changed, skip when unchanged), and switched from spy mocks to module mocks

Flowchart

flowchart TD
    A[API Request Success] --> B[markAccountUsed updates lastUsed]
    B --> C[requestSaveToDisk called]
    C --> D{savePending?}
    D -->|Yes| E[Early return - debounce active]
    D -->|No| F[Set savePending=true]
    F --> G[Start 5s timer]
    G --> H[executeSave after 5s]
    H --> I[Build storage state]
    I --> J[JSON.stringify snapshot]
    J --> K{snapshot != lastSavedSnapshot?}
    K -->|Yes| L[saveAccounts to disk]
    L --> M[Update lastSavedSnapshot]
    K -->|No| N[Skip write - no changes]
    M --> O[Resolve pending promises]
    N --> O

Last reviewed commit: f7d1145

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

📜 Review details

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Cache: Disabled due to data retention organization setting

Knowledge base: Disabled due to data retention organization setting

📥 Commits

Reviewing files that changed from the base of the PR and between f7e0c50 and 35d5149.

📒 Files selected for processing (3)
  • src/plugin.ts
  • src/plugin/accounts.test.ts
  • src/plugin/accounts.ts
🧰 Additional context used
🧬 Code graph analysis (1)
src/plugin/accounts.ts (1)
src/plugin/storage.ts (2)
  • AccountStorageV4 (217-225)
  • saveAccounts (693-719)
🔇 Additional comments (6)
src/plugin.ts (1)

2331-2332: Success-path persistence placement looks right.
Ties lastUsed updates to successful responses without pre-emptive saves.

src/plugin/accounts.ts (3)

312-316: Snapshot state tracking is a good addition.
Sets the stage for write deduplication.


977-1010: Nice extraction of storage state construction.
Centralizing this avoids drift between snapshot comparison and actual persistence.


1016-1045: Debounce + snapshot compare look solid.
The 5s delay and snapshot guard should significantly cut redundant writes.

src/plugin/accounts.test.ts (2)

3-5: Test setup now mirrors the public storage API.
Mocking saveAccounts matches the new persistence path.


1069-1170: Debounce timing + no-change write test are well aligned.
The 5.5s window and snapshot-skip test cover the new persistence behavior.

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/plugin/accounts.ts`:
- Around line 1012-1014: saveToDisk() writes the storage state but doesn't
update the in-memory lastSavedSnapshot, causing the debounced saver to think the
state changed; fix by capturing the snapshot from this.buildStorageState(),
await saveAccounts(snapshot), and then set this.lastSavedSnapshot = snapshot (or
a deep-cloned/serialized form used by your comparator) so subsequent debounced
saves correctly detect no-op changes; update the saveToDisk method (and any
equivalent direct-save paths) to perform this assignment only after the await
completes successfully.

- Remove unconditional requestSaveToDisk() on every API request (line 1702)
- Add snapshot deduplication to skip no-op writes
- Increase debounce from 1s to 5s for rate-limit storm resilience
- Move save trigger to after markAccountUsed() where lastUsed is persisted

Co-authored-by: Amp <amp@ampcode.com>
Amp-Thread-ID: https://ampcode.com/threads/T-019c6dae-36aa-7524-849b-d8c515ba0cd1
@mrm007 mrm007 force-pushed the fix/excessive-file-writes branch from 35d5149 to a4ac1e4 Compare February 17, 2026 22:55
Contributor

@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (2)
src/plugin/accounts.test.ts (1)

1146-1170: Good coverage for the snapshot dedup path.

The test correctly validates the core optimization: a second requestSaveToDisk() with no intervening state mutation is elided.

Consider adding a complementary test that verifies a write does happen when state changes between two debounced saves (e.g., call manager.markAccountUsed(0) between the two requests) to guard against over-aggressive dedup.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/plugin/accounts.test.ts` around lines 1146 - 1170, Add a complementary
test to ensure a save occurs when account state changes between two debounced
requests: create an AccountManager with a stored AccountStorageV4, mock
saveAccounts, call manager.requestSaveToDisk(), advance timers to trigger the
first save, then call manager.requestSaveToDisk() again but mutate state with
manager.markAccountUsed(0) before advancing timers the second time, and assert
saveAccounts was called again; reference AccountManager, requestSaveToDisk,
markAccountUsed, saveAccounts, vi.useFakeTimers and vi.advanceTimersByTimeAsync
to locate the relevant test scaffolding.
src/plugin/accounts.ts (1)

977-1010: buildStorageState() passes mutable sub-objects by reference.

Line 992 embeds the live a.rateLimitResetTimes object into the returned state. Between JSON.stringify(state) (snapshot) and the async await saveAccounts(state) in executeSave, another code path could mutate those rate-limit maps, causing the on-disk write to differ from the captured snapshot string. In practice this is low-risk because: (a) the merge inside saveAccounts already means the snapshot won't exactly reflect on-disk state, and (b) mutations during the narrow async window are unlikely. Still, a shallow copy ({ ...a.rateLimitResetTimes }) would make the snapshot strictly consistent with the persisted data.

Shallow-copy mutable sub-objects
-        rateLimitResetTimes: Object.keys(a.rateLimitResetTimes).length > 0 ? a.rateLimitResetTimes : undefined,
+        rateLimitResetTimes: Object.keys(a.rateLimitResetTimes).length > 0 ? { ...a.rateLimitResetTimes } : undefined,
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/plugin/accounts.ts` around lines 977 - 1010, buildStorageState()
currently returns references to mutable sub-objects (e.g.,
a.rateLimitResetTimes, a.cachedQuota, a.fingerprintHistory) which can be mutated
after JSON.stringify(state) but before awaiting saveAccounts(state);
shallow-copy these objects when building the returned AccountStorageV4 (for each
account map/record like rateLimitResetTimes and cachedQuota and arrays like
fingerprintHistory) so the snapshot passed to executeSave/saveAccounts is stable
and won't reflect concurrent mutations — update the mapping in buildStorageState
to spread or slice these fields into new objects/arrays before returning the
state.
📜 Review details

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Cache: Disabled due to data retention organization setting

Knowledge base: Disabled due to data retention organization setting

📥 Commits

Reviewing files that changed from the base of the PR and between 35d5149 and a4ac1e4.

📒 Files selected for processing (3)
  • src/plugin.ts
  • src/plugin/accounts.test.ts
  • src/plugin/accounts.ts
🚧 Files skipped from review as they are similar to previous changes (1)
  • src/plugin.ts
🧰 Additional context used
🧬 Code graph analysis (2)
src/plugin/accounts.test.ts (1)
src/plugin/storage.ts (2)
  • saveAccounts (693-719)
  • AccountStorageV4 (217-225)
src/plugin/accounts.ts (1)
src/plugin/storage.ts (2)
  • AccountStorageV4 (217-225)
  • saveAccounts (693-719)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Greptile Review
🔇 Additional comments (3)
src/plugin/accounts.ts (2)

1037-1057: Snapshot dedup in executeSave looks correct.

Building the state once, stringifying for comparison, and reusing the same state object for saveAccounts avoids the stale-snapshot race that would occur if state were captured at two different times. The lastSavedSnapshot update happens only after a successful write, so a failed save won't suppress the retry — good.


1018-1026: Debounce increase from 1s → 5s is reasonable for the stated goal.

One thing to note: requestSaveToDisk guards with savePending but never clears saveTimeout on early return. Since savePending is set to true before scheduling and reset to false at the start of executeSave, calling requestSaveToDisk while an execution is in-flight (after executeSave starts but before it resolves) would schedule a second timer. This is pre-existing behavior and unlikely to cause problems in practice, just something to be aware of.

src/plugin/accounts.test.ts (1)

1069-1144: Mock migration and timing adjustments look good.

Switching from spying on saveToDisk to mocking the public saveAccounts function is a better approach — it tests the actual persistence behavior without coupling to internal method structure. The 5500ms timers correctly exceed the new 5000ms debounce window.

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In `@src/plugin/accounts.ts`:
- Around line 1012-1016: The saveToDisk change correctly updates
lastSavedSnapshot after writing, but ensure it only updates on successful save
and captures the serialized state consistently: in saveToDisk() call
buildStorageState() to get state, await saveAccounts(state) inside a try/catch,
and on success set this.lastSavedSnapshot = JSON.stringify(state); on error
rethrow or log and do not update lastSavedSnapshot; reference the saveToDisk,
buildStorageState, saveAccounts, and lastSavedSnapshot symbols when making this
change.

---

Nitpick comments:
In `@src/plugin/accounts.test.ts`:
- Around line 1146-1170: Add a complementary test to ensure a save occurs when
account state changes between two debounced requests: create an AccountManager
with a stored AccountStorageV4, mock saveAccounts, call
manager.requestSaveToDisk(), advance timers to trigger the first save, then call
manager.requestSaveToDisk() again but mutate state with
manager.markAccountUsed(0) before advancing timers the second time, and assert
saveAccounts was called again; reference AccountManager, requestSaveToDisk,
markAccountUsed, saveAccounts, vi.useFakeTimers and vi.advanceTimersByTimeAsync
to locate the relevant test scaffolding.

In `@src/plugin/accounts.ts`:
- Around line 977-1010: buildStorageState() currently returns references to
mutable sub-objects (e.g., a.rateLimitResetTimes, a.cachedQuota,
a.fingerprintHistory) which can be mutated after JSON.stringify(state) but
before awaiting saveAccounts(state); shallow-copy these objects when building
the returned AccountStorageV4 (for each account map/record like
rateLimitResetTimes and cachedQuota and arrays like fingerprintHistory) so the
snapshot passed to executeSave/saveAccounts is stable and won't reflect
concurrent mutations — update the mapping in buildStorageState to spread or
slice these fields into new objects/arrays before returning the state.

Contributor

@coderabbitai coderabbitai bot left a comment


Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/plugin/accounts.ts (1)

1037-1057: ⚠️ Potential issue | 🟡 Minor

flushSaveToDisk can return before an in-progress write completes.

savePending is cleared synchronously before await saveAccounts(state). Once executeSave suspends at that await, any concurrent call to flushSaveToDisk sees !savePending === true and returns immediately — so a shutdown handler awaiting flushSaveToDisk() may proceed before the disk write finishes. The atomic temp-file rename in saveAccounts limits corruption risk, but data written since the previous save would be silently dropped.

🛡️ Proposed fix: add a saveInFlight flag
  private savePending = false;
  private saveTimeout: ReturnType<typeof setTimeout> | null = null;
  private savePromiseResolvers: Array<() => void> = [];
  private lastSavedSnapshot: string | null = null;
+ private saveInFlight = false;
  async flushSaveToDisk(): Promise<void> {
-   if (!this.savePending) {
+   if (!this.savePending && !this.saveInFlight) {
      return;
    }
    return new Promise<void>((resolve) => {
      this.savePromiseResolvers.push(resolve);
    });
  }

  private async executeSave(): Promise<void> {
    this.savePending = false;
    this.saveTimeout = null;
+   this.saveInFlight = true;

    try {
      const state = this.buildStorageState();
      const snapshot = JSON.stringify(state);
      if (snapshot !== this.lastSavedSnapshot) {
        await saveAccounts(state);
        this.lastSavedSnapshot = snapshot;
      }
    } catch {
      // best-effort persistence; avoid unhandled rejection from timer-driven saves
    } finally {
+     this.saveInFlight = false;
      const resolvers = this.savePromiseResolvers;
      this.savePromiseResolvers = [];
      for (const resolve of resolvers) {
        resolve();
      }
    }
  }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/plugin/accounts.ts` around lines 1037 - 1057, The executeSave() clears
savePending before awaiting saveAccounts(...) which allows flushSaveToDisk() to
return while a write is still in flight; add a saveInFlight boolean (set true
before awaiting saveAccounts and false in finally) and update flushSaveToDisk()
to also wait while saveInFlight is true (by queuing on savePromiseResolvers as
it already does) so callers only return after the disk write completes; adjust
executeSave() to set/clear saveInFlight around the await, keep savePending
semantics for scheduling, and ensure lastSavedSnapshot and resolve logic remain
in the finally block.
🧹 Nitpick comments (2)
src/plugin/accounts.ts (1)

992-992: Key-order sensitivity in snapshot comparison may cause spurious writes.

{ ...a.rateLimitResetTimes } preserves JS insertion order. When clearExpiredRateLimits deletes a key and markRateLimited later re-inserts it, the entry moves to the end of the object. Two consecutive calls to buildStorageState with the same logical content can then produce different JSON.stringify outputs, triggering a false-positive dirty detection and an unnecessary write.

♻️ Proposed fix: sort keys during serialization
- rateLimitResetTimes: Object.keys(a.rateLimitResetTimes).length > 0 ? { ...a.rateLimitResetTimes } : undefined,
+ rateLimitResetTimes: (() => {
+   const entries = Object.entries(a.rateLimitResetTimes);
+   if (!entries.length) return undefined;
+   return Object.fromEntries(entries.sort(([a], [b]) => a.localeCompare(b)));
+ })(),
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/plugin/accounts.ts` at line 992, The current spread
{...a.rateLimitResetTimes} preserves insertion order causing snapshot diffs when
keys are removed and re-added; update the serialization in buildStorageState
(the line building rateLimitResetTimes) to produce a deterministically ordered
object by iterating Object.keys(a.rateLimitResetTimes).sort() and reducing into
a new object so keys are always in sorted order (return undefined when empty as
before); replace the existing conditional expression that uses the spread with
this sorted-key construction to avoid spurious dirty detections.
src/plugin/accounts.test.ts (1)

1069-1197: New throttling tests are well-structured and cover the key behavioral contracts.

Migrating to vi.mocked(saveAccounts) over spying on the internal saveToDisk is the right abstraction boundary. The two new tests correctly exercise state-change detection (markAccountUsed mutates lastUsed) and the no-op dedup path; fake-timer setup and 5500 ms advances are consistent with the 5000 ms debounce.

One minor ergonomics issue: all five tests manually call saveMock.mockClear() on lines 1073, 1099, 1124, 1149, and 1175. A nested beforeEach within the describe block would be more robust — any future test in this block that forgets mockClear() would inherit stale call counts.

♻️ Proposed refactor: centralize mockClear
  describe("Issue `#174`: saveToDisk throttling", () => {
+   beforeEach(() => {
+     vi.mocked(saveAccounts).mockClear();
+   });
+
    it("requestSaveToDisk coalesces multiple calls into one write", async () => {
      vi.useFakeTimers();
-     const saveMock = vi.mocked(saveAccounts);
-     saveMock.mockClear();
+     const saveMock = vi.mocked(saveAccounts);
      ...
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/plugin/accounts.test.ts` around lines 1069 - 1197, The tests in the
"Issue `#174`: saveToDisk throttling" describe block repeatedly call
saveMock.mockClear() in each it block; add a nested beforeEach inside that
describe which calls vi.mocked(saveAccounts).mockClear() (and any shared setup
like vi.useFakeTimers() if desired) so all tests start with a cleared mock and
you don't rely on per-test manual clearing; update references to the mocked
function via saveMock or vi.mocked(saveAccounts) in the beforeEach to locate the
mock clean-up.
📜 Review details

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Cache: Disabled due to data retention organization setting

Knowledge base: Disabled due to data retention organization setting

📥 Commits

Reviewing files that changed from the base of the PR and between a4ac1e4 and f7d1145.

📒 Files selected for processing (2)
  • src/plugin/accounts.test.ts
  • src/plugin/accounts.ts
🧰 Additional context used
🧬 Code graph analysis (2)
src/plugin/accounts.test.ts (2)
src/plugin/storage.ts (2)
  • saveAccounts (693-719)
  • AccountStorageV4 (217-225)
src/plugin/accounts.ts (1)
  • AccountManager (298-1239)
src/plugin/accounts.ts (1)
src/plugin/storage.ts (2)
  • AccountStorageV4 (217-225)
  • saveAccounts (693-719)
🔇 Additional comments (1)
src/plugin/accounts.ts (1)

977-1016: Snapshot deduplication implementation LGTM — past feedback correctly addressed.

buildStorageState is cleanly extracted, uses a defensive shallow copy for rateLimitResetTimes, and saveToDisk now correctly updates lastSavedSnapshot after a successful write, addressing the prior review comment that identified the stale-snapshot risk.

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Outside diff comments:
In `@src/plugin/accounts.ts`:
- Around line 1037-1057: The executeSave() clears savePending before awaiting
saveAccounts(...) which allows flushSaveToDisk() to return while a write is
still in flight; add a saveInFlight boolean (set true before awaiting
saveAccounts and false in finally) and update flushSaveToDisk() to also wait
while saveInFlight is true (by queuing on savePromiseResolvers as it already
does) so callers only return after the disk write completes; adjust
executeSave() to set/clear saveInFlight around the await, keep savePending
semantics for scheduling, and ensure lastSavedSnapshot and resolve logic remain
in the finally block.

---

Nitpick comments:
In `@src/plugin/accounts.test.ts`:
- Around line 1069-1197: The tests in the "Issue `#174`: saveToDisk throttling"
describe block repeatedly call saveMock.mockClear() in each it block; add a
nested beforeEach inside that describe which calls
vi.mocked(saveAccounts).mockClear() (and any shared setup like
vi.useFakeTimers() if desired) so all tests start with a cleared mock and you
don't rely on per-test manual clearing; update references to the mocked function
via saveMock or vi.mocked(saveAccounts) in the beforeEach to locate the mock
clean-up.

In `@src/plugin/accounts.ts`:
- Line 992: The current spread {...a.rateLimitResetTimes} preserves insertion
order causing snapshot diffs when keys are removed and re-added; update the
serialization in buildStorageState (the line building rateLimitResetTimes) to
produce a deterministically ordered object by iterating
Object.keys(a.rateLimitResetTimes).sort() and reducing into a new object so keys
are always in sorted order (return undefined when empty as before); replace the
existing conditional expression that uses the spread with this sorted-key
construction to avoid spurious dirty detections.

Author

mrm007 commented Feb 17, 2026

Reviewed the latest suggestions — none require changes:

  1. flushSaveToDisk() returning while write is in-flight: Pre-existing behavior, not introduced by this PR. In practice it's fine: flushSaveToDisk() queues onto savePromiseResolvers which are only resolved in the finally block after the write completes. The only scenario where savePending is false when flushSaveToDisk() is called is when no save was requested — so there's nothing to wait for.

  2. Repeated mockClear() in tests: Valid nitpick but purely cosmetic — each test is self-contained and explicit. Not worth a churn commit.

  3. Sorted keys for rateLimitResetTimes: In V8, string-keyed properties maintain insertion order. Keys are set via account.rateLimitResetTimes[key] = timestamp which either adds or updates in place — they're never removed and re-added in a different order. JSON.stringify serializes in insertion order deterministically, so spurious snapshot diffs from key reordering won't happen.
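The insertion-order claim in point 3 can be checked directly. The model keys below are hypothetical; per ECMAScript, string-keyed properties enumerate in insertion order, so JSON.stringify is deterministic as long as keys are only added or updated in place — deletion followed by re-insertion is what would move a key:

```typescript
// String keys enumerate in insertion order, so JSON.stringify output is
// stable when keys are only added or updated in place.
const resetTimes: Record<string, number> = {};
resetTimes["claude"] = 100;
resetTimes["gemini"] = 200;
resetTimes["claude"] = 300; // update in place: key position is unchanged
const a = JSON.stringify(resetTimes); // '{"claude":300,"gemini":200}'

delete resetTimes["claude"];
resetTimes["claude"] = 300; // delete + re-add moves the key to the end
const b = JSON.stringify(resetTimes); // '{"gemini":200,"claude":300}'
```

So spurious snapshot diffs can only come from a delete/re-add cycle, which the author states never happens for rateLimitResetTimes.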

ljepson pushed a commit to ljepson/opencode-antigravity-auth that referenced this pull request Feb 20, 2026
…ss review issues

- PR NoeFabris#476: guard isOAuthAuth against undefined/null (auth.ts)
- PR NoeFabris#465: use ensureProjectContext for google_search project‑ID resolution
- PR NoeFabris#460: throttle excessive file writes from account state updates
- Fix abort‑during‑cooldown error‑escape (wrap all  calls)
- Remove stale THINKING_RECOVERY_NEEDED sentinel branch
- Change abort status from HTTP 499 to standard 408 (Request Timeout)
- All tests pass, type‑check clean


Development

Successfully merging this pull request may close these issues.

Excessive file writes from account state updates — lock file thrashing
