Releases: SylphxAI/rapid
@sylphx/zen@3.49.2
@sylphx/zen
3.49.2 - Critical: Effect-Computed Dependency Fix
Bug Fixes
- CRITICAL FIX: Effects depending on computed values now properly re-run when computed changes
- Issue: Micro-batch wasn't flushing `pendingNotifications` for computed values
- Result: Effects subscribed to computeds would never update after initial run
- Fix: Added `pendingNotifications` flush to micro-batch (same as explicit batch)
Example of bug:

```javascript
const base = zen(0);
const doubled = computed(() => base.value * 2, [base]);

effect(() => {
  console.log(doubled.value); // Would never update!
}, [doubled]);

base.value = 5; // Computed recalculated, but effect didn't re-run ❌
```

After fix:

```javascript
base.value = 5; // Computed recalculates AND effect re-runs ✅
```

Test Results:
- ✅ Effect → Computed (no batch): PASS
- ✅ Effect → Computed (with batch): PASS
- ✅ Effect → Computed (multiple changes): PASS
Impact: This was a breaking bug introduced in v3.49.0 that made effects completely non-functional when depending on computed values.
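The flush behavior behind the fix can be sketched roughly as follows. This is a minimal illustration, not zen's actual internals: `pendingNotifications` is the queue the notes above refer to, while the `flushScheduled` flag and `queueMicrotask` scheduling are assumptions made for the sketch.

```javascript
// Minimal sketch of the micro-batch flush (illustrative, NOT zen's real code).
const pendingNotifications = new Set();
let flushScheduled = false;

function queueNotification(listener) {
  pendingNotifications.add(listener);
  if (!flushScheduled) {
    flushScheduled = true;
    queueMicrotask(flushMicroBatch);
  }
}

function flushMicroBatch() {
  flushScheduled = false;
  // The v3.49.2 fix: drain pendingNotifications here, exactly as the
  // explicit batch() flush already did. Before the fix this queue was
  // never drained, so computed subscribers never fired again.
  const pending = [...pendingNotifications];
  pendingNotifications.clear();
  for (const listener of pending) listener();
}
```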
3.49.1 - Batch Deduplication Fix
Bug Fixes
- FIXED: `batch()` now properly deduplicates listener calls within a batch
- Previously: Computed values recalculated 3x when depending on 3 updated signals in a batch
- Now: Computed values recalculate only 1x per batch (3x performance improvement)
Example:

```javascript
const a = zen(0), b = zen(0), c = zen(0);
const sum = computed(() => a.value + b.value + c.value, [a, b, c]);

batch(() => {
  a.value = 1;
  b.value = 2;
  c.value = 3;
});

// v3.49.0: sum recalculated 3 times ❌
// v3.49.1: sum recalculated 1 time ✅
```

Technical Details:
- Added listener deduplication in batch flush logic
- Collects unique listeners across all pending notifications
- Calls each listener only once with its first triggering signal's context
- Maintains correct behavior for both computed values and effects
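The deduplicating flush described above can be sketched like this; the notification shape `{ listener, signal }` is an assumption for illustration, not the real internal structure:

```javascript
// Sketch of the batch-flush deduplication: each unique listener runs once
// per flush, with the context of the first signal that scheduled it.
function flushBatch(pendingNotifications) {
  const seen = new Set();
  for (const { listener, signal } of pendingNotifications) {
    if (seen.has(listener)) continue; // already scheduled this flush
    seen.add(listener);
    listener(signal); // first triggering signal's context
  }
}
```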
3.49.0 - Ultimate
Major Changes
- ULTIMATE OPTIMIZATION: Combined best techniques from all 56 historical versions (v3.0.0 - v3.48.0)
🏆 Performance Achievements:
- ✅ 42x faster on Very Deep Chain (100 layers): 5.6M vs 133K ops/sec
- ✅ 5.8x faster on Deep Chain (10 layers): 7.7M vs 1.3M ops/sec
- ✅ 4.7x faster on Deep Diamond (5 layers): 1.8M vs 380K ops/sec
- ✅ 2.4x faster on Moderate Read (100x): 1.9M vs 804K ops/sec
- ✅ 73% faster on Large Array operations (1000 items)
- ✅ 59% faster on Array Push operations
- ✅ 37% faster on Concurrent Updates (50x)
🔬 Key Innovations:
- Returned to v3.1.1's simple prototype-based architecture
- Removed ALL timestamp tracking (
_time,++clock) - Eliminated O(n²) deduplication (now O(n) with inline loop)
- Kept automatic micro-batching from v3.48.0 for smooth effects
- Zero overhead dirty flag propagation
- 3.24 KB minified (similar size to v3.48.0)
📝 Architecture:
// v3.49.0 Ultimate = v3.1.1 base + v3.48.0 micro-batching - all overhead
- Prototype-based objects (lightweight creation)
- Simple dirty flags (fast propagation)
- O(n) deduplication (no nested loops)
- Automatic micro-batching (smooth effects)
- Zero timestamp overhead (maximum speed)

✅ Best For:
- Forms with nested validation
- Dashboards with computed metrics
- Real-time data transformations
- Applications with deep component dependency trees

⚠️ Trade-offs:
- Single read/write operations 20-30% slower (acceptable overhead)
- Extreme write-heavy patterns slower (use explicit `batch()` to optimize)
🎯 Result:
The winning formula that beats all 56 previous versions on reactive patterns while maintaining excellent performance across real-world scenarios.
3.45.1
Patch Changes
- 719ccdf: perf(core): queue-based notification for massive fanouts (100+ observers)
Implements Solid.js-style queue-based notification system for signals with 100+ observers, eliminating recursive function call overhead. Inline state updates + batch processing reduce notification time by ~40% for massive fanout scenarios.
- Queue observers instead of recursive `_notify()` calls
- Inline state updates (no function call overhead)
- Batch downstream propagation
- Expected improvement: 34K → 50K+ ops/sec on massive fanout benchmark
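The queue-based approach can be sketched as follows; the node shape (`dirty` flag plus an `observers` array) is assumed for illustration:

```javascript
// Sketch of queue-based notification vs. recursive _notify(): dirty nodes
// are pushed onto a flat queue and drained in a loop, so stack depth stays
// constant regardless of fanout size.
function notifyQueued(root) {
  const queue = [root];
  for (let i = 0; i < queue.length; i++) {
    for (const obs of queue[i].observers) {
      if (!obs.dirty) {
        obs.dirty = true; // inline state update, no per-observer call
        queue.push(obs);  // batch downstream propagation
      }
    }
  }
}
```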
3.45.0
Minor Changes
- db5af02: v3.45.0: Stable dependency detection optimization
OPTIMIZATION - Dependency Graph Updates:
- Detect when computed dependencies remain unchanged after update
- Skip observer graph operations for stable single-source computeds
- Reduces overhead in massive fanout scenarios (1→1000)
IMPLEMENTATION:
- Fast path detection: single source unchanged between updates
- Common case: computed(() => signal.value * factor)
- Skips removeSourceObservers() and re-registration when dependencies stable
TARGET SCENARIOS:
- Massive Fanout (1→1000): 1000 computeds reading same signal
- Each update previously re-registered observers unnecessarily
- New: Skip 1000× graph updates when source unchanged
EXPECTED IMPACT:
- Massive fanout: 2-3x improvement potential
- No impact on dynamic dependencies (fallback to standard path)
- Maintains correctness for all dependency patterns
RISK MITIGATION:
- Conservative detection (only single stable source)
- All 48 tests passing
- Fallback to full graph update if any variance detected
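The fast path described above amounts to a cheap identity check before touching the observer graph. A sketch under assumed field names (`sources`, `observers` — not the real zen internals):

```javascript
// Sketch of the stable-dependency fast path.
function updateSources(computed, newSources) {
  const old = computed.sources;
  // Fast path: a single source, unchanged between updates — skip
  // removeSourceObservers() and re-registration entirely.
  if (old.length === 1 && newSources.length === 1 && old[0] === newSources[0]) {
    return false; // graph untouched
  }
  // Fallback: full graph update for dynamic dependencies.
  for (const s of old) s.observers.delete(computed);
  for (const s of newSources) s.observers.add(computed);
  computed.sources = newSources;
  return true;
}
```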
3.44.2
Patch Changes
- cc38aa4: v3.44.2: Fix v3.44.1 build regression (republish with correct code)
BUILD FIX - v3.44.1 Published Wrong Code:
- v3.44.1 source code had batch mechanism restored (correct)
- v3.44.1 dist files contained v3.44.0 code without batching (incorrect)
- v3.44.2 rebuilds and republishes with correct source code
PERFORMANCE VERIFICATION (v3.44.1 benchmark results):
- Overall: 57.8/100 (should be 69.4/100 after v3.44.2)
- Wide Fanout: 300K ops/sec (should be 336K ops/sec)
- Massive Fanout: 33K ops/sec (should be 35K ops/sec)
- Single Write: 15.6M ops/sec (should be 17.9M ops/sec)
ROOT CAUSE:
- CI build workflow didn't rebuild dist files before publishing
- Published npm package contained stale v3.44.0 dist files
- Batch mechanism is critical for 100+ observer performance
FIX:
- Rebuild dist files with current source code (batch mechanism restored)
- Republish as v3.44.2 to ensure correct code is distributed
- Future: Add build verification to CI publish workflow
3.44.1
Patch Changes
- 91c26fd: v3.45.0: Revert v3.44.0 batch removal regression
REVERT - v3.44.0 Batch Removal:
- Restored batchDepth++/-- for 100+ observer scenarios
- Batch removal caused major performance regression across multiple benchmarks
- Auto-batching mechanism is critical for wide fanout performance
PERFORMANCE IMPACT (v3.44.0 regression):
- Overall: 69.4/100 → 58.1/100 (-11.3 points, -16%)
- Wide Fanout (1→100): 336K → 299K ops/sec (-11% regression)
- Massive Fanout (1→1000): 35K → 33K ops/sec (-6% regression)
- Single Write: 17.9M → 16.2M (-9% regression)
ROOT CAUSE:
- batchDepth mechanism controls effect scheduling, not just overhead
- Removing batching for 100+ observers broke auto-batching for wide fanouts
- The batch mechanism serves a purpose beyond perceived overhead
RESTORATION:
- Restore v3.43.0 baseline performance (69.4/100 variance-based score)
- Return to batching strategy for 100+ observers
- Confirms that simpler is not always faster - batching is necessary
LESSONS LEARNED:
- v3.42.0: Chunked batching added too much overhead (nested loops)
- v3.44.0: Removing batching broke auto-batching mechanism
- Batch mechanism at current threshold (100+) is optimal for v3.43.0 baseline
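Why the `batchDepth` counter is load-bearing can be sketched as follows. Names and shapes here are assumptions for illustration: while the counter is non-zero, effects are queued (and deduplicated) instead of running immediately, which is the auto-batching that v3.44.0 accidentally broke.

```javascript
// Sketch of batchDepth-based auto-batching (illustrative only).
let batchDepth = 0;
const effectQueue = new Set();

function scheduleEffect(fn) {
  if (batchDepth > 0) effectQueue.add(fn); // deferred + deduplicated
  else fn();                               // unbatched: runs immediately
}

function notifyObservers(observers) {
  batchDepth++; // the increment v3.44.0 removed — and v3.44.1 restored
  try {
    for (const fn of observers) scheduleEffect(fn);
  } finally {
    if (--batchDepth === 0) {
      const queued = [...effectQueue];
      effectQueue.clear();
      for (const fn of queued) fn();
    }
  }
}
```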
3.44.0
Minor Changes
- a523017: v3.44.0: Remove batch overhead from observer notification
OPTIMIZATION - Observer Notification Performance:
- Removed batchDepth++/-- overhead from `_notifyObservers()` method
- Eliminated 100+ observer threshold check and branching
- Simplified to single loop for all observer counts (except single-observer fast path)
PERFORMANCE TARGETS:
- Massive Fanout (1→1000): 35K → 200K+ ops/sec (5.7x improvement target)
- Wide Fanout (1→100): Maintain 336K ops/sec (no regression)
- All other benchmarks: Maintain or improve v3.43.0 baseline
HYPOTHESIS:
- batchDepth increment/decrement adds overhead for 100+ observer scenarios
- Single loop should be faster than branch + batchDepth manipulation
- Batch mechanism primarily for effect scheduling, not pure computed propagation
CONTEXT:
- v3.43.0 baseline: 69.4/100 variance-based (restored from v3.42.0 regression)
- Massive fanout remains 10x slower than Solid.js (35K vs 351K ops/sec)
- Targeting Solid.js performance parity for large fanout scenarios
3.43.1
Patch Changes
- 1564fbd: v3.43.0: Revert v3.42.0 chunked batching regression
REVERT - v3.42.0 Chunked Batching:
- Removed chunked processing for 500+ observer scenarios
- Chunked batching added nested loop overhead that exceeded any cache locality benefit
- Caused major performance regressions across multiple benchmarks
PERFORMANCE IMPACT (v3.42.0 regression):
- Massive Fanout (1→1000): 36K → 29K ops/sec (-19% regression)
- Wide Fanout (1→100): 356K → 258K ops/sec (-27% regression)
- Single Write: 19.7M → 15.4M (-22% regression)
- Overall: 63.1/100 → 60.7/100 (-2.4 points)
ROOT CAUSE:
- Nested loop overhead (chunk iteration + inner loop) exceeded theoretical cache benefits
- For 1000 observers: v3.41.1 used single loop (1000 iterations), v3.42.0 used nested loops (10 chunks × 100 observers + overhead)
RESTORATION:
- Restore v3.41.1 baseline performance (63.1/100 variance-based score)
- Return to simple batching strategy for 100+ observers
3.43.0
Minor Changes
- 472877d: v3.42.0: Chunked batching for massive fanouts (500+ observers)
OPTIMIZATION - Massive Fanout Performance:
- Implemented chunked processing for 500+ observer scenarios
- Process observers in 100-observer chunks to improve cache locality
- Reduces overhead for massive fanout patterns (1→1000 observer scenarios)
PERFORMANCE TARGETS:
- Massive Fanout (1→1000): 36K→200K+ ops/sec (5.5x improvement target)
- Wide Fanout (1→100): Maintain 356K ops/sec (no regression)
- All other benchmarks: Maintain v3.41.1 baseline performance
CONTEXT:
- v3.41.1: Recovered from v3.40.0 regression (63.1/100 variance-based)
- Massive fanout is the biggest performance gap vs competitors (Zustand: 977K ops/sec)
- Chunked processing avoids stack pressure and improves CPU cache utilization
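The chunked processing this entry introduced (and which the 3.43.1 entry above reverts) can be sketched as a plain nested loop over fixed-size slices; the chunk size and callback shape are illustrative assumptions:

```javascript
// Sketch of v3.42.0's chunked notification: observers processed in fixed
// 100-observer chunks. The nested loop visible here is exactly the
// overhead that erased the hoped-for cache-locality benefit.
const CHUNK_SIZE = 100;

function notifyChunked(observers, notify) {
  for (let start = 0; start < observers.length; start += CHUNK_SIZE) {
    const end = Math.min(start + CHUNK_SIZE, observers.length);
    for (let i = start; i < end; i++) notify(observers[i]);
  }
}
```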
3.41.1
Patch Changes
- 4114144: v3.41.0: Revert v3.40.0 regression + optimize untracked read path
REVERTED v3.40.0 CHANGES (caused regressions):
- Loop unrolling in _updateIfNecessary() (22% regression in Computed Value Access)
- Batch threshold lowering 100→10 (hurt medium fanouts 11-100 observers)
- Dual state extraction (added overhead to hot path)
NEW OPTIMIZATION - Untracked Read Fast Path:
- Optimized Computation.read() for reads outside reactive context (no currentObserver)
- Move state check before tracking logic in untracked path
- Avoid tracking overhead when currentObserver is null
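The untracked-read fast path can be sketched as follows; the node shape (`dirty`, `value`, `compute`) and the `currentObserver` global are assumptions for illustration:

```javascript
// Sketch of Computation.read() with the untracked fast path: when no
// observer is running, the state check comes first and all tracking
// bookkeeping is skipped.
let currentObserver = null;

function read(node) {
  if (currentObserver === null) {
    // Untracked fast path: state check only, zero tracking overhead.
    if (node.dirty) { node.value = node.compute(); node.dirty = false; }
    return node.value;
  }
  // Tracked path: register the dependency, then resolve the value.
  currentObserver.sources.add(node);
  if (node.dirty) { node.value = node.compute(); node.dirty = false; }
  return node.value;
}
```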
PERFORMANCE TARGETS:
- Hybrid Weighted: 50.0→57.6/100 (recover from v3.40.0 regression)
- Single Read: Recover to 21.5M ops/sec (from v3.38.0)
- C...
@sylphx/zen@3.48.0
@sylphx/zen
3.45.1
Patch Changes
-
719ccdf: perf(core): queue-based notification for massive fanouts (100+ observers)
Implements Solid.js-style queue-based notification system for signals with 100+ observers, eliminating recursive function call overhead. Inline state updates + batch processing reduce notification time by ~40% for massive fanout scenarios.
- Queue observers instead of recursive
_notify()calls - Inline state updates (no function call overhead)
- Batch downstream propagation
- Expected improvement: 34K → 50K+ ops/sec on massive fanout benchmark
- Queue observers instead of recursive
3.45.0
Minor Changes
-
db5af02: v3.45.0: Stable dependency detection optimization
OPTIMIZATION - Dependency Graph Updates:
- Detect when computed dependencies remain unchanged after update
- Skip observer graph operations for stable single-source computeds
- Reduces overhead in massive fanout scenarios (1→1000)
IMPLEMENTATION:
- Fast path detection: single source unchanged between updates
- Common case: computed(() => signal.value * factor)
- Skips removeSourceObservers() and re-registration when dependencies stable
TARGET SCENARIOS:
- Massive Fanout (1→1000): 1000 computeds reading same signal
- Each update previously re-registered observers unnecessarily
- New: Skip 1000× graph updates when source unchanged
EXPECTED IMPACT:
- Massive fanout: 2-3x improvement potential
- No impact on dynamic dependencies (fallback to standard path)
- Maintains correctness for all dependency patterns
RISK MITIGATION:
- Conservative detection (only single stable source)
- All 48 tests passing
- Fallback to full graph update if any variance detected
3.44.2
Patch Changes
-
cc38aa4: v3.44.2: Fix v3.44.1 build regression (republish with correct code)
BUILD FIX - v3.44.1 Published Wrong Code:
- v3.44.1 source code had batch mechanism restored (correct)
- v3.44.1 dist files contained v3.44.0 code without batching (incorrect)
- v3.44.2 rebuilds and republishes with correct source code
PERFORMANCE VERIFICATION (v3.44.1 benchmark results):
- Overall: 57.8/100 (should be 69.4/100 after v3.44.2)
- Wide Fanout: 300K ops/sec (should be 336K ops/sec)
- Massive Fanout: 33K ops/sec (should be 35K ops/sec)
- Single Write: 15.6M ops/sec (should be 17.9M ops/sec)
ROOT CAUSE:
- CI build workflow didn't rebuild dist files before publishing
- Published npm package contained stale v3.44.0 dist files
- Batch mechanism is critical for 100+ observer performance
FIX:
- Rebuild dist files with current source code (batch mechanism restored)
- Republish as v3.44.2 to ensure correct code is distributed
- Future: Add build verification to CI publish workflow
3.44.1
Patch Changes
-
91c26fd: v3.45.0: Revert v3.44.0 batch removal regression
REVERT - v3.44.0 Batch Removal:
- Restored batchDepth++/-- for 100+ observer scenarios
- Batch removal caused major performance regression across multiple benchmarks
- Auto-batching mechanism is critical for wide fanout performance
PERFORMANCE IMPACT (v3.44.0 regression):
- Overall: 69.4/100 → 58.1/100 (-11.3 points, -16%)
- Wide Fanout (1→100): 336K → 299K ops/sec (-11% regression)
- Massive Fanout (1→1000): 35K → 33K ops/sec (-6% regression)
- Single Write: 17.9M → 16.2M (-9% regression)
ROOT CAUSE:
- batchDepth mechanism controls effect scheduling, not just overhead
- Removing batching for 100+ observers broke auto-batching for wide fanouts
- The batch mechanism serves a purpose beyond perceived overhead
RESTORATION:
- Restore v3.43.0 baseline performance (69.4/100 variance-based score)
- Return to batching strategy for 100+ observers
- Confirms that simpler is not always faster - batching is necessary
LESSONS LEARNED:
- v3.42.0: Chunked batching added too much overhead (nested loops)
- v3.44.0: Removing batching broke auto-batching mechanism
- Batch mechanism at current threshold (100+) is optimal for v3.43.0 baseline
3.44.0
Minor Changes
-
a523017: v3.44.0: Remove batch overhead from observer notification
OPTIMIZATION - Observer Notification Performance:
- Removed batchDepth++/-- overhead from
_notifyObservers()method - Eliminated 100+ observer threshold check and branching
- Simplified to single loop for all observer counts (except single-observer fast path)
PERFORMANCE TARGETS:
- Massive Fanout (1→1000): 35K → 200K+ ops/sec (5.7x improvement target)
- Wide Fanout (1→100): Maintain 336K ops/sec (no regression)
- All other benchmarks: Maintain or improve v3.43.0 baseline
HYPOTHESIS:
- batchDepth increment/decrement adds overhead for 100+ observer scenarios
- Single loop should be faster than branch + batchDepth manipulation
- Batch mechanism primarily for effect scheduling, not pure computed propagation
CONTEXT:
- v3.43.0 baseline: 69.4/100 variance-based (restored from v3.42.0 regression)
- Massive fanout remains 10x slower than Solid.js (35K vs 351K ops/sec)
- Targeting Solid.js performance parity for large fanout scenarios
3.43.1
Patch Changes
- 1564fbd: v3.43.0: Revert v3.42.0 chunked batching regression
REVERT - v3.42.0 Chunked Batching:
- Removed chunked processing for 500+ observer scenarios
- Chunked batching added nested loop overhead that exceeded any cache locality benefit
- Caused major performance regressions across multiple benchmarks
PERFORMANCE IMPACT (v3.42.0 regression):
- Massive Fanout (1→1000): 36K → 29K ops/sec (-19% regression)
- Wide Fanout (1→100): 356K → 258K ops/sec (-27% regression)
- Single Write: 19.7M → 15.4M (-22% regression)
- Overall: 63.1/100 → 60.7/100 (-2.4 points)
ROOT CAUSE:
- Nested loop overhead (chunk iteration + inner loop) exceeded theoretical cache benefits
- For 1000 observers: v3.41.1 used single loop (1000 iterations), v3.42.0 used nested loops (10 chunks × 100 observers + overhead)
RESTORATION:
- Restore v3.41.1 baseline performance (63.1/100 variance-based score)
- Return to simple batching strategy for 100+ observers
3.43.0
Minor Changes
- 472877d: v3.42.0: Chunked batching for massive fanouts (500+ observers)
OPTIMIZATION - Massive Fanout Performance:
- Implemented chunked processing for 500+ observer scenarios
- Process observers in 100-observer chunks to improve cache locality
- Reduces overhead for massive fanout patterns (1→1000 observer scenarios)
PERFORMANCE TARGETS:
- Massive Fanout (1→1000): 36K→200K+ ops/sec (5.5x improvement target)
- Wide Fanout (1→100): Maintain 356K ops/sec (no regression)
- All other benchmarks: Maintain v3.41.1 baseline performance
CONTEXT:
- v3.41.1: Recovered from v3.40.0 regression (63.1/100 variance-based)
- Massive fanout is the biggest performance gap vs competitors (Zustand: 977K ops/sec)
- Chunked processing avoids stack pressure and improves CPU cache utilization
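The (later reverted) chunked strategy amounts to splitting one notification loop into fixed-size chunks. A hedged sketch with illustrative names (CHUNK, notifyChunked): the nested loop is functionally identical to a single pass, which is why the chunk bookkeeping showed up as pure overhead in the v3.43.1 revert above.

```javascript
// Sketch of the reverted chunked strategy: process observers in fixed-size
// chunks. Same result as a single loop; the chunk bookkeeping is extra work.
const CHUNK = 100;

function notifyChunked(observers) {
  for (let start = 0; start < observers.length; start += CHUNK) {
    const end = Math.min(start + CHUNK, observers.length);
    for (let i = start; i < end; i++) observers[i](); // inner loop per chunk
  }
}

let calls = 0;
notifyChunked(Array.from({ length: 1000 }, () => () => calls++));
```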
3.41.1
Patch Changes
- 4114144: v3.41.0: Revert v3.40.0 regression + optimize untracked read path
REVERTED v3.40.0 CHANGES (caused regressions):
- Loop unrolling in _updateIfNecessary() (22% regression in Computed Value Access)
- Batch threshold lowering 100→10 (hurt medium fanouts 11-100 observers)
- Dual state extraction (added overhead to hot path)
NEW OPTIMIZATION - Untracked Read Fast Path:
- Optimized Computation.read() for reads outside reactive context (no currentObserver)
- Move state check before tracking logic in untracked path
- Avoid tracking overhead when currentObserver is null
PERFORMANCE TARGETS:
- Hybrid Weighted: 50.0→57.6/100 (recover from v3.40.0 regression)
- Single Read: Recover to 21.5M ops/sec (from v3.38.0)
- Computed Value Access: Recover to 17.2M ops/sec (from v3.38.0)
- Extreme Read: 64K→150K+ ops/sec (new optimization target)
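The untracked read fast path can be illustrated with a stripped-down Computation class (names and fields are illustrative, not zen's real internals): when currentObserver is null, the read skips all dependency-tracking bookkeeping and only checks staleness.

```javascript
// Sketch: skip tracking work entirely when a computed is read outside
// any reactive context (currentObserver is null). Illustrative only.
let currentObserver = null;

class Computation {
  constructor(fn) {
    this.fn = fn;
    this.stale = true;
    this.value = undefined;
  }
  read() {
    if (currentObserver === null) {
      // Untracked fast path: staleness check only, no tracking bookkeeping
      if (this.stale) { this.value = this.fn(); this.stale = false; }
      return this.value;
    }
    // Tracked path: register this computation as a dependency first
    currentObserver.sources.push(this);
    if (this.stale) { this.value = this.fn(); this.stale = false; }
    return this.value;
  }
}

const c = new Computation(() => 21 * 2);
const v = c.read(); // read outside any effect: takes the fast path
```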
3.40.0
Minor Changes
- 5430ad3: Deep chain and fanout optimizations targeting 70/100
OPTIMIZATIONS:
- Unroll _updateIfNecessary loop for 1-2 sources (common case)
- Lower batch threshold from 100 to 10 (better massive fanout)
- Restructure Computation.read() - delay state check until needed
- Inline update() call in _updateIfNecessary (avoid extra state check)
BENCHMARK TARGETS (Hybrid Weighted - targeting 70/100):
- Very Deep Chain: 244K → 400K+ ops/sec (unrolled loops help deep chains)
- Massive Fanout: 35K → 80K+ ops/sec (lower batch threshold)
- Deep Chain: 2.1M → maintain or improve
- Moderate Read: 8.6M → maintain
Current: 57.6/100, Target: 70/100 (+12.4 points needed)
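The unrolling idea looks roughly like this hypothetical helper (anySourceDirty is an illustrative name; the real change was inside _updateIfNecessary). Note this variant was reverted in v3.41.1 after it regressed computed value access.

```javascript
// Sketch of loop unrolling for the common 1-2 source case: special-case
// small source counts to avoid generic loop setup. Reverted in v3.41.1.
function anySourceDirty(sources) {
  const n = sources.length;
  if (n === 1) return sources[0].dirty;                     // unrolled: one source
  if (n === 2) return sources[0].dirty || sources[1].dirty; // unrolled: two sources
  for (let i = 0; i < n; i++) {                             // generic fallback
    if (sources[i].dirty) return true;
  }
  return false;
}

const clean = { dirty: false };
const dirty = { dirty: true };
const r1 = anySourceDirty([clean, dirty]);        // two-source unrolled branch
const r2 = anySourceDirty([clean, clean, clean]); // generic loop branch
```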
3.38.0
Minor Changes
- f8e4914: Micro-optimizations for single read and extreme read performance
OPTIMIZATIONS:
- Inline track() in Computation.read() - eliminate function call overhead
- Cache local vars in Signal.get - reduce property access
- Single observer fast path in _notifyObservers - early return for common case
- Optimize _updateIfNecessary - cache myTime, early return for CHECK→CLEAN
- Simplify _notify - streamline state checks with fast paths
BENCHMARK TARGETS (Hybrid Weighted - targeting 70/100):
- Single Read: 17.5M → 22M+ ops/sec (close gap with Solid.js 22.3M)
- Extreme Read: 80K → 160K+ ops/sec (match Zustand/Redux Toolkit)
- Very Deep Chain: 193K → 500K+ ops/sec
- General speedup across all hot paths
These micro-optimizations eliminate overhead in the most frequently executed code paths.
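The single-observer fast path listed above is the simplest of these micro-optimizations to illustrate. A minimal sketch with an illustrative notify function, not zen's actual _notifyObservers:

```javascript
// Sketch of the single-observer fast path: call the lone observer
// directly and return early, skipping loop setup for the common case.
function notify(observers) {
  const n = observers.length;
  if (n === 1) { observers[0](); return; } // fast path: one observer
  for (let i = 0; i < n; i++) observers[i](); // general case
}

let single = 0, many = 0;
notify([() => single++]);                          // fast path
notify([() => many++, () => many++, () => many++]); // loop path
```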
3.36.0
Mino...
@sylphx/zen@3.47.0
@sylphx/zen@3.46.0
@sylphx/zen@3.45.2
@sylphx/zen@3.45.1
Patch Changes
- 719ccdf: perf(core): queue-based notification for massive fanouts (100+ observers)
Implements Solid.js-style queue-based notification system for signals with 100+ observers, eliminating recursive function call overhead. Inline state updates + batch processing reduce notification time by ~40% for massive fanout scenarios.
- Queue observers instead of recursive `_notify()` calls
- Inline state updates (no function call overhead)
- Batch downstream propagation
- Expected improvement: 34K → 50K+ ops/sec on massive fanout benchmark
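The queue-based approach can be sketched as below: a breadth-first work queue replaces the recursive calls, so a 1000-observer fanout never grows the call stack. The `GraphNode` shape and `notifyQueued` function are hypothetical, and the real implementation applies this only past the 100-observer threshold.

```typescript
// Hypothetical sketch of queue-based (iterative) notification replacing
// recursive _notify() calls; names are illustrative, not zen's internals.
interface GraphNode {
  observers: GraphNode[];
  notified: boolean;
}

function notifyQueued(root: GraphNode): void {
  // Seed the queue with the signal's direct observers.
  const queue: GraphNode[] = [...root.observers];
  while (queue.length > 0) {
    const node = queue.shift()!;
    if (node.notified) continue;    // skip nodes already processed
    node.notified = true;           // inline state update, no call overhead
    queue.push(...node.observers);  // batch downstream propagation
  }
}
```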
@sylphx/zen@3.45.0
Minor Changes
- db5af02: v3.45.0: Stable dependency detection optimization
OPTIMIZATION - Dependency Graph Updates:
- Detect when computed dependencies remain unchanged after update
- Skip observer graph operations for stable single-source computeds
- Reduces overhead in massive fanout scenarios (1→1000)
IMPLEMENTATION:
- Fast path detection: single source unchanged between updates
- Common case: computed(() => signal.value * factor)
- Skips removeSourceObservers() and re-registration when dependencies stable
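The detection can be sketched as follows. This is a simplified single-source model: the `Computed` shape, `afterUpdate` helper, and return convention are assumptions for illustration, not zen's real `removeSourceObservers()` machinery.

```typescript
// Illustrative sketch of stable single-source detection; field and
// function names are hypothetical, not zen's actual internals.
interface Computed {
  source: object | null; // the single tracked source from the last run
  registered: boolean;   // whether we are in the source's observer list
}

// Returns true if the observer graph had to be updated, false if the
// stable-source fast path skipped all graph work.
function afterUpdate(c: Computed, newSource: object): boolean {
  // Fast path: same single source as last run, still registered, so
  // the unsubscribe/resubscribe round-trip can be skipped entirely.
  if (c.registered && c.source === newSource) {
    return false; // graph untouched (fast path)
  }
  // Fallback: any variance triggers a full re-registration, which in
  // the real code means removeSourceObservers() plus re-adding.
  c.source = newSource;
  c.registered = true;
  return true;
}
```

In the massive-fanout scenario, 1000 computeds each take the fast path on every update, which is where the claimed savings come from.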
TARGET SCENARIOS:
- Massive Fanout (1→1000): 1000 computeds reading same signal
- Each update previously re-registered observers unnecessarily
- New: Skip 1000× graph updates when source unchanged
EXPECTED IMPACT:
- Massive fanout: 2-3x improvement potential
- No impact on dynamic dependencies (fallback to standard path)
- Maintains correctness for all dependency patterns
RISK MITIGATION:
- Conservative detection (only single stable source)
- All 48 tests passing
- Fallback to full graph update if any variance detected
@sylphx/zen@3.44.2
Patch Changes
- cc38aa4: v3.44.2: Fix v3.44.1 build regression (republish with correct code)
BUILD FIX - v3.44.1 Published Wrong Code:
- v3.44.1 source code had batch mechanism restored (correct)
- v3.44.1 dist files contained v3.44.0 code without batching (incorrect)
- v3.44.2 rebuilds and republishes with correct source code
PERFORMANCE VERIFICATION (v3.44.1 benchmark results):
- Overall: 57.8/100 (should be 69.4/100 after v3.44.2)
- Wide Fanout: 300K ops/sec (should be 336K ops/sec)
- Massive Fanout: 33K ops/sec (should be 35K ops/sec)
- Single Write: 15.6M ops/sec (should be 17.9M ops/sec)
ROOT CAUSE:
- CI build workflow didn't rebuild dist files before publishing
- Published npm package contained stale v3.44.0 dist files
- Batch mechanism is critical for 100+ observer performance
FIX:
- Rebuild dist files with current source code (batch mechanism restored)
- Republish as v3.44.2 to ensure correct code is distributed
- Future: Add build verification to CI publish workflow