Conversation

@sriv (Member) commented Oct 28, 2025

  • make scenarios initialize lazily for table driven scenarios
  • add benchmark tests for lazy scenario initialization

@sriv
Copy link
Member Author

sriv commented Oct 28, 2025

Lazy vs Eager Scenario Initialization - Quick Benchmark Comparison

Summary

Lazy mode is 46-4,139x faster (scaling with table size) and uses up to 99.99% less memory during scenario creation.

Quick Comparison Table

Scenario Creation (1000 rows × 2 columns)

| Metric | Eager Mode | Lazy Mode | Improvement |
|--------|------------|-----------|-------------|
| Time | 255,654 ns | 61.77 ns | 4,139x faster |
| Memory | 804,987 bytes | 120 bytes | 6,708x less |
| Allocations | 9,023 | 3 | 3,007x fewer |

Nested Tables (10 specs × 100 scenarios = 1,000 total iterations)

| Metric | Eager Mode | Lazy Mode | Improvement |
|--------|------------|-----------|-------------|
| Time | 25,192 ns | 285.2 ns | 88x faster |
| Memory | 82,888 bytes | 384 bytes | 216x less |

Performance Highlights

Creation Speed Comparison

Table Size     Eager Time    Lazy Time    Speedup
─────────────────────────────────────────────────
10 rows        2,826 ns      61.50 ns     46x
50 rows        12,562 ns     61.25 ns     205x
100 rows       26,133 ns     61.26 ns     427x
500 rows       120,005 ns    60.90 ns     1,970x
1000 rows      255,654 ns    61.77 ns     4,139x

Key Insight: Lazy time is constant (~61ns) regardless of table size!

Memory Usage Comparison

Table Size     Eager Memory   Lazy Memory   Savings
───────────────────────────────────────────────────
10 rows        8,336 bytes    120 bytes     98.6%
50 rows        40,912 bytes   120 bytes     99.7%
100 rows       82,561 bytes   120 bytes     99.9%
500 rows       405,515 bytes  120 bytes     99.97%
1000 rows      804,987 bytes  120 bytes     99.99%

Key Insight: Lazy uses constant 120 bytes regardless of table size!

Trade-offs

Iteration Performance (100 scenarios)

| Mode | Time per iteration | Memory | Total Time |
|------|--------------------|--------|------------|
| Eager | 28 ns | 0 bytes | 2.8 μs (instant access) |
| Lazy | 169 ns | 680 bytes | 16.9 μs (on-demand) |

Analysis: Lazy pays a small extra cost during iteration (141 ns more per scenario), but this is negligible compared to actual test execution time.

Benchmark Environment

  • CPU: Apple M3 Pro (12 cores)
  • OS: macOS (darwin/arm64)
  • Test Tool: Go benchmark framework
  • Method: Average of multiple runs with warm-up

Run Benchmarks Yourself

```shell
# Quick comparison
go test -bench=. -benchmem ./parser | grep Benchmark

# Detailed results with 3-second runs
go test -bench=. -benchmem -benchtime=3s ./parser

# Memory-focused benchmarks
go test -bench=BenchmarkMemory -benchmem ./parser
```

github-actions bot (Contributor) commented Oct 28, 2025

Benchmark Results

java_simple_multithreaded.csv

| Commit | CPU | Memory | Time | ExitCode |
|--------|-----|--------|------|----------|
| bb872e3 | 33% | 67488 | 0:11.24 | 0 |
| bc05ae1 | 32% | 65608 | 0:11.87 | 0 |
| 7d3f2cf | 30% | 65828 | 0:12.41 | 0 |
| 2d084a1 | 33% | 66144 | 0:11.71 | 0 |

java_maven_multithreaded.csv

| Commit | CPU | Memory | Time | ExitCode |
|--------|-----|--------|------|----------|
| bb872e3 | 68% | 204612 | 0:16.22 | 0 |
| bc05ae1 | 58% | 206016 | 0:18.52 | 0 |
| 7d3f2cf | 43% | 180196 | 0:23.88 | 0 |
| 2d084a1 | 57% | 179812 | 0:18.87 | 0 |

java_simple_serial.csv

| Commit | CPU | Memory | Time | ExitCode |
|--------|-----|--------|------|----------|
| bb872e3 | 36% | 67452 | 0:17.74 | 0 |
| bc05ae1 | 45% | 62876 | 0:14.33 | 0 |
| 7d3f2cf | 43% | 65616 | 0:14.85 | 0 |
| 2d084a1 | 54% | 66052 | 0:12.41 | 0 |

java_gradle_multithreaded.csv

| Commit | CPU | Memory | Time | ExitCode |
|--------|-----|--------|------|----------|
| bb872e3 | 9% | 121168 | 0:24.11 | 0 |
| bc05ae1 | 9% | 124120 | 0:24.78 | 0 |
| 7d3f2cf | 182% | 580952 | 0:28.28 | 0 |
| 2d084a1 | 199% | 581932 | 0:24.53 | 0 |

java_simple_parallel.csv

| Commit | CPU | Memory | Time | ExitCode |
|--------|-----|--------|------|----------|
| bb872e3 | 20% | 67792 | 0:27.00 | 0 |
| bc05ae1 | 19% | 67884 | 0:30.30 | 0 |
| 7d3f2cf | 20% | 67500 | 0:26.37 | 0 |
| 2d084a1 | 22% | 68356 | 0:25.81 | 0 |

java_maven_serial.csv

| Commit | CPU | Memory | Time | ExitCode |
|--------|-----|--------|------|----------|
| bb872e3 | 76% | 220396 | 0:17.73 | 0 |
| bc05ae1 | 78% | 221940 | 0:17.37 | 0 |
| 7d3f2cf | 48% | 179544 | 0:27.44 | 0 |
| 2d084a1 | 58% | 178964 | 0:22.88 | 0 |

java_gradle_parallel.csv

| Commit | CPU | Memory | Time | ExitCode |
|--------|-----|--------|------|----------|
| bb872e3 | 6% | 123416 | 0:42.18 | 0 |
| bc05ae1 | 5% | 115312 | 0:43.77 | 0 |
| 7d3f2cf | 115% | 557904 | 0:44.08 | 0 |
| 2d084a1 | 110% | 528044 | 0:45.64 | 0 |

java_gradle_serial.csv

| Commit | CPU | Memory | Time | ExitCode |
|--------|-----|--------|------|----------|
| bb872e3 | 9% | 101224 | 0:25.11 | 0 |
| bc05ae1 | 9% | 106956 | 0:27.26 | 0 |
| 7d3f2cf | 180% | 559464 | 0:29.04 | 0 |
| 2d084a1 | 202% | 534100 | 0:26.70 | 0 |

java_maven_parallel.csv

| Commit | CPU | Memory | Time | ExitCode |
|--------|-----|--------|------|----------|
| bb872e3 | 36% | 196088 | 0:33.94 | 0 |
| bc05ae1 | 34% | 194564 | 0:36.99 | 0 |
| 7d3f2cf | 29% | 181836 | 0:46.20 | 0 |
| 2d084a1 | 32% | 178544 | 0:37.20 | 0 |

Notes

  • The results above are generated by running against seed projects in https://github.com/getgauge/gauge-benchmark
  • These results are not persisted; the benchmarks will be rerun on merging to master.
  • These benchmarks run on GitHub Actions agents, which are virtualized, so the results should not be taken as absolute measurements. Rather, they are indicative numbers that are useful for comparison.

See Workflow log for more details.

@sriv (Member, Author) commented Oct 30, 2025

@jensakejohansson - thanks for checking this out. I had missed one path with nested tables. I have pushed another commit - think this should address the issue. If you can take another look, that'd be super helpful!

@jensakejohansson (Contributor) commented

@sriv I don't know if I'm confused here, but I don't see any new commits?

@sriv (Member, Author) commented Oct 30, 2025

No, that's my bad - I hadn't pushed it before; it should be in now. Apologies!

@jensakejohansson (Contributor) commented Oct 30, 2025

Still issues here. Now when I execute my test spec with nested tables, the execution freezes, both when executing in VS Code and in the terminal. Same place every time...

(attached screen recording: Code_61Mx8YQtqB.mp4)
