Labels: A-fs (file structure, asset loading and storing), A-tests (anything actually test related), A-vcs (version control system integration), P-medium (priority: medium), S-needs-investigation (status: further information is required)
Description
Add performance regression tests to tytanic to detect unintended slowdowns in Typst compilation and layout.
While tytanic already covers functional correctness well, there is currently no structured way to prevent performance regressions. This feature would make performance a first-class, testable property.
A possible implementation is sketched below, although this approach suffers from machine-dependent variability.
Possible implementation
- Introduce a new test kind: `performance`
  - Run performance tests multiple times to reduce noise (warmup + measured runs)
- Store reference values in `ref.json`
  - Required: reference compilation time
  - Optional (future): memory usage, layout time, total time, etc.
- Compare measured values against the reference using a configurable tolerance
  - Example: fail if the regression exceeds ±5%
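The measure-then-compare step could be sketched roughly as follows. This is an illustrative Python sketch only (tytanic itself is written in Rust), and the function names `measure_compile_time_ms` and `check_regression` are hypothetical, not part of any existing API:

```python
import statistics
import time

def measure_compile_time_ms(compile_fn, warmup_runs=2, measured_runs=5):
    """Run the compilation several times and take the median to reduce noise."""
    for _ in range(warmup_runs):
        compile_fn()  # warmup runs: fill caches, let timings stabilize
    samples = []
    for _ in range(measured_runs):
        start = time.perf_counter()
        compile_fn()
        samples.append((time.perf_counter() - start) * 1000.0)
    return statistics.median(samples)

def check_regression(reference_ms, measured_ms, tolerance=0.05):
    """Pass unless the measured time exceeds the reference by more than the tolerance."""
    if reference_ms <= 0:
        return True  # no baseline recorded yet (the initial ref.json)
    return measured_ms <= reference_ms * (1.0 + tolerance)

# With the numbers from the example output below: 240ms -> 300ms is +25%, a failure.
assert check_regression(240.0, 248.0)      # +3.3%, within the ±5% tolerance
assert not check_regression(240.0, 300.0)  # +25%, regression detected
```

Using the median of several measured runs (rather than a single run or the mean) makes the comparison less sensitive to outliers from background load.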
Workflow
Create a new performance test:

```
tt new --performance performance-test
```

Resulting file structure:

```
.
└── tests
    └── performance-test
        ├── ref.json
        └── test.typ
```

Initial `ref.json`:

```json
{
  "compilation_time_ms": 0
}
```

Implement the test and record a baseline:
```
tt update performance-test
```

Output:

```
Starting 1 test (run ID: e3c1682e-9cd6-4cba-90f3-aaa53089fc85)
pass [ 900ms] performance-test
──────────
Summary [ 900ms] 1/1 tests run: all passed
```

This updates `ref.json` to:

```json
{
  "compilation_time_ms": 240
}
```

Detect a performance regression
After modifying the test to increase compilation time:

```
tt run performance-test
```

Output:

```
Starting 1 test (run ID: e3c1682e-9cd6-4cba-90f3-aaa53089fc85)
fail [ 940ms] performance-test
     240ms -> 300ms (+25%)
──────────
Summary [ 940ms] 0/1 tests run: 1 failed
```

Problems & considerations
- Machine-dependent variability: compilation time varies across machines and environments, so performance tests are only meaningful when run under comparable conditions.
- Possible mitigations:
  - Store an identifier in `ref.json` for each machine that ran the test, each with its own reference compile time.
  - Run these tests only on one specific machine.
  - Compare against a compilation of a previous commit. This requires a VCS, though.
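The per-machine mitigation could extend `ref.json` to something like the following. This is a hypothetical schema, not an existing tytanic format, and the machine identifiers are made up for illustration:

```json
{
  "machines": {
    "ci-runner-linux-x64": { "compilation_time_ms": 240 },
    "alice-macbook-m2": { "compilation_time_ms": 180 }
  }
}
```

`tt run` could then compare only against the entry matching the current machine's identifier, and `tt update` could create that entry on first run.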