
ci: add continuous benchmark tracking dashboard#3404

Merged
cijothomas merged 3 commits into open-telemetry:main from scottgerring:main
Mar 10, 2026
Conversation

@scottgerring
Member

@scottgerring scottgerring commented Mar 5, 2026

Summary

Dashboard will be at: https://open-telemetry.github.io/opentelemetry-rust/dev/bench/

Requires enabling GitHub Pages on the gh-pages branch (the action creates the branch automatically on first run). We'll need to chat with someone in opentelemetry-community to validate this is well and good and do the repo modifications. I will chase this up once we are happy before we merge.

Testing

Add a new continuousBenchmark job that runs on every push to main,
stores Criterion results in gh-pages, and publishes a dashboard.
Also mention the benchmark dashboard in CONTRIBUTING.md.
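The job described above might be sketched roughly as follows. This is an illustrative sketch only: the job layout, runner label, action version, and file paths are assumptions, not the PR's exact workflow.

```yaml
# Hypothetical sketch of a continuousBenchmark job that runs the Criterion
# benches, then publishes the results to gh-pages via
# benchmark-action/github-action-benchmark (which this kind of dashboard
# typically uses). Paths and runner labels are illustrative.
continuousBenchmark:
  if: github.ref == 'refs/heads/main'
  runs-on: self-hosted          # the dedicated perf runner
  steps:
    - uses: actions/checkout@v4
    - name: Run benchmarks
      # Criterion can emit libtest/bencher-style output that the action parses
      run: cargo bench -- --output-format bencher | tee bench-output.txt
    - name: Store results and publish dashboard
      uses: benchmark-action/github-action-benchmark@v1
      with:
        tool: cargo
        output-file-path: bench-output.txt
        github-token: ${{ secrets.GITHUB_TOKEN }}
        auto-push: true                    # commits results to gh-pages
        benchmark-data-dir-path: dev/bench # served at .../dev/bench/
```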
@scottgerring scottgerring requested a review from a team as a code owner March 5, 2026 08:23
@codecov

codecov bot commented Mar 5, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 83.2%. Comparing base (a05dbe6) to head (7782280).
⚠️ Report is 9 commits behind head on main.

Additional details and impacted files
@@           Coverage Diff           @@
##            main   #3404     +/-   ##
=======================================
+ Coverage   82.7%   83.2%   +0.4%     
=======================================
  Files        128     128             
  Lines      24811   24899     +88     
=======================================
+ Hits       20526   20716    +190     
+ Misses      4285    4183    -102     


@scottgerring
Member Author

Adding performance tag to demonstrate that the regular perf bench still runs.

@cijothomas
Member

Thanks. (I set up the dashboards for the OTel-Arrow repo, so I can figure out the process here. It's relatively easy; I just need to dig up the steps.)

Before we do that, I'd like to confirm that we actually want to run all the benches on every commit to main. The perf runner is a scarce commodity (there is just one machine), and I don't see much value in running every benchmark on every main commit. We need a curated subset.

Alternatively, we can just have a nightly or even weekly (or twice a week) run for all benchmarks.

@scottgerring
Member Author

> Thanks. (I set up the dashboards for the OTel-Arrow repo, so I can figure out the process here. It's relatively easy; I just need to dig up the steps.)
>
> Before we do that, I'd like to confirm that we actually want to run all the benches on every commit to main. The perf runner is a scarce commodity (there is just one machine), and I don't see much value in running every benchmark on every main commit. We need a curated subset.
>
> Alternatively, we can just have a nightly or even weekly (or twice a week) run for all benchmarks.

@cijothomas ack on the resource-consumption concern. I think we should err on the side of running everything (else why have the benchmarks?) at a lower cadence. It currently takes about 30 minutes to run. I think nightly would be fine, but we can tone it down to weekly if you prefer.

@cijothomas
Member

> Thanks. (I set up the dashboards for the OTel-Arrow repo, so I can figure out the process here. It's relatively easy; I just need to dig up the steps.)
> Before we do that, I'd like to confirm that we actually want to run all the benches on every commit to main. The perf runner is a scarce commodity (there is just one machine), and I don't see much value in running every benchmark on every main commit. We need a curated subset.
> Alternatively, we can just have a nightly or even weekly (or twice a week) run for all benchmarks.

> @cijothomas ack on the resource-consumption concern. I think we should err on the side of running everything (else why have the benchmarks?) at a lower cadence. It currently takes about 30 minutes to run. I think nightly would be fine, but we can tone it down to weekly if you prefer.

Okay. Let us run all of them, and modify the schedule to be nightly, not every commit.
(We can adjust this in the future as we evolve.)
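A nightly schedule like the one agreed on above could look like the following trigger block. The exact cron time is an assumption; `workflow_dispatch` is included so the job can also be triggered manually for testing.

```yaml
# Illustrative trigger block: run nightly instead of on every push to main.
on:
  schedule:
    - cron: '0 2 * * *'   # once daily at 02:00 UTC (time is an assumption)
  workflow_dispatch: {}   # allow manual runs for validation
```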

@scottgerring scottgerring force-pushed the main branch 2 times, most recently from 315aa2b to 804954b on March 10, 2026 08:16
chore: temporarily enable guard on workflow_dispatch
@scottgerring
Member Author

@cijothomas :

  • Updated the schedule to fire once daily
  • Added a guard that skips the run if the commit has already been benchmarked; validated here

I reckon this should be good to go; we'll just need to enable GitHub Pages on this repo.
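The guard mentioned above could be sketched as a small shell check. This is a hypothetical sketch, not the PR's actual script: it assumes the github-action-benchmark data layout, where benchmarked commit SHAs appear quoted inside the `dev/bench/data.js` file on gh-pages; the function name and paths are illustrative.

```shell
# Return success (0) if the given SHA already appears in the stored
# benchmark data file, meaning this commit was already benchmarked.
already_benchmarked() {
  local sha="$1" data_file="$2"
  grep -q "\"${sha}\"" "$data_file"
}

# In a workflow step, this could set a skip flag for later steps, e.g.:
#   if already_benchmarked "$GITHUB_SHA" gh-pages/dev/bench/data.js; then
#     echo "skip=true" >> "$GITHUB_OUTPUT"
#   fi
```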

@cijothomas cijothomas added this pull request to the merge queue Mar 10, 2026
Merged via the queue into open-telemetry:main with commit 05bdb98 Mar 10, 2026
28 checks passed
