perf(BitField): optimize len #2258

Open
wjmelements wants to merge 2 commits into master from bitfield-optimize-len

Conversation

@wjmelements

Reviewers @Kubuxu @Stebalien

Context

I am familiarizing myself with this codebase and the Rust language.

Concept

By avoiding the expensive ranges() step, the performance of len can be greatly improved.
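The idea can be illustrated with a toy model (the names and representation below are illustrative, not the crate's actual internals): instead of materializing merged ranges and summing their sizes, the length can be computed arithmetically from the sizes of the underlying pieces.

```rust
use std::collections::BTreeSet;
use std::ops::Range;

// Toy stand-in for a bitfield: a base range of set bits plus pending
// edits. This is NOT the crate's real representation, just a sketch.
struct ToyBitField {
    base: Range<u64>,     // bits initially set
    set: BTreeSet<u64>,   // bits forced on (assumed outside `base`)
    unset: BTreeSet<u64>, // bits forced off (assumed inside `base`)
}

impl ToyBitField {
    // Slow path: materialize every set bit, then count.
    fn len_via_ranges(&self) -> u64 {
        let mut bits: BTreeSet<u64> = self.base.clone().collect();
        for b in &self.unset {
            bits.remove(b);
        }
        for b in &self.set {
            bits.insert(*b);
        }
        bits.len() as u64
    }

    // Fast path: pure arithmetic on counts, no materialization.
    fn len_fast(&self) -> u64 {
        (self.base.end - self.base.start) - self.unset.len() as u64
            + self.set.len() as u64
    }
}

fn main() {
    let bf = ToyBitField {
        base: 0..100,
        set: BTreeSet::from([200, 300]),
        unset: BTreeSet::from([5, 6, 7]),
    };
    // Both agree: 100 base bits, minus 3 unset, plus 2 set = 99.
    assert_eq!(bf.len_via_ranges(), bf.len_fast());
    println!("len = {}", bf.len_fast());
}
```

The fast path is O(1) given the edit counts, while the slow path is linear in the number of bits, which is where the benchmark win below comes from.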

cargo bench

len                     time:   [1.0397 µs 1.0435 µs 1.0481 µs]
                        change: [-84.608% -84.316% -83.975%] (p = 0.00 < 0.05)
                        Performance has improved.
Found 2 outliers among 100 measurements (2.00%)
  2 (2.00%) high severe

Changes

  • avoid constructing ranges to calculate len

@github-project-automation github-project-automation bot moved this to 📌 Triage in FilOz Feb 5, 2026
@codecov-commenter

codecov-commenter commented Feb 5, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 77.59%. Comparing base (fe4d5c1) to head (3d53748).
⚠️ Report is 16 commits behind head on master.

Additional details and impacted files

@@            Coverage Diff             @@
##           master    #2258      +/-   ##
==========================================
+ Coverage   77.56%   77.59%   +0.03%     
==========================================
  Files         147      147              
  Lines       15789    15798       +9     
==========================================
+ Hits        12247    12259      +12     
+ Misses       3542     3539       -3     
Files with missing lines Coverage Δ
ipld/bitfield/src/lib.rs 79.39% <100.00%> (+0.97%) ⬆️

... and 5 files with indirect coverage changes


@rvagg rvagg requested a review from LesnyRumcajs February 6, 2026 04:16
@rvagg
Member

rvagg commented Feb 6, 2026

@wjmelements you should ping @LesnyRumcajs if you want some native Rust eyes.

My concerns with this are:

  1. The benchmarks in here are pretty basic: they just start from from_bytes (see example) and don't touch the set and unset buffers, so you're probably benchmarking only the fastest case.
  2. The len() tests are super limited; there are maybe 5 paths that hit len in some way. Expanding the tests while doing this would help increase confidence that the change is correct.
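One cheap way to build the kind of confidence asked for above (a sketch of the testing approach, not the crate's actual test suite) is an exhaustive cross-check of an optimized length against a naive bit-by-bit reference over every small input pattern:

```rust
// Illustrative stand-in for an optimized length: count_ones on a small word.
fn len_fast(bits: u16) -> u64 {
    bits.count_ones() as u64
}

// Naive reference: walk every bit position and count the set ones.
fn len_reference(bits: u16) -> u64 {
    (0..16).filter(|i| bits & (1u16 << i) != 0).count() as u64
}

fn main() {
    // Exhaustive over all 65,536 patterns: any divergence panics with
    // the offending input, which makes regressions easy to pin down.
    for bits in 0..=u16::MAX {
        assert_eq!(len_fast(bits), len_reference(bits), "mismatch at {bits:#018b}");
    }
    println!("all {} patterns agree", u16::MAX as u32 + 1);
}
```

For inputs too large to enumerate, the same shape works as a randomized or fuzz test: generate arbitrary set/unset sequences and compare the fast len against the ranges-based one.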

Comment on lines +330 to +332
range.size()
- (self.unset.range(range.clone()).count() as u64)
- (self.set.range(range.clone()).count() as u64)
Contributor

Not sure about the correctness of the implementation per se, probably some more (fuzz?) tests would do.

In this particular case, I see a potential danger of integer underflow: if range.size() < ((self.unset.range(range.clone()).count() as u64) + (self.set.range(range.clone()).count() as u64)), then some absurd values could get produced. Given we have overflow-checks = true for the wasm profile, it would probably panic (bad). Can we guarantee the condition above doesn't happen? Is it already guaranteed by some internal logic? If so, a comment to reassure the reader would help.
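One way to make that invariant explicit (a sketch; the helper name and signature here are hypothetical, not from the crate) is to thread the subtraction through checked_sub, so a violated invariant surfaces as a None instead of wrapping silently or panicking under overflow-checks:

```rust
// Hypothetical helper: compute range_size - unset - set with explicit
// underflow detection. Returns None exactly when the counts exceed the
// range size, i.e. the condition the review comment is worried about.
fn len_in_range(range_size: u64, unset_in_range: u64, set_in_range: u64) -> Option<u64> {
    range_size
        .checked_sub(unset_in_range)?
        .checked_sub(set_in_range)
}

fn main() {
    assert_eq!(len_in_range(10, 3, 2), Some(5));
    assert_eq!(len_in_range(4, 3, 2), None); // would underflow

    // Alternatively, if internal logic already guarantees the invariant,
    // a debug_assert! documents it with no release-build cost:
    let (size, unset, set) = (10u64, 3u64, 2u64);
    debug_assert!(unset + set <= size, "set/unset counts exceed range size");
    println!("ok");
}
```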

@BigLep BigLep moved this from 📌 Triage to 🐱 Todo in FilOz Feb 24, 2026
@BigLep BigLep moved this from 🐱 Todo to ⌨️ In Progress in FilOz Feb 24, 2026

Labels

None yet

Projects

Status: ⌨️ In Progress


4 participants