Use custom HashCell in place of RwLock#98

Open
macladson wants to merge 2 commits into sigp:main from macladson:hash-cell

Conversation

@macladson
Member

Addresses #96

An alternative to #97
Implements a custom "write-once" HashCell which stores a Hash256 and allows lock-free reads using atomics.
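For concreteness, the write-once-cell idea could be sketched roughly like this. This is a minimal illustration, not the PR's actual code: the type, method names, and memory orderings here are all assumptions.

```rust
use std::sync::atomic::{AtomicBool, AtomicU64, Ordering};

/// Sketch of a write-once cell for a 32-byte hash with lock-free reads.
pub struct HashCell {
    /// Set to `true` (with `Release`) only after all four words are stored.
    ready: AtomicBool,
    /// The 32-byte hash stored as 4 x u64 words.
    value: [AtomicU64; 4],
}

impl HashCell {
    pub fn new() -> Self {
        const ZERO: AtomicU64 = AtomicU64::new(0);
        Self {
            ready: AtomicBool::new(false),
            value: [ZERO; 4],
        }
    }

    /// Store the hash. Redundant writers store identical bytes, so the
    /// per-word relaxed stores cannot produce a torn *observable* value.
    pub fn set(&self, hash: [u8; 32]) {
        for (word, chunk) in self.value.iter().zip(hash.chunks_exact(8)) {
            word.store(u64::from_le_bytes(chunk.try_into().unwrap()), Ordering::Relaxed);
        }
        // Publish: a reader that sees `ready == true` via `Acquire`
        // also sees the word stores above.
        self.ready.store(true, Ordering::Release);
    }

    /// Lock-free read: `None` until some writer has published a value.
    pub fn get(&self) -> Option<[u8; 32]> {
        if !self.ready.load(Ordering::Acquire) {
            return None;
        }
        let mut out = [0u8; 32];
        for (chunk, word) in out.chunks_exact_mut(8).zip(self.value.iter()) {
            chunk.copy_from_slice(&word.load(Ordering::Relaxed).to_le_bytes());
        }
        Some(out)
    }
}
```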

@codecov-commenter

Codecov Report

❌ Patch coverage is 75.20661% with 30 lines in your changes missing coverage. Please review.
✅ Project coverage is 69.75%. Comparing base (3fcb03c) to head (d81a9c3).

| Files with missing lines | Patch % | Lines |
|---|---|---|
| src/hash_cell.rs | 73.46% | 13 Missing ⚠️ |
| src/tree.rs | 73.91% | 12 Missing ⚠️ |
| src/packed_leaf.rs | 81.81% | 2 Missing ⚠️ |
| src/utils.rs | 0.00% | 2 Missing ⚠️ |
| src/leaf.rs | 75.00% | 1 Missing ⚠️ |
Additional details and impacted files
@@            Coverage Diff             @@
##             main      #98      +/-   ##
==========================================
- Coverage   70.15%   69.75%   -0.41%     
==========================================
  Files          22       23       +1     
  Lines        1280     1296      +16     
==========================================
+ Hits          898      904       +6     
- Misses        382      392      +10     

☔ View full report in Codecov by Sentry.

Comment on lines +56 to +57
// visible. Redundant writers store the same bytes, so concurrent reads
// always produce the correct hash.
Member


Redundant writers sounds bad. I think you mentioned a variant of this that tracked whether hashing was already in-progress?

I can imagine we might get a lot of redundant writers when e.g. hashing the same list in parallel (e.g. two beacon states), or when hashing a list with lots of repetition (e.g. a list that has had intra_rebase called on it).

Maybe that's an acceptable trade-off though, to be lock-free.

Member Author


I think this case should only occur if two threads hit the same tree node at the exact same time, but perhaps something like intra_rebase makes that statistically likely if they navigate the tree in the same pattern.

In the case where we track in-progress hashing, I actually think it doesn't help that much, since the waiting thread basically has to spin-lock until the hash becomes available anyway.
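To illustrate the spin-lock point: a reader that arrives while a hash is in progress has little to do but busy-wait on the publish flag. This is a hypothetical sketch of that wait, not anything in the PR; the function name is made up.

```rust
use std::hint;
use std::sync::atomic::{AtomicBool, Ordering};

/// Hypothetical: even with an "in-progress" flag, a reader arriving
/// mid-hash can only spin until the writer publishes the result,
/// which is a spin-lock in all but name.
fn wait_until_ready(ready: &AtomicBool) {
    while !ready.load(Ordering::Acquire) {
        hint::spin_loop(); // CPU hint while the hashing thread finishes
    }
}
```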

One spicy idea we could try is adding some randomness (like a coin flip) so that two similar trees being hashed in parallel are unlikely to follow the same hashing path. Making it non-deterministic seems pretty spooky though.

Member Author


Actually, I think hashing two similar lists (beacon states) in parallel is something we should generally be avoiding. Hashing two similar states sequentially is probably much faster than hashing them in parallel, since the first one will already saturate all cores via rayon and the second will primarily be loading cached values. Unless I'm misunderstanding. Does parallel hashing happen a lot in Lighthouse?

ready: AtomicBool,
/// The cached Hash256 hash value, stored as 4 × AtomicU64 for lock-free
/// unconditional writes without data races.
value: [AtomicU64; 4],
Member


Apparently AtomicU128 exists 👀

https://docs.rs/portable-atomic/1.13.1/portable_atomic/struct.AtomicU128.html

Could be worth a bench.

Member Author


Ooh interesting. I'll take a look.



3 participants