Conversation

@rescrv (Contributor) commented Jun 10, 2025

Description of changes

This PR uses the `LogReader`'s standard `scan` method to select
fragments for copying. It then issues a copy in parallel and writes
them to the manifest as one list, without accounting for snapshots.
(A sketch of this flow follows the alternatives list below.)

Alternatives considered:

  • Potentially walk the snapshots and copy/paste snapshots. This is what
    was done before, but it could only prune to boundaries of snapshot
    pointers in the root/manifest of the tree.
  • Use the GC logic to prune. This was originally my intent, but it had
    a hidden downside: The GC would walk all old snapshots to determine
    what data it had to delete. Implementing a switch to skip this
    behavior essentially made two distinct functions in one, obviating the
    advantage of reusing the GC code.
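
For orientation, a minimal sketch of the chosen flow follows. This is hedged, illustrative code: `FragmentRef`, `copy_fragment`, and the path layout are hypothetical stand-ins, and the real `LogReader::scan`, `Storage`, and `Manifest` signatures in wal3 differ.

```rust
use futures::future::try_join_all;

// Hypothetical stand-in for the fragment metadata that scan returns.
struct FragmentRef {
    seq_no: u64,
}

// Stand-in for the path-prefixing utility this PR introduces; the real
// on-storage layout differs.
fn prefixed_fragment_path(prefix: &str, seq_no: u64) -> String {
    format!("{prefix}/log/{seq_no}")
}

// Stand-in for one server-side S3 copy of a fragment.
async fn copy_fragment(src: String, dst: String) -> Result<(), String> {
    println!("copy {src} -> {dst}");
    Ok(())
}

// The copy itself: scan yields a flat fragment list, every fragment is
// copied in parallel, and the destination manifest records the full list
// with no snapshots (they are rebuilt on the next write).
async fn copy_log(fragments: Vec<FragmentRef>, src: &str, dst: &str) -> Result<(), String> {
    try_join_all(fragments.iter().map(|f| {
        copy_fragment(
            prefixed_fragment_path(src, f.seq_no),
            prefixed_fragment_path(dst, f.seq_no),
        )
    }))
    .await?;
    // Manifest installation with the flat fragment list elided here.
    Ok(())
}
```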

Test plan

Integration tests cover copy.

  • Tests pass locally with `pytest` for python, `yarn test` for js, `cargo test` for rust

Documentation Changes

N/A

@github-actions
Reviewer Checklist

Please leverage this checklist to ensure your code review is thorough before approving

Testing, Bugs, Errors, Logs, Documentation

  • Can you think of any use case in which the code does not behave as intended? Have they been tested?
  • Can you think of any inputs or external events that could break the code? Is user input validated and safe? Have they been tested?
  • If appropriate, are there adequate property-based tests?
  • If appropriate, are there adequate unit tests?
  • Should any logging, debugging, tracing information be added or removed?
  • Are error messages user-friendly?
  • Have all documentation changes needed been made?
  • Have all non-obvious changes been commented?

System Compatibility

  • Are there any potential impacts on other parts of the system or backward compatibility?
  • Does this change intersect with any items on our roadmap, and if so, is there a plan for fitting them together?

Quality

  • Is this code of unexpectedly high quality (Readability, Modularity, Intuitiveness)?

@propel-code-bot (Contributor) commented Jun 10, 2025

Refactor wal3::copy to Use LogReader Scan and Direct AWS Copy; Simplify Snapshot Handling

This PR significantly refactors the `wal3::copy` implementation, moving away from a snapshot/tree-traversal approach to using a flat fragment list acquired by `LogReader::scan()`. Fragments are copied in parallel using the AWS copy functionality, and a new manifest is constructed with the complete fragment list, omitting snapshot information in the destination. Supporting changes introduce a utility for path prefixing, extend the `Limits` struct for convenience, and adjust tests and scrub logic for consistency.

Key Changes:
• Rewrites `wal3::copy` to select fragments using `LogReader::scan` and copy them in parallel via the S3/Storage API.
• Removes the prior snapshot-based copy logic, eliminating recursive copy functions and snapshot traversal.
• Introduces a `prefixed_fragment_path` utility for consistent fragment path building.
• Extends the `Limits` struct with a convenient `UNLIMITED` constant (see the sketch after this list).
• Updates `Manifest::scrub` and related methods to correctly account for fragment byte sizes and setsum calculations.
• Modifies integration tests for log copy/update to support the new naming and copy semantics, ensuring test resources match the new conventions.
• Minor fixes to type signatures and test parameters for updated usage.
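
The `Limits::UNLIMITED` addition is small but load-bearing for the copy path. A minimal sketch of its likely shape follows, assuming hypothetical field names (wal3's real `Limits` fields differ):

```rust
#[derive(Clone, Copy, Debug)]
pub struct Limits {
    pub max_files: Option<u64>,
    pub max_bytes: Option<u64>,
}

impl Limits {
    /// `None` in every field means "no cap": a scan with these limits
    /// returns every eligible fragment, which is what a full-log copy wants.
    pub const UNLIMITED: Limits = Limits {
        max_files: None,
        max_bytes: None,
    };
}
```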

Affected Areas:
• rust/wal3/src/copy.rs
• rust/wal3/src/reader.rs
• rust/wal3/src/lib.rs
• rust/wal3/src/manifest.rs
• rust/wal3/tests/test_k8s_integration_82_copy_then_update_dst.rs

Potential Impact:

Functionality: Fragment copy is now always a flat operation; all eligible fragments are copied in one batch, and the destination manifest contains only fragments (no snapshots), which may affect systems expecting snapshot hierarchy. Removal of per-snapshot copying may affect incremental copy or restore workflows.

Performance: Potentially improved performance due to parallelized fragment copying, but risks with very large manifests if the number of fragments is excessive (bounded by manifest branching factor).

Security: No new security concerns introduced; storage/copy routines are largely unchanged.

Scalability: Scalability remains similar or improved due to parallel copy, but copying very large logs in a single manifest could hit scale/manifest size limits without snapshotting.

Review Focus:
• Correctness of fragment parallel copy and manifest construction.
• Absence of regression for logs copied from snapshot-heavy sources.
• Handling of `initial_offset` and fragment sequence number edge cases.
• Memory or S3 consistency issues for very large copy operations.

Testing Needed

• Run rust integration test suite, especially log copy/update and scrub scenarios.
• Test with logs large enough to trigger close-to-limit manifest sizes.
• Validate copy correctness when the source uses snapshots (destination should still produce a complete flat manifest).

Code Quality Assessment

rust/wal3/tests/test_k8s_integration_82_copy_then_update_dst.rs: Test resource names adjusted throughout for correct target/source mapping.

rust/wal3/src/copy.rs: Major simplification, correct use of async/await and error propagation. Awareness of branching factor was noted in comments.

rust/wal3/src/reader.rs: Limits struct improved, code remains clear. Test cases updated.

rust/wal3/src/lib.rs: Utility function added, code stays clean and idiomatic.

rust/wal3/src/manifest.rs: Scrub logic extended to sum bytes as well as setsums for full manifest validation.
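
The scrub extension can be pictured as follows; this is a simplified sketch with hypothetical struct fields, not wal3's actual `Manifest` types:

```rust
struct Fragment {
    num_bytes: u64,
}

struct Manifest {
    fragments: Vec<Fragment>,
    total_bytes: u64,
}

// Sum fragment byte counts alongside the setsum fold so that a size
// mismatch is caught even when the checksums happen to agree.
fn scrub_bytes(m: &Manifest) -> Result<(), String> {
    let acc_bytes: u64 = m.fragments.iter().map(|f| f.num_bytes).sum();
    if acc_bytes != m.total_bytes {
        return Err(format!("byte mismatch: {acc_bytes} != {}", m.total_bytes));
    }
    // The real scrub also folds per-fragment setsums and compares them
    // against the manifest's recorded setsum.
    Ok(())
}
```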

Best Practices

Manifest Integrity:
• Extends existing scrub and setsum consistency checks for post-copy validation.

Test Coverage:
• Ensures integration tests reflect real-world usage and naming conventions.

Code Simplicity:
• Removes redundant recursive and snapshot code for reduced complexity.

Async:
• Parallelizes copy operations using async functions.

Potential Issues

• Manifest-only destination may not scale if the log consists of a very large number of fragments and snapshotting is not re-enabled before or during the copy; it could hit manifest or API size limits.
• Copying logs from sources relying on deep snapshot hierarchies will flatten the structure at the target, which may have implications for downstream GC, snapshot, or backup logic.
• Error handling in parallel copy may surface as an Arc-wrapped error, potentially making diagnostics slightly less direct (see the sketch after this list).
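
On the Arc-wrapped error point, here is a minimal illustration of how that shape can arise when copies run as spawned tasks; this is hypothetical code, not the PR's actual error plumbing:

```rust
use std::sync::Arc;

async fn copy_one(_path: &str) -> Result<(), std::io::Error> {
    // Stand-in for a real fragment copy.
    Ok(())
}

// Wrapping the failure in an Arc lets one error be shared or cloned
// across everything observing the parallel copy, but callers then see
// `Arc<std::io::Error>` rather than the bare error.
async fn copy_all(paths: Vec<String>) -> Result<(), Arc<std::io::Error>> {
    let handles: Vec<_> = paths
        .into_iter()
        .map(|p| tokio::spawn(async move { copy_one(&p).await }))
        .collect();
    for handle in handles {
        handle.await.expect("copy task panicked").map_err(Arc::new)?;
    }
    Ok(())
}
```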

This summary was automatically generated by @propel-code-bot

```diff
     acc_bytes,
     writer: "copy task".to_string(),
-    snapshots,
+    snapshots: vec![],
```
Contributor: What if there are too many frags? Should we construct a snapshot in that case, or will that be done automatically later?

Contributor: Update: the number of manifests should be guaranteed to be bounded by the branching factor.

@rescrv (Author): It will be done on the next write. I was trying not to complicate this code. The test does this implicitly; I can add a check for it.

This API will use the server-side option to upload a file, meaning that
copying a file becomes a cheap operation, relatively speaking. No
longer will it have to stream to the rust-log-service and back to S3.
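
With the aws-sdk-s3 crate, the server-side copy the commit describes looks roughly like this; bucket and key names are placeholders, and wal3's actual Storage wrapper may differ:

```rust
use aws_sdk_s3::Client;

async fn server_side_copy(
    client: &Client,
    bucket: &str,
    src_key: &str,
    dst_key: &str,
) -> Result<(), aws_sdk_s3::Error> {
    // CopyObject executes entirely within S3: the object's bytes never
    // transit the log service, so the copy is cheap relative to a
    // download-then-upload round trip.
    client
        .copy_object()
        .copy_source(format!("{bucket}/{src_key}"))
        .bucket(bucket)
        .key(dst_key)
        .send()
        .await?;
    Ok(())
}
```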
@rescrv rescrv force-pushed the rescrv/cheap-copy branch from 025899c to bd68f48 Compare June 10, 2025 20:15
@rescrv rescrv force-pushed the rescrv/copy-using-gc branch from ed17688 to 557c6fa Compare June 10, 2025 20:21
rescrv added 3 commits June 10, 2025 13:22
@rescrv rescrv force-pushed the rescrv/copy-using-gc branch from 557c6fa to 52e1fb6 Compare June 10, 2025 21:41
Base automatically changed from rescrv/cheap-copy to main June 10, 2025 22:22
@rescrv rescrv merged commit bf2b0c0 into main Jun 10, 2025
111 of 114 checks passed
@rescrv rescrv deleted the rescrv/copy-using-gc branch June 10, 2025 22:22
Inventrohyder pushed a commit to Inventrohyder/chroma that referenced this pull request Aug 5, 2025