Summary
Add the ability to save and restore LiteSVM state to/from disk, so on-chain state survives process restarts.
Motivation
LiteSVM is in-memory only. This makes it incompatible with hot-reload local development workflows (cargo-watch, bacon, etc.) because every process restart wipes all on-chain state — deployed programs, funded accounts, token balances, everything.
For teams using LiteSVM as the Solana backend for a local dev server, this is a blocker:
1. Developer seeds the environment — deploys programs, creates accounts, airdrops SOL, mints tokens
2. Developer edits Rust code — bacon/cargo-watch detects the change and restarts the server
3. All on-chain state is gone — programs are undeployed, accounts are empty, tokens are unminted
4. Developer must re-seed from scratch before they can test their change
This makes LiteSVM unusable for iterative local development. The developer either re-seeds on every reload (slow, frustrating) or gives up on hot-reload entirely.
`solana-test-validator` solves this with `--ledger` for disk persistence, but it's slow and heavy. LiteSVM is fast but ephemeral. State persistence closes this gap — making LiteSVM viable as a full replacement for `solana-test-validator` in local development.
Beyond hot-reload, persistence also enables:
- CI test caching — save a fully-seeded VM as a fixture, skip re-seeding on every run
- Debugging — capture a failing VM state, share the snapshot, and replay it locally
Proposed Solution
A new `litesvm-persistence` workspace crate with a simple public API:
```rust
use litesvm::LiteSVM;
use litesvm_persistence::{save_to_file, load_from_file};

// Save after seeding
let mut svm = LiteSVM::new().with_builtins().with_sysvars();
svm.airdrop(&pubkey, 1_000_000_000).unwrap();
save_to_file(&svm, "snapshot.bin").unwrap();

// Later, restore instantly
let restored = load_from_file("snapshot.bin").unwrap();
assert_eq!(restored.get_balance(&pubkey).unwrap(), 1_000_000_000);
// Programs, sysvars, tx history — all preserved
```
The crate also exposes `to_bytes`/`from_bytes` for custom storage backends (databases, network transfer, etc.).
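As one hedged sketch of how `save_to_file` could sit on top of `to_bytes`, the bytes can be written to a temporary sibling file and then renamed into place, so a crash mid-write never leaves a truncated snapshot behind. The helper name and the temp-file strategy are our assumptions for illustration, not necessarily how the PR implements it:

```rust
use std::fs;
use std::io::Write;
use std::path::Path;

// Illustrative helper (not the proposed API): persist snapshot bytes,
// e.g. the output of to_bytes, atomically via write-then-rename.
fn save_bytes_atomically(bytes: &[u8], path: &Path) -> std::io::Result<()> {
    let tmp = path.with_extension("tmp");
    let mut file = fs::File::create(&tmp)?;
    file.write_all(bytes)?;
    file.sync_all()?; // flush to disk before the rename makes it visible
    fs::rename(&tmp, path)
}

fn main() -> std::io::Result<()> {
    let path = std::env::temp_dir().join("litesvm_snapshot.bin");
    save_bytes_atomically(b"snapshot bytes", &path)?;
    assert_eq!(fs::read(&path)?, b"snapshot bytes");
    fs::remove_file(path)
}
```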
Implementation Approach
We have a working implementation and are happy to open a PR. Here's the design:
1. `persistence-internal` feature flag on the core `litesvm` crate

Exposes the read-only getters and low-level setters needed for serialization, without polluting the public API:

- Getters: `airdrop_keypair_bytes()`, `get_blockhash_check()`, `get_fee_structure()`, `get_log_bytes_limit()`, `get_feature_set_ref()`, `transaction_history_entries()`
- Setters: `set_latest_blockhash()`, `set_airdrop_keypair()`, `restore_transaction_history()`, `set_account_no_checks()` (inserts without program cache loading), and `rebuild_caches()` (rebuilds the sysvar cache and program cache after bulk account insertion)
2. Serialization strategy

- Bincode for a compact binary format
- Version byte (`STATE_VERSION = 1`) for forward compatibility
- Mirror types for `FeeStructure` and `ComputeBudget` (the upstream types lack serde support)
- Serialize/deserialize on a dedicated large-stack thread (64 MB) to prevent stack overflow on large states (thousands of accounts produce deeply nested bincode frames)
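The version-byte and large-stack points above can be sketched with the standard library alone. `frame`/`unframe` and `run_with_large_stack` are illustrative helpers (not the proposed API), and `deep_sum` merely stands in for the deeply recursive bincode work:

```rust
use std::thread;

// Version-byte framing: the first byte of a snapshot identifies the
// format so future versions can evolve without breaking old readers.
const STATE_VERSION: u8 = 1;

fn frame(payload: &[u8]) -> Vec<u8> {
    let mut out = Vec::with_capacity(payload.len() + 1);
    out.push(STATE_VERSION);
    out.extend_from_slice(payload);
    out
}

fn unframe(bytes: &[u8]) -> Result<&[u8], String> {
    match bytes.split_first() {
        Some((&STATE_VERSION, rest)) => Ok(rest),
        Some((other, _)) => Err(format!("unsupported snapshot version {other}")),
        None => Err("empty snapshot".to_string()),
    }
}

// Large-stack pattern: run the potentially deep recursion on a
// dedicated thread with a 64 MB stack, as the proposal describes.
fn run_with_large_stack<T, F>(f: F) -> T
where
    T: Send + 'static,
    F: FnOnce() -> T + Send + 'static,
{
    thread::Builder::new()
        .stack_size(64 * 1024 * 1024) // 64 MB
        .spawn(f)
        .expect("spawn failed")
        .join()
        .expect("worker panicked")
}

// Stand-in for deeply nested bincode frames: 200k recursion levels
// would overflow many default stacks but fit comfortably in 64 MB.
fn deep_sum(n: u64) -> u64 {
    if n == 0 { 0 } else { n + deep_sum(n - 1) }
}

fn main() {
    let snapshot = frame(b"serialized state");
    assert_eq!(unframe(&snapshot).unwrap(), b"serialized state".as_slice());
    assert!(unframe(&[9, 0, 0]).is_err()); // unknown version rejected

    let total = run_with_large_stack(|| deep_sum(200_000));
    println!("{total}");
}
```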
3. Two-pass account restoration

Upgradeable BPF programs (BPF Loader V3) have a Program account that references a ProgramData account. If accounts are inserted in arbitrary order, loading the Program into the cache fails with `MissingAccount` because the ProgramData account doesn't exist yet.
Solution:

- Pass 1: insert all accounts via `set_account_no_checks()` — no program cache loading, no sysvar validation
- Pass 2: call `rebuild_caches()` — scan all accounts, rebuild the sysvar cache from sysvar accounts, then load every executable program into the program cache
This avoids ordering dependencies entirely.
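The two passes can be illustrated with a toy model. Every name here (`Vm`, `Account`, the method names echoing the proposal) is a stand-in, not the real litesvm API; the point is only that deferring cache work until all accounts exist removes the ordering dependency:

```rust
use std::collections::HashMap;

// Toy model: a Program account references a ProgramData account, and
// "caching" a program fails if the referenced account is missing.
enum Account {
    Program { programdata_key: &'static str },
    ProgramData,
    Plain,
}

#[derive(Default)]
struct Vm {
    accounts: HashMap<&'static str, Account>,
    program_cache: Vec<&'static str>,
}

impl Vm {
    // Pass 1 analogue: insert raw accounts without touching the
    // program cache or validating sysvars.
    fn set_account_no_checks(&mut self, key: &'static str, acc: Account) {
        self.accounts.insert(key, acc);
    }

    // Pass 2 analogue: every referenced account now exists, so cache
    // loading cannot hit MissingAccount regardless of insert order.
    fn rebuild_caches(&mut self) -> Result<(), String> {
        for (key, acc) in &self.accounts {
            if let Account::Program { programdata_key } = acc {
                if !self.accounts.contains_key(programdata_key) {
                    return Err(format!("MissingAccount: {programdata_key}"));
                }
                self.program_cache.push(*key);
            }
        }
        Ok(())
    }
}

fn main() {
    // The Program account precedes its ProgramData account, as can
    // happen with arbitrary snapshot iteration order.
    let snapshot = vec![
        ("program", Account::Program { programdata_key: "programdata" }),
        ("programdata", Account::ProgramData),
        ("wallet", Account::Plain),
    ];

    let mut vm = Vm::default();
    for (key, acc) in snapshot {
        vm.set_account_no_checks(key, acc); // pass 1: no cache work
    }
    vm.rebuild_caches().expect("no ordering dependency"); // pass 2
    assert_eq!(vm.program_cache, ["program"]);
}
```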
4. Changes to `accounts_db.rs`

- `load_all_existing_programs()` — scans all accounts for executable BPF programs not yet in the cache and loads them
- `maybe_handle_sysvar_account` changed from `fn` to `pub(crate) fn` (needed by `rebuild_caches`)
Test Coverage
13 round-trip tests + 2 doc-tests:
| Test | What it verifies |
| --- | --- |
| `basic_account_round_trip` | Single account with data, owner, lamports |
| `multiple_accounts_round_trip` | 10 accounts with varying data sizes |
| `sysvar_round_trip` | Clock sysvar with custom timestamp/slot/epoch |
| `config_round_trip` | sigverify, blockhash_check, log_bytes_limit |
| `blockhash_round_trip` | Blockhash preserved after expiration |
| `airdrop_keypair_round_trip` | Airdrop keypair bytes preserved |
| `transaction_history_round_trip` | Full tx with signature in history |
| `bpf_program_round_trip` | Actual BPF program execution before/after restore |
| `bytes_round_trip` | `to_bytes()`/`from_bytes()` API |
| `airdrop_works_after_restore` | Airdrop functional post-restore |
| `send_transaction_after_restore` | Tx execution functional post-restore |
| `load_nonexistent_file` | Returns `PersistenceError::Io` |
| `load_corrupted_data` | Returns `PersistenceError::Serialize` |
Backwards Compatibility
- All changes to `litesvm` core are behind the `persistence-internal` feature flag — zero impact on existing users
- The persistence crate is a separate optional workspace member
- No changes to any existing public API
Questions for Maintainers
- Feature flag approach — is `persistence-internal` the right way to expose internals, or would you prefer making the getters/setters part of the public API?
- Crate naming — `litesvm-persistence` as a workspace crate, or would you prefer it as a module within the main crate?
- Serialization format — we chose bincode for speed and compactness. Any preference for a different format?
- `set_account_no_checks` — this bypasses sysvar/program cache updates on insert. Should it be public (useful for batch loading) or stay behind the feature flag?