Benchmarks

The key advantage of walsync is memory efficiency when managing multiple databases.

| Databases | Litestream            | Walsync | Savings |
|-----------|-----------------------|---------|---------|
| 1         | 33 MB (1 process)     | 12 MB   | 21 MB   |
| 5         | 152 MB (5 processes)  | 14 MB   | 138 MB  |
| 10        | 286 MB (10 processes) | 12 MB   | 274 MB  |
| 20        | 600 MB (20 processes) | 12 MB   | 588 MB  |

Measured on macOS (aarch64) with 100KB test databases, 5-second measurement window.

Key observation: Walsync memory stays constant (~12 MB) regardless of database count. Litestream scales linearly (~30 MB per process).
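The two scaling behaviors can be written down as a simple model (a sketch; the ~30 MB per-process and ~12 MB base figures are taken from the table above):

```python
def litestream_mem_mb(n_dbs: int, per_process_mb: float = 30.0) -> float:
    # Litestream runs one process per replicated database, so memory
    # grows linearly with database count (~30 MB per process, per the
    # measurements above).
    return n_dbs * per_process_mb

def walsync_mem_mb(n_dbs: int, base_mb: float = 12.0) -> float:
    # walsync multiplexes every database through a single process, so
    # memory stays roughly constant regardless of count.
    return base_mb
```

For 20 databases this predicts 600 MB vs 12 MB, matching the last table row.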

Restore performance, tested with Tigris (Fly.io S3-compatible storage):

| DB Size | Time (ms) | Throughput (MB/s) |
|---------|-----------|-------------------|
| 1.2 MB  | 971       | 1.24              |
| 12 MB   | 1,502     | 8.00              |
| 120 MB  | 6,954     | 17.27             |

Key findings:

  • Restore throughput scales with database size
  • 17 MB/s on large databases is a solid result for Tigris
  • Small-database restores are dominated by connection overhead
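The overhead effect falls out of a two-point linear fit of the measurements above (a sketch; `fit_restore_model` is a hypothetical helper, not part of the bench scripts):

```python
def fit_restore_model(p1, p2):
    """Fit time_ms = overhead_ms + size_mb * ms_per_mb from two
    (size_mb, time_ms) measurements; return (overhead_ms, mb_per_s)."""
    (s1, t1), (s2, t2) = p1, p2
    ms_per_mb = (t2 - t1) / (s2 - s1)
    overhead_ms = t1 - s1 * ms_per_mb
    return overhead_ms, 1000.0 / ms_per_mb

# Using the 12 MB and 120 MB rows from the table above:
overhead_ms, streaming_mb_s = fit_restore_model((12, 1502), (120, 6954))
# Roughly ~900 ms of fixed per-restore overhead plus ~20 MB/s of
# streaming throughput, which is why small restores look slow.
```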

Time to snapshot a database to Tigris:

```sh
uv run bench/realworld.py --test sync
```

Measured results (~100KB database to Tigris):

  • p50: 445ms
  • p95: 539ms
  • Mean: 445ms

Latency is dominated by S3 upload time over residential broadband.
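Reproducing the p50/p95/mean summary from raw latency samples takes only the standard library (a sketch, not the actual reporting code in `bench/realworld.py`):

```python
import statistics

def summarize_ms(samples):
    # p95 via statistics.quantiles: n=20 yields cut points at 5%
    # steps, so index 18 is the 95th percentile.
    cuts = statistics.quantiles(samples, n=20)
    return {
        "p50": statistics.median(samples),
        "p95": cuts[18],
        "mean": statistics.fmean(samples),
    }
```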

Maximum sustainable commits per second with walsync watching:

```sh
uv run bench/realworld.py --test throughput
```

Measured results (macOS aarch64):

  • Max commits/sec: 25,874
  • Avg commit latency: 0.04ms

Walsync imposes virtually no overhead on SQLite commit performance.
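A minimal version of such a commit-rate measurement, using only `sqlite3` from the standard library (a sketch; the actual `bench/realworld.py` may differ):

```python
import os, sqlite3, tempfile, time

def commits_per_second(duration_s: float = 0.5) -> float:
    # Commit one small row at a time in WAL mode and count how many
    # commits complete within the measurement window.
    path = os.path.join(tempfile.mkdtemp(), "bench.db")
    con = sqlite3.connect(path)
    con.execute("PRAGMA journal_mode=WAL")
    con.execute("PRAGMA synchronous=NORMAL")  # typical WAL setting
    con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, v TEXT)")
    commits, deadline = 0, time.perf_counter() + duration_s
    while time.perf_counter() < deadline:
        con.execute("INSERT INTO t (v) VALUES ('x')")
        con.commit()
        commits += 1
    con.close()
    return commits / duration_s
```

Because walsync only watches the WAL file from outside, the writer's commit path is unchanged whether or not it is running.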

SQLite checkpoints merge WAL into the main database. Impact on sync:

```sh
uv run bench/realworld.py --test checkpoint
```

Measured results:

  • Normal commit latency: 0.07ms
  • Post-checkpoint latency: 0.08ms (minimal impact)
  • Checkpoint duration: 7ms for 1MB WAL

Checkpoints have negligible impact on write performance.
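Checkpoint duration can be observed directly with `PRAGMA wal_checkpoint` (a sketch; `checkpoint_ms` is a hypothetical helper, not a walsync API):

```python
import sqlite3, time

def checkpoint_ms(path: str):
    # PRAGMA wal_checkpoint(TRUNCATE) returns one row:
    # (busy, frames_in_wal, frames_checkpointed).
    con = sqlite3.connect(path)
    t0 = time.perf_counter()
    busy, in_wal, moved = con.execute(
        "PRAGMA wal_checkpoint(TRUNCATE)"
    ).fetchone()
    con.close()
    return (time.perf_counter() - t0) * 1000.0, moved
```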

Time to catch up after walsync is restarted:

```sh
uv run bench/realworld.py --test network
```

Measured results (5 second simulated outage):

  • Writes during outage: 49 rows
  • Catchup time: ~5s (immediate on restart)
  • Data loss: 0 (WAL preserves all writes)

Strategy: Stop walsync, write for 5 seconds, restart and measure catchup time. All writes made during the outage are synced when walsync restarts.
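The zero-data-loss result follows from how WAL mode works: committed frames accumulate in the `-wal` file until checkpointed, so a stopped watcher misses nothing. A standard-library illustration of the outage scenario (not the actual bench code):

```python
import os, sqlite3, tempfile

def writes_survive_outage(n_rows: int = 49):
    # With the sync process "down", commits simply accumulate in the
    # -wal file; a restarted watcher can read them all back.
    path = os.path.join(tempfile.mkdtemp(), "app.db")
    con = sqlite3.connect(path)
    con.execute("PRAGMA journal_mode=WAL")
    con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY)")
    for _ in range(n_rows):
        con.execute("INSERT INTO t DEFAULT VALUES")
        con.commit()
    wal_bytes = os.path.getsize(path + "-wal")
    rows = con.execute("SELECT count(*) FROM t").fetchone()[0]
    con.close()
    return rows, wal_bytes
```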

Internal operation performance from cargo bench:

| Operation              | Time    |
|------------------------|---------|
| WAL header parse       | 47 ns   |
| WAL frame header parse | 32 ns   |
| SHA256 (1 KB)          | 6 µs    |
| SHA256 (100 KB)        | 512 µs  |
| SHA256 (1 MB)          | 5.23 ms |

Key findings:

  • WAL parsing is extremely fast (< 50 ns)
  • SHA256 is the bottleneck for checksums
  • For a 100 MB database, a full checksum takes ~500 ms
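The ~500 ms figure is a linear extrapolation of the 1 MB measurement; a quick local cross-check with `hashlib` (numbers will vary by machine and will not match the Rust figures exactly):

```python
import hashlib, time

MS_PER_MB = 5.23  # SHA256 (1 MB) from the cargo bench table above
print(f"estimated 100 MB checksum: {MS_PER_MB * 100:.0f} ms")

# Hash 1 MB locally to compare against the Rust figure:
data = b"\x00" * (1024 * 1024)
t0 = time.perf_counter()
digest = hashlib.sha256(data).hexdigest()
print(f"1 MB hashed in {(time.perf_counter() - t0) * 1000:.2f} ms")
```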

To reproduce the micro-benchmarks:

```sh
make bench
# or: cargo bench
```

To run the memory comparison against Litestream:

```sh
make bench-compare
# or: uv run bench/compare.py
```

Options:

```sh
uv run bench/compare.py --dbs 1,5,10    # Specific database counts
uv run bench/compare.py --duration 10   # Longer measurement window
uv run bench/compare.py --db-size 1000  # 1 MB test databases
uv run bench/compare.py --json          # JSON output
```

To run the real-world benchmarks:

```sh
make bench-realworld
# or: uv run bench/realworld.py
```

Options:

```sh
uv run bench/realworld.py --test restore    # Just the restore test
uv run bench/realworld.py --sizes 1,10,100  # Specific sizes (MB)
```

Test environment:

  • Platform: macOS (aarch64)
  • Rust: Release build
  • S3 Provider: Tigris (Fly.io)
  • Network: Residential broadband

Use Walsync when:

  • Multiple databases (5+)
  • Resource-constrained environments (512MB VMs)
  • Memory is a concern
  • Need explicit SHA256 verification

Use Litestream when:

  • Single database
  • Battle-tested production stability is critical
  • Team familiar with Go ecosystem