# Benchmarks
## Memory Usage vs Litestream

The key advantage of walsync is memory efficiency when managing multiple databases.
| Databases | Litestream | Walsync | Savings |
|---|---|---|---|
| 1 | 33 MB (1 process) | 12 MB | 21 MB |
| 5 | 152 MB (5 processes) | 14 MB | 138 MB |
| 10 | 286 MB (10 processes) | 12 MB | 274 MB |
| 20 | 600 MB (20 processes) | 12 MB | 588 MB |
Measured on macOS (aarch64) with 100KB test databases and a 5-second measurement window.

Key observation: walsync's memory use stays roughly constant (~12 MB) regardless of database count, while Litestream's grows linearly (~30 MB per process).
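The savings column is just the row-wise difference between the two tools; a quick Python check of the numbers above:

```python
# Memory figures (MB) copied from the comparison table above.
litestream = {1: 33, 5: 152, 10: 286, 20: 600}
walsync = {1: 12, 5: 14, 10: 12, 20: 12}

for dbs in litestream:
    savings = litestream[dbs] - walsync[dbs]
    print(f"{dbs:>2} databases: {savings} MB saved")
```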
## Restore Performance

Tested with Tigris (Fly.io's S3-compatible storage):
| DB Size | Time (ms) | Throughput (MB/s) |
|---|---|---|
| 1.2 MB | 971 | 1.24 |
| 12 MB | 1,502 | 8.00 |
| 120 MB | 6,954 | 17.27 |
Key findings:

- Restore throughput scales with database size
- ~17 MB/s on large databases is good throughput for Tigris
- Small-database restores are dominated by connection overhead
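Throughput in the table is simply size divided by wall-clock time; a quick check of the rows above (printed values may round slightly differently from the table):

```python
# (size_mb, time_ms) rows from the restore table above.
runs = [(1.2, 971), (12, 1502), (120, 6954)]
for size_mb, time_ms in runs:
    print(f"{size_mb} MB -> {size_mb / (time_ms / 1000):.2f} MB/s")
```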
## Sync Latency

Time to snapshot a database to Tigris:

```sh
uv run bench/realworld.py --test sync
```

Measured results (~100KB database to Tigris):
- p50: 445ms
- p95: 539ms
- Mean: 445ms
Latency is dominated by S3 upload time over residential broadband.
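A p50/p95/mean summary like the one above can be computed with the stdlib `statistics` module. The helper and the sample latencies below are illustrative, not taken from `bench/realworld.py`:

```python
import statistics

def summarize(latencies_ms):
    """p50/p95/mean summary in the style reported above."""
    qs = statistics.quantiles(latencies_ms, n=100, method="inclusive")
    return {"p50": qs[49], "p95": qs[94], "mean": statistics.fmean(latencies_ms)}

# Hypothetical sample of sync latencies in milliseconds.
print(summarize([431, 440, 445, 449, 452, 460, 539]))
```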
## Write Throughput

Maximum sustainable commits per second with walsync watching:

```sh
uv run bench/realworld.py --test throughput
```

Measured results (macOS aarch64):
- Max commits/sec: 25,874
- Avg commit latency: 0.04ms
Walsync imposes virtually no overhead on SQLite commit performance.
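The shape of this benchmark can be sketched with the stdlib `sqlite3` module: autocommit mode makes each INSERT its own transaction, so commits per second fall out of a timed loop. This is an illustrative stand-in, not `bench/realworld.py` itself, and absolute numbers depend on the machine:

```python
import os
import sqlite3
import tempfile
import time

# WAL-mode database in a temp directory; isolation_level=None means
# autocommit, i.e. one SQLite commit per INSERT.
path = os.path.join(tempfile.mkdtemp(), "bench.db")
conn = sqlite3.connect(path, isolation_level=None)
conn.execute("PRAGMA journal_mode=WAL")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, v TEXT)")

n = 2000
start = time.perf_counter()
for i in range(n):
    conn.execute("INSERT INTO t (v) VALUES (?)", (str(i),))
elapsed = time.perf_counter() - start
print(f"{n / elapsed:,.0f} commits/sec, {elapsed / n * 1000:.3f} ms/commit")
conn.close()
```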
## Checkpoint Impact

SQLite checkpoints merge the WAL back into the main database file. Impact on sync:

```sh
uv run bench/realworld.py --test checkpoint
```

Measured results:
- Normal commit latency: 0.07ms
- Post-checkpoint latency: 0.08ms (minimal impact)
- Checkpoint duration: 7ms for 1MB WAL
Checkpoints have negligible impact on write performance.
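A checkpoint like the one timed above can be forced by hand with `PRAGMA wal_checkpoint`; a stdlib-only sketch (illustrative, not the benchmark's code):

```python
import os
import sqlite3
import tempfile
import time

# Build a WAL-mode database and grow its WAL with a batch of inserts.
path = os.path.join(tempfile.mkdtemp(), "ckpt.db")
conn = sqlite3.connect(path, isolation_level=None)
conn.execute("PRAGMA journal_mode=WAL")
conn.execute("CREATE TABLE t (v TEXT)")
conn.executemany("INSERT INTO t VALUES (?)", [("x" * 100,)] * 5000)

# Force a full checkpoint and time it. The first return column is the
# busy flag (0 = the checkpoint was not blocked by other connections).
start = time.perf_counter()
busy, _, _ = conn.execute("PRAGMA wal_checkpoint(TRUNCATE)").fetchone()
print(f"checkpoint took {(time.perf_counter() - start) * 1000:.1f} ms, busy={busy}")
conn.close()
```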
## Network Recovery

Time to catch up after walsync is restarted:

```sh
uv run bench/realworld.py --test network
```

Measured results (5-second simulated outage):
- Writes during outage: 49 rows
- Catchup time: ~5s (immediate on restart)
- Data loss: 0 (WAL preserves all writes)
Strategy: Stop walsync, write for 5 seconds, restart and measure catchup time. All writes made during the outage are synced when walsync restarts.
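The zero-data-loss property rests on SQLite's WAL keeping every committed write on disk whether or not a replicator is watching. A stdlib sketch of the same scenario, with a fresh connection standing in for the restarted walsync:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "outage.db")
conn = sqlite3.connect(path, isolation_level=None)
conn.execute("PRAGMA journal_mode=WAL")
conn.execute("CREATE TABLE writes (id INTEGER PRIMARY KEY)")

# "Outage": the replicator is not running, but the app keeps committing.
for _ in range(49):
    conn.execute("INSERT INTO writes DEFAULT VALUES")
conn.close()

# "Restart": a fresh reader sees every write made during the outage.
restarted = sqlite3.connect(path)
count = restarted.execute("SELECT COUNT(*) FROM writes").fetchone()[0]
print(count)  # all 49 rows survived
restarted.close()
```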
## Micro-Benchmarks

Internal operation performance from `cargo bench`:
| Operation | Time |
|---|---|
| WAL header parse | 47 ns |
| WAL frame header parse | 32 ns |
| SHA256 (1KB) | 6 µs |
| SHA256 (100KB) | 512 µs |
| SHA256 (1MB) | 5.23 ms |
Key findings:

- WAL parsing is extremely fast (under 50 ns)
- SHA256 is the bottleneck for checksums
- Extrapolating, a full checksum of a 100MB database takes ~500ms
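The SHA256 rows are easy to sanity-check from Python with `hashlib`; absolute times are machine-dependent and will not match the Rust table exactly:

```python
import hashlib
import time

# Hash zero-filled buffers of the same sizes as the table above.
for label, size in (("1KB", 1024), ("100KB", 100 * 1024), ("1MB", 1024 * 1024)):
    data = b"\x00" * size
    start = time.perf_counter()
    digest = hashlib.sha256(data).hexdigest()
    print(f"SHA256 ({label}): {(time.perf_counter() - start) * 1e6:.0f} us")
```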
## Running Benchmarks

### Micro-benchmarks (Rust)

```sh
make bench
# or: cargo bench
```

### Comparison with Litestream

```sh
make bench-compare
# or: uv run bench/compare.py
```

Options:

```sh
uv run bench/compare.py --dbs 1,5,10    # Specific counts
uv run bench/compare.py --duration 10   # Longer measurement
uv run bench/compare.py --db-size 1000  # 1MB test databases
uv run bench/compare.py --json          # JSON output
```

### Real-world benchmarks

```sh
make bench-realworld
# or: uv run bench/realworld.py
```

Options:

```sh
uv run bench/realworld.py --test restore    # Just the restore test
uv run bench/realworld.py --sizes 1,10,100  # Specific sizes (MB)
```

## Test Environment

- Platform: macOS (aarch64)
- Rust: Release build
- S3 Provider: Tigris (Fly.io)
- Network: Residential broadband
## When to Use Each

Use walsync when:
- Multiple databases (5+)
- Resource-constrained environments (512MB VMs)
- Memory is a concern
- Need explicit SHA256 verification
Use Litestream when:
- Single database
- Battle-tested production stability is critical
- Team familiar with Go ecosystem