# Hyperactor Channel Benchmarks

This directory contains comprehensive benchmarks for the Hyperactor channel system using the Criterion benchmarking framework.

## Overview

The benchmark harness tests various aspects of channel performance:

1. **Channel Creation** - Benchmarks the cost of creating channels (dial/serve operations)
2. **Message Sending** - Tests different sending methods (`try_post`, `post`, `send`)
3. **Message Sizes** - Compares performance with different message sizes (small, medium, large)
4. **Throughput** - Measures messages per second for different transports
5. **Latency** - Round-trip latency measurements
6. **Transport Comparison** - Performance comparison between different transport types

## Supported Transports

The benchmarks test the following channel transports:

- **Local** - In-process channels using mpsc
- **TCP** - Network channels over TCP
- **Unix** - Unix domain socket channels

## Message Types

Three message types are used to test different payload sizes:

- **SmallMessage** - ~16 bytes (id + value)
- **MediumMessage** - ~1KB (id + 1KB data)
- **LargeMessage** - ~64KB (id + 64KB payload)

## Running Benchmarks

### Run All Benchmarks
```bash
cargo bench
```

### Run Specific Benchmark Groups
```bash
# Channel creation benchmarks
cargo bench channel_creation

# Message sending benchmarks
cargo bench message_sending

# Message size comparison
cargo bench message_sizes

# Throughput benchmarks
cargo bench throughput

# Latency benchmarks
cargo bench latency

# Transport comparison
cargo bench transport_comparison
```

### Run with a Specific Transport
```bash
# Run only local transport benchmarks
cargo bench -- local

# Run only TCP transport benchmarks
cargo bench -- tcp
```

## Output

Benchmarks generate:

1. **Console Output** - Real-time results with timing statistics
2. **HTML Reports** - Detailed reports in the `target/criterion/` directory
3. **Baseline Comparisons** - Performance regression detection
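Criterion's baseline flags make regression checks explicit, for example:

```bash
# Record a baseline before making changes
cargo bench -- --save-baseline before

# ...make changes, then compare against the saved baseline
cargo bench -- --baseline before
```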

## Benchmark Details

### Channel Creation
- Measures the time to create server (`serve`) and client (`dial`) endpoints
- Tests all supported transport types
- Useful for understanding connection establishment overhead

### Message Sending
- Compares `try_post` (non-blocking), `post` (fire-and-forget), and `send` (synchronous)
- Tests local and TCP transports
- Measures raw sending performance

### Message Sizes
- Tests the impact of payload size on performance
- Uses the local transport to isolate serialization/deserialization costs
- Reports throughput in bytes per second

### Throughput
- Measures sustained message rate
- Batches messages to reduce measurement overhead
- Uses a 10-second measurement window for stability

### Latency
- Measures round-trip latency
- Uses an echo-server pattern
- Tests both local and network transports

### Transport Comparison
- Directly compares transport performance
- Sends 1000 messages per iteration
- Helps choose the optimal transport for a given use case

## Interpreting Results

- **Lower is better** for latency measurements
- **Higher is better** for throughput measurements
- **Confidence intervals** indicate measurement reliability
- **Slope** indicates performance scaling characteristics

## Configuration

Benchmarks can be customized by modifying:

- Message sizes in the message type definitions
- The number of messages in throughput tests
- Measurement duration in benchmark groups
- The transport types tested

## Dependencies

The benchmark harness requires:

- `criterion` - Benchmarking framework
- `tokio` - Async runtime
- `serde` - Message serialization
- `hyperactor` - The library being benchmarked

## Notes

- Benchmarks handle async operations automatically using the Tokio runtime
- Receiver tasks are spawned to consume messages and prevent backpressure
- Results may vary with system load and hardware
- Network benchmarks may be affected by local network configuration