It will create a virtual environment in `python-virtualenv` and install the necessary Python dependencies.
Now you can deploy serverless experiments :-)

The installation of additional platforms is controlled with the `--platform` and `--no-platform`
switches. Currently, the default behavior for `install.py` is to install only the local environment.
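
As an illustration, installing a cloud platform alongside the local environment might look as follows. This is only a sketch: the platform name `aws` and the activation path are assumptions (based on the virtual environment directory named above and the standard virtualenv layout), so consult `./install.py --help` for the values actually accepted by the switches.

```
# install the local environment plus one cloud platform (platform name assumed)
./install.py --platform aws
# activate the created virtual environment (standard virtualenv layout assumed)
. python-virtualenv/bin/activate
```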
**Make sure** that your Docker daemon is running and your user has sufficient permissions to use it. Otherwise you might see a lot of "Connection refused" and "Permission denied" errors when using SeBS.
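
A quick way to check both conditions is to run the standard Docker CLI commands below as the same user that will run SeBS (these are plain Docker commands, not part of SeBS):

```
# confirms the daemon is running and reachable by this user
docker info
# runs a minimal container end to end
docker run --rm hello-world
```
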
To verify the correctness of installation, you can use [our regression testing](#regression).

## Usage

SeBS has three basic commands: `benchmark`, `experiment`, and `local`.
For each command, you can pass the `--verbose` flag to increase the verbosity of the output.
By default, all scripts create a cache in the directory `cache` to store code with
dependencies and information on allocated cloud resources.
Benchmarks are rebuilt after a change in the source code is detected.
To enforce redeployment of code and benchmark input, use the flags `--update-code`
and `--update-storage`, respectively.
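
For instance, a forced rebuild and input re-upload of a single benchmark could be requested as follows; the sketch simply combines the flags above with the `benchmark invoke` example shown later in this section:

```
./sebs.py benchmark invoke 110.dynamic-html test --config config/example.json --deployment aws --update-code --update-storage
```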

**Note:** the cache does not support updating the cloud region. If you want to deploy benchmarks
to a new cloud region, then use a new cache directory.

### Benchmark

This command is used to build, deploy, and execute a serverless benchmark in the cloud.
The example below invokes the benchmark `110.dynamic-html` on AWS via the standard HTTP trigger.

```
./sebs.py benchmark invoke 110.dynamic-html test --config config/example.json --deployment aws --verbose
```

To configure your benchmark, change settings in the config file or use command-line options.
The full list is available by running `./sebs.py benchmark invoke --help`.

### Regression

Additionally, we provide a regression option to execute all benchmarks on a given platform.
The example below demonstrates how to run the regression suite with the `test` input size on AWS.

```
./sebs.py benchmark regression test --config config/example.json --deployment aws
```

The regression can be executed on a single benchmark as well:

```
./sebs.py benchmark regression test --config config/example.json --deployment aws --benchmark-name 120.uploader
```

### Experiment

This command is used to execute experiments described in the paper. The example below runs the experiment **perf-cost**.
The configuration specifies that the benchmark **110.dynamic-html** is executed 50 times, with 50 concurrent invocations, and that both cold and warm invocations are recorded.

```json
"perf-cost": {
  "benchmark": "110.dynamic-html",
  "experiments": ["cold", "warm"],
  "input-size": "test",
  "repetitions": 50,
  "concurrent-invocations": 50,
  "memory-sizes": [128, 256]
}
```
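
Launching the experiment itself presumably mirrors the other subcommands; the `invoke` subcommand name below is an assumption based on the `benchmark` command, so check `./sebs.py experiment --help` for the exact syntax:

```
./sebs.py experiment invoke perf-cost --config config/example.json --deployment aws
```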

To download cloud metrics and process the invocations into a .csv file with the data, run the `process` subcommand:

```
./sebs.py experiment process perf-cost --config example.json --deployment aws
```

### Local

In addition to the cloud deployment, we provide an opportunity to launch benchmarks locally with the help of [minio](https://min.io/) storage.
This allows for debugging and a local characterization of the benchmarks.

To launch Docker containers, use the following command - this example launches benchmark `110.dynamic-html` with size `test`:

```
./sebs.py local start 110.dynamic-html test out.json --config config/example.json --deployments 1
```

The output file `out.json` will contain information on the deployed containers and the endpoints that can be used to invoke functions.
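
Once an endpoint is known, the function can be invoked directly over HTTP; the address, port, and payload below are placeholders, since the real values come from `out.json` and the benchmark's input definition:

```
# replace address, port, and payload with values taken from out.json and the benchmark input
curl -X POST http://172.17.0.2:9000 -H 'Content-Type: application/json' -d '{}'
```
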
To stop containers, you can use the following command:

```
./sebs.py local stop out.json
```

The stopped containers won't be automatically removed unless the option `--remove-containers` has been passed to the `start` command.

## Experiments

For details on experiments and methodology, please refer to [our paper](#paper).

#### Performance & cost

Invokes the given benchmark a selected number of times, measuring the time and cost of invocations.
Supports `cold` and `warm` invocations with a selected number of concurrent invocations.
In addition, to accurately measure the overheads of Azure Function Apps, we offer `burst` and `sequential` invocation types that do not distinguish
between cold and warm startups.
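
Presumably these invocation types are selected through the same `experiments` field shown in the perf-cost configuration above; the snippet below is an assumption based on that example rather than a documented configuration:

```json
"perf-cost": {
  "benchmark": "110.dynamic-html",
  "experiments": ["burst", "sequential"],
  "input-size": "test",
  "repetitions": 50,
  "concurrent-invocations": 50,
  "memory-sizes": [128]
}
```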

#### Network ping-pong

Measures the distribution of network latency between the benchmark driver and the function instance.

#### Invocation overhead

The experiment performs the clock drift synchronization protocol to accurately measure the startup time of a function by comparing
benchmark driver and function timestamps.

#### Eviction model

Executes test functions multiple times, with varying size, memory, and runtime configurations, to test how long function instances stay alive.
The result helps to estimate the analytical models describing cold startups.
Currently supported only on AWS.

## Authors

* [Nico Graf (ETH Zurich)](https://github.com/ncograf/) - contributed implementation of regression tests, bugfixes, and helped with testing and documentation.
* [Kacper Janda](https://github.com/Kacpro), [Mateusz Knapik](https://github.com/maknapik), [JmmCz](https://github.com/JmmCz), AGH University of Science and Technology - together contributed Google Cloud support.
* [Grzegorz Kwaśniewski (ETH Zurich)](https://github.com/gkwasniewski) - worked on the modeling experiments.
* [Paweł Żuk (University of Warsaw)](https://github.com/pmzuk) - contributed OpenWhisk support.