
Commit 3ac761a

Merge pull request #95 from spcl/dev
Full OpenWhisk support.
2 parents: 9dcbcc9 + 4c02784

113 files changed: +3446, −882 lines


.circleci/config.yml

Lines changed: 8 additions & 4 deletions
```diff
@@ -1,7 +1,7 @@
 version: 2.1

 orbs:
-  python: circleci/python@0.2.1
+  python: circleci/python@1.4.0

 jobs:
   linting:
@@ -12,7 +12,11 @@ jobs:
           key: deps1-{{ .Branch }}-{{ checksum "requirements.txt" }}
       - run:
           command: |
-            python3 install.py --aws --azure --gcp --dont-rebuild-docker-images --no-local
+            sudo apt update && sudo apt install libcurl4-openssl-dev
+          name: Install curl-config from Ubuntu APT
+      - run:
+          command: |
+            python3 install.py --aws --azure --gcp --no-local
           name: Install pip dependencies
       - run:
           command: |
@@ -40,8 +44,8 @@ jobs:
            then
              ls $HOME/docker/*.tar.gz | xargs -I {file} sh -c "zcat {file} | docker load";
            else
-             docker pull mcopik/serverless-benchmarks:build.aws.python.3.6
-             docker pull mcopik/serverless-benchmarks:build.aws.nodejs.10.x
+             docker pull mcopik/serverless-benchmarks:build.aws.python.3.7
+             docker pull mcopik/serverless-benchmarks:build.aws.nodejs.12.x
            fi
           name: Load Docker images
       - run:
```
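The added CI step installs libcurl's development headers because building `pycurl` requires the `curl-config` tool they provide. As an illustrative sketch (not part of this commit), a Python pre-flight check for that tool could look like:

```python
import shutil

# Sketch: pycurl compiles against libcurl via the curl-config tool, which the
# new CI step installs from the libcurl4-openssl-dev package on Ubuntu.
path = shutil.which("curl-config")
if path:
    print(f"curl-config found at {path}")
else:
    print("curl-config missing: install libcurl4-openssl-dev before 'pip install pycurl'")
```

Running this before `install.py` surfaces the missing-header problem early instead of letting the `pycurl` build fail mid-installation.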

.dockerignore

Lines changed: 1 addition & 0 deletions
```diff
@@ -6,3 +6,4 @@ config
 cache
 python-venv
 regression-*
+*_code
```

.gitignore

Lines changed: 4 additions & 0 deletions
```diff
@@ -170,3 +170,7 @@ dmypy.json
 sebs-*
 # cache
 cache
+
+# IntelliJ IDEA files
+.idea
+*.iml
```

.mypy.ini

Lines changed: 6 additions & 0 deletions
```diff
@@ -30,5 +30,11 @@ ignore_missing_imports = True
 [mypy-google.api_core]
 ignore_missing_imports = True

+[mypy-googleapiclient.discovery]
+ignore_missing_imports = True
+
+[mypy-googleapiclient.errors]
+ignore_missing_imports = True
+
 [mypy-testtools]
 ignore_missing_imports = True
```

README.md

Lines changed: 50 additions & 177 deletions
````diff
@@ -1,32 +1,56 @@
-# SeBS: Serverless Benchmark Suite
-
-**FaaS benchmarking suite for serverless functions with automatic build, deployment, and measurements.**

 [![CircleCI](https://circleci.com/gh/spcl/serverless-benchmarks.svg?style=shield)](https://circleci.com/gh/spcl/serverless-benchmarks)
 ![Release](https://img.shields.io/github/v/release/spcl/serverless-benchmarks)
 ![License](https://img.shields.io/github/license/spcl/serverless-benchmarks)
 ![GitHub issues](https://img.shields.io/github/issues/spcl/serverless-benchmarks)
 ![GitHub pull requests](https://img.shields.io/github/issues-pr/spcl/serverless-benchmarks)

-SeBS is a diverse suite of FaaS benchmarks that allows an automatic performance analysis of
+# SeBS: Serverless Benchmark Suite
+
+**FaaS benchmarking suite for serverless functions with automatic build, deployment, and measurements.**
+
+![Overview of SeBS features and components.](docs/overview.png)
+
+SeBS is a diverse suite of FaaS benchmarks that allows automatic performance analysis of
 commercial and open-source serverless platforms. We provide a suite of
-[benchmark applications](#benchmark-applications) and [experiments](#experiments),
+[benchmark applications](#benchmark-applications) and [experiments](#experiments)
 and use them to test and evaluate different components of FaaS systems.
 See the [installation instructions](#installation) to learn how to configure SeBS to use selected
 cloud services and [usage instructions](#usage) to automatically launch experiments in the cloud!

-SeBS provides support for automatic deployment and invocation of benchmarks on
-AWS Lambda, Azure Functions, Google Cloud Functions, and a custom, Docker-based local
-evaluation platform. See the [documentation on cloud providers](docs/platforms.md)
-to learn how to provide SeBS with cloud credentials.
+
+SeBS provides support for **automatic deployment** and invocation of benchmarks on
+commercial and black-box platforms
+[AWS Lambda](https://aws.amazon.com/lambda/),
+[Azure Functions](https://azure.microsoft.com/en-us/services/functions/),
+and [Google Cloud Functions](https://cloud.google.com/functions).
+Furthermore, we support the open-source platform [OpenWhisk](https://openwhisk.apache.org/)
+and offer a custom, Docker-based local evaluation platform.
+See the [documentation on cloud providers](docs/platforms.md)
+for details on configuring each platform in SeBS.
 The documentation describes in detail [the design and implementation of our
 tool](docs/design.md), and see the [modularity](docs/modularity.md)
 section to learn how SeBS can be extended with new platforms, benchmarks, and experiments.
+Find out more about our project in [a paper summary](mcopik.github.io/projects/sebs/).
+
+Do you have further questions not answered by our documentation?
+Did you encounter troubles with installing and using SeBS?
+Or do you want to use SeBS in your work and you need new features?
+Feel free to reach us through GitHub issues or by writing to <[email protected]>.

-SeBS can be used with our Docker image `spcleth/serverless-benchmarks:latest`, or the tool
-can be [installed locally](#installation).

-### Paper
+For more information on how to configure, use and extend SeBS, see our
+documentation:
+
+* [How to use SeBS?](docs/usage.md)
+* [Which benchmark applications are offered?](docs/benchmarks.md)
+* [Which experiments can be launched to evaluate FaaS platforms?](docs/experiment.md)
+* [How to configure serverless platforms?](docs/platforms.md)
+* [How SeBS builds and deploys functions?](docs/build.md)
+* [How SeBS package is designed?](docs/design.md)
+* [How to extend SeBS with new benchmarks, experiments, and platforms?](docs/modularity.md)
+
+### Publication

 When using SeBS, please cite our [Middleware '21 paper](https://dl.acm.org/doi/abs/10.1145/3464298.3476133).
 An extended version of our paper is [available on arXiv](https://arxiv.org/abs/2012.14132), and you can
@@ -35,39 +59,28 @@ You can cite our software repository as well, using the citation button on the right.

 ```
 @inproceedings{copik2021sebs,
-  author={Marcin Copik and Grzegorz Kwasniewski and Maciej Besta and Michal Podstawski and Torsten Hoefler},
-  title={SeBS: A Serverless Benchmark Suite for Function-as-a-Service Computing},
+  author = {Copik, Marcin and Kwasniewski, Grzegorz and Besta, Maciej and Podstawski, Michal and Hoefler, Torsten},
+  title = {SeBS: A Serverless Benchmark Suite for Function-as-a-Service Computing},
   year = {2021},
+  isbn = {9781450385343},
   publisher = {Association for Computing Machinery},
+  address = {New York, NY, USA},
   url = {https://doi.org/10.1145/3464298.3476133},
   doi = {10.1145/3464298.3476133},
   booktitle = {Proceedings of the 22nd International Middleware Conference},
+  pages = {64–78},
+  numpages = {15},
+  keywords = {benchmark, serverless, FaaS, function-as-a-service},
+  location = {Qu\'{e}bec city, Canada},
   series = {Middleware '21}
 }
 ```

-## Benchmark Applications
-
-For details on benchmark selection and their characterization, please refer to [our paper](#paper).
-
-| Type | Benchmark | Languages | Description |
-| :--- | :---: | :---: | :---: |
-| Webapps | 110.dynamic-html | Python, Node.js | Generate dynamic HTML from a template. |
-| Webapps | 120.uploader | Python, Node.js | Uploader file from provided URL to cloud storage. |
-| Multimedia | 210.thumbnailer | Python, Node.js | Generate a thumbnail of an image. |
-| Multimedia | 220.video-processing | Python | Add a watermark and generate gif of a video file. |
-| Utilities | 311.compression | Python | Create a .zip file for a group of files in storage and return to user to download. |
-| Utilities | 504.dna-visualization | Python | Creates a visualization data for DNA sequence. |
-| Inference | 411.image-recognition | Python | Image recognition with ResNet and pytorch. |
-| Scientific | 501.graph-pagerank | Python | PageRank implementation with igraph. |
-| Scientific | 501.graph-mst | Python | Minimum spanning tree (MST) implementation with igraph. |
-| Scientific | 501.graph-bfs | Python | Breadth-first search (BFS) implementation with igraph. |
-
 ## Installation

 Requirements:
 - Docker (at least 19)
-- Python 3.6+ with:
+- Python 3.7+ with:
   - pip
   - venv
 - `libcurl` and its headers must be available on your system to install `pycurl`
@@ -78,7 +91,7 @@ Requirements:
 To install the benchmarks with a support for all platforms, use:

 ```
-./install.py --aws --azure --gcp --local
+./install.py --aws --azure --gcp --openwhisk --local
 ```

 It will create a virtual environment in `python-virtualenv`, install necessary Python
@@ -92,153 +105,12 @@ virtual environment:
 Now you can deploy serverless experiments :-)

 The installation of additional platforms is controlled with the `--platform` and `--no-platform`
-switches. Currently, the default behavior for `install.py` is to install only the local
-environment.
+switches. Currently, the default behavior for `install.py` is to install only the
+local environment.

 **Make sure** that your Docker daemon is running and your user has sufficient permissions to use it. Otherwise you might see a lot of "Connection refused" and "Permission denied" errors when using SeBS.

-To verify the correctness of installation, you can use [our regression testing](#regression).
-
-## Usage
-
-SeBS has three basic commands: `benchmark`, `experiment`, and `local`.
-For each command you can pass `--verbose` flag to increase the verbosity of the output.
-By default, all scripts will create a cache in directory `cache` to store code with
-dependencies and information on allocated cloud resources.
-Benchmarks will be rebuilt after a change in source code is detected.
-To enforce redeployment of code and benchmark input please use flags `--update-code`
-and `--update-storage`, respectively.
-**Note:** the cache does not support updating cloud region. If you want to deploy benchmarks
-to a new cloud region, then use a new cache directory.
-
-### Benchmark
-
-This command is used to build, deploy, and execute serverless benchmark in cloud.
-The example below invokes the benchmark `110.dynamic-html` on AWS via the standard HTTP trigger.
-
-```
-./sebs.py benchmark invoke 110.dynamic-html test --config config/example.json --deployment aws --verbose
-```
-
-To configure your benchmark, change settings in the config file or use command-line options.
-The full list is available by running `./sebs.py benchmark invoke --help`.
-
-### Regression
-
-Additionally, we provide a regression option to execute all benchmarks on a given platform.
-The example below demonstrates how to run the regression suite with `test` input size on AWS.
-
-```
-./sebs.py benchmark regression test --config config/example.json --deployment aws
-```
-
-The regression can be executed on a single benchmark as well:
-
-```
-./sebs.py benchmark regression test --config config/example.json --deployment aws --benchmark-name 120.uploader
-```
-
-### Experiment
-
-This command is used to execute benchmarks described in the paper. The example below runs the experiment **perf-cost**:
-
-```
-./sebs.py experiment invoke perf-cost --config config/example.json --deployment aws
-```
-
-The configuration specifies that benchmark **110.dynamic-html** is executed 50 times, with 50 concurrent invocations, and both cold and warm invocations are recorded.
-
-```json
-"perf-cost": {
-    "benchmark": "110.dynamic-html",
-    "experiments": ["cold", "warm"],
-    "input-size": "test",
-    "repetitions": 50,
-    "concurrent-invocations": 50,
-    "memory-sizes": [128, 256]
-}
-```
-
-To download cloud metrics and process the invocations into a .csv file with data, run the process construct
-
-```
-./sebs.py experiment process perf-cost --config example.json --deployment aws
-```
-
-### Local
-
-In addition to the cloud deployment, we provide an opportunity to launch benchmarks locally with the help of [minio](https://min.io/) storage.
-This allows us to conduct debugging and a local characterization of the benchmarks.
-
-To launch Docker containers, use the following command - this example launches benchmark `110.dynamic-html` with size `test`:
-
-```
-./sebs.py local start 110.dynamic-html test out.json --config config/example.json --deployments 1
-```
-
-The output file `out.json` will contain the information on containers deployed and the endpoints that can be used to invoke functions:
-
-```
-{
-  "functions": [
-    {
-      "benchmark": "110.dynamic-html",
-      "hash": "5ff0657337d17b0cf6156f712f697610",
-      "instance_id": "e4797ae01c52ac54bfc22aece1e413130806165eea58c544b2a15c740ec7d75f",
-      "name": "110.dynamic-html-python-128",
-      "port": 9000,
-      "triggers": [],
-      "url": "172.17.0.3:9000"
-    }
-  ],
-  "inputs": [
-    {
-      "random_len": 10,
-      "username": "testname"
-    }
-  ]
-}
-```
-
-In our example, we can use `curl` to invoke the function with provided input:
-
-```
-curl 172.17.0.3:9000 --request POST --data '{"random_len": 10,"username": "testname"}' --header 'Content-Type: application/json'
-```
-
-To stop containers, you can use the following command:
-
-```
-./sebs.py local stop out.json
-```
-
-The stopped containers won't be automatically removed unless the option `--remove-containers` has been passed to the `start` command.
-
-## Experiments
-
-For details on experiments and methodology, please refer to [our paper](#paper).
-
-#### Performance & cost
-
-Invokes given benchmark a selected number of times, measuring the time and cost of invocations.
-Supports `cold` and `warm` invocations with a selected number of concurrent invocations.
-In addition, to accurately measure the overheads of Azure Function Apps, we offer `burst` and `sequential` invocation type that doesn't distinguish
-between cold and warm startups.
-
-#### Network ping-pong
-
-Measures the distribution of network latency between benchmark driver and function instance.
-
-#### Invocation overhead
-
-The experiment performs the clock drift synchronization protocol to accurately measure the startup time of a function by comparing
-benchmark driver and function timestamps.
-
-#### Eviction model
-
-Executes test functions multiple times, with varying size, memory and runtime configurations, to test for how long function instances stay alive.
-The result helps to estimate the analytical models describing cold startups.
-Currently supported only on AWS.
+To verify the correctness of installation, you can use [our regression testing](docs/usage.md#regression).

 ## Authors

@@ -247,4 +119,5 @@ Currently supported only on AWS.
 * [Nico Graf (ETH Zurich)](https://github.com/ncograf/) - contributed implementation of regression tests, bugfixes, and helped with testing and documentation.
 * [Kacper Janda](https://github.com/Kacpro), [Mateusz Knapik](https://github.com/maknapik), [JmmCz](https://github.com/JmmCz), AGH University of Science and Technology - contributed together Google Cloud support.
 * [Grzegorz Kwaśniewski (ETH Zurich)](https://github.com/gkwasniewski) - worked on the modeling experiments.
+* [Paweł Żuk (University of Warsaw)](https://github.com/pmzuk) - contributed OpenWhisk support.
````
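The README's warning that the Docker daemon must be running and accessible can be turned into a quick pre-flight check. The sketch below is illustrative only and not part of SeBS; the helper function name is made up for this example:

```python
import shutil
import subprocess

# Sketch of the pre-flight check suggested by the README warning: verify that
# the Docker daemon is reachable before launching SeBS builds, to avoid the
# "Connection refused" / "Permission denied" errors mentioned above.
def docker_reachable() -> bool:
    # No docker binary at all: definitely unreachable.
    if shutil.which("docker") is None:
        return False
    # `docker info` talks to the daemon; a nonzero exit code means the daemon
    # is down or the current user lacks permission to use it.
    result = subprocess.run(
        ["docker", "info"],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

if docker_reachable():
    print("Docker daemon reachable")
else:
    print("Docker unreachable: start the daemon or check your user's permissions")
```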

Lines changed: 9 additions & 0 deletions
```diff
@@ -0,0 +1,9 @@
+{
+  "name": "",
+  "version": "1.0.0",
+  "description": "",
+  "author": "",
+  "license": "",
+  "dependencies": {
+  }
+}
```
Lines changed: 1 addition & 1 deletion
```diff
@@ -1 +1 @@
-jinja2==2.10.3
+jinja2>=2.10.3
```
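This change relaxes an exact pin to a lower bound: any jinja2 release at or above 2.10.3 now satisfies the requirement. The toy sketch below illustrates the difference; note that real pip resolves specifiers per PEP 440, while this simplified comparison handles only dotted numeric versions:

```python
# Toy illustration (pip implements full PEP 440; this is a simplification)
# of why "jinja2>=2.10.3" accepts newer releases that the old exact pin
# "jinja2==2.10.3" rejected.
def parse(version: str) -> tuple:
    return tuple(int(part) for part in version.split("."))

def satisfies_exact_pin(version: str) -> bool:
    """Old requirement: jinja2==2.10.3"""
    return parse(version) == (2, 10, 3)

def satisfies_lower_bound(version: str) -> bool:
    """New requirement: jinja2>=2.10.3 (tuple comparison is lexicographic)"""
    return parse(version) >= (2, 10, 3)

for v in ["2.10.3", "2.11.3", "3.0.0"]:
    print(v, satisfies_exact_pin(v), satisfies_lower_bound(v))
# 2.10.3 True True
# 2.11.3 False True
# 3.0.0 False True
```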

benchmarks/100.webapps/120.uploader/nodejs/package.json

Lines changed: 1 addition & 2 deletions
```diff
@@ -4,8 +4,7 @@
   "description": "",
   "author": "",
   "license": "",
-  "dependencies": {},
-  "devDependencies": {
+  "dependencies": {
     "request": "^2.88.0"
   }
 }
```

benchmarks/200.multimedia/210.thumbnailer/nodejs/package.json

Lines changed: 1 addition & 1 deletion
```diff
@@ -5,6 +5,6 @@
   "author": "",
   "license": "",
   "dependencies": {
-    "sharp": "^0.23.4"
+    "sharp": "^0.25"
   }
 }
```
Lines changed: 1 addition & 0 deletions
```diff
@@ -0,0 +1 @@
+Pillow==9.0.0
```
