docs/user/InstructionsForAutoTuner.md
@@ -5,25 +5,34 @@ AutoTuner provides a generic interface where users can define parameter configur
This enables AutoTuner to easily support various tools and flows. AutoTuner also utilizes [METRICS2.1](https://github.com/ieee-ceda-datc/datc-rdf-Metrics4ML) to capture PPA
of individual search trials. With the abundant features of METRICS2.1, users can explore various reward functions that steer the flow autotuning to different PPA goals.

AutoTuner provides three main functionalities as follows.
* [Ray] Automatic hyperparameter tuning framework for OpenROAD-flow-scripts (ORFS)
* [Ray] Parametric sweeping experiments for ORFS
* [Vizier] Multi-objective optimization of ORFS parameters

AutoTuner contains top-level Python scripts for ORFS, each of which implements a different search algorithm. The currently supported search algorithms are as follows.
* Random/Grid Search
* Population Based Training ([PBT](https://www.deepmind.com/blog/population-based-training-of-neural-networks))
* Tree Parzen Estimator ([HyperOpt](https://hyperopt.github.io/hyperopt))
* Quasi Random Search ([quasi-random](https://developers.google.com/machine-learning/guides/deep-learning-tuning-playbook/quasi-random-search))
* Gaussian Process Bandit ([GP-Bandit](https://acsweb.ucsd.edu/~shshekha/GPBandits.html))
* Non-dominated Sorting Genetic Algorithm II ([NSGA-II](https://ieeexplore.ieee.org/document/996017))

For the Ray algorithms, the optimized function can be adjusted with user-defined coefficient values (`coeff_perform`, `coeff_power`, `coeff_area`) for the three objectives to set the direction of tuning. They are defined in the [distributed.py script](../../tools/AutoTuner/src/autotuner/distributed.py), in the `get_ppa` method of the `PPAImprov` class. Efforts to optimize each of the objectives are proportional to the specified coefficients.

Using the Vizier algorithms, users can choose which metrics should be optimized with the `--use-metrics` argument.
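
For example, a run that optimizes two of the supported metrics might be launched as sketched below; the entry-point path is an assumption for illustration (it is not given in this excerpt), while the `--orfs` and `--use-metrics` flags and the metric names come from this documentation.

```shell
# Sketch only: the script path is an assumption and may differ in your
# checkout; --orfs, --use-metrics and the metric names are documented here.
# The list separator for --use-metrics (spaces vs. commas) may also differ;
# check the script's --help output.
python3 tools/AutoTuner/src/autotuner/vizier.py \
  --orfs ~/OpenROAD-flow-scripts \
  --use-metrics worst_slack total_power
```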
## Setting up AutoTuner
We have provided two convenience scripts, `./installer.sh` and `./setup.sh`,
that work in Python 3.8 for installation and configuration of AutoTuner,
as shown below:
@@ -32,8 +41,10 @@ Make sure you run the following commands in `./tools/AutoTuner/src/autotuner`.
```
```shell
# Install prerequisites for both Ray Tune and Vizier
./tools/AutoTuner/installer.sh
# Or install prerequisites for only `ray` or only `vizier`, e.g.:
./tools/AutoTuner/installer.sh vizier

# Start virtual environment
./tools/AutoTuner/setup.sh
@@ -54,7 +65,8 @@ Alternatively, here is a minimal example to get started:
      1.0,
      3.7439
    ],
    "step": 0,
    "scale": "log"
  },
  "CORE_MARGIN": {
    "type": "int",
@@ -71,6 +83,7 @@ Alternatively, here is a minimal example to get started:
* `"type"`: Parameter type ("float" or "int") for sweeping/tuning
* `"minmax"`: Min-to-max range for sweeping/tuning. The unit follows the default value of each technology standard cell library.
* `"step"`: Parameter step within the minmax range. A step of 0 for type "float" means a continuous range for sweeping/tuning. A step of 0 for type "int" means a constant parameter.
|`--orfs`| Path to the OpenROAD-flow-scripts repository |
|`--results`| Path where the JSON file with results will be saved |
|`-a` or `--algorithm`| Algorithm for the optimization engine, one of GAUSSIAN_PROCESS_BANDIT, RANDOM_SEARCH, QUASI_RANDOM_SEARCH, GRID_SEARCH, SHUFFLED_GRID_SEARCH, NSGA2 |
|`-m` or `--use-metrics`| Metrics to optimize, a list of worst_slack, clk_period-worst_slack, total_power, core_util, final_util, design_area, core_area, die_area, last_successful_stage |
|`-i` or `--iterations`| Maximum iteration count for the optimization engine |
|`-s` or `--suggestions`| Suggestion count per iteration of the optimization engine |
|`-w` or `--workers`| Number of parallel workers |
|`--use-existing-server`| Address of a running Vizier server |

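Putting these flags together, a multi-objective run could look roughly like the sketch below. The flags are taken from the table above; the entry-point script path and the concrete values are assumptions for illustration.

```shell
# Sketch of a Vizier-based multi-objective run. The script path is an
# assumption and may differ in your checkout; the flags are documented above.
python3 tools/AutoTuner/src/autotuner/vizier.py \
  --orfs ~/OpenROAD-flow-scripts \
  --results ./vizier-results.json \
  --algorithm NSGA2 \
  --use-metrics total_power design_area \
  --iterations 20 \
  --suggestions 5 \
  --workers 4
```

With NSGA2 and two metrics, one would expect the results JSON to describe a set of trade-off points (a Pareto front) rather than a single best configuration.
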
### GUI for optimizations with Ray Tune
Progress is displayed in the terminal where you launch the run, and when all runs have finished, the results are displayed.
You can find the "Best config found" on the screen.