## Environment Variables

- **OPENVINO_TF_CONVERT_VARIABLES_TO_CONSTANTS:**
This variable is disabled by default. When enabled, it freezes variables coming from TensorFlow's ReadVariableOp as constants during the graph translation phase. Enabling it is highly recommended to ensure optimal inference latencies on eagerly executed models. Keep it disabled when model weights are modified after the model is loaded for inference.

  Example:

      OPENVINO_TF_CONVERT_VARIABLES_TO_CONSTANTS="1"

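  The snippet below is a minimal sketch of applying any of the variables in this section from Python rather than from the shell. The `openvino_tensorflow` import and the MobileNetV2 model are assumptions for illustration only.

      import os

      # OPENVINO_TF_* variables are read during graph translation, so set them
      # before the model is loaded and run.
      os.environ["OPENVINO_TF_CONVERT_VARIABLES_TO_CONSTANTS"] = "1"

      import tensorflow as tf
      import openvino_tensorflow  # package name assumed from the OpenVINO(TM) integration with TensorFlow install

      model = tf.keras.applications.MobileNetV2(weights=None)  # illustrative model
      outputs = model(tf.random.uniform([1, 224, 224, 3]))     # first run triggers clustering and translation
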
- **OPENVINO_TF_BACKEND:**
This variable sets the backend device name. It should be set to "CPU", "GPU", "GPU_FP16", "MYRIAD", or "VAD-M".

  Example:

      OPENVINO_TF_BACKEND="MYRIAD"

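  As an alternative to exporting the variable in the shell, the backend can be chosen from Python. The sketch below assumes the installed `openvino_tensorflow` package exposes `list_backends()` and `set_backend()`; if it does not, setting `os.environ["OPENVINO_TF_BACKEND"]` before running the model is sufficient.

      import os

      # Same effect as exporting OPENVINO_TF_BACKEND before launching the process.
      os.environ["OPENVINO_TF_BACKEND"] = "MYRIAD"

      import openvino_tensorflow  # assumed package name

      # Assumed helpers: query the available devices and switch the backend at runtime.
      print(openvino_tensorflow.list_backends())
      openvino_tensorflow.set_backend("CPU")
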
- **OPENVINO_TF_DISABLE:**
Disables **OpenVINO™ integration with TensorFlow** if set to 1.

  Example:

      OPENVINO_TF_DISABLE="1"

- **OPENVINO_TF_LOG_PLACEMENT:**
If this variable is set to 1, logs related to cluster formation and encapsulation are printed.

  Example:

      OPENVINO_TF_LOG_PLACEMENT="1"

- **OPENVINO_TF_MIN_NONTRIVIAL_NODES:**
This variable sets the minimum number of operators that can exist in a cluster. If a cluster contains fewer operators than the specified number, it is de-assigned and all of its Ops are executed using native TensorFlow. By default, the value is calculated based on the total graph size, but it cannot be less than 6 unless it is set manually. (No performance benefit is observed from enabling very small clusters.) To get a detailed cluster summary, set "OPENVINO_TF_LOG_PLACEMENT" to 1, as in the tuning sketch below.

  Example:

      OPENVINO_TF_MIN_NONTRIVIAL_NODES="10"

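  A sketch of a tuning workflow that combines this variable with "OPENVINO_TF_LOG_PLACEMENT": log the placement once to see the cluster sizes, then raise the threshold so only clusters worth offloading remain (the value 10 is illustrative).

      import os

      # Step 1: print the cluster summary for the model.
      os.environ["OPENVINO_TF_LOG_PLACEMENT"] = "1"

      # Step 2: after inspecting the summary, keep only clusters with at least 10 operators;
      # smaller clusters are de-assigned and run on native TensorFlow.
      os.environ["OPENVINO_TF_MIN_NONTRIVIAL_NODES"] = "10"
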
- **OPENVINO_TF_MAX_CLUSTERS:**
This variable sets the maximum number of clusters selected for execution using the OpenVINO™ backend. The clusters are selected based on their size (from largest to smallest), and this decision is made at the final stage of cluster de-assignment. Ops of the remaining clusters are unmarked and executed using native TensorFlow. Setting this environment variable is useful if there are some large clusters and a number of small ones, and performance improves by scheduling only the large clusters on the OpenVINO™ backend. To get a detailed cluster summary, set "OPENVINO_TF_LOG_PLACEMENT" to 1.

  Example:

      OPENVINO_TF_MAX_CLUSTERS="3"

- **OPENVINO_TF_VLOG_LEVEL:**
This variable is used to print the execution logs. Setting it to 1 prints the minimum amount of detail and setting it to 5 prints the most detailed logs.

  Example:

      OPENVINO_TF_VLOG_LEVEL="4"

- **OPENVINO_TF_DISABLED_OPS:**
A list of disabled operators can be passed using this variable. These operators will not be considered for clustering and will fall back to native TensorFlow.

  Example:

      OPENVINO_TF_DISABLED_OPS="Squeeze,Greater,Gather,Unpack"

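  When the set of operators to exclude is computed at runtime, the comma-separated string can be built in Python; the operator names below are simply those from the example above.

      import os

      ops_to_disable = ["Squeeze", "Greater", "Gather", "Unpack"]
      # The variable expects a single comma-separated string of TensorFlow op type names.
      os.environ["OPENVINO_TF_DISABLED_OPS"] = ",".join(ops_to_disable)
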
- **OPENVINO_TF_DUMP_GRAPHS:**
Setting this variable serializes the full graph at all stages of the optimization pass and saves the results in the current directory.

  Example:

      OPENVINO_TF_DUMP_GRAPHS="1"

- **OPENVINO_TF_DUMP_CLUSTERS:**
Setting this variable to 1 will serialize all the clusters in ".pbtxt" format and save them in the current directory.

  Example:

      OPENVINO_TF_DUMP_CLUSTERS="1"

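  Dumped clusters can be inspected with TensorFlow's protobuf utilities. The sketch below assumes a dumped file name; the actual file names depend on the model and the release.

      from google.protobuf import text_format
      from tensorflow.core.framework import graph_pb2

      graph_def = graph_pb2.GraphDef()
      with open("ovtf_cluster_0.pbtxt") as f:  # hypothetical name of a dumped cluster file
          text_format.Parse(f.read(), graph_def)

      # List the operator types contained in the cluster.
      print(sorted({node.op for node in graph_def.node}))
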
- **OPENVINO_TF_ENABLE_BATCHING:**
If this parameter is set to 1 while using VAD-M as the backend, the backend engine divides the input into multiple asynchronous requests in order to utilize all devices in VAD-M and achieve better performance.

  Example:

      OPENVINO_TF_ENABLE_BATCHING="1"

- **OPENVINO_TF_DYNAMIC_FALLBACK:**
This variable enables or disables the dynamic fallback feature. It should be set to "0" to disable and "1" to enable dynamic fallback. When enabled, clusters that cause errors during runtime can fall back to native TensorFlow even though they are assigned to run on OpenVINO™. It is enabled by default.

  Example:

      OPENVINO_TF_DYNAMIC_FALLBACK="0"

- **OPENVINO_TF_CONSTANT_FOLDING:**
This variable enables/disables the constant folding pass on the translated clusters (disabled by default).

  Example:

      OPENVINO_TF_CONSTANT_FOLDING="1"

- **OPENVINO_TF_TRANSPOSE_SINKING:**
This variable enables/disables the transpose sinking pass on the translated clusters (enabled by default).

  Example:

      OPENVINO_TF_TRANSPOSE_SINKING="0"

- **OPENVINO_TF_DISABLE_DEASSIGN_CLUSTERS:**
After clusters are formed, some of them may still fall back to native TensorFlow (e.g., a cluster is too small, or some conditions are not supported by the target device). If this variable is set, clusters are not dropped and are forced to run on the OpenVINO™ backend. This may reduce the performance gain or may cause the execution to crash in some cases.

  Example:

      OPENVINO_TF_DISABLE_DEASSIGN_CLUSTERS="1"

- **OPENVINO_TF_DISABLE_TFFE:**
Starting from the **OpenVINO™ integration with TensorFlow 2.2.0** release, TensorFlow operations are converted by the [TensorFlow Frontend](https://github.com/openvinotoolkit/openvino/tree/master/src/frontends/tensorflow) to the latest available [Operation Set](https://docs.openvino.ai/latest/openvino_docs_ops_opset.html) of the OpenVINO™ toolkit, except for some special cases. Setting **OPENVINO_TF_DISABLE_TFFE** to **1** disables the TensorFlow Frontend; in that case, the TensorFlow Importer (the default translator of **OpenVINO™ integration with TensorFlow 2.1.0** and earlier) is used to translate TensorFlow operations for all backends. If this environment variable is set to **0**, the TensorFlow Frontend is enabled for all backends. As of the **OpenVINO™ integration with TensorFlow 2.2.0** release, this environment variable is effective only on Ubuntu and Windows; the TensorFlow Frontend is not supported on MacOS yet. The table below shows the translation module used by default for each backend and platform in **OpenVINO™ integration with TensorFlow 2.2.0**.

  |             | **CPU**     | **GPU**     | **GPU_FP16** | **MYRIAD**  | **VAD-M**   | **Notes**                                              |
  |-------------|-------------|-------------|--------------|-------------|-------------|--------------------------------------------------------|
  | **Ubuntu**  | TF Frontend | TF Frontend | TF Frontend  | TF Importer | TF Importer | _Environment variable changes the default translator_  |
  | **Windows** | TF Frontend | TF Frontend | TF Frontend  | TF Importer | TF Importer | _Environment variable changes the default translator_  |
  | **MacOS**   | TF Importer | TF Importer | TF Importer  | TF Importer | TF Importer | _Environment variable is not effective_                |

  Example:

      OPENVINO_TF_DISABLE_TFFE="1"

- **OPENVINO_TF_MODEL_CACHE_DIR:**
Using this environment variable, [OpenVINO™ model caching](https://docs.openvino.ai/latest/openvino_docs_OV_UG_Model_caching_overview.html) can be enabled with **OpenVINO™ integration with TensorFlow**. A cache directory must be specified to store the cached files. Reusing a cached model reduces the model compile time, which improves the first inference latency with **OpenVINO™ integration with TensorFlow**. Model caching is disabled by default; to enable it, specify the cache directory using this environment variable. **Note: Model caching support is experimental for the OpenVINO™ integration with TensorFlow 2.2.0 release and is not fully validated.**

  Example:

      OPENVINO_TF_MODEL_CACHE_DIR=path/to/model/cache/directory

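  A short sketch of enabling model caching from Python; the directory path is an arbitrary example and is created if it does not exist.

      import os

      cache_dir = os.path.join(os.getcwd(), "ov_model_cache")  # arbitrary example path
      os.makedirs(cache_dir, exist_ok=True)
      os.environ["OPENVINO_TF_MODEL_CACHE_DIR"] = cache_dir
      # The first inference run compiles the model and writes the cache;
      # subsequent runs reuse it, reducing the first inference latency.
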
- **OPENVINO_TF_ENABLE_OVTF_PROFILING:**
When this environment variable is set to **1**, additional performance timing information is printed as part of the verbose logs. It should be used together with the **OPENVINO_TF_VLOG_LEVEL** environment variable and is only effective when the verbose log level is set to **1** or greater.

  Example:

      OPENVINO_TF_VLOG_LEVEL=1
      OPENVINO_TF_ENABLE_OVTF_PROFILING=1

- **OPENVINO_TF_ENABLE_PERF_COUNT:**
This environment variable is used to print operator-level performance counter information. It is supported only by the CPU backend.

  Example:

      OPENVINO_TF_ENABLE_PERF_COUNT=1

## GPU Precision
