This repository was archived by the owner on Jul 1, 2024. It is now read-only.
README.md: 3 additions & 3 deletions

```diff
@@ -10,7 +10,7 @@ This repository contains the source code of **OpenVINO™ integration with Tenso
 - Intel<sup>®</sup> Movidius™ Vision Processing Units - referred as VPU
 - Intel<sup>®</sup> Vision Accelerator Design with 8 Intel Movidius™ MyriadX VPUs - referred as VAD-M or HDDL
 
-Note: For maximum performance, efficiency, tooling customization, and hardware control, we recommend going beyond this component to adopt native OpenVINO APIs and its runtime.
+[Note: For maximum performance, efficiency, tooling customization, and hardware control, we recommend going beyond this component to adopt native OpenVINO™ APIs and its runtime.]
 
 ## Installation
 
 ### Prerequisites
```
```diff
@@ -53,10 +53,10 @@ This should produce an output like:
 OpenVINO integration with TensorFlow version: b'0.5.0'
 OpenVINO version used for this build: b'2021.3'
 TensorFlow version used for this build: v2.4.1
-CXX11_ABI flag used for this build: 1
+CXX11_ABI flag used for this build: 0
 OpenVINO integration with TensorFlow built with Grappler: False
 
-By default, Intel<sup>®</sup> CPU is used to run inference. However, you can change the default option to either Intel<sup>®</sup> integrated GPU or Intel<sup>®</sup> VPU for AI inferencing. Invoke the following function to change the hardware inferencing is done on.
+By default, Intel<sup>®</sup> CPU is used to run inference. However, you can change the default option to either Intel<sup>®</sup> integrated GPU or Intel<sup>®</sup> VPU for AI inferencing. Invoke the following function to change the hardware on which inferencing is done.
```
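The function the README refers to is elided from this excerpt. As an illustration of the backend choices the README lists, here is a hypothetical validator; `choose_backend` and `SUPPORTED_BACKENDS` are names invented for this sketch, not the openvino_tensorflow API:

```python
# Hypothetical helper for illustration only; not the openvino_tensorflow API.
SUPPORTED_BACKENDS = {"CPU", "GPU", "MYRIAD", "VAD-M"}

def choose_backend(name="CPU"):
    """Validate a backend name, defaulting to Intel CPU as the README does."""
    if name not in SUPPORTED_BACKENDS:
        raise ValueError(f"unsupported backend: {name!r}")
    return name

print(choose_backend("GPU"))  # GPU
```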
docs/BUILD.md: 19 additions & 20 deletions

```diff
@@ -8,7 +8,7 @@
 ## Use Pre-Built Packages
 
-**OpenVINO™ integration with TensorFlow** has two releases: one build with CXX11_ABI=0 and another built with CXX11_ABI=1.
+**OpenVINO™ integration with TensorFlow** has two releases: one built with CXX11_ABI=0 and another built with CXX11_ABI=1.
 
 Since TensorFlow packages available in [PyPi](https://pypi.org) are built with CXX11_ABI=0 and OpenVINO™ release packages are built with CXX11_ABI=1, binary releases of these packages **cannot be installed together**. Based on your needs, you can choose one of the two available methods:
```
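The ABI rule in this hunk can be summarized in a minimal sketch; `abi_compatible` is a hypothetical helper, and the flag values 0 and 1 come from the text above:

```python
# Hypothetical check mirroring the CXX11_ABI rule described above: two binary
# packages can only be combined when both were compiled with the same
# _GLIBCXX_USE_CXX11_ABI value.
def abi_compatible(tf_abi, ovtf_abi):
    return tf_abi == ovtf_abi

print(abi_compatible(0, 0))  # True: PyPI TensorFlow with an ABI0 release
print(abi_compatible(0, 1))  # False: PyPI TensorFlow with an ABI1 (OpenVINO binary) build
```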
```diff
@@ -20,7 +20,7 @@ Since TensorFlow packages available in [PyPi](https://pypi.org) are built with C
 ### Install **OpenVINO™ integration with TensorFlow** alongside PyPi TensorFlow
 
-This **OpenVINO™ integration with TensorFlow** package includes pre-built libraries of OpenVINO™ version 2021.3. The users do not have to install OpenVINO™ separately. This package supports Intel<sup>®</sup> CPUs, Intel<sup>®</sup> integrated GPUs and Intel<sup>®</sup> Movidius™ Vision Processing Units (VPUs).
+This **OpenVINO™ integration with TensorFlow** package includes pre-built libraries of OpenVINO™ version 2021.3. The users do not have to install OpenVINO™ separately. This package supports Intel<sup>®</sup> CPUs, Intel<sup>®</sup> integrated GPUs, and Intel<sup>®</sup> Movidius™ Vision Processing Units (VPUs).
 
 pip3 install -U pip==21.0.1
@@ -29,7 +29,7 @@ This **OpenVINO™ integration with TensorFlow** package includes pre-built libr
 ### Install **OpenVINO™ integration with TensorFlow** alongside the Intel® Distribution of OpenVINO™ Toolkit
 
-This **OpenVINO™ integration with TensorFlow** package is currently compatible with OpenVINO™ version 2021.3. This package supports Intel<sup>®</sup> CPUs, Intel<sup>®</sup> integrated GPUs, Intel<sup>®</sup> Movidius™ Vision Processing Units (VPUs) and Intel<sup>®</sup> Vision Accelerator Design with Movidius™ (VAD-M).
+This **OpenVINO™ integration with TensorFlow** package is currently compatible with OpenVINO™ version 2021.3. This package supports Intel<sup>®</sup> CPUs, Intel<sup>®</sup> integrated GPUs, Intel<sup>®</sup> Movidius™ Vision Processing Units (VPUs), and Intel<sup>®</sup> Vision Accelerator Design with Movidius™ (VAD-M).
 
 You can build TensorFlow from source with -D_GLIBCXX_USE_CXX11_ABI=1 or use the following TensorFlow package:
```
```diff
@@ -49,23 +49,23 @@ You can build TensorFlow from source with -D_GLIBCXX_USE_CXX11_ABI=1 or use the
-3. Download & install Intel® Distribution of OpenVINO™ Toolkit 2021.3 release along with its dependencies from ([https://software.intel.com/en-us/openvino-toolkit](https://software.intel.com/en-us/openvino-toolkit)).
+3. Download & install Intel® Distribution of OpenVINO™ Toolkit 2021.3 release along with its dependencies from ([https://software.intel.com/en-us/openvino-toolkit/download](https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit/download.html)).
 
 4. Initialize the OpenVINO™ environment by running the `setupvars.sh` located in <code>\<openvino\_install\_directory\>\/bin</code> using the command below:
 
 source setupvars.sh
 
 5. Install `openvino-tensorflow`. Based on your Python version, choose the appropriate package below:
```

```diff
-5. Uses pre-built TF from the given location ([refer the Tensorflow build instructions](#tensorflow)). Pulls and builds OpenVINO™ from source. Use this if you need to build OpenVINO-TensorFlow frequently without building TF from source everytime.
+5. Uses pre-built TF from the given location ([refer the Tensorflow build instructions](#tensorflow)). Pulls and builds OpenVINO™ from source. Use this if you need to build **OpenVINO™ integration with TensorFlow** frequently without building TF from source everytime.
 6. Uses prebuilt TF from the given location ([refer the Tensorflow build instructions](#tensorflow)). Uses OpenVINO™ binary. **This is only compatible with ABI1 built TF**.
 Select the `help` option of `build_ovtf.py` script to learn more about various build options.
```
```diff
@@ -153,7 +153,8 @@ Test the installation:
 python3 test_ovtf.py
 
-This command runs all C++ and Python unit tests from the `openvino_tensorflow` source tree. It also runs various TensorFlow Python tests using OpenVINO.
+This command runs all C++ and Python unit tests from the `openvino_tensorflow` source tree. It also runs various TensorFlow Python tests using OpenVINO™.
+
 ## TensorFlow
 
 TensorFlow can be built from source using `build_tf.py`. The build artifacts can be found under ${PATH_TO_TF_BUILD}/artifacts/
```
```diff
@@ -178,17 +179,15 @@ TensorFlow can be built from source using `build_tf.py`. The build artifacts can
 OpenVINO™ can be built from source independently using `build_ov.py`
 
-## Docker
+## Build ManyLinux2014 compatible **OpenVINO™ integration with TensorFlow** wheels
 
-### Build ManyLinux2014 compatible **OpenVINO integration with TensorFlow** whls
-
-To build whl files compatible with manylinux2014, use the following commands. The build artifacts will be available in your container's /whl/ folder.
+To build wheel files compatible with manylinux2014, use the following commands. The build artifacts will be available in your container's /whl/ folder.
```
docs/USAGE.md: 5 additions & 5 deletions

```diff
@@ -32,7 +32,7 @@ To enable verbose logs of the execution of the full TensorFlow pipeline and plac
 openvino_tensorflow.start_logging_placement()
 
-To disbale verbose logs of the execution of the full TensorFlow pipeline and placement stages along with the **OpenVINO™ integration with TensorFlow**, use the following API:
+To disable verbose logs of the execution of the full TensorFlow pipeline and placement stages along with the **OpenVINO™ integration with TensorFlow**, use the following API:
 
 openvino_tensorflow.stop_logging_placement()
```
```diff
@@ -44,7 +44,7 @@ To check the CXX11_ABI used to compile **OpenVINO™ integration with TensorFlow
 openvino_tensorflow.cxx11_abi_flag()
 
-To disable execution of certain operators on the OpenVINO backend, use the following API to run them on native TensorFlow runtime:
+To disable execution of certain operators on the OpenVINO™ backend, use the following API to run them on native TensorFlow runtime:
```
```diff
@@ -57,7 +57,7 @@ To disable execution of certain operators on the OpenVINO backend, use the follo
 ## Environment Variables
 
 **OPENVINO_TF_DISABLE_DEASSIGN_CLUSTERS:**
-After clusters are formed, some of the clusters may still fall back to native Tensorflow due to some reasons (e.g a cluster is too small, some conditions are not supported by the target device). If this variable is set, clusters will not be dropped and forced to run on OpenVINO backend. This may reduce the performance gain or may lead the execution to crash in some cases.
+After clusters are formed, some of the clusters may still fall back to native Tensorflow due to some reasons (e.g a cluster is too small, some conditions are not supported by the target device). If this variable is set, clusters will not be dropped and forced to run on OpenVINO™ backend. This may reduce the performance gain or may lead the execution to crash in some cases.
 
 Example:
```
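The fallback behaviour this variable controls can be sketched with a hypothetical helper; the cluster-size threshold and the function `keep_cluster` are illustrative only, as the real dropping criteria live inside the bridge:

```python
import os

# Hypothetical sketch of the deassign behaviour described above; the real
# criteria (cluster size, device support) are internal to the bridge.
def keep_cluster(num_ops, min_ops=2):
    if os.environ.get("OPENVINO_TF_DISABLE_DEASSIGN_CLUSTERS"):
        return True  # flag set: clusters are never dropped
    return num_ops >= min_ops  # otherwise, too-small clusters fall back to TensorFlow

os.environ.pop("OPENVINO_TF_DISABLE_DEASSIGN_CLUSTERS", None)
print(keep_cluster(1))  # False: a one-op cluster falls back to native TensorFlow
os.environ["OPENVINO_TF_DISABLE_DEASSIGN_CLUSTERS"] = "1"
print(keep_cluster(1))  # True: the flag forces it onto the OpenVINO backend
```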
```diff
@@ -78,7 +78,7 @@ Example:
 OPENVINO_TF_LOG_PLACEMENT="1"
 
 **OPENVINO_TF_BACKEND:**
-Backend device name can be set using this variable. It should be set to CPU, GPU, MYRIAD, or VAD-M.
+Backend device name can be set using this variable. It should be set to "CPU", "GPU", "MYRIAD", or "VAD-M".
 
 Example:
```
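How such a variable might be consumed can be sketched as follows; `backend_from_env` is a hypothetical illustration, not code from the bridge:

```python
import os

# Hypothetical reader for the OPENVINO_TF_BACKEND variable described above.
ALLOWED = {"CPU", "GPU", "MYRIAD", "VAD-M"}

def backend_from_env(default="CPU"):
    value = os.environ.get("OPENVINO_TF_BACKEND", default)
    return value if value in ALLOWED else default  # fall back on unknown names

os.environ["OPENVINO_TF_BACKEND"] = "MYRIAD"
print(backend_from_env())  # MYRIAD
```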
```diff
@@ -127,7 +127,7 @@ Example:
 OPENVINO_TF_DUMP_CLUSTERS=1
 
 **OPENVINO_TF_DISABLE:**
-Disables OpenVINO Integration if set to 1.
+Disables **OpenVINO™ integration with TensorFlow** if set to 1.
```