This repository was archived by the owner on Jul 1, 2024. It is now read-only.

Commit ddaf932

adamczap and suryasidd authored

Trademark symbol update (#89)

* Update BUILD.md
* added TM symbol
* Updated documentation

Co-authored-by: suryasidd <[email protected]>

1 parent 1fe88a7 commit ddaf932

File tree

8 files changed: +27, -717 lines changed

README.md

Lines changed: 3 additions & 3 deletions
````diff
@@ -10,7 +10,7 @@ This repository contains the source code of **OpenVINO™ integration with Tenso
 - Intel<sup>®</sup> Movidius™ Vision Processing Units - referred as VPU
 - Intel<sup>®</sup> Vision Accelerator Design with 8 Intel Movidius™ MyriadX VPUs - referred as VAD-M or HDDL

-Note: For maximum performance, efficiency, tooling customization, and hardware control, we recommend going beyond this component to adopt native OpenVINO APIs and its runtime.
+[Note: For maximum performance, efficiency, tooling customization, and hardware control, we recommend going beyond this component to adopt native OpenVINO APIs and its runtime.]

 ## Installation
 ### Prerequisites
@@ -53,10 +53,10 @@ This should produce an output like:
 OpenVINO integration with TensorFlow version: b'0.5.0'
 OpenVINO version used for this build: b'2021.3'
 TensorFlow version used for this build: v2.4.1
-CXX11_ABI flag used for this build: 1
+CXX11_ABI flag used for this build: 0
 OpenVINO integration with TensorFlow built with Grappler: False

-By default, Intel<sup>®</sup> CPU is used to run inference. However, you can change the default option to either Intel<sup>®</sup> integrated GPU or Intel<sup>®</sup> VPU for AI inferencing. Invoke the following function to change the hardware inferencing is done on.
+By default, Intel<sup>®</sup> CPU is used to run inference. However, you can change the default option to either Intel<sup>®</sup> integrated GPU or Intel<sup>®</sup> VPU for AI inferencing. Invoke the following function to change the hardware on which inferencing is done.

 openvino_tensorflow.set_backend('<backend_name>')
````
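The second hunk above changes the CXX11_ABI line in the README's sample output. As a small illustration of that banner's `key: value` shape, here is a hypothetical parser; the `BANNER` string is copied from the diff (post-change), while `parse_banner` is illustrative only and not part of the package:

```python
# Hypothetical parser for the version banner shown in the README diff above.
# The sample text is copied verbatim from the post-change output.
BANNER = """\
OpenVINO integration with TensorFlow version: b'0.5.0'
OpenVINO version used for this build: b'2021.3'
TensorFlow version used for this build: v2.4.1
CXX11_ABI flag used for this build: 0
OpenVINO integration with TensorFlow built with Grappler: False
"""

def parse_banner(text: str) -> dict:
    """Split each 'key: value' line of the banner into a dict entry."""
    info = {}
    for line in text.splitlines():
        key, _, value = line.partition(": ")
        if key and value:
            info[key] = value
    return info

info = parse_banner(BANNER)
print(info["CXX11_ABI flag used for this build"])  # 0
```

Reading the flag this way makes the ABI of an installed build easy to check against the wheel you intend to pair it with.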

docs/BUILD.md

Lines changed: 19 additions & 20 deletions
````diff
@@ -8,7 +8,7 @@

 ## Use Pre-Built Packages

-**OpenVINO™ integration with TensorFlow** has two releases: one build with CXX11_ABI=0 and another built with CXX11_ABI=1.
+**OpenVINO™ integration with TensorFlow** has two releases: one built with CXX11_ABI=0 and another built with CXX11_ABI=1.

 Since TensorFlow packages available in [PyPi](https://pypi.org) are built with CXX11_ABI=0 and OpenVINO™ release packages are built with CXX11_ABI=1, binary releases of these packages **cannot be installed together**. Based on your needs, you can choose one of the two available methods:

@@ -20,7 +20,7 @@ Since TensorFlow packages available in [PyPi](https://pypi.org) are built with C

 ### Install **OpenVINO™ integration with TensorFlow** alongside PyPi TensorFlow

-This **OpenVINO™ integration with TensorFlow** package includes pre-built libraries of OpenVINO™ version 2021.3. The users do not have to install OpenVINO™ separately. This package supports Intel<sup>®</sup> CPUs, Intel<sup>®</sup> integrated GPUs and Intel<sup>®</sup> Movidius™ Vision Processing Units (VPUs).
+This **OpenVINO™ integration with TensorFlow** package includes pre-built libraries of OpenVINO™ version 2021.3. The users do not have to install OpenVINO™ separately. This package supports Intel<sup>®</sup> CPUs, Intel<sup>®</sup> integrated GPUs, and Intel<sup>®</sup> Movidius™ Vision Processing Units (VPUs).


 pip3 install -U pip==21.0.1
@@ -29,7 +29,7 @@ This **OpenVINO™ integration with TensorFlow** package includes pre-built libr

 ### Install **OpenVINO™ integration with TensorFlow** alongside the Intel® Distribution of OpenVINO™ Toolkit

-This **OpenVINO™ integration with TensorFlow** package is currently compatible with OpenVINO™ version 2021.3. This package supports Intel<sup>®</sup> CPUs, Intel<sup>®</sup> integrated GPUs, Intel<sup>®</sup> Movidius™ Vision Processing Units (VPUs) and Intel<sup>®</sup> Vision Accelerator Design with Movidius™ (VAD-M).
+This **OpenVINO™ integration with TensorFlow** package is currently compatible with OpenVINO™ version 2021.3. This package supports Intel<sup>®</sup> CPUs, Intel<sup>®</sup> integrated GPUs, Intel<sup>®</sup> Movidius™ Vision Processing Units (VPUs), and Intel<sup>®</sup> Vision Accelerator Design with Movidius™ (VAD-M).

 You can build TensorFlow from source with -D_GLIBCXX_USE_CXX11_ABI=1 or use the following TensorFlow package:

@@ -49,23 +49,23 @@ You can build TensorFlow from source with -D_GLIBCXX_USE_CXX11_ABI=1 or use the

 pip3.8 install https://github.com/openvinotoolkit/openvino_tensorflow/releases/download/v0.5.0/tensorflow_abi1-2.4.1-cp38-cp38-manylinux2010_x86_64.whl

-3. Download & install Intel® Distribution of OpenVINO™ Toolkit 2021.3 release along with its dependencies from ([https://software.intel.com/en-us/openvino-toolkit](https://software.intel.com/en-us/openvino-toolkit)).
+3. Download & install Intel® Distribution of OpenVINO™ Toolkit 2021.3 release along with its dependencies from ([https://software.intel.com/en-us/openvino-toolkit/download](https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit/download.html)).

 4. Initialize the OpenVINO™ environment by running the `setupvars.sh` located in <code>\<openvino\_install\_directory\>\/bin</code> using the command below:

 source setupvars.sh

 5. Install `openvino-tensorflow`. Based on your Python version, choose the appropriate package below:

-pip3.6 install https://github.com/openvinotoolkit/openvino_tensorflow/releases/download/v0.5.0/openvino_tensorflow_abi1-0.5.0-cp36-cp36m-manylinux2014_x86_64.whl
+pip3.6 install https://github.com/openvinotoolkit/openvino_tensorflow/releases/download/v0.5.0/openvino_tensorflow_abi1-0.5.0-cp36-cp36m-linux_x86_64.whl

 or

-pip3.7 install https://github.com/openvinotoolkit/openvino_tensorflow/releases/download/v0.5.0/openvino_tensorflow_abi1-0.5.0-cp37-cp37m-manylinux2014_x86_64.whl
+pip3.7 install https://github.com/openvinotoolkit/openvino_tensorflow/releases/download/v0.5.0/openvino_tensorflow_abi1-0.5.0-cp37-cp37m-linux_x86_64.whl

 or

-pip3.8 install https://github.com/openvinotoolkit/openvino_tensorflow/releases/download/v0.5.0/openvino_tensorflow_abi1-0.5.0-cp38-cp38-manylinux2014_x86_64.whl
+pip3.8 install https://github.com/openvinotoolkit/openvino_tensorflow/releases/download/v0.5.0/openvino_tensorflow_abi1-0.5.0-cp38-cp38-linux_x86_64.whl


@@ -79,7 +79,7 @@ You can build TensorFlow from source with -D_GLIBCXX_USE_CXX11_ABI=1 or use the

 ## Build From Source
 Clone the `openvino_tensorflow` repository:
-
+
 ```bash
 $ git clone https://github.com/openvinotoolkit/openvino_tensorflow.git
 $ cd openvino_tensorflow
@@ -102,7 +102,7 @@ Use one of the following build options based on the requirements

 2. Pulls compatible prebuilt TF package from PyPi. Uses OpenVINO™ binary.

-python3 build_ovtf.py use_openvino_from_location=/opt/intel/openvino_2021.3.394/ --cxx11_abi_version=1
+python3 build_ovtf.py --use_openvino_from_location=/opt/intel/openvino_2021.3.394/ --cxx11_abi_version=1


 3. Pulls and builds TF and OpenVINO™ from source
@@ -111,15 +111,15 @@ Use one of the following build options based on the requirements

 4. Pulls and builds TF from Source. Uses OpenVINO™ binary.

-python3 build_ovtf.py build_tf_from_source use_openvino_from_location=/opt/intel/openvino_2021.3.394/ --cxx11_abi_version=1
+python3 build_ovtf.py --build_tf_from_source --use_openvino_from_location=/opt/intel/openvino_2021.3.394/ --cxx11_abi_version=1

-5. Uses pre-built TF from the given location ([refer the Tensorflow build instructions](#tensorflow)). Pulls and builds OpenVINO™ from source. Use this if you need to build OpenVINO-TensorFlow frequently without building TF from source everytime.
+5. Uses pre-built TF from the given location ([refer the Tensorflow build instructions](#tensorflow)). Pulls and builds OpenVINO™ from source. Use this if you need to build **OpenVINO™ integration with TensorFlow** frequently without building TF from source everytime.

-python3 build_ovtf.py use_tensorflow_from_location=/path/to/tensorflow/build/
+python3 build_ovtf.py --use_tensorflow_from_location=/path/to/tensorflow/build/

 6. Uses prebuilt TF from the given location ([refer the Tensorflow build instructions](#tensorflow)). Uses OpenVINO™ binary. **This is only compatible with ABI1 built TF**.

-python3 build_ovtf.py use_tensorflow_from_location=/path/to/tensorflow/build/ use_openvino_from_location=/opt/intel/openvino_2021/ --cxx11_abi_version=1
+python3 build_ovtf.py --use_tensorflow_from_location=/path/to/tensorflow/build/ --use_openvino_from_location=/opt/intel/openvino_2021/ --cxx11_abi_version=1

 Select the `help` option of `build_ovtf.py` script to learn more about various build options.

@@ -153,7 +153,8 @@ Test the installation:

 python3 test_ovtf.py

-This command runs all C++ and Python unit tests from the `openvino_tensorflow` source tree. It also runs various TensorFlow Python tests using OpenVINO.
+This command runs all C++ and Python unit tests from the `openvino_tensorflow` source tree. It also runs various TensorFlow Python tests using OpenVINO™.
+
 ## TensorFlow

 TensorFlow can be built from source using `build_tf.py`. The build artifacts can be found under ${PATH_TO_TF_BUILD}/artifacts/
@@ -178,17 +179,15 @@ TensorFlow can be built from source using `build_tf.py`. The build artifacts can

 python3 build_tf.py --output_dir=${PATH_TO_TF_BUILD} --tf_version=v2.4.1

-## OpenVINO
+## OpenVINO

 OpenVINO™ can be built from source independently using `build_ov.py`

-## Docker
+## Build ManyLinux2014 compatible **OpenVINO™ integration with TensorFlow** wheels

-### Build ManyLinux2014 compatible **OpenVINO integration with TensorFlow** whls
-
-To build whl files compatible with manylinux2014, use the following commands. The build artifacts will be available in your container's /whl/ folder.
+To build wheel files compatible with manylinux2014, use the following commands. The build artifacts will be available in your container's /whl/ folder.

 ```bash
 cd tools/builds/
 docker build --no-cache -t openvino_tensorflow/pip --build-arg OVTF_BRANCH=releases/v0.5.0 . -f Dockerfile.manylinux2014
-```
+```
````
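The BUILD.md diff above hinges on one constraint: a CXX11_ABI=0 build pairs with PyPi TensorFlow, while a CXX11_ABI=1 build pairs with OpenVINO™ binary releases, and the two ABIs cannot be mixed. As a sketch only, the hypothetical `compatible` helper below encodes that rule; it is not part of the build scripts:

```python
# Hypothetical checker for the CXX11_ABI compatibility rule stated in BUILD.md:
# TensorFlow and OpenVINO(TM) components must be built with the same ABI value.
def compatible(tensorflow_abi: int, openvino_abi: int) -> bool:
    """Return True when both components were built with the same CXX11_ABI."""
    if tensorflow_abi not in (0, 1) or openvino_abi not in (0, 1):
        raise ValueError("CXX11_ABI must be 0 or 1")
    return tensorflow_abi == openvino_abi

# PyPi TensorFlow (ABI0) with the ABI0 release wheel: installable together.
print(compatible(0, 0))  # True
# PyPi TensorFlow (ABI0) with an ABI1 OpenVINO(TM) binary: not installable together.
print(compatible(0, 1))  # False
```

This is why the instructions offer either the all-ABI0 PyPi path or the all-ABI1 path built against the OpenVINO™ Toolkit.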

docs/USAGE.md

Lines changed: 5 additions & 5 deletions
````diff
@@ -32,7 +32,7 @@ To enable verbose logs of the execution of the full TensorFlow pipeline and plac

 openvino_tensorflow.start_logging_placement()

-To disbale verbose logs of the execution of the full TensorFlow pipeline and placement stages along with the **OpenVINO™ integration with TensorFlow**, use the following API:
+To disable verbose logs of the execution of the full TensorFlow pipeline and placement stages along with the **OpenVINO™ integration with TensorFlow**, use the following API:

 openvino_tensorflow.stop_logging_placement()

@@ -44,7 +44,7 @@ To check the CXX11_ABI used to compile **OpenVINO™ integration with TensorFlow

 openvino_tensorflow.cxx11_abi_flag()

-To disable execution of certain operators on the OpenVINO backend, use the following API to run them on native TensorFlow runtime:
+To disable execution of certain operators on the OpenVINO backend, use the following API to run them on native TensorFlow runtime:

 openvino_tensorflow.set_disabled_ops(<string_of_operators_separated_by_commas>)

@@ -57,7 +57,7 @@ To disable execution of certain operators on the OpenVINO backend, use the follo
 ## Environment Variables

 **OPENVINO_TF_DISABLE_DEASSIGN_CLUSTERS:**
-After clusters are formed, some of the clusters may still fall back to native Tensorflow due to some reasons (e.g a cluster is too small, some conditions are not supported by the target device). If this variable is set, clusters will not be dropped and forced to run on OpenVINO backend. This may reduce the performance gain or may lead the execution to crash in some cases.
+After clusters are formed, some of the clusters may still fall back to native Tensorflow due to some reasons (e.g a cluster is too small, some conditions are not supported by the target device). If this variable is set, clusters will not be dropped and forced to run on OpenVINO backend. This may reduce the performance gain or may lead the execution to crash in some cases.

 Example:

@@ -78,7 +78,7 @@ Example:
 OPENVINO_TF_LOG_PLACEMENT="1"

 **OPENVINO_TF_BACKEND:**
-Backend device name can be set using this variable. It should be set to CPU, GPU, MYRIAD, or VAD-M.
+Backend device name can be set using this variable. It should be set to "CPU", "GPU", "MYRIAD", or "VAD-M".

 Example:

@@ -127,7 +127,7 @@ Example:
 OPENVINO_TF_DUMP_CLUSTERS=1

 **OPENVINO_TF_DISABLE:**
-Disables OpenVINO Integration if set to 1.
+Disables **OpenVINO™ integration with TensorFlow** if set to 1.

 Example:
````
docs/cloud_instructions/Azure_instructions.md

Lines changed: 0 additions & 104 deletions
This file was deleted.
