This repository was archived by the owner on Jul 1, 2024. It is now read-only.
README.md (7 additions, 10 deletions)
```diff
@@ -14,9 +14,9 @@
 This product delivers [OpenVINO™](https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit.html) inline optimizations which enhance inferencing performance with minimal code modifications. **OpenVINO™ integration with TensorFlow accelerates** inference across many AI models on a variety of Intel<sup>®</sup> silicon such as:

 - Intel<sup>®</sup> CPUs
-- Intel<sup>®</sup> integrated GPUs
-- Intel<sup>®</sup> Movidius™ Vision Processing Units - referred to as VPU
-- Intel<sup>®</sup> Vision Accelerator Design with 8 Intel Movidius™ MyriadX VPUs - referred to as VAD-M or HDDL
+- Intel<sup>®</sup> integrated and discrete GPUs
+
+Note: Support for Intel Movidius™ MyriadX VPUs is no longer maintained. Consider previous releases for running on Myriad VPUs.

 [Note: For maximum performance, efficiency, tooling customization, and hardware control, we recommend the developers to adopt native OpenVINO™ APIs and its runtime.]
 The **OpenVINO™ integration with TensorFlow** package comes with pre-built libraries of OpenVINO™ version 2022.3.0. The users do not have to install OpenVINO™ separately. This package supports:

 - Intel<sup>®</sup> CPUs
-- Intel<sup>®</sup> integrated GPUs
-- Intel<sup>®</sup> Movidius™ Vision Processing Units (VPUs)
+- Intel<sup>®</sup> integrated and discrete GPUs

 pip3 install -U pip
```
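The PyPI install path the README describes can be sketched end to end as below. The `openvino-tensorflow==2.3.0` pin is an assumption on our part (the 2.3.x wheels are the ones that bundle OpenVINO™ 2022.3.0); check PyPI for the exact version before relying on it.

```shell
# Sketch of the PyPI install path on Linux (Python 3).
# The openvino-tensorflow version pin is an assumption; verify on PyPI
# which wheel bundles OpenVINO 2022.3.0.
pip3 install -U pip
pip3 install tensorflow==2.9.3
pip3 install openvino-tensorflow==2.3.0
```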
```diff
@@ -46,8 +45,6 @@
 To use Intel<sup>®</sup> integrated GPUs for inference, make sure to install the [Intel® Graphics Compute Runtime for OpenCL™ drivers](https://docs.openvino.ai/latest/openvino_docs_install_guides_installing_openvino_linux.html#install-gpu)

-To leverage Intel® Vision Accelerator Design with Movidius™ (VAD-M) for inference, install [**OpenVINO™ integration with TensorFlow** alongside the Intel® Distribution of OpenVINO™ Toolkit](./docs/INSTALL.md#install-openvino-integration-with-tensorflow-pypi-release-alongside-the-intel-distribution-of-openvino-toolkit-for-vad-m-support).
-
 For more details on installation please refer to [INSTALL.md](docs/INSTALL.md), and for build from source options please refer to [BUILD.md](docs/BUILD.md)

 ## Configuration

@@ -68,11 +65,11 @@
 CXX11_ABI flag used for this build: 1

-By default, Intel<sup>®</sup> CPU is used to run inference. However, you can change the default option to either Intel<sup>®</sup> integrated GPU or Intel<sup>®</sup> VPU for AI inferencing. Invoke the following function to change the hardware on which inferencing is done.
+By default, Intel<sup>®</sup> CPU is used to run inference. However, you can change the default option to Intel<sup>®</sup> integrated or discrete GPUs (GPU, GPU.0, GPU.1, etc.). Invoke the following function to change the hardware on which inferencing is done.

 openvino_tensorflow.set_backend('<backend_name>')

-Supported backends include 'CPU', 'GPU', 'GPU_FP16', 'MYRIAD', and 'VAD-M'.
+Supported backends include 'CPU', 'GPU', and 'GPU_FP16'.

 To determine what processing units are available on your system for inference, use the following function:
```
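The configuration flow above can be sketched as follows. `openvino_tensorflow.list_backends()` and `openvino_tensorflow.set_backend()` are the calls the README documents; the `choose_backend` helper is a hypothetical convenience added here, and the snippet assumes the `openvino-tensorflow` package is installed (it degrades to plain TensorFlow if not).

```python
# Sketch: pick the best available inference backend.
# choose_backend() is a hypothetical helper, not part of the package API.

def choose_backend(available, preferred=("GPU", "CPU")):
    """Return the first preferred backend that the system reports."""
    for name in preferred:
        if name in available:
            return name
    raise RuntimeError("no supported backend available")

try:
    import openvino_tensorflow as ovtf
    # list_backends() reports e.g. ['CPU', 'GPU'] depending on hardware.
    ovtf.set_backend(choose_backend(ovtf.list_backends()))
except ImportError:
    pass  # openvino-tensorflow not installed; TensorFlow runs unmodified
```

With a GPU present this selects 'GPU'; otherwise it falls back to 'CPU'.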
```diff
@@ -85,7 +82,7 @@
 To see what you can do with **OpenVINO™ integration with TensorFlow**, explore the demos located in the [examples](./examples) directory.

 ## Docker Support
-Dockerfiles for Ubuntu* 18.04, Ubuntu* 20.04, and TensorFlow* Serving are provided which can be used to build runtime Docker* images for **OpenVINO™ integration with TensorFlow** on CPU, GPU, VPU, and VAD-M.
+Dockerfiles for Ubuntu* 18.04, Ubuntu* 20.04, and TensorFlow* Serving are provided which can be used to build runtime Docker* images for **OpenVINO™ integration with TensorFlow** on CPU and GPU.
 For more details see [docker readme](docker/README.md).
```
docs/INSTALL.md (4 additions, 4 deletions)
```diff
@@ -9,7 +9,7 @@
 ### Install **OpenVINO™ integration with TensorFlow** PyPi release
 * Includes pre-built libraries of OpenVINO™ version 2022.3.0. The users do not have to install OpenVINO™ separately
-* Supports Intel<sup>®</sup> CPUs, Intel<sup>®</sup> integrated GPUs, and Intel<sup>®</sup> Movidius™ Vision Processing Units (VPUs). No VAD-M support
+* Supports Intel<sup>®</sup> CPUs, Intel<sup>®</sup> integrated and discrete GPUs

 pip3 install -U pip
 pip3 install tensorflow==2.9.3

@@ -19,7 +19,7 @@
 ### Install **OpenVINO™ integration with TensorFlow** PyPi release alongside the Intel® Distribution of OpenVINO™ Toolkit for VAD-M Support
 * Compatible with OpenVINO™ version 2022.3.0
-* Supports Intel<sup>®</sup> Vision Accelerator Design with Movidius™ (VAD-M), it also supports Intel<sup>®</sup> CPUs, Intel<sup>®</sup> integrated GPUs and Intel<sup>®</sup> Movidius™ Vision Processing Units (VPUs)
+* Supports Intel<sup>®</sup> CPUs, Intel<sup>®</sup> integrated and discrete GPUs
 * To use it:
 1. Install tensorflow and openvino-tensorflow packages from PyPi as explained in the section above
 2. Download & install Intel® Distribution of OpenVINO™ Toolkit 2022.3.0 release along with its dependencies from ([https://software.intel.com/en-us/openvino-toolkit/download](https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit/download.html)).

@@ -32,7 +32,7 @@
 Install **OpenVINO™ integration with TensorFlow** PyPi release
 * Includes pre-built libraries of OpenVINO™ version 2022.3.0. The users do not have to install OpenVINO™ separately
-* Supports Intel<sup>®</sup> CPUs, Intel<sup>®</sup>, and Intel<sup>®</sup> Movidius™ Vision Processing Units (VPUs). No VAD-M support
+* Supports Intel<sup>®</sup> CPUs, Intel<sup>®</sup> integrated and discrete GPUs

 pip3 install -U pip
 pip3 install tensorflow==2.9.3

@@ -44,7 +44,7 @@
 Install **OpenVINO™ integration with TensorFlow** PyPi release alongside TensorFlow released in Github
 * TensorFlow wheel for Windows from PyPi doesn't have all the API symbols enabled which are required for **OpenVINO™ integration with TensorFlow**. User needs to install the TensorFlow wheel from the assets of the Github release page
 * Includes pre-built libraries of OpenVINO™ version 2022.3.0. The users do not have to install OpenVINO™ separately
-* Supports Intel<sup>®</sup> CPUs, Intel<sup>®</sup> integrated GPUs, and Intel<sup>®</sup> Movidius™ Vision Processing Units (VPUs). No VAD-M support
+* Supports Intel<sup>®</sup> CPUs, Intel<sup>®</sup> integrated and discrete GPUs
```
examples/notebooks/OpenVINO_TensorFlow_classification_example.ipynb (2 additions, 2 deletions)
```diff
@@ -10,6 +10,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {
 "id": "1s7OK7vW3put"

@@ -19,8 +20,7 @@
 "\n",
 "OpenVINO™ integration with TensorFlow is designed for TensorFlow developers who want to get started with OpenVINO™ in their inferencing applications. This product effectively delivers OpenVINO™ inline optimizations which enhance inferencing performance with minimal code modifications. OpenVINO™ integration with TensorFlow accelerates inference across many AI models on a variety of Intel® silicon such as: \n",
 "* Intel® CPUs\n",
-"* Intel® integrated GPUs\n",
-"* Intel® Movidius™ Vision Processing Units - referred to as VPU\n",
```
examples/notebooks/OpenVINO_TensorFlow_object_detection_example.ipynb (2 additions, 3 deletions)
```diff
@@ -10,6 +10,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {
 "id": "atwwZdgc3d3_"

@@ -21,9 +22,7 @@
 "\n",
 "OpenVINO™ integration with TensorFlow is designed for TensorFlow developers who want to get started with OpenVINO™ in their inferencing applications. This product effectively delivers OpenVINO™ inline optimizations which enhance inferencing performance with minimal code modifications. OpenVINO™ integration with TensorFlow accelerates inference across many AI models on a variety of Intel® silicon such as: \n",
 "* Intel® CPUs\n",
-"* Intel® integrated GPUs\n",
-"* Intel® Movidius™ Vision Processing Units - referred to as VPU\n",
-"* Intel® Vision Accelerator Design with 8 Intel Movidius™ MyriadX VPUs - referred to as VAD-M or HDDL\n",
```
examples/notebooks/OpenVINO_TensorFlow_tfhub_object_detection_example.ipynb (2 additions, 2 deletions)
```diff
@@ -17,15 +17,15 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "id": "898d9206",
 "metadata": {},
 "source": [
 "[OpenVINO™ integration with TensorFlow](https://github.com/openvinotoolkit/openvino_tensorflow) is designed for TensorFlow developers who want to get started with [OpenVINO™](https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/overview.html) in their inferencing applications. This product delivers OpenVINO™ inline optimizations, which enhance inferencing performance of popular deep learning models with minimal code changes and without any accuracy drop. OpenVINO™ integration with TensorFlow accelerates inference across many AI models on a variety of Intel® silicon such as:\n",
 "\n",
 " - Intel® CPUs\n",
-" - Intel® integrated GPUs\n",
-" - Intel® Movidius™ Vision Processing Units - referred to as VPU"
```