This repository was archived by the owner on Jul 1, 2024. It is now read-only.

Commit bdb9afa

Disable Myriad Plugin (#415)
* Disable Myriad Plugin
* Changes to MYD documentation
* Disable Myriad plugin for OpenVINO source builds
* Add back required file plugins.xml
1 parent 8c7c864 commit bdb9afa

11 files changed (+30 / -57 lines)

README.md

Lines changed: 7 additions & 10 deletions
@@ -14,9 +14,9 @@ This repository contains the source code of **OpenVINO™ integration with Tenso
 This product delivers [OpenVINO™](https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit.html) inline optimizations which enhance inferencing performance with minimal code modifications. **OpenVINO™ integration with TensorFlow accelerates** inference across many AI models on a variety of Intel<sup>®</sup> silicon such as:

 - Intel<sup>®</sup> CPUs
-- Intel<sup>®</sup> integrated GPUs
-- Intel<sup>®</sup> Movidius™ Vision Processing Units - referred to as VPU
-- Intel<sup>®</sup> Vision Accelerator Design with 8 Intel Movidius™ MyriadX VPUs - referred to as VAD-M or HDDL
+- Intel<sup>®</sup> integrated and discrete GPUs
+
+Note: Support for Intel Movidius™ MyriadX VPUs is no longer maintained. Consider previous releases for running on Myriad VPUs.

 [Note: For maximum performance, efficiency, tooling customization, and hardware control, we recommend the developers to adopt native OpenVINO™ APIs and its runtime.]

@@ -33,8 +33,7 @@ Check our [Interactive Installation Table](https://openvinotoolkit.github.io/ope

 The **OpenVINO™ integration with TensorFlow** package comes with pre-built libraries of OpenVINO™ version 2022.3.0. The users do not have to install OpenVINO™ separately. This package supports:
 - Intel<sup>®</sup> CPUs
-- Intel<sup>®</sup> integrated GPUs
-- Intel<sup>®</sup> Movidius™ Vision Processing Units (VPUs)
+- Intel<sup>®</sup> integrated and discrete GPUs


 pip3 install -U pip
@@ -46,8 +45,6 @@ For installation instructions on Windows please refer to [**OpenVINO™ integrat

 To use Intel<sup>®</sup> integrated GPUs for inference, make sure to install the [Intel® Graphics Compute Runtime for OpenCL™ drivers](https://docs.openvino.ai/latest/openvino_docs_install_guides_installing_openvino_linux.html#install-gpu)

-To leverage Intel® Vision Accelerator Design with Movidius™ (VAD-M) for inference, install [**OpenVINO™ integration with TensorFlow** alongside the Intel® Distribution of OpenVINO™ Toolkit](./docs/INSTALL.md#install-openvino-integration-with-tensorflow-pypi-release-alongside-the-intel-distribution-of-openvino-toolkit-for-vad-m-support).
-
 For more details on installation please refer to [INSTALL.md](docs/INSTALL.md), and for build from source options please refer to [BUILD.md](docs/BUILD.md)

 ## Configuration
@@ -68,11 +65,11 @@ This should produce an output like:

 CXX11_ABI flag used for this build: 1

-By default, Intel<sup>®</sup> CPU is used to run inference. However, you can change the default option to either Intel<sup>®</sup> integrated GPU or Intel<sup>®</sup> VPU for AI inferencing. Invoke the following function to change the hardware on which inferencing is done.
+By default, Intel<sup>®</sup> CPU is used to run inference. However, you can change the default option to Intel<sup>®</sup> integrated or discrete GPUs (GPU, GPU.0, GPU.1, etc.). Invoke the following function to change the hardware on which inferencing is done.

 openvino_tensorflow.set_backend('<backend_name>')

-Supported backends include 'CPU', 'GPU', 'GPU_FP16', 'MYRIAD', and 'VAD-M'.
+Supported backends include 'CPU', 'GPU', and 'GPU_FP16'.

 To determine what processing units are available on your system for inference, use the following function:

@@ -85,7 +82,7 @@ For further performance improvements, it is advised to set the environment varia
 To see what you can do with **OpenVINO™ integration with TensorFlow**, explore the demos located in the [examples](./examples) directory.

 ## Docker Support
-Dockerfiles for Ubuntu* 18.04, Ubuntu* 20.04, and TensorFlow* Serving are provided which can be used to build runtime Docker* images for **OpenVINO™ integration with TensorFlow** on CPU, GPU, VPU, and VAD-M.
+Dockerfiles for Ubuntu* 18.04, Ubuntu* 20.04, and TensorFlow* Serving are provided which can be used to build runtime Docker* images for **OpenVINO™ integration with TensorFlow** on CPU and GPU.
 For more details see [docker readme](docker/README.md).

 ### Prebuilt Images
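For reference, a minimal sketch of the backend selection described in the README hunk above, assuming the `openvino_tensorflow` package built from this repository; only `set_backend` appears in the diff, so `list_backends()` and the Keras model used here are illustrative assumptions, not part of this commit:

```python
# Minimal sketch (illustrative, not part of this commit): choosing an inference
# backend with openvino_tensorflow after the MYRIAD/VAD-M backends were removed.
import tensorflow as tf
import openvino_tensorflow as ovtf

# Assumed helper, as referenced in the README text: list available backends.
print("Available backends:", ovtf.list_backends())

# 'MYRIAD' and 'VAD-M' are no longer valid; only CPU/GPU variants remain.
ovtf.set_backend('GPU')   # or 'CPU', 'GPU_FP16', 'GPU.0', 'GPU.1', ...

model = tf.keras.applications.MobileNetV2(weights=None)  # any TF model works
dummy = tf.random.uniform([1, 224, 224, 3])
print(model(dummy).shape)  # inference now runs on the selected backend
```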

README_cn.md

Lines changed: 0 additions & 4 deletions
@@ -15,8 +15,6 @@

 - Intel<sup>®</sup> CPUs
 - Intel<sup>®</sup> integrated GPUs
-- Intel<sup>®</sup> Movidius™ Vision Processing Units (VPUs)
-- Intel<sup>®</sup> Vision Accelerator Design with 8 Intel Movidius™ MyriadX VPUs (referred to as VAD-M or HDDL)

 [Note: For maximum performance, efficiency, tooling customization, and hardware control, we recommend that developers adopt the native OpenVINO™ APIs and runtime.]

@@ -34,7 +32,6 @@
 The **OpenVINO™ integration with TensorFlow** package ships with pre-built libraries of OpenVINO™ 2022.3.0, so users do not need to install OpenVINO™ separately. The package supports:
 - Intel<sup>®</sup> CPUs
 - Intel<sup>®</sup> integrated GPUs
-- Intel<sup>®</sup> Movidius™ Vision Processing Units (VPUs)


 pip3 install -U pip
@@ -45,7 +42,6 @@

 To use Intel<sup>®</sup> integrated GPUs for inference, make sure to install the [Intel® Graphics Compute Runtime for OpenCL™ drivers](https://docs.openvino.ai/latest/openvino_docs_install_guides_installing_openvino_linux.html#install-gpu)

-To use Intel® Vision Accelerator Design with Movidius™ (VAD-M) for inference, install [**OpenVINO™ integration with TensorFlow** alongside the Intel® Distribution of OpenVINO™ Toolkit](docs/INSTALL_cn.md#安装-openvino-integration-with-tensorflow-pypi-发布版与独立安装intel-openvino-发布版以支持vad-m)

 For more details on installation, please refer to [INSTALL.md](docs/INSTALL_cn.md), and for build-from-source options please refer to [BUILD.md](docs/BUILD_cn.md)

docs/INSTALL.md

Lines changed: 4 additions & 4 deletions
@@ -9,7 +9,7 @@

 ### Install **OpenVINO™ integration with TensorFlow** PyPi release
 * Includes pre-built libraries of OpenVINO™ version 2022.3.0. The users do not have to install OpenVINO™ separately
-* Supports Intel<sup>®</sup> CPUs, Intel<sup>®</sup> integrated GPUs, and Intel<sup>®</sup> Movidius™ Vision Processing Units (VPUs). No VAD-M support
+* Supports Intel<sup>®</sup> CPUs, Intel<sup>®</sup> integrated and discrete GPUs

 pip3 install -U pip
 pip3 install tensorflow==2.9.3
@@ -19,7 +19,7 @@

 ### Install **OpenVINO™ integration with TensorFlow** PyPi release alongside the Intel® Distribution of OpenVINO™ Toolkit for VAD-M Support
 * Compatible with OpenVINO™ version 2022.3.0
-* Supports Intel<sup>®</sup> Vision Accelerator Design with Movidius™ (VAD-M); it also supports Intel<sup>®</sup> CPUs, Intel<sup>®</sup> integrated GPUs and Intel<sup>®</sup> Movidius™ Vision Processing Units (VPUs)
+* Supports Intel<sup>®</sup> CPUs and Intel<sup>®</sup> integrated and discrete GPUs
 * To use it:
 1. Install tensorflow and openvino-tensorflow packages from PyPi as explained in the section above
 2. Download & install Intel® Distribution of OpenVINO™ Toolkit 2022.3.0 release along with its dependencies from ([https://software.intel.com/en-us/openvino-toolkit/download](https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit/download.html)).
@@ -32,7 +32,7 @@

 Install **OpenVINO™ integration with TensorFlow** PyPi release
 * Includes pre-built libraries of OpenVINO™ version 2022.3.0. The users do not have to install OpenVINO™ separately
-* Supports Intel<sup>®</sup> CPUs, Intel<sup>®</sup>, and Intel<sup>®</sup> Movidius™ Vision Processing Units (VPUs). No VAD-M support
+* Supports Intel<sup>®</sup> CPUs, Intel<sup>®</sup> integrated and discrete GPUs

 pip3 install -U pip
 pip3 install tensorflow==2.9.3
@@ -44,7 +44,7 @@
 Install **OpenVINO™ integration with TensorFlow** PyPi release alongside TensorFlow released in Github
 * TensorFlow wheel for Windows from PyPi doesn't have all the API symbols enabled which are required for **OpenVINO™ integration with TensorFlow**. Users need to install the TensorFlow wheel from the assets of the Github release page
 * Includes pre-built libraries of OpenVINO™ version 2022.3.0. The users do not have to install OpenVINO™ separately
-* Supports Intel<sup>®</sup> CPUs, Intel<sup>®</sup> integrated GPUs, and Intel<sup>®</sup> Movidius™ Vision Processing Units (VPUs). No VAD-M support
+* Supports Intel<sup>®</sup> CPUs, Intel<sup>®</sup> integrated and discrete GPUs

 pip3.9 install -U pip
 pip3.9 install https://github.com/openvinotoolkit/openvino_tensorflow/releases/download/v2.2.0/tensorflow-2.9.2-cp39-cp39-win_amd64.whl
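A minimal post-install sanity check is sketched below, assuming the `tensorflow` and `openvino-tensorflow` wheels described in the hunks above are installed; the `enable()`/`disable()` toggles are assumed helpers and are not part of this diff:

```python
# Illustrative sanity check (not part of this commit): confirm the packages
# import and that a trivial graph runs after installation.
import tensorflow as tf
import openvino_tensorflow as ovtf

print("TensorFlow:", tf.__version__)

# The integration is typically active on import; operators that OpenVINO
# cannot handle fall back to stock TensorFlow.
x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
print(tf.matmul(x, x))

# Assumed API: turn the integration off for a comparison run, then back on.
ovtf.disable()
print(tf.matmul(x, x))
ovtf.enable()
```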

docs/INSTALL_cn.md

Lines changed: 4 additions & 4 deletions
@@ -8,7 +8,7 @@

 ### Install the **OpenVINO™ integration with TensorFlow** PyPi release
 * Includes pre-built libraries of Intel<sup>®</sup> OpenVINO™ 2022.3.0; users do not need to install OpenVINO™ separately.
-* Supports Intel<sup>®</sup> CPUs, Intel<sup>®</sup> integrated GPUs, and Intel<sup>®</sup> Movidius™ Vision Processing Units (VPUs); VAD-M is not supported.
+* Supports Intel<sup>®</sup> CPUs and Intel<sup>®</sup> integrated GPUs.

 pip3 install -U pip
 pip3 install tensorflow==2.9.3
@@ -18,7 +18,7 @@

 ### Install the **OpenVINO™ integration with TensorFlow** PyPi release alongside the Intel® Distribution of OpenVINO™ Toolkit for VAD-M support
 * Compatible with Intel<sup>®</sup> OpenVINO™ version 2022.3.0
-* Supports Intel<sup>®</sup> Vision Accelerator Design with Movidius™ (VAD-M); also supports Intel<sup>®</sup> CPUs, Intel<sup>®</sup> integrated GPUs, and Intel<sup>®</sup> Movidius™ Vision Processing Units (VPUs).
+* Supports Intel<sup>®</sup> CPUs and Intel<sup>®</sup> integrated GPUs.
 * To use it:
 1. Install tensorflow and openvino-tensorflow from PyPi as described in the section above.
 2. Download and install the Intel<sup>®</sup> OpenVINO™ 2022.3.0 release along with its dependencies ([https://software.intel.com/en-us/openvino-toolkit/download](https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit/download.html)).
@@ -31,7 +31,7 @@

 Install the **OpenVINO™ integration with TensorFlow** PyPi release
 * Includes pre-built libraries of Intel<sup>®</sup> OpenVINO™ 2022.3.0; users do not need to install Intel<sup>®</sup> OpenVINO™ separately
-* Supports Intel<sup>®</sup> CPUs, Intel<sup>®</sup> integrated GPUs, and Intel<sup>®</sup> Movidius™ Vision Processing Units (VPUs); VAD-M is not supported.
+* Supports Intel<sup>®</sup> CPUs and Intel<sup>®</sup> integrated GPUs.

 pip3 install -U pip
 pip3 install tensorflow==2.9.3
@@ -43,7 +43,7 @@
 Install the **OpenVINO™ integration with TensorFlow** PyPi release alongside the TensorFlow release from Github
 * The TensorFlow wheel for Windows from PyPi does not enable all the API symbols required by **OpenVINO™ integration with TensorFlow**. Users need to install the TensorFlow wheel from the assets of the Github release page.
 * Includes pre-built libraries of OpenVINO™ 2022.3.0; users do not need to install Intel<sup>®</sup> OpenVINO™ separately.
-* Supports Intel<sup>®</sup> CPUs, Intel<sup>®</sup> integrated GPUs, and Intel<sup>®</sup> Movidius™ Vision Processing Units (VPUs); VAD-M is not supported.
+* Supports Intel<sup>®</sup> CPUs and Intel<sup>®</sup> integrated GPUs.

 pip3.9 install -U pip
 pip3.9 install https://github.com/openvinotoolkit/openvino_tensorflow/releases/download/v2.2.0/tensorflow-2.9.2-cp39-cp39-win_amd64.whl

examples/notebooks/OpenVINO_TensorFlow_classification_example.ipynb

Lines changed: 2 additions & 2 deletions
@@ -10,6 +10,7 @@
 ]
 },
 {
+ "attachments": {},
 "cell_type": "markdown",
 "metadata": {
 "id": "1s7OK7vW3put"
@@ -19,8 +20,7 @@
 "\n",
 "OpenVINO™ integration with TensorFlow is designed for TensorFlow developers who want to get started with OpenVINO™ in their inferencing applications. This product effectively delivers OpenVINO™ inline optimizations which enhance inferencing performance with minimal code modifications. OpenVINO™ integration with TensorFlow accelerates inference across many AI models on a variety of Intel® silicon such as: \n",
 "* Intel® CPUs\n",
-"* Intel® integrated GPUs\n",
-"* Intel® Movidius™ Vision Processing Units - referred to as VPU\n",
+"* Intel® integrated and discrete GPUs\n",
 "\n",
 "**Overview**\n",
 "\n",

examples/notebooks/OpenVINO_TensorFlow_object_detection_example.ipynb

Lines changed: 2 additions & 3 deletions
@@ -10,6 +10,7 @@
 ]
 },
 {
+ "attachments": {},
 "cell_type": "markdown",
 "metadata": {
 "id": "atwwZdgc3d3_"
@@ -21,9 +22,7 @@
 "\n",
 "OpenVINO™ integration with TensorFlow is designed for TensorFlow developers who want to get started with OpenVINO™ in their inferencing applications. This product effectively delivers OpenVINO™ inline optimizations which enhance inferencing performance with minimal code modifications. OpenVINO™ integration with TensorFlow accelerates inference across many AI models on a variety of Intel® silicon such as: \n",
 "* Intel® CPUs\n",
-"* Intel® integrated GPUs\n",
-"* Intel® Movidius™ Vision Processing Units - referred to as VPU\n",
-"* Intel® Vision Accelerator Design with 8 Intel Movidius™ MyriadX VPUs - referred to as VAD-M or HDDL\n",
+"* Intel® integrated and discrete GPUs\n",
 "\n",
 "**Overview**\n",
 "\n",

examples/notebooks/OpenVINO_TensorFlow_tfhub_object_detection_example.ipynb

Lines changed: 2 additions & 2 deletions
@@ -17,15 +17,15 @@
 ]
 },
 {
+ "attachments": {},
 "cell_type": "markdown",
 "id": "898d9206",
 "metadata": {},
 "source": [
 "[OpenVINO™ integration with TensorFlow](https://github.com/openvinotoolkit/openvino_tensorflow) is designed for TensorFlow developers who want to get started with [OpenVINO™](https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/overview.html) in their inferencing applications. This product delivers OpenVINO™ inline optimizations, which enhance inferencing performance of popular deep learning models with minimal code changes and without any accuracy drop. OpenVINO™ integration with TensorFlow accelerates inference across many AI models on a variety of Intel® silicon such as:\n",
 "\n",
 " - Intel® CPUs\n",
-" - Intel® integrated GPUs\n",
-" - Intel® Movidius™ Vision Processing Units - referred to as VPU"
+" - Intel® integrated and discrete GPUs"
 ]
 },
 {

openvino_tensorflow/CMakeLists.txt

Lines changed: 1 addition & 8 deletions
@@ -145,9 +145,6 @@ if (APPLE)
 endif()

 set(IE_LIBS_PATH ${OPENVINO_ARTIFACTS_DIR}/runtime/lib/intel64/${CMAKE_BUILD_TYPE})
-set(IE_LIBS
-  "${IE_LIBS_PATH}/pcie-ma2x8x.mvcmd"
-)
 set(TBB_LIBS ${OPENVINO_ARTIFACTS_DIR}/runtime/3rdparty/tbb/lib/)
 install(FILES ${CMAKE_INSTALL_PREFIX}/../ocm/OCM/${OCM_LIB} DESTINATION "${OVTF_INSTALL_LIB_DIR}")
 install(FILES ${CMAKE_CURRENT_BINARY_DIR}/${TF_CONVERSION_EXTENSIONS_MODULE_NAME}/${TF_CONVERSION_EXTENSIONS_LIB} DESTINATION "${OVTF_INSTALL_LIB_DIR}")
@@ -161,7 +158,6 @@ elseif(WIN32)
 set (IE_LIBS
   "${IE_LIBS_PATH}/${LIB_PREFIX}openvino_intel_gpu_plugin.${PLUGIN_LIB_EXT}"
   "${IE_LIBS_PATH}/cache.json"
-  "${IE_LIBS_PATH}/pcie-ma2x8x.elf"
 )
 set(TBB_LIBS ${OPENVINO_ARTIFACTS_DIR}/runtime/3rdparty/tbb/bin/)
 install(FILES ${CMAKE_INSTALL_PREFIX}/../ocm/OCM/${CMAKE_BUILD_TYPE}/${OCM_LIB} DESTINATION "${OVTF_INSTALL_LIB_DIR}")
@@ -183,7 +179,6 @@ else()
 set (IE_LIBS
   "${IE_LIBS_PATH}/${LIB_PREFIX}openvino_intel_gpu_plugin.${PLUGIN_LIB_EXT}"
   "${IE_LIBS_PATH}/cache.json"
-  "${IE_LIBS_PATH}/pcie-ma2x8x.mvcmd"
 )
 set(TBB_LIBS ${OPENVINO_ARTIFACTS_DIR}/runtime/3rdparty/tbb/lib/)
 install(FILES ${CMAKE_INSTALL_PREFIX}/../ocm/OCM/${OCM_LIB} DESTINATION "${OVTF_INSTALL_LIB_DIR}")
@@ -200,9 +195,7 @@ set (IE_LIBS
   "${IE_LIBS_PATH}/${LIB_PREFIX}openvino_c.${OV_LIB_EXT_DOT}"
   "${IE_LIBS_PATH}/${LIB_PREFIX}openvino_tensorflow_frontend.${OV_LIB_EXT_DOT}"
   "${IE_LIBS_PATH}/${LIB_PREFIX}openvino_intel_cpu_plugin.${PLUGIN_LIB_EXT}"
-  "${IE_LIBS_PATH}/${LIB_PREFIX}openvino_intel_myriad_plugin.${PLUGIN_LIB_EXT}"
-  "${IE_LIBS_PATH}/usb-ma2x8x.mvcmd"
-  "${IE_LIBS_PATH}/plugins.xml"
+  "${IE_LIBS_PATH}/plugins.xml"
 )

 # Install Openvino and TBB libraries
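With the Myriad plugin and its firmware files dropped from `IE_LIBS`, the packaged runtime no longer bundles a MYRIAD device. A small sketch of what selecting the removed backend might look like afterwards; the `list_backends()` call and the exact error behavior are assumptions, not something this diff specifies:

```python
# Illustrative only: the MYRIAD backend should no longer be selectable
# once the plugin is not packaged with openvino_tensorflow.
import openvino_tensorflow as ovtf

print(ovtf.list_backends())        # assumed helper; expect only CPU/GPU entries

try:
    ovtf.set_backend('MYRIAD')     # valid before this commit, removed here
except Exception as err:           # exact exception type is not documented in this diff
    print("MYRIAD backend unavailable:", err)
```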

python/CreatePipWhl.cmake

Lines changed: 0 additions & 10 deletions
@@ -91,10 +91,8 @@ if (PYTHON)
 if (APPLE)
 if(CMAKE_BUILD_TYPE STREQUAL "Debug")
 set(libMKLDNNPluginPath "${CMAKE_CURRENT_BINARY_DIR}/python/openvino_tensorflow/libopenvino_intel_cpu_plugind.so")
-set(libmyriadPluginPath "${CMAKE_CURRENT_BINARY_DIR}/python/openvino_tensorflow/libopenvino_intel_myriad_plugind.so")
 else()
 set(libMKLDNNPluginPath "${CMAKE_CURRENT_BINARY_DIR}/python/openvino_tensorflow/libopenvino_intel_cpu_plugin.so")
-set(libmyriadPluginPath "${CMAKE_CURRENT_BINARY_DIR}/python/openvino_tensorflow/libopenvino_intel_myriad_plugin.so")
 endif()

 # libMKLDNNPluginPath
@@ -111,14 +109,6 @@
 endif()

 # libmyriadPluginPath
-execute_process(COMMAND
-  install_name_tool -add_rpath
-  @loader_path
-  ${libmyriadPluginPath}
-  RESULT_VARIABLE result
-  ERROR_VARIABLE ERR
-  ERROR_STRIP_TRAILING_WHITESPACE
-)
 if(${result})
 message(FATAL_ERROR "Cannot add rpath")
 endif()

0 commit comments
