This repository was archived by the owner on Jul 1, 2024. It is now read-only.
docs/ARCHITECTURE.md
# Architecture of **OpenVINO™ integration with TensorFlow**
This document describes a high-level architecture of **OpenVINO™ integration with TensorFlow**. This capability is registered as a graph optimization pass in TensorFlow and optimizes the execution of supported operator clusters using OpenVINO™ runtime. Unsupported operators fall back on native TensorFlow runtime.
## Architecture Diagram
#### Operator Capability Manager
Operator Capability Manager (OCM) implements several checks on TensorFlow operators to determine if they are supported by OpenVINO™ backends (Intel<sup>®</sup> hardware). The checks include supported operator types, data types, attribute values, input and output nodes, and many more conditions. The checks are implemented based on the results of several thousand operator tests and model tests. OCM is continuously evolving as we add more operator tests and model tests to our testing infrastructure. This is an important module that determines which layers in the model should go to OpenVINO™ backends and which layers should fall back on native TensorFlow runtime. OCM takes the TensorFlow graph as input and returns a list of operators that can be marked for clustering so that they can be run on OpenVINO™ backends.
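As a hedged illustration only (the real implementation is compiled C++ inside the bridge and covers far more conditions), the kind of per-operator check OCM performs can be sketched in Python; the operator/dtype table below is hypothetical:

```python
# Hypothetical sketch of an OCM-style capability check.
# The supported-op table here is invented for illustration.
SUPPORTED_OPS = {
    "Conv2D": {"float32"},
    "Relu": {"float32", "int32"},
    "MatMul": {"float32"},
}

def is_supported(op_type, dtype):
    """Return True if this operator/dtype pair can run on an OpenVINO backend."""
    return dtype in SUPPORTED_OPS.get(op_type, set())

def mark_for_clustering(graph):
    """Return the subset of graph nodes that pass the capability checks."""
    return [node for node in graph if is_supported(node["op"], node["dtype"])]
```

In the real pass, nodes that fail any check are simply left unmarked and execute on native TensorFlow runtime.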
#### Graph Partitioner
Graph partitioner examines the nodes that are marked for clustering by OCM and performs further analysis on them. In this stage, the marked operators are first assigned to clusters. Some clusters are dropped after the analysis. For example, if the cluster size is very small, or if the cluster is not supported by the backend after receiving more context, the cluster is dropped and its operators fall back on native TensorFlow runtime. Each cluster of operators is then encapsulated into a custom operator that is executed in OpenVINO™ runtime.
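The clustering-and-pruning step can be sketched as follows. This is a simplified, hypothetical illustration over a flat list of marked flags; the real partitioner works on the TensorFlow graph structure and applies more drop criteria than cluster size alone:

```python
# Hypothetical sketch of the partitioning step: group marked operators into
# clusters, then drop clusters too small to be worth offloading.
MIN_CLUSTER_SIZE = 2  # illustrative threshold, not the real value

def partition(marked_flags):
    """marked_flags: list of bools, True if the op at that index was marked by OCM.
    Returns clusters as lists of op indices, keeping only viable clusters."""
    clusters, current = [], []
    for i, marked in enumerate(marked_flags):
        if marked:
            current.append(i)
        elif current:
            clusters.append(current)
            current = []
    if current:
        clusters.append(current)
    # Operators in dropped (small) clusters fall back on native TensorFlow.
    return [c for c in clusters if len(c) >= MIN_CLUSTER_SIZE]
```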
#### TensorFlow Importer
TensorFlow importer translates the TensorFlow operators in the clusters to OpenVINO™ nGraph operators with the latest available [operator set](https://docs.OpenVINOtoolkit.org/latest/openvino_docs_ops_opset.html) for the OpenVINO™ toolkit. An [nGraph function](https://docs.openvinotoolkit.org/latest/openvino_docs_nGraph_DG_build_function.html) is built for each cluster. Once created, it is wrapped into an OpenVINO™ CNNNetwork that holds the intermediate representation of the cluster to be executed on the OpenVINO™ backend.
#### Backend Manager
Backend manager creates a backend for the execution of the CNNNetwork.
Basic backend is used for Intel<sup>®</sup> CPUs, Intel<sup>®</sup> integrated GPUs, and Intel<sup>®</sup> Movidius™ Vision Processing Units (VPUs). The backend creates an inference request and runs inference on the given input data.
VAD-M backend is used for Intel<sup>®</sup> Vision Accelerator Design with 8 Intel<sup>®</sup> Movidius™ MyriadX VPUs (referred to as VAD-M or HDDL). We support batched inference execution in the VAD-M backend. When the user provides a batched input, multiple inference requests are created and inference is run in parallel on all the available VPUs in the VAD-M.
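Conceptually, batched execution on VAD-M resembles the following sketch, where the batch is fanned out across parallel inference requests. `NUM_VPUS` and `infer_one` are placeholders for illustration, not the real backend API:

```python
# Hypothetical sketch of batched execution on VAD-M: split the batch and run
# one inference request per sample, in parallel across the available VPUs.
from concurrent.futures import ThreadPoolExecutor

NUM_VPUS = 8  # VAD-M ships with 8 MyriadX VPUs

def infer_one(sample):
    # Placeholder for a single inference request on one VPU.
    return sum(sample)

def infer_batched(batch):
    """Run inference on each sample of the batch in parallel; results keep batch order."""
    with ThreadPoolExecutor(max_workers=NUM_VPUS) as pool:
        return list(pool.map(infer_one, batch))
```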
docs/USAGE.md
# APIs and environment variables for **OpenVINO™ integration with TensorFlow**
This document describes available Python APIs for **OpenVINO™ integration with TensorFlow**. The first section covers the essential APIs and lines of code required to leverage the functionality of **OpenVINO™ integration with TensorFlow** in TensorFlow applications.
## APIs for essential functionality
To add the **OpenVINO™ integration with TensorFlow** package to your TensorFlow Python application, import the package using this line of code:
    import openvino_tensorflow
To determine available backends on your system, use the following API:
    openvino_tensorflow.list_backends()
To check if the **OpenVINO™ integration with TensorFlow** is enabled, use the following API:
    openvino_tensorflow.is_enabled()
To get the assigned backend, use the following API:
    openvino_tensorflow.get_backend()
To enable verbose logs of the execution of the full TensorFlow pipeline and placement stages along with the **OpenVINO™ integration with TensorFlow**, use the following API:
    openvino_tensorflow.start_logging_placement()
To disable verbose logs of the execution of the full TensorFlow pipeline and placement stages along with the **OpenVINO™ integration with TensorFlow**, use the following API:
    openvino_tensorflow.stop_logging_placement()
To check if the placement logs are enabled, use the following API:
    openvino_tensorflow.is_logging_placement()
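Putting the APIs above together, a minimal usage sketch might look like this. The backend names in the comment are illustrative, and the import is guarded so the snippet degrades gracefully when the package is not installed:

```python
# Minimal sketch combining the APIs documented above.
try:
    import openvino_tensorflow as ovtf
    HAVE_OVTF = True
except ImportError:
    HAVE_OVTF = False  # package not installed; the calls below are skipped

if HAVE_OVTF:
    print(ovtf.list_backends())     # available backends, e.g. ['CPU', ...]
    print(ovtf.is_enabled())        # True when the integration is active
    print(ovtf.get_backend())       # currently assigned backend

    ovtf.start_logging_placement()  # turn verbose placement logs on
    # ... run your TensorFlow model here ...
    ovtf.stop_logging_placement()   # turn verbose placement logs off
    print(ovtf.is_logging_placement())
```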
To check the CXX11_ABI used to compile **OpenVINO™ integration with TensorFlow**, use the following API: