This repository was archived by the owner on Jul 1, 2024. It is now read-only.

Commit 1fe88a7

adamczap authored and suryasidd committed

Minor document updates (#88)

* Updated ARCHITECTURE.md
* Updated USAGE.md
* Added ™ to OpenVINO integration with TensorFlow

1 parent a3527b0

File tree

2 files changed: +11 −11 lines


docs/ARCHITECTURE.md

Lines changed: 5 additions & 5 deletions
@@ -1,6 +1,6 @@
# Architecture of **OpenVINO™ integration with TensorFlow**

- This document describes a high-level architecture of **OpenVINO™ integration with TensorFlow**. This capability is registered as a graph optimization pass in TensorFlow and optimizes the execution of supported operator clusters using OpenVINO™ runtime. Unsupported operators fall back to native TensorFlow runtime.
+ This document describes a high-level architecture of **OpenVINO™ integration with TensorFlow**. This capability is registered as a graph optimization pass in TensorFlow and optimizes the execution of supported operator clusters using OpenVINO™ runtime. Unsupported operators fall back on native TensorFlow runtime.

## Architecture Diagram

@@ -16,15 +16,15 @@ In this section, we will describe the functionality of each module and how it tr
#### Operator Capability Manager

- Operator Capability Manager (OCM) implements several checks on TensorFlow operators to determine if they are supported by OpenVINO™ backends (Intel<sup>®</sup> hardware). The checks include supported operator types, data types, attribute values, input and output nodes, and many more conditions. The checks are implemented based on the results of several thousands of operator tests and model tests. OCM is continuously evolving as we add more operator tests and model tests to our testing infrastructure. This is an important module that determines which layers in the model should go to OpenVINO™ backends and which layers should fall back on native TensorFlow runtime. OCM takes TensorFlow graph as the input and returns a list of operators that can be marked for clustering so that the operators can be run on OpenVINO™ backends.
+ Operator Capability Manager (OCM) implements several checks on TensorFlow operators to determine if they are supported by OpenVINO™ backends (Intel<sup>®</sup> hardware). The checks include supported operator types, data types, attribute values, input and output nodes, and many more conditions. The checks are implemented based on the results of several thousands of operator tests and model tests. OCM is continuously evolving as we add more operator tests and model tests to our testing infrastructure. This is an important module that determines which layers in the model should go to OpenVINO™ backends and which layers should fall back on native TensorFlow runtime. OCM takes TensorFlow graph as the input and returns a list of operators that can be marked for clustering so that the operators can be run in OpenVINO™ backends.

#### Graph Partitioner

- Graph partitioner examines the nodes that are marked for clustering by OCM and performs a further analysis on them. In this stage, the marked operators are first assigned to clusters. Some clusters are dropped after the analysis. For example, if the cluster size is very small or if the cluster is not supported by the backend after receiving more context, then the clusters are dropped and the operators fall back to native TensorFlow runtime. Each cluster of operators is then encapsulated into a custom operator that is executed on OpenVINO™.
+ Graph partitioner examines the nodes that are marked for clustering by OCM and performs a further analysis on them. In this stage, the marked operators are first assigned to clusters. Some clusters are dropped after the analysis. For example, if the cluster size is very small or if the cluster is not supported by the backend after receiving more context, then the clusters are dropped and the operators fall back on native TensorFlow runtime. Each cluster of operators is then encapsulated into a custom operator that is executed in OpenVINO™ runtime.

#### TensorFlow Importer

- TensorFlow importer translates the TensorFlow operators in the clusters to OpenVINO™ nGraph operators with the latest available [operator set](https://docs.OpenVINOtoolkit.org/latest/openvino_docs_ops_opset.html) for a give version of OpenVINO™ toolkit. An [nGraph function](https://docs.openvinotoolkit.org/latest/openvino_docs_nGraph_DG_build_function.html) is built for each of the clusters. Once created, it is wrapped into an OpenVINO™ CNNNetwork that holds the intermediate representation of the cluster to be executed on OpenVINO™ backend.
+ TensorFlow importer translates the TensorFlow operators in the clusters to OpenVINO™ nGraph operators with the latest available [operator set](https://docs.OpenVINOtoolkit.org/latest/openvino_docs_ops_opset.html) for OpenVINO™ toolkit. An [nGraph function](https://docs.openvinotoolkit.org/latest/openvino_docs_nGraph_DG_build_function.html) is built for each of the clusters. Once created, it is wrapped into an OpenVINO™ CNNNetwork that holds the intermediate representation of the cluster to be executed in OpenVINO™ backend.

#### Backend Manager
@@ -35,4 +35,4 @@ Backend manager creates a backend for the execution of the CNNNetwork. We implem
Basic backend is used for Intel<sup>®</sup> CPUs, Intel<sup>®</sup> integrated GPUs and Intel<sup>®</sup> Movidius™ Vision Processing Units (VPUs). The backend creates an inference request and runs inference on a given input data.

- VAD-M backend is used for Intel® Vision Accelerator Design with 8 Intel<sup>®</sup> Movidius™ MyriadX VPUs (referred as VAD-M or HDDL). We support batched inference execution in the VAD-M backend. When the user provides a batched input, multiple inference requests are creted and inference is run in parallel on all the available VPUs in the VAD-M.
+ VAD-M backend is used for Intel® Vision Accelerator Design with 8 Intel<sup>®</sup> Movidius™ MyriadX VPUs (referred as VAD-M or HDDL). We support batched inference execution in the VAD-M backend. When the user provides a batched input, multiple inference requests are created and inference is run in parallel on all the available VPUs in the VAD-M.
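The mark-and-cluster flow that ARCHITECTURE.md describes (OCM marks supported operators, the graph partitioner groups consecutive marked operators and drops tiny clusters so they fall back on native TensorFlow) can be illustrated with a small self-contained sketch. Every name below is hypothetical; the real pass operates on TensorFlow graph nodes, not strings.

```python
# Toy model of OCM marking followed by graph partitioning.
# A linear "graph" of operator types, in execution order.
graph = ["Conv2D", "Relu", "CustomOp", "MatMul", "Add", "Softmax"]

# OCM-style capability check: op types the backend supports (illustrative set).
SUPPORTED = {"Conv2D", "Relu", "MatMul", "Add", "Softmax"}

def mark_ops(graph):
    """Return indices of ops that pass the capability checks (OCM's role)."""
    return [i for i, op in enumerate(graph) if op in SUPPORTED]

def build_clusters(graph, marked, min_size=2):
    """Group consecutive marked ops into clusters; drop clusters smaller
    than min_size so lone ops fall back on native TensorFlow runtime."""
    marked_set = set(marked)
    clusters, current = [], []
    for i, op in enumerate(graph):
        if i in marked_set:
            current.append(op)
        elif current:
            clusters.append(current)
            current = []
    if current:
        clusters.append(current)
    return [c for c in clusters if len(c) >= min_size]

marked = mark_ops(graph)
clusters = build_clusters(graph, marked)
print(clusters)  # [['Conv2D', 'Relu'], ['MatMul', 'Add', 'Softmax']]
```

In the real pass, each surviving cluster would then be encapsulated into a custom operator and handed to the TensorFlow importer; the unsupported `CustomOp` here stays on the TensorFlow runtime.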

docs/USAGE.md

Lines changed: 6 additions & 6 deletions
@@ -1,10 +1,10 @@
# APIs and environment variables for **OpenVINO™ integration with TensorFlow**

- This document describes available Python APIs for **OpenVINO™ integration with TensorFlow**. The first section covers the essential APIs and lines of code required to leverage the functionality of **OpenVINO integration with TensorFlow** in TensorFlow applications.
+ This document describes available Python APIs for **OpenVINO™ integration with TensorFlow**. The first section covers the essential APIs and lines of code required to leverage the functionality of **OpenVINO™ integration with TensorFlow** in TensorFlow applications.

## APIs for essential functionality

- To add the **OpenVINO integration with TensorFlow** package to your TensorFlow python application, import the package using this line of code:
+ To add the **OpenVINO™ integration with TensorFlow** package to your TensorFlow python application, import the package using this line of code:

    import openvino_tensorflow

@@ -20,27 +20,27 @@ To determine available backends on your system, use the following API:
    openvino_tensorflow.list_backends()

- To check if the **OpenVINO integration with TensorFlow** is enabled, use the following API:
+ To check if the **OpenVINO™ integration with TensorFlow** is enabled, use the following API:

    openvino_tensorflow.is_enabled()

To get the assigned backend, use the following API:

    openvino_tensorflow.get_backend()

- To enable verbose logs of the execution of the full TensorFlow pipeline and placement stages along with the **OpenVINO integration with TensorFlow**, use the following API:
+ To enable verbose logs of the execution of the full TensorFlow pipeline and placement stages along with the **OpenVINO™ integration with TensorFlow**, use the following API:

    openvino_tensorflow.start_logging_placement()

- To disbale verbose logs of the execution of the full TensorFlow pipeline and placement stages along with the **OpenVINO integration with TensorFlow**, use the following API:
+ To disbale verbose logs of the execution of the full TensorFlow pipeline and placement stages along with the **OpenVINO™ integration with TensorFlow**, use the following API:

    openvino_tensorflow.stop_logging_placement()

To check if the placement logs are enabled, use the following API:

    openvino_tensorflow.is_logging_placement()

- To check the CXX11_ABI used to compile **OpenVINO integration with TensorFlow**, use the following API:
+ To check the CXX11_ABI used to compile **OpenVINO™ integration with TensorFlow**, use the following API:

    openvino_tensorflow.cxx11_abi_flag()
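The APIs quoted in the USAGE.md diff above compose naturally into a short environment check. A minimal sketch, assuming the `openvino_tensorflow` wheel is installed alongside TensorFlow; the script degrades gracefully when it is not, so the same snippet works either way.

```python
# Probe the OpenVINO integration with TensorFlow using only the APIs
# listed in USAGE.md, and collect the results into one status dict.
try:
    import openvino_tensorflow as ovtf
except ImportError:
    ovtf = None  # package not installed in this environment

def integration_status(module):
    """Summarize the integration's state, or report that it is absent."""
    if module is None:
        return {"installed": False}
    return {
        "installed": True,
        "enabled": module.is_enabled(),
        "backends": module.list_backends(),
        "backend": module.get_backend(),
        "logging_placement": module.is_logging_placement(),
        "cxx11_abi": module.cxx11_abi_flag(),
    }

status = integration_status(ovtf)
print(status)
```

The `integration_status` helper is illustrative, not part of the package; only the `openvino_tensorflow` calls inside it come from USAGE.md.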

0 commit comments