Commit 17326ab

tsavina and rkazants authored

[MO][TF FE] Document freezing as essential step for pruning SM format (openvinotoolkit#17595) (openvinotoolkit#17632)

* [MO][TF FE] Document freezing as essential step for pruning SM format
* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_TensorFlow.md

Signed-off-by: Kazantsev, Roman <[email protected]>
Co-authored-by: Roman Kazantsev <[email protected]>

1 parent 8601042, commit 17326ab

File tree

2 files changed: +32 additions, -13 deletions

docs/MO_DG/prepare_model/convert_model/Convert_Model_From_TensorFlow.md

Lines changed: 18 additions & 3 deletions

@@ -108,12 +108,27 @@ If a model contains operations currently unsupported by OpenVINO™,
 prune these operations by explicit specification of input nodes using the ``--input`` or ``--output``
 options. To determine custom input nodes, visualize a model graph in the TensorBoard.
 
-To generate TensorBoard logs of the graph, use the Model Optimizer ``--tensorboard_logs`` command-line
-option.
-
 TensorFlow 2 SavedModel format has a specific graph structure due to eager execution. In case of
 pruning, find custom input nodes in the ``StatefulPartitionedCall/*`` subgraph.
 
+Since the 2023.0 release, direct pruning of models in SavedModel format is not supported.
+It is essential to freeze the model before pruning. Use the following code snippet for model freezing:
+
+.. code-block:: python
+
+    import tensorflow as tf
+    from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2
+    saved_model_dir = "./saved_model"
+    imported = tf.saved_model.load(saved_model_dir)
+    # retrieve the concrete function and freeze it
+    concrete_func = imported.signatures[tf.saved_model.DEFAULT_SERVING_SIGNATURE_DEF_KEY]
+    frozen_func = convert_variables_to_constants_v2(concrete_func,
+                                                    lower_control_flow=False,
+                                                    aggressive_inlining=True)
+    # retrieve the GraphDef and save it in .pb format
+    graph_def = frozen_func.graph.as_graph_def(add_shapes=True)
+    tf.io.write_graph(graph_def, '.', 'model.pb', as_text=False)
+
 Keras H5
 ++++++++

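The pruning mechanics documented above can be sketched in plain Python. This is a conceptual illustration only, with no TensorFlow involved and made-up node names: declaring a new input node keeps only the subgraph reachable from it, which is how ``--input`` cuts out an unsupported operation upstream.

```python
# Conceptual sketch of graph pruning by declaring a new input node.
# Node names and the toy graph are hypothetical, for illustration only.
from collections import deque

# toy graph: edges point from producer to consumer
edges = {
    "Placeholder": ["UnsupportedOp"],
    "UnsupportedOp": ["Relu"],
    "Relu": ["MatMul"],
    "MatMul": ["Softmax"],
    "Softmax": [],
}

def prune(edges, new_input, output):
    """Keep only the nodes reachable from `new_input`, stopping at `output`."""
    kept, queue = set(), deque([new_input])
    while queue:
        node = queue.popleft()
        if node in kept:
            continue
        kept.add(node)
        if node != output:
            queue.extend(edges[node])
    return kept

# pruning at "Relu" drops the unsupported op and the original placeholder
print(sorted(prune(edges, "Relu", "Softmax")))  # → ['MatMul', 'Relu', 'Softmax']
```

The real tools do this over a TensorFlow graph, but the principle is the same: everything upstream of the declared input never reaches the converter.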
src/frontends/tensorflow/README.md

Lines changed: 14 additions & 10 deletions

@@ -2,7 +2,8 @@
 
 The TensorFlow Frontend (TF FE) is a C++ based OpenVINO Frontend component that is responsible for reading and converting a TensorFlow model to an `ov::Model` object
 that further can be serialized into the Intermediate Representation (IR) format.
-This is an internal API for OpenVINO that is used to implement user facing API such as Model Optimizer, `read_model` function, and OpenVINO Integration with TensorFlow.
+This is an internal API for OpenVINO that is used to implement user-facing APIs such as the MO tool, the MO Python API, and the OpenVINO Runtime `read_model` function
+for reading TensorFlow models in the original format at run time. OpenVINO Model Server also uses the frontend for serving models.
 Regular users should not use the frontend directly.
 
 ```mermaid
@@ -15,22 +16,22 @@ flowchart BT
     style model3 fill:#427cb0
     ov_model[(ov::Model)]
     style ov_model fill:#427cb0
-    ovtf(OpenVINO Integration with TensorFlow)
-    style ovtf fill:#ffffc2
+    ovms(OpenVINO Model Server)
+    style ovms fill:#ffffc2
     tf_fe(TensorFlow Frontend)
     style tf_fe fill:#ee9a4d
     fem(Frontend Manager)
     mo(Model Optimizer)
     ov_runtime(OpenVINO Runtime)
     model --> mo --> fem --> tf_fe
     model2 --> ov_runtime --> fem
-    model3 --> ovtf --> tf_fe
+    model3 --> ovms --> ov_runtime
     tf_fe --> ov_model
-    click ovtf "https://github.com/openvinotoolkit/openvino_tensorflow"
+    click ovms "https://github.com/openvinotoolkit/model_server"
 ```
 
-Currently, it is only used by [OpenVINO Integration with TensorFlow](https://github.com/openvinotoolkit/openvino_tensorflow).
-Model Optimizer for now relies on the legacy [TensorFlow Frontend](https://docs.openvino.ai/latest/openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow.html) developed in Python.
+The MO tool and MO Python API now use the TensorFlow Frontend as the default path for conversion to IR.
+Known limitations of TF FE are described [here](https://docs.openvino.ai/latest/openvino_docs_MO_DG_TensorFlow_Frontend.html).
 
 ## Key contacts
 
@@ -45,7 +46,7 @@ The structure of OpenVINO TensorFlow Frontend sources includes the following dir
 * [src](./src/) folder contains the sources of the component.
 * [tests](./tests) cover internal transformations.
 
-Additionally, there is a shared [tensorflow common](../tensorflow_common) directory with same structure and purposes.
+Additionally, there is a shared [TensorFlow Common](../tensorflow_common) directory with the same structure and purpose.
 Its content depends only on common FrontEnd APIs and thus is free to use in other FrontEnds.
 
 ## Architecture
@@ -56,8 +57,8 @@ The whole workflow can be split into two steps: model loading and conversion.
 During loading, the `FrontEnd::load()` method creates an `InputModel` that encapsulates the `GraphIterator` object.
 `GraphIterator` is a reader that iterates through the graph nodes in topological order.
 `GraphIterator::get_decoder()` provides a decoder for the current graph node to read its attributes.
-Each TensorFlow model format has its implementation of `GraphIterator`. Currently, the frontend supports only binary frozen format `.pb`,
-and `GraphIteratorProto` is used for reading and parsing this format. The architecture of the loading step is shown in the picture below:
+Each TensorFlow model format has its own implementation of `GraphIterator`. Currently, the frontend supports the SavedModel, MetaGraph (`.meta`), and frozen protobuf (`.pb` and `.pbtxt`) formats.
+The base class `GraphIteratorProto` is used for reading and parsing these formats. The architecture of the loading step is shown in the picture below:
 
 ```mermaid
 classDiagram
@@ -70,6 +71,9 @@ classDiagram
     Place --o "1..*" InputModel
     DecoderBase "1" --o "1" Place
     GraphIteratorProto ..|> GraphIterator
+    GraphIteratorProtoTxt ..|> GraphIterator
+    GraphIteratorMeta ..|> GraphIterator
+    GraphIteratorSavedModel ..|> GraphIterator
 ```
 
 After the loading step, `InputModel` includes a container of topologically sorted operation `Place` objects.
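The loading design described in the README diff can be mocked in Python for illustration: an iterator walks the graph nodes in topological order and hands out a decoder per node. The class names mirror the diagram (`GraphIterator`, `DecoderBase`); the method names and bodies here are hypothetical sketches, not the real C++ API.

```python
# Hypothetical Python mock of the GraphIterator/DecoderBase design.
# Only the class names come from the README; everything else is illustrative.
from abc import ABC, abstractmethod

class DecoderBase(ABC):
    """Reads attributes of a single graph node (cf. the DecoderBase role above)."""
    @abstractmethod
    def get_attribute(self, name): ...

class NodeDecoder(DecoderBase):
    def __init__(self, node):
        self.node = node
    def get_attribute(self, name):
        return self.node.get(name)

class GraphIterator:
    """Iterates over graph nodes in topological order."""
    def __init__(self, nodes):
        self._nodes = nodes   # assumed already topologically sorted
        self._pos = 0
    def is_end(self):
        return self._pos >= len(self._nodes)
    def advance(self):
        self._pos += 1
    def get_decoder(self):
        # decoder for the current node, analogous to GraphIterator::get_decoder()
        return NodeDecoder(self._nodes[self._pos])

nodes = [{"op": "Placeholder"}, {"op": "Relu"}, {"op": "Softmax"}]
it = GraphIterator(nodes)
ops = []
while not it.is_end():
    ops.append(it.get_decoder().get_attribute("op"))
    it.advance()
print(ops)  # → ['Placeholder', 'Relu', 'Softmax']
```

In the real frontend, concrete iterators such as `GraphIteratorSavedModel` would play the role of `GraphIterator` here, each knowing how to parse its own serialized format.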
