
Commit 0f925f0

Migrate changes about codegen being experimental to r0.16 (#2673)
1 parent 7759373 commit 0f925f0

8 files changed: +63 −59 lines changed

doc/sphinx/ngraph_theme/static/css/theme.css

Lines changed: 3 additions & 3 deletions
Some generated files are not rendered by default.

doc/sphinx/source/core/passes/list-of-passes.rst

Lines changed: 4 additions & 1 deletion
@@ -50,12 +50,15 @@ Memory Assignment Passes
 Codegen Passes
 ==============

+.. important:: Codegen is currently experimental only.
+
+
 .. csv-table::
    :header: "Codegen Passes", "More Detail"
    :widths: 29, 31
    :escape: ~

-   ``CommonFunctionCollection``, ""
+   ``CommonFunctionCollection``, "Experimental Only"


 Debug Passes

doc/sphinx/source/frameworks/index.rst

Lines changed: 10 additions & 9 deletions
@@ -14,15 +14,16 @@ Current framework integrations


 A framework is "supported" when there is a framework :term:`bridge` that can be
-cloned from one of our GitHub repos and built to connect to nGraph device backends,
-all the while maintaining the framework's programmatic or user interface. Bridges
+cloned from one of our GitHub repos and built to connect to nGraph device
+backends while maintaining the framework's programmatic or user interface. Bridges
 currently exist for the TensorFlow\* and MXNet\* frameworks.

 .. figure:: ../graphics/whole-stack.png
    :width: 733px
    :alt: JiT compiling of a computation

-   :abbr:`Just-in-Time (JiT)` Compiling for computation
+   :abbr:`Just-in-Time (JiT)` Compiling for computation. nGraph `Core` components
+   are colored in blue.

 Once connected via the bridge, the framework can then run and train a deep
 learning model with various workloads on various backends using nGraph Compiler
@@ -33,13 +34,13 @@ end use by data scientists, or for deployment in cloud container environments,
 nGraph Core ops and the nGraph C++ Library are designed for framework builders
 themselves. We invite anyone working on new and novel frameworks or neural
 network designs to explore our highly-modularized stack of components that can
-be implemented or integrated in virtually limitless ways.
+be implemented or integrated in countless ways.

-Please read the articles in this section if you are considering incorporating
-components from the nGraph Compiler stack in your framework or neural network
-design. Articles here are also useful if you are working on something
-built-from-scratch, or on an existing framework that is less widely-supported
-than the popular frameworks like TensorFlow and PyTorch.
+Please read this section if you are considering incorporating components from
+the nGraph Compiler stack in your framework or neural network design. Contents
+here are also useful if you are working on something built-from-scratch, or on
+an existing framework that is less widely-supported than the popular frameworks
+like TensorFlow and PyTorch.

 .. figure:: ../graphics/translation-flow-to-ng-fofx.png
    :width: 725px

doc/sphinx/source/index.rst

Lines changed: 7 additions & 7 deletions
@@ -24,18 +24,18 @@ nGraph Compiler stack

 `nGraph`_ is an open-source graph compiler for :abbr:`Artificial Neural Networks (ANNs)`.
 The nGraph Compiler stack provides an inherently efficient graph-based compilation
-infrastructure designed to be compatible with the many of the upcoming
+infrastructure designed to be compatible with many upcoming
 :abbr:`Application-Specific Integrated Circuits (ASICs)`, like the Intel® Nervana™
 Neural Network Processor (Intel® Nervana™ NNP), while also unlocking a massive
-performance boost on any existing hardware targets in your neural network: both GPUs
-and CPUs. Using its flexible infrastructure, you will find it becomes much easier
-to create Deep Learning (DL) models that can adhere to the "write once, run anywhere"
-mantra that enables your AI solutions to easily go from concept to production to scale.
+performance boost on any existing hardware targets for your neural network: both
+GPUs and CPUs. Using its flexible infrastructure, you will find it becomes much
+easier to create Deep Learning (DL) models that can adhere to the "write once,
+run anywhere" mantra that enables your AI solutions to easily go from concept to
+production to scale.

 Frameworks using nGraph to execute workloads have shown `up to 45X`_ performance
 boost compared to native implementations. For a high-level overview, see the
-:doc:`project/introduction`.
-
+:doc:`project/introduction` and our latest :doc:`project/release-notes`.

 .. toctree::
    :maxdepth: 1

doc/sphinx/source/project/contrib.md

Lines changed: 7 additions & 8 deletions
@@ -128,14 +128,13 @@ Although not always ideal, it is automatically enforced and reduces
 merge conflicts.

 - The .clang-format file located in the root of the project specifies
-  our format.
-- The script maint/apply-code-format.sh enforces that formatting
-  at the C/C++ syntactic level.
-- The script at maint/check-code-format.sh verifies that the
-  formatting rules are met by all C/C++ code (again, at the
-  syntax level). The script has an exit code of `0` when code
-  meets the standard and non-zero otherwise. This script does
-  *not* modify the source code.
+  our format. Simply run:
+
+  ```
+  make style-check &&
+  make style-apply
+  ```
+
 - Formatting with `#include` files:
   - Put headers in groups separated by a blank line. Logically order
     the groups downward from system-level to 3rd-party to `ngraph`.
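The `#include` grouping convention in that last bullet is easiest to see in a concrete example. A minimal sketch, with hypothetical headers chosen only to illustrate the three groups (only `ngraph/ngraph.hpp` is the repo's actual umbrella header):

```cpp
// Group 1: system-level headers
#include <memory>
#include <vector>

// Group 2: 3rd-party headers (googletest here is purely illustrative)
#include <gtest/gtest.h>

// Group 3: ngraph headers
#include "ngraph/ngraph.hpp"
```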

doc/sphinx/source/project/contribution-guide.rst

Lines changed: 6 additions & 8 deletions
@@ -156,14 +156,12 @@ use **clang format** to enforce certain formatting. Although not always ideal,
 it is automatically enforced and reduces merge conflicts.

 - The :file:`.clang-format` file located in the root of the project specifies
-  our format.
-
-* The script :file:`maint/apply-code-format.sh` enforces that formatting
-  at the C/C++ syntactic level.
-* The script at :file:`maint/check-code-format.sh` verifies that the formatting
-  rules are met by all C/C++ code (again, at the syntax level). The script has
-  an exit code of ``0`` when code meets the standard and non-zero otherwise.
-  This script does *not* modify the source code.
+  our format. Simply run:
+
+  .. code-block:: console
+
+     $ make style-check
+     $ make style-apply


 - Formatting with ``#include`` files:

doc/sphinx/source/project/introduction.rst

Lines changed: 17 additions & 16 deletions
@@ -95,11 +95,11 @@ optimal:
    :width: 555px
    :alt:

-The following computation is constructed to execute ``(A+B)*C``, but in the
-context of nGraph, we can further optimize the graph to be represented as ``A*C``.
-From the first graph shown on the left, the operation on the constant ``B`` can
-be computed at the compile time (known as constant folding), and the graph can
-be further simplified to the one on the right because the constant has value of
+The computation is constructed to execute ``(A+B)*C``, but in the context of
+nGraph, we can further optimize the graph to be represented as ``A*C``. From the
+first graph shown on the left, the operation on the constant ``B`` can be
+computed at the compile time (known as constant folding), and the graph can be
+further simplified to the one on the right because the constant has value of
 zero. Without such graph-level optimizations, a deep learning framework with a
 kernel library will compute all operations, and the resulting execution will be
 suboptimal.
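The constant-folding step described in this hunk can be sketched in a few lines. This is a toy illustration of the idea, not the actual nGraph pass API: it folds constant subexpressions bottom-up and applies the identity `A + 0 -> A`, so `(A+B)*C` reduces to `A*C` once `B` folds to zero.

```cpp
#include <iostream>
#include <memory>

// Toy expression node: a named parameter, a constant, or a binary op.
struct Node {
    enum class Kind { Param, Const, Add, Mul } kind;
    const char* name = "";          // used when kind == Param
    double value = 0.0;             // used when kind == Const
    std::shared_ptr<Node> lhs, rhs; // used when kind == Add or Mul
};
using NodePtr = std::shared_ptr<Node>;

NodePtr param(const char* n) { return std::make_shared<Node>(Node{Node::Kind::Param, n}); }
NodePtr constant(double v) { return std::make_shared<Node>(Node{Node::Kind::Const, "", v}); }
NodePtr add(NodePtr a, NodePtr b) { return std::make_shared<Node>(Node{Node::Kind::Add, "", 0.0, a, b}); }
NodePtr mul(NodePtr a, NodePtr b) { return std::make_shared<Node>(Node{Node::Kind::Mul, "", 0.0, a, b}); }

// Fold constant subtrees bottom-up, then simplify the A + 0 identity.
NodePtr fold(const NodePtr& n) {
    if (n->kind == Node::Kind::Param || n->kind == Node::Kind::Const) return n;
    NodePtr l = fold(n->lhs);
    NodePtr r = fold(n->rhs);
    if (l->kind == Node::Kind::Const && r->kind == Node::Kind::Const)
        return constant(n->kind == Node::Kind::Add ? l->value + r->value
                                                   : l->value * r->value);
    if (n->kind == Node::Kind::Add) {
        if (l->kind == Node::Kind::Const && l->value == 0.0) return r;
        if (r->kind == Node::Kind::Const && r->value == 0.0) return l;
    }
    auto copy = std::make_shared<Node>(*n); // rebuild node with folded children
    copy->lhs = l;
    copy->rhs = r;
    return copy;
}

int main() {
    // Build (A + 0) * C; the constant 0 stands in for the folded subgraph B.
    NodePtr graph = mul(add(param("A"), constant(0.0)), param("C"));
    NodePtr simplified = fold(graph); // (A + 0) * C  ->  A * C
    std::cout << simplified->lhs->name << " * " << simplified->rhs->name << "\n";
}
```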
@@ -153,9 +153,9 @@ PlaidML addresses the kernel explosion problem in a manner that lifts a heavy
 burden off kernel developers. It automatically lowers networks from nGraph
 into Tile, a :abbr:`Domain-Specific Language (DSL)` designed for deep learning
 that allows developers to express how an operation should calculate tensors in
-an intuitive, mathematical form. Integration of PlaidML with nGraph means
-extra flexibility to support newer deep learning models in the absence of
-by-hand optimized kernels for the new operations.
+an intuitive, mathematical form via `Stripe`_. Integration of PlaidML with
+nGraph means extra flexibility to support newer deep learning models in the
+absence of by-hand optimized kernels for the new operations.


 Solution: nGraph and PlaidML
@@ -164,17 +164,17 @@ Solution: nGraph and PlaidML
 Each of the problems above can be solved with nGraph and PlaidML. We developed
 nGraph and integrated it with PlaidML so developers wanting to craft solutions
 with :abbr:`AI (Artificial Intelligence)` won't have to face such a steep
-learning curve in taking their concepts from design to production to scale. The
-fundamental efficiencies behind Moore's Law are not dead; rather than fitting
-`more transistors on denser and denser circuits`_, we're enabling advances
-in compute with more transformers on denser and more data-heavy
-:abbr:`Deep Learning Networks (DNNs)`, and making it easier to apply
+learning curve in taking their concepts from design to production, and to scale.
+The fundamental efficiencies behind Moore's Law are not dead; rather than fitting
+`more transistors on denser and denser circuits`_, with nGraph and PlaidML,
+we're enabling advances in compute with more transformers on denser and more
+data-heavy :abbr:`Deep Learning Networks (DNNs)`, and making it easier to apply
 :abbr:`Machine Learning (ML)` to different industries and problems.

 For developers with a neural network already in place, executing workloads using
 the nGraph Compiler provides further performance benefits and allows for quicker
-adaptation of models. It also make it much easier to upgrade hardware
-infrastructure pieces as workloads grow and require more careful balancing.
+adaptation of models. It also makes it much easier to upgrade hardware
+infrastructure pieces as workloads grow.

 This documentation provides technical details of nGraph's core functionality,
 framework and backend integrations. Creating a compiler stack like nGraph and
@@ -188,4 +188,5 @@ will make life easier for many kinds of developers:
 a deep learning framework to their silicon.


-.. _more transistors on denser and denser circuits: https://www.intel.com/content/www/us/en/silicon-innovations/moores-law-technology.html
+.. _more transistors on denser and denser circuits: https://www.intel.com/content/www/us/en/silicon-innovations/moores-law-technology.html
+.. _Stripe: https://arxiv.org/abs/1903.06498

doc/sphinx/source/project/release-notes.rst

Lines changed: 9 additions & 7 deletions
@@ -3,9 +3,17 @@
 Release Notes
 #############

-|release|
+This is |release|.


+
+
+
+CHANGELOG
+=========
+
+(Last updated September 2018)
+
 This release focuses on accelerating deep learning inference workloads on
 Intel® Xeon® (CPU processor) and has the following key features:

@@ -31,9 +39,3 @@ In our tests, the optimized workloads can perform up to 45X faster than native
 frameworks, and we expect performance gains for other workloads due to our
 powerful :doc:`../core/fusion/index` feature.

-
-See also our recent `API changes`_
-
-
-
-.. _API changes: https://github.com/NervanaSystems/ngraph/blob/master/changes.md
