@@ -95,11 +95,11 @@ optimal:
   :width: 555px
   :alt:

-The following computation is constructed to execute ``(A+B)*C``, but in the
-context of nGraph, we can further optimize the graph to be represented as ``A*C``.
-From the first graph shown on the left, the operation on the constant ``B`` can
-be computed at the compile time (known as constant folding), and the graph can
-be further simplified to the one on the right because the constant has value of
+The computation is constructed to execute ``(A+B)*C``, but in the context of
+nGraph, we can further optimize the graph to be represented as ``A*C``. From the
+first graph shown on the left, the operation on the constant ``B`` can be
+computed at compile time (known as constant folding), and the graph can be
+further simplified to the one on the right because the constant has a value of
zero. Without such graph-level optimizations, a deep learning framework with a
kernel library will compute all operations, and the resulting execution will be
suboptimal.
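
To make the idea concrete, the following is a minimal, illustrative sketch in
plain Python (not nGraph's actual API; the class and function names here are
hypothetical) of how a graph compiler can fold constants and simplify
``x + 0`` to ``x`` before any kernel executes:

.. code-block:: python

   # Toy expression graph: Param nodes are runtime inputs, Const nodes are
   # values known at compile time.
   class Param:
       def __init__(self, name):
           self.name = name

   class Const:
       def __init__(self, value):
           self.value = value

   class Add:
       def __init__(self, lhs, rhs):
           self.lhs, self.rhs = lhs, rhs

   class Mul:
       def __init__(self, lhs, rhs):
           self.lhs, self.rhs = lhs, rhs

   def simplify(node):
       """Fold constant subexpressions and drop additions of zero."""
       if isinstance(node, Add):
           lhs, rhs = simplify(node.lhs), simplify(node.rhs)
           # Constant folding: both inputs are known at compile time.
           if isinstance(lhs, Const) and isinstance(rhs, Const):
               return Const(lhs.value + rhs.value)
           # Algebraic simplification: x + 0 == x.
           if isinstance(rhs, Const) and rhs.value == 0:
               return lhs
           if isinstance(lhs, Const) and lhs.value == 0:
               return rhs
           return Add(lhs, rhs)
       if isinstance(node, Mul):
           return Mul(simplify(node.lhs), simplify(node.rhs))
       return node

   # (A + B) * C with the constant B = 0 collapses to A * C before execution.
   A, B, C = Param("A"), Const(0), Param("C")
   optimized = simplify(Mul(Add(A, B), C))
   assert optimized.lhs is A and optimized.rhs is C
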
@@ -153,9 +153,9 @@ PlaidML addresses the kernel explosion problem in a manner that lifts a heavy
burden off kernel developers. It automatically lowers networks from nGraph
into Tile, a :abbr:`Domain-Specific Language (DSL)` designed for deep learning
that allows developers to express how an operation should calculate tensors in
-an intuitive, mathematical form. Integration of PlaidML with nGraph means
-extra flexibility to support newer deep learning models in the absence of
-by-hand optimized kernels for the new operations.
+an intuitive, mathematical form via `Stripe`_. Integration of PlaidML with
+nGraph means extra flexibility to support newer deep learning models in the
+absence of by-hand optimized kernels for the new operations.


Solution: nGraph and PlaidML
@@ -164,17 +164,17 @@ Solution: nGraph and PlaidML
Each of the problems above can be solved with nGraph and PlaidML. We developed
nGraph and integrated it with PlaidML so developers wanting to craft solutions
with :abbr:`AI (Artificial Intelligence)` won't have to face such a steep
-learning curve in taking their concepts from design to production to scale. The
-fundamental efficiencies behind Moore's Law are not dead; rather than fitting
-`more transistors on denser and denser circuits`_, we're enabling advances
-in compute with more transformers on denser and more data-heavy
-:abbr:`Deep Learning Networks (DNNs)`, and making it easier to apply
+learning curve in taking their concepts from design to production, and to scale.
+The fundamental efficiencies behind Moore's Law are not dead; rather than fitting
+`more transistors on denser and denser circuits`_, with nGraph and PlaidML,
+we're enabling advances in compute with more transformers on denser and more
+data-heavy :abbr:`Deep Learning Networks (DNNs)`, and making it easier to apply
:abbr:`Machine Learning (ML)` to different industries and problems.

For developers with a neural network already in place, executing workloads using
the nGraph Compiler provides further performance benefits and allows for quicker
-adaptation of models. It also make it much easier to upgrade hardware
-infrastructure pieces as workloads grow and require more careful balancing.
+adaptation of models. It also makes it much easier to upgrade hardware
+infrastructure pieces as workloads grow.

This documentation provides technical details of nGraph's core functionality,
framework and backend integrations. Creating a compiler stack like nGraph and
@@ -188,4 +188,5 @@ will make life easier for many kinds of developers:
  a deep learning framework to their silicon.


-.. _more transistors on denser and denser circuits: https://www.intel.com/content/www/us/en/silicon-innovations/moores-law-technology.html
+.. _more transistors on denser and denser circuits: https://www.intel.com/content/www/us/en/silicon-innovations/moores-law-technology.html
+.. _Stripe: https://arxiv.org/abs/1903.06498