This repository was archived by the owner on Jan 3, 2023. It is now read-only.
Commit 8235c2c: Cyphers/27catchup (#3978)
* Fix broadcast v1 reference (#3880)
* Added reproducer for issue with broadcast v1
* Make reference broadcast work with V1 broadcast
* Deprecate runtime::Tensor::copy_from
* Force Gelu decompose on CPU (#3887)
* Round the right bit with denorms (#3885)
* Round the right bit with denorms
* Rounding to inf
* Attribute visitor (#3579)
* Sketch of attribute walker
* Review comments
* merge error?
* Remove unused method
* simplify, make some ser tests work
* Don't look for keys that aren't there
* Factory registry, more ops visited, generic ser/dser start
* More merge
* cleanup
* Adapter for enums
* Compiler error
* Test of user-defined op
* Simplify enum name pairing
* Update distributed.hpp
* Review comments
* compiler error
* Direct access to non-primitive types from adapters
* Define and export type info
* attr enums, AvgPool*, vectors
* Cleanup
* some comments
* Allow type info to be used as a key.
* Don't leave output serialization shapes set.
* Auto adapter
* More ops, adapters
* Missing symbol
* Remove PartialShape and element::Type methods from visitor
* Fix type info
* Remove unused variable
* Simplify
* namespace error
* exports
* Uniform names
* Some better names
* More name cleanup, simplify visitor implementation
* Fix template, add test
* Revert serializer
* Add instantiations
* Work-around gcc issue
* VS exports
* VS exports
* windows export
* vs
* vs
* vs
* vs
* Simplify
* vs
* vs
* Add some missing attributes
* Missing factories
* Merge error
* Fix Add factories
* Missed type
* [FUSED] Add new LogSoftmax fused op (#3867)
* LogSoftmax introduced
* Added LogSoftmax to serializer
* Fixed style
* Fixed CmakeLists style
* code review remarks introduced
* Code review remarks introduced
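For context on the fused op above: LogSoftmax is conventionally log(softmax(x)), computed in the numerically stable form x - max(x) - log(sum(exp(x - max(x)))). A minimal pure-Python sketch of that formula (illustrative only, not the nGraph decomposition):

```python
import math

def log_softmax(xs):
    # Subtract the max first so exp() cannot overflow.
    m = max(xs)
    log_sum_exp = m + math.log(sum(math.exp(x - m) for x in xs))
    return [x - log_sum_exp for x in xs]
```

Exponentiating the result recovers the softmax probabilities, which must sum to 1.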
* [ONNX] Importer should use fused op for MatMul (#3842)
* [ONNX] Importer should use fused op for MatMul
* Fix a bug in fused matmul op
* Don't reshape matmul inputs to at least 2D anymore
* [SPEC] Add auto_broadcast parameter to SquaredDifference (#3856)
* [SPEC] Add auto_broadcast parameter to SquaredDifference
* Rename set_autobroadcast->set_autob
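The new auto_broadcast parameter lets the two inputs differ in shape; the op itself computes (x - y)^2 elementwise. A sketch covering only the simplest scalar-broadcast case (the helper is hypothetical, not the nGraph kernel):

```python
def squared_difference(x, y):
    # Elementwise (x - y)^2; a scalar y broadcasts across x,
    # the simplest case of numpy-style auto-broadcast.
    ys = y if isinstance(y, list) else [y] * len(x)
    return [(a - b) ** 2 for a, b in zip(x, ys)]
```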
* [Spec][FusedOp]Adjust SpaceToDepth fused op to specification (#3862)
* Added support mode for SpaceToDepth
* Added unit tests
* Fixed styles
* Revert changes in prototxt files
* Force AutoBroadcast defaults (#3878)
* Force AutoBroadcast to be specified at the op level since no default is correct for all ops.
* exports
* Added constant folding for binary ops (#3895)
* Modify Gather constant folding to support v1 op.
* Address PR feedback.
* Update fused ops groupconvolution, gelu and layernorm to be dynamic friendly (#3876)
* set output et
* set output et
* overwrote validate and infer
* Add full path to gtest for build via ninja (#3882)
* [FUSED] Add reciprocal op (#3851)
* [FUSED] Add reciprocal op
* Review Fix #1
* Move operator op::v1 -> op
* Fix serializer
* Review Fix I
* [SPEC] Add new v1::FloorMod operator (#3852)
* [SPEC] Add new v1::FloorMod operator
* Review Fix I
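FloorMod is the floored-division remainder, a - b * floor(a / b), so the result takes the divisor's sign, matching Python's % operator rather than C's. A sketch under that definition:

```python
import math

def floor_mod(a, b):
    # Remainder of floored division: the sign follows the divisor b.
    return a - b * math.floor(a / b)
```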
* [MLIR] Fix MLIR build on mac OS (#3896)
* Fix MLIR build on mac OS
* Style
* Style
* [MLIR] Bump MLIR commit to c61db4bb (#3879)
* WIP
* WIP
* WIP
* WIP
* style
* WIP
* WIP
* Add err msg
* Fix headers and cleanup
* Bug Fix: incorrect shape validation logic. (#3897)
* Allow for overriding functions in visualization (#3900)
* Add ReplaceSlice to ZeroDimTensorElimination pass (#3899) (#3910)
* Add ReplaceSlice to ZeroDimTensorElimination pass
* style
* Default constructor needs to init autob (#3913)
* Implementation of CrossEntropy and CrossEntropyBackprop as fused Op's (#3818)
* - Implementation of CrossEntropy and CrossEntropyBackprop as fused Op's
* - unit test case for CE fprop
- fix bug in decompose_op
* WIP debug PDPD unit test failure
* fixed broadcasting issue
* - Fix broadcast issue for multi-dim tensor
* utilities to restore the original tensor shape
* i) style-fix ii) rename variables
* - unit test for multiple dimensions ii) refactor create_mask to separate function
* - fixed unit tests
* fix style
* set output element type to dynamic in pre_validate and infer shape
* disable ce with one hot unit test on PlaidML
* add CE op to fused_op_tbl
* - add serializer support for CE and CE Backprop
* Update ToC to better match docplan spreadsheet (#3846)
* New ToC
* Working on docplan
* Clean up for toc
* Link to existing APIs on quantization doc
* Better align topics with docplan ToC; add section for dyn shapes
* Title casing to be consistent
* PR reviews
* New build preview
* Add default opset version, new versioning schema
* Remove duplicate file causing doc build warning
* Fix CSS rendering issues (#3921)
* Fix for the bug with as_type_ptr for TensorIterator::Input/Output desc (#3906)
* Updated unit test to reproduce a bug
* Code style
* Add exports
* Added missed export
* Bug fix in conv v1 shape inference (#3912)
* [SPEC] Add new v1::VariadicSplit operator (#3868)
* [SPEC] Add new v1::VariadicSplit operator
* Add missing namespace, fix a typo in doc
* Apply suggestions from code review
Co-Authored-By: Michał Karzyński <[email protected]>
* Style fix
* Set all of the inputs to be relevant to output shape
* Set output type if number of outputs is known
* Add node validation for known input
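VariadicSplit slices a tensor along an axis into pieces of caller-chosen lengths; by the spec convention (as I understand it), a single -1 length means "whatever remains". A 1-D sketch (the helper name is illustrative):

```python
def variadic_split(data, split_lengths):
    # At most one -1 is allowed; it absorbs the remaining elements.
    total = len(data)
    known = sum(n for n in split_lengths if n != -1)
    lengths = [total - known if n == -1 else n for n in split_lengths]
    pieces, pos = [], 0
    for n in lengths:
        pieces.append(data[pos:pos + n])
        pos += n
    return pieces
```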
* Fix for windows ninja (#3917)
* Fix for windows ninja
* Fix for centos build
* Remove fix for centos
* Update ONNX importer to use v1 version of Softmax (#3894)
* Added downgrade pass for Softmax.
* Updated Softmax op to v1.
* Created vector with a right capacity.
* Include numeric header to enable std::iota function
* Removed unused numeric header from the old file
* Fix includes style
* Fix shape inference of TensorIterator body (#3922)
* fix for shape inference of tensor iterator body
* updated unit test for case end = -2
* indexes in unit tests
* Updated formula for num_iterations
* resolve compiler warning (#3923)
* Added u1 precision for binary weights (#3914)
* Added U1 precision for binary weights
* Handle switch cases with u1 type
* Fixed code style
* Added convert_to_string support for u1 type
* Use real C type for u1 type.
Co-Authored-By: Robert Kimball <[email protected]>
* Fused_op: BatchMatMulTranspose (#3871)
* Initial commit
* Add decompose_op and unit-test
* Style fix
* Fix CI error
* Address review comments
* Remove CPUBatchFusion
* Address review feedback
* Address review feedback
* Added type_prop tests
* Moved 1 test from cpu to core to keep together
* Address PR comments
* Fix style
* Change repositories addresses to use SSH (#3889)
* Move CPU only unit tests to the cpu test file (#3919)
* Cyphers/uop (#3903)
* Address op_tbl issues
* fix
* fix
* fix
* Cleanup
* cleanup
* cleanup
* More fixes
* Revert ser changes
* Compiles
* opset conversion fixed
* Fix opset conversion tests
* Deal with Reciprocal and FloorMod movement
* Cleanup
* Remove duplicate enums
* Experiment
* experiment
* Types
* Reorg around clang 3.9 bug
* Add default constructor to some ops missing them (#3924)
* [SPEC] HardSigmoid adjustments (#3857)
* Construct HardSigmoid with alpha and beta as inputs
* Switch to the new HardSigmoid constructor entirely
* Broadcast with numpy style in hard sigmoid
* Python bindings adjustment to the new constructor
* Different way of creating constants
* Accept scalars instead of 1D vectors for alpha and beta
* Adjust the python tests to the new HardSigmoid constructor
* Use v1 ops in fused HardSigmoid
* Relax the static shape requirement for alpha and beta
* Fix merge
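With alpha and beta now scalar inputs rather than attributes, the op still computes the usual piecewise-linear sigmoid approximation max(0, min(1, alpha * x + beta)). A sketch of that formula (the default values here are illustrative, not the op's defaults):

```python
def hard_sigmoid(x, alpha=0.2, beta=0.5):
    # Piecewise-linear sigmoid approximation, clamped to [0, 1].
    return max(0.0, min(1.0, alpha * x + beta))
```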
* CropAndResize op (#3893) (#3925)
* Stub for CropAndResize
* Cut and pasteo
* Need a cast
* Put all the op header includes in one header file, ops.hpp (#3929)
* Put all the op header includes in one header file, ops.hpp
* Update ops.hpp
* Fix compilation issues for default constructors (#3928)
* Make Node's type_info mandatory (#3891)
* Make Node's type_info mandatory
* Add ReplaceSlice to ZeroDimTensorElimination pass (#3899)
* Add ReplaceSlice to ZeroDimTensorElimination pass
* style
* Force Gelu decompose on CPU (#3902)
* Copy rt info (#3934)
* Matmul float type test case for UEP (#3877)
* Matmul float type test case for UEP
Signed-off-by: suryasidd <[email protected]>
* Removed microsoft ops domains and ran clang-format
Signed-off-by: suryasidd <[email protected]>
* [SPEC] Add OneHot:v1 (#3884)
* Moved OneHot to v0
* Introduced OneHot:v1
* Added shape calculation for OneHot:v1
* Added element types checking
* Added output shape tests
* Added tests to checking if inputs are scalars
* Updated OneHot:v1 doc
* Implemented OneHot:v1 downgrade pass
* Using OneHot:v1 in onnx_importer
* Implemented OneHot:v0 upgrade
* Fixed OneHot onnx_importer
* Refactored normalize_axis
* Added OneHot:v1 serialized
* Code review remarks introduced
* Added doc to normalize_axis
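OneHot:v1 expands each index into a vector of length `depth` along a new axis, filling in on/off values. A 1-D sketch of the shape calculation and fill (hypothetical helper, not the nGraph op):

```python
def one_hot(indices, depth, on_value=1, off_value=0):
    # Each index i becomes a depth-length row with on_value at position i.
    return [[on_value if j == i else off_value for j in range(depth)]
            for i in indices]
```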
* Enable pipelining in CPU Backend (#3916)
* Enable pipelining in CPU Backend
* Applying clang-formatting to my previous commit
* Changing CPU backend test. executable_can_create_tensor will now return true
* [SPEC] Add support string as AutoBroadcastSpec (#3909)
* Support string casting to AutoBroadcastSpec
* Make string values consistent
* Adding default ctor for Constant (#3938)
* Adding default ctor
* Address PR feedback
* Cumulative Sum (#3873)
* - Op definition for cumulative sum
* WIP reference kernel for cumulative sum
* - unit test case for default cum_sum
- additional ctor for cumsum to accept axis as an integer instead of Node
type
- style fix
* - add serializer support
- fix failing unit test case
- update Op in the interpreter dispatcher
* - CPU builder and DEX support for CumSum
* - implemented mapping tensor elements to corresponding axis
* - unit test for multiple dims
- fix axis in the op definition
- support for reference kernel to compute across all axis
* - added support for exclusive and reverse modes
- more unit test case for all modes
* - codegen support for CumSum
- disable CumSum unit test for PlaidML
* -Add missing header to codegen stream writer
* fixed codegen writer
* change return type of exclusive and reverse to bool
* - support for dynamic shape
- support to handle all tensor types in CPU builder
* - add support for interpreter to handle different axis types
* Style fix
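The exclusive and reverse modes mentioned above compose with the plain running sum: exclusive shifts the output so each element excludes its own input, and reverse accumulates from the far end. A 1-D sketch (not the CPU/DEX kernel):

```python
def cumsum(xs, exclusive=False, reverse=False):
    if reverse:
        # Accumulate from the far end by flipping input and output.
        return list(reversed(cumsum(list(reversed(xs)), exclusive)))
    out, running = [], 0
    for x in xs:
        if exclusive:
            out.append(running)  # current element not yet included
            running += x
        else:
            running += x
            out.append(running)
    return out
```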
* Fix incorrect uses of `description()` (#3946)
* Fix incorrect uses of `description()`
* type-o/namespace
* Move non-primitive attribute adapters to adaptee's files (#3949)
* Move non-primitive attribute adapters to adaptee's files
* Cast in copy
* Update ONNX importer Gemm to produce MatMul op (#3927)
* Update ONNX importer Gemm to produce MatMul op
* Address opset3 bug
* [SPEC][FusedOp] Add Mod operator (#3908)
* Mod operator introduced
* Introduced onnx importer, fixed implementation
* styles applied
* Refactored assert comment for mod
* Add failure mod test to plaidml manifest
* Code review remarks introduced
* Changed ops used in decompose to v1
* Moved Mod to op_v1_tbl
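Mod is commonly defined via truncated division, in contrast to FloorMod above: the remainder takes the dividend's sign, as in C's fmod. A sketch assuming that definition (not the fused-op decomposition itself):

```python
import math

def trunc_mod(a, b):
    # a - b * trunc(a / b): the remainder takes the dividend's sign.
    return a - b * math.trunc(a / b)
```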
* Partially fixed visibility for symbols (Ops, Nodes, Transformations, Matchers) (#3767)
* Partially fixed visibility for symbols:
* Resolved issues with RTTI and AppleClang
* style
* review fixes
* fixed compilation with msvc 2019
* Export extra API which is used in other public classes
* CMAKE: MSVS -> MSVC
* Fixed template export
* Fixed compilation flags
* Fixed default args
* removed self-inclusion
* export
* shape
* export strides
* Export all symbols needed for OpenVINO
* Export
* disable cpu
* AxisSet
* disable warning
* fix
* removed second declaration
* fixed runtime exports
* Reverted some changes
* Fixed LNK2005 error on Windows
* Fixed code style check
* Fixed EnumAttributeAdapterBase
* Remove export of template classes
* Fixed code style for EnumAttributeAdapterBase
* Fixed for protobuf
* Test cleanups (#3942)
* Documentation for Dynamic Shapes and additional graph construction options (#3930)
* Initial dynamic shapes doc
* Basics on dynamic shapes, with example code
* Add glossary defs and dynamic shapes example
* Slightly better organization
* Address make style check failure, maybe
* Test dynamic shapes doc w 0.27.0-rc.0+9aa81d9
* Resolve doc build error w new opset versioning
* Review comments addressed
* Add theme-relevant revised illustrations from collab_ngai
* style
* Style fixes
* Run make style-apply with clang-format-3.9
* [ONNX] Add CumSum to ONNX importer (#3918)
* Register CumSum operator in onnx importer
* Missing whitespace
* Update CMakeLists.txt
* ONNX importer - CumSum op init
* Simple CumSum onnx model
* ONNX CumSum model simple test
* Default axis
* Axis input test
* Inputs variable
* Style apply
* Test 3d exclusive reverse
* Apply style
* Add memory header and std namespace
* Add model_cum_sum tests to plsidml unit_test.manifest
* Add model_cum_sum tests to plaidml unit_test.manifest
* Changed default axis type
* Test model update
* Style apply
* Add test for dynamic axis input
* [MLIR] Fused Ops dialect declaration (#3860)
* WIP
* WIP
* WIP
* All ops
* Fix layernorm backprop op name
* WIP: Adding tests
* WIP: Adding LIT parsing/printing tests
* WIP
* Added LSTM cells. Fixed some ops
* All builder tests
* PR fixes
* Fix spacing. Add missing setter to SpaceToDepth
* Update spaceToDepth lit test
* PR fixes
* Build fix
* Another fix
* Fixed optional args
* [MLIR] Enable ViewOp in Affine Lowerer (#3911)
* Map each ng tensor to a linear buffer and a view
* fix comment
* Create views only when a value is assigned a buffer id
* style
* Fix lit test
* ConstantFolding for v1::StridedSlice operation (#3955)
* constant folding for strided slice
* code style
* Refactoring
* fix for warning: deleting an unused variable
* Opset1 Definition (#3813)
* Opset1
* Added opset1.hpp
* Added more ops to opset0 and opset1
* Move opset1.hpp up and remove opset0.hpp
* Add versioning to more ops
* Revert to older pass names to keep compatibility for external components
* Fix compilation errors with codegen
* merge
* Added compile-time check for opset
* Added opset1 tbl
* Add op_version table of all ops
* Create factories from op_version_tbl
* reorg unsupported ops in int backend
* Added temporary alias for GreaterEqual
* Add missing case to interpreter enumeration
* Finish opset serializer cleanup (#3939)
* Opset-based opset conversion (#3937)
* Opset-based opset conversion
* Add other opset conversion
* Use ops.hpp
* Update opset0_tbl.hpp
* Switch interpreter to opset0 + a few extras (#3941)
* Switch interpreter, gcpu to opset0
* Remove unnused files
* Give interpreter its own opset
* style
* Fix namespace
* Fix rounding type conversion
* Work-around for bad clang3.9 bug
* Work-around
* [SPEC] Add negative axes support for ReverseSequence (#3926)
* Added negative axes support for ReverseSequence
* code review remarks introduced
* Disable reverse sequence for PlaidMl tests
* Fixed styles
* Fixed axes assignment
* Fixed normalized axes assignment
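Negative axes are handled by the usual normalization: an axis in [-rank, 0) maps to axis + rank. A sketch of that validation-util-style helper (name is illustrative):

```python
def normalize_axis(axis, rank):
    # Map a negative axis into [0, rank): e.g. -1 becomes rank - 1.
    if axis < 0:
        axis += rank
    if not 0 <= axis < rank:
        raise ValueError("axis out of range")
    return axis
```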
* [SPEC] Adjust ConvolutionBackpropData op. (#3935)
* [SPEC] Adjust ConvolutionBackpropData op.
```
inputs:
1. filters-------+
2. output_delta | -> 1. data
+---> 2. filters
3. data_batch_shape -> 3. output_shape(+optional)
attributes:
1. strides -> 1. strides
2. dilations-----+
3. pads_begin | -> 2. pads_begin
4. pads_end | -> 3. pads_end
+---> 4. dilations
-> 5. +auto_pad(optional)[PadType::EXPLICIT]
-> 6. +output_padding(optional)[zeros]
```
* Review fix I
* [SPEC] ConvertLike op (#3944)
* [Spec] Add 3-input constructor to DetectionOutput (#3966)
* Add 3-input constructor to DetectionOutput
* Review comments
* v1::Reshape zero_flag renamed. Default value unset (#3945)
* Add groupconvolution bprop (#3940)
* add placeholder for conv bprop
* add constructor, api, serializer and can compile
* implement decompose_op
* fix arg num
* fix and update
* address comment, clean up and add ut placeholder
* update ut
* address comment on groups
* Added explicit dependencies between buildable target and external project (#3962)
* Relax check on LRN for rank requirement to be >=3 (#3952)
* relax check for LRN for requirement rank should be >=3
* rename unit test names
* - Disable lrn unit test with axes for CPU backend
* remove outdated unit test on rank requirement from type_prop
* - disable newly added lrn unit test in plaidMl
* [SPEC] ReduceLogicalAnd & ReduceLogicalOr (#3874)
* ReduceLogicalAnd op implementation
* ReduceLogicalOr op implementation
* Add basic constant folding support
* Fix typo
* Revert "Add basic constant folding support"
This reverts commit 5d14a18.
* Introduce and use a new base class for logical reductions
* Constant folding for v1::ReduceLogicalAnd
* Constant folding for v1::ReduceLogicalOr
* Obsolete cout removal
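ReduceLogicalAnd and ReduceLogicalOr reduce boolean elements along chosen axes; collapsed to a full reduction, the semantics are simply all/any. A sketch of the full-reduction case (constant folding then just means evaluating this at compile time when the input is a constant):

```python
def reduce_logical_and(xs):
    # True only if every element is truthy (full-reduction case).
    return all(bool(x) for x in xs)

def reduce_logical_or(xs):
    # True if any element is truthy.
    return any(bool(x) for x in xs)
```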
* [SPEC] Adjust Split (#3943)
* Changed axis to Node
* Added using normalize from validation util
* refactored split
* Added typrop tests to Split
* Added set_input_is_relevant_to_shape for Split
* clang style applied
* Fixed var name
* Code refactor
* merge from master. part.2
* Constructor to provide CI compatibility
* CI compatibility
* CI compatibility
* Updated get_outputs
* CI compatibility
* Fixed get_outputs function
* [SPEC] Add DeformablePSROIPooling v1 (#3954)
* Initial commit
* Moved DeformablePSROIPooling to v1
* Moved DeformablePSROIPooling to v1. Part.2
* Added missing fields
* Added shape inference
* Added type prop UT
* Added serialization
* Doc + styles applied
* Revert incorrect changes
* Revert incorrect changes. Part.2
* Moved to NGRAPH_API
* integration with master
* Code review remarks introduced
* DeformablePSROIPooling updated to new spec
* Add v1 version of Subtract with Numpy broadcasting as default (#3957)
* V1 version of Subtract with default Numpy autobcast
* Update op_v1_tbl.hpp with v1 version of Subtract
* Use v1 of Subtract in ONNX importer
* Add v1 namespace
* Update namespace
* Missing punctuation
* Add Subtract to opset0 downgrade
* Add Subtract to opset1 upgrade
* Add Subtract header to cpu emitter
* Update serializer
* Add Subtract to opset_pass tests
* Use downgrade method
* Add get_version method
* Style apply
* Add v1 Subtract to check opset1
* Add NGRAPH_API before class name
* Removed get_version method
* Separate cases for Subtract and Subtract_v1 in serializer
* Update op_version_tbl with v1 Subtract
* NUMPY autobcast for no args constructor
* Add Subtract_v1 to serializer
* [SPEC] Add constant folding for LogicalNot:v1 (#3961)
* Added constant folding for LogicalNot
* Fixed alphabetical order
* Update the tolerance on auto_broadcast_test (#3959)
* Copy RT info for parameters (#3969)
* [SPEC] Add GatherTree:v1 (#3967)
* GatherTree introduced
* Added GatherTree type_prop tests

1 parent f854fd6, commit 8235c2c
File tree
583 files changed: +20359 / -8536 lines
- .ci/onnx/jenkins
- cmake
- Modules
- doc
- examples/dynamic_tensor
- sphinx
- ngraph_theme
- static/css
- source
- backends
- core
- constructing-graphs
- passes
- dynamic
- frameworks
- other
- graphics
- ops
- project
- provenance
- training
- python
- ngraph
- pyngraph/ops/fused
- test/ngraph
- src
- contrib/mlir
- backend/pass
- core
- ngraph_dialect
- pass
- runtime
- cpu
- ngraph
- builder
- descriptor
- layout
- distributed
- frontend/onnx_import
- op
- utils
- opsets
- op
- experimental
- layers
- fused
- util
- pass
- pattern
- op
- runtime
- cpu
- builder
- kernel
- op
- pass
- generic_cpu
- kernel
- interpreter
- plaidml
- reference
- type
- tools/nbench
- test
- backend
- mlir
- affine_conversion
- ngraph_dialect
- models/onnx
- onnx
- opset_pass
- type_prop
- util