Commit 915e6fd

Authored by Copybara Bot and shelkesagar29
Integrate internal changes
This PR moves the following internal changes to OSS.

commit 2986cac97888b5a9c1cd9064e4728cb38ca9dc45
Author: Sagar Shelke <[email protected]>

[executor] Add complex type support to `ScalarValue`

Previously, `ScalarValue`, which represents a scalar runtime value, did not support complex types. This MR adds support by making the storage a union of real and complex data instead of real data only. MLIR tests are added via constant subgraph execution.

commit cf83a0d318b8035695d0b9fd24d578733632e253
Author: Christopher Bate <[email protected]>

[compiler] Enable more `stablehlo.dot_general` to TensorRT using `tensorrt.einsum`

Previously, we relied on canonicalization of `stablehlo.dot_general` to put all such contraction operations into a form that could be converted to `tensorrt.matrix_multiply`. Based on recent experiments, this can produce very inefficient TensorRT programs due to the number of reshapes and transpositions that must be inserted to coerce a general `stablehlo.dot_general` into batched matrix multiplications. This change enables conversion of `stablehlo.dot_general` to `tensorrt.einsum`; the pass and patterns now contain configurable parameters to control whether `tensorrt.einsum` is used as the primary method or only as a fallback when conversion to `tensorrt.matrix_multiply` is not possible. A follow-on change will revamp the Stablehlo preprocessing performed on `stablehlo.dot_general` to avoid creating inefficient patterns and enable wider use of this pattern.

commit 528651ed1cd36c36376180c1c2232526ce972fef
Author: Christopher Bate <[email protected]>

[compiler] Fix stablehlo-to-scf scalarization heuristics

Fixes an issue where float tensors in the 'before' region of converted while loops were scalarized. The transform should only scalarize operands that are likely to be for-style induction variables.
commit 1d52e0a9e30dc104178c4761c1a24153abc7ea90
Author: Christopher Bate <[email protected]>

[compiler] NFC: Drop dead code from StablehloToExecutableTask

commit f1c8d8c7cd860aedfe339d76ef7fb953baf9bd55
Author: Chris Bate <[email protected]>

[compiler] Add `plan-promote-host-tensors-to-host-pinned` pass

Adds a simple pass to promote "host" tensors to "host-pinned" tensors in common cases where we know a tensor will be transferred between host and device spaces. This pass runs after `plan-optimize-memory-spaces`, since that pass is sensitive to mismatched host spaces in patterns related to moving transfers out of loops.

commit c27d56ea7a9661395e17fa895c610a79a92fa0c2
Author: Sagar Shelke <[email protected]>

[executor] Handle elided dense resource elements attr during translation

Translation to an executable (a flatbuffer) uses MLIR attribute serialization to serialize `ElementsAttr`. However, this does not work when the attribute is an elided dense resource, and it results in a segfault. This MR handles the situation by replacing the elided resource with a `DenseElementsAttr` of all ones (`true` in the boolean case). IR with elided resources is usually seen only when testing passes and is not useful for end-to-end functional execution; testing of the `ExecuteConstantFoldableSubgraphs` pass is one such case, so MLIR test cases for that pass are added.

commit 920a84e648833764563d3dc1de544a8f1b9f027e
Author: Chris Bate <[email protected]>

[tensorrt] Fix TRT layer name generation function

The TRT layer naming had faulty logic that could cause a layer name to grow very large in the process of creating a unique name. Fix the issue and use a static counter to reduce time spent in the loop.

commit ff0c5fa4bf5321ad0ce18579598c49f4b552fb37
Author: Christopher Bate <[email protected]>

Further fixes to LIT configs

Previously, we were setting `lit_config.parallelism_group` instead of `config.parallelism_group`. Apparently the former does nothing; only `config.parallelism_group` has any effect.
commit d65c220b712c262992dbdf5a87fa3220a06bfb21
Author: Chris Bate <[email protected]>

Update LIT test parallelism configs

In more recent versions of TensorRT (10.11+, at least), the builder takes a much larger amount of host memory. This can cause OOM when running the LIT test suites under their existing configurations. This change updates all LIT configs:

- Use `%pick-one-gpu` in the LIT command line so that we stall if there are not enough GPU or host resources available. Add a hard limit requiring at least 5 GB of available host memory.
- Reduce the estimated parallelism by increasing host memory requirements and counting only 50% of host memory for the purposes of the parallelism calculation.
- Force all tests to use a common parallelism group unless otherwise specified in the test config.

commit 1f996f607640d81bf7137a4ed874b20c2a16cca2
Author: Christopher Bate <[email protected]>

[compiler] Fix failure case in stablehlo-to-scf

Fixes a failure case caused by one of the recently introduced rewrites in `stablehlo-to-scf`.

commit 2779b632465fc3e840f5ce987f6233e824fe2ed3
Author: Christopher Bate <[email protected]>

[compiler] Further improvements to plan bufferization pipeline

- Split `plan-assign-memory-spaces` into three passes:
  - `plan-assign-memory-spaces`
  - `plan-optimize-memory-spaces`
  - `plan-materialize-explicit-transfers`

  The last one is the only new code: `plan-materialize-explicit-transfers` converts `tensor.cast` ops that change the memory space encoding into explicit `bufferization.alloc_tensor` + `bufferization.materialize_in_destination` operations.
- Improve handling of `bufferization.alloc_tensor` and optimization of `scf.for` iteration args in `plan-assign-memory-spaces`.
- Improve handling of `tensor.reshape` in `plan-assign-memory-spaces`.
- Fix handling of `tensor.reshape` when rewriting functions to be in DPS style in `plan-alloc-tensors`.

This change also updates the LLVM dependencies in order to cherry-pick a fix to the `tensor.reshape` bufferization interface that I merged upstream (llvm/llvm-project#128590). In addition, fix APInt assertions in `plan-execute-constant-foldable-subgraphs`.

commit 312170d8cbcd4c1fcf9cefdd628583e3dbdcc4f5
Author: Chris Bate <[email protected]>

[compiler] Enable While-to-For conversion in Stablehlo-to-Scf pass

Stablehlo has only one loop construct, `stablehlo.while`. A `while` loop can represent "for"-style loops as well, but if we only have `scf.while` loops after conversion to SCF, we miss out on many potential optimizations that are rooted on `scf.for`. Experiments show that complicated JAX programs such as the Physical Intelligence Pi0 model can benefit from converting `scf.while` to `scf.for` where possible. This improves opportunities for constant folding and makes it much easier for analyses to gauge the benefit of transforms like unrolling.

This change adds patterns to the Stablehlo-to-Scf pass to enable While-to-For conversion after the Stablehlo-to-Scf conversion. The transformation is combined with the Stablehlo-to-Scf conversion because the While-to-For patterns require first scalarizing block arguments of the While operation. The heuristics for deciding which block arguments should be scalarized are implemented as control callbacks for the scalarization patterns. These callbacks need Stablehlo-specific logic, so it makes sense to test the combined conversion as a single pass. From the pass user's perspective, it gives the appearance of going directly from `stablehlo.while` to `scf.for`. The test cases are updated to cover the new patterns.

commit 425d19e749104354b5ea9e76e7509d029f9eac59
Author: Chris Bate <[email protected]>

[compiler] Fix assign-memory-spaces pass to respect function-level constraints

Fixes an issue where the `plan.memory_space` attribute on a function was not being respected when converting function signatures.
MR: initialdl/mlir-tensorrt!2146

commit b612d5a22e7e3c4f08bf80fd504df5193b370bd3
Author: Chris Bate <[email protected]>

[compiler] Update scf.while detensorization to increase flexibility

In order to incorporate the upstream "uplift scf.while to scf.for" transformation as part of the `stablehlo-to-scf` conversion, we need to detensorize the operands of `scf.while` that are likely to correspond to the loop induction variable. This change refactors our existing `scf.while` detensorization transformation to give more flexibility and control. The TensorKindAnalysis is no longer required in order to use the pattern(s). Detensorization of the `after` and `before` arguments of `scf.while` is now controlled separately.

commit 3e21bf465b90e1eaaad872da40c305b70253cce0
Author: Chris Bate <[email protected]>

[compiler] Improve handling of memory space constraints in the Plan dialect

This commit improves the handling of memory space constraints in the Plan dialect. Constraints are now specified using a common attribute, `plan.memory_space`, that can be applied to functions or to individual arguments/results. In addition, patterns in `plan-alloc-tensors` and `plan-assign-memory-spaces` are updated to avoid introducing unnecessary transfers between memory spaces.

commit 36a3b4a77242685e473817cb692a4010f690c0b3
Author: Chris Bate <[email protected]>

[compiler] Add plan-buffer-results-to-out-params pass

This change adds a new Plan dialect pass, `plan-buffer-results-to-out-params`. The pass is based on the upstream Bufferization pass `buffer-results-to-out-params`, but it handles a wider set of cases (such as promoting dynamic allocations) and uses alias analysis utilities to guard against failure cases that the upstream pass currently cannot handle. These improvements should eventually be upstreamed back to the Bufferization dialect.
commit 9e7127ca1e61be72b032a54d270a3da0d75639b2
Author: Chris Bate <[email protected]>

[compiler] Update func conversion in host-to-emitc

In the EmitC conversion/translation process, either `func.func` or `emitc.func` can be used to define functions. Previously, we converted all `func.func` ops to `emitc.func`. However, `emitc.func` has no path for supporting multiple return values. Therefore, prefer applying type conversions to `func.func` instead of converting the entire op to `emitc.func`. Tests are added to verify that we can support multiple return values.

commit 934db1f78ef3e7bedb67f1252b41ded7419010f8
Author: Chris Bate <[email protected]>

[compiler] Fix two host-to-emitc bugs

This change fixes two bugs exposed by new `host-to-emitc` conversion testing:

- The `!emitc.size_t` type does not have DataLayout information specified upstream. To ensure the type can be queried using DataLayout, we add a DataLayoutTypeInterface external model to the type; all queries are simply mapped to queries on the `index` type.
- The upstream `func.call` conversion has a bug where it does not correctly convert the result types of the call operation, which can lead to a type mismatch for any type that does not have an identity conversion.

Additional tests are added to `host-to-emitc`. Eventually the fixes for both issues should be moved upstream.

commit 9d27f08ee4429f4ffbb72023babc193c7724a700
Author: Chris Bate <[email protected]>

[common] Add Linalg-to-loops (on tensors) implementation and conversion pass

Adds a ToLoopsOpInterface implementation for Linalg operations. In addition, a conversion pass is added that converts ToLoopsOpInterface operations to loops.

commit 3a419f120808eafc31f45516977ed6169b809ab9
Author: Chris Bate <[email protected]>

NFC: Move ToLoopsOpInterface to 'mlir-tensorrt-common'

Moves the ToLoopsOpInterface to the 'mlir-tensorrt-common' project.
This is in preparation for enabling the ToLoopsOpInterface on LinalgOp (lowering while still using tensor types) to replace the `convert-stablehlo-arith-to-scalar` pipeline.

MR: initialdl/mlir-tensorrt!2137

commit 442bea12b763dd36fce864695f63896912438d87
Author: Christopher Bate <[email protected]>

NFC: Fix formatting across several files

commit b2a65bc3e806aaa95d932af512cfa4750a9cbe4e
Author: Chris Bate <[email protected]>

[executor] Introduce RuntimeSession "features" to control loading of runtime modules

Previously, the RuntimeSession would always load all available runtime modules. This causes some inefficiencies. For example, certain integration tests for the Executor runtime do not use CUDA at all, but because CUDA was still initialized by default, a GPU had to be present just to run the test. Furthermore, some experimental modules (e.g. the Lua cublas module) are not ready for "production" use and are only invoked inside special integration tests.

This change introduces a notion of "features" to the RuntimeSession and RuntimeSessionOptions. A feature is just a string that identifies a particular runtime component; the precise semantics of a feature depend on the actual runtime implementation. For example, for the LuaRuntimeSession, the feature names correspond to the available Lua "modules" (a module is just a group of C++ Lua extension functions), e.g. "core", "cuda", "tensorrt", etc. The RuntimeSessionOptions gains methods for enabling/disabling features. Certain features cause others to be added to the set automatically; e.g. "tensorrt" and "nccl" both require "cuda" to be added. The API is piped through all the way to the Python bindings to allow control of loaded modules at all levels. To preserve existing behavior, RuntimeSessions created from Python will load all available modules by default, but the `executor-runner|mlir-tensorrt-runner` tools now require features to be explicitly specified.
commit b90f8f345b2941e958f3a1cc5bcac21daebe783b
Author: Christopher Bate <[email protected]>

NFC: Fix include guard for 'mlir-executor/Support/Status.h'

commit cdbe1f560483047291a30115a043a60bdce34d99
Author: Sagar Shelke <[email protected]>

[compiler/lib] Add stablehlo composite-to-call pass to pre-processing pipeline

This MR adds `StablehloLegalizeCompositeToCallPass` to the pre-processing pipeline. An MLIR test is added.

commit 6ea3ab77aa2909cee11d08aa24543f247e8a24bf
Author: Chris Bate <[email protected]>

[compiler] Add "default memory space" to ClusterKindAttrInterface

Adds a new method to the ClusterKindAttrInterface so that backends can control the default tensor encoding (#plan.memory_space<..>) assigned by the `plan-assign-memory-spaces` pass at function scope. In addition, an attribute can now override the default space on function arguments/results. This override mechanism was previously lacking and will help resolve a long-standing issue where users could not reliably control the memory space of arguments/results.

commit 0ea59238f5c280ab3ffbc340bb9aee7ed7bfbebb
Author: Christopher Bate <[email protected]>

[compiler] Fix some issues related to pipeline extension mechanism

The StablehloToExecutableTensorRTExtension had both a 'disable' member and an inherited 'disabled' member variable. Delete the inherited one, since it should not have been introduced and was not bound to any option. Further, remove the unused 'extensions' vector from CompilationTaskOptionsBase.

commit 372476d77fcaa399460965ab7bfc052f0e44c99f
Author: Christopher Bate <[email protected]>

[executor] Fix ptrtoint and inttoptr op translation to Lua

Previously, we could generate conflicting function types (due to pointer address space) when converting `executor.ptrtoint` and `executor.inttoptr` ops to opaque calls. Instead, defer the conversion to a function call until the actual Lua translation point.
At that point, we can generate a function name without having to consider the pointer address space.

commit 75d18534fa67b452dd2253d6981bda6954bf1056
Author: Chris Bate <[email protected]>

Introduce 'MLIRTensorRTCommon' sub-project

Certain targets need to be used across multiple sub-projects. For example, the 'TensorRTDynamicLoader' target is used in all sub-projects. In addition, the sub-projects need to be independently buildable. This change introduces another sub-project under the 'common' directory where shared code can be placed. This allows us to use `find_package` to declare the dependency, and downstream consumers can meet the requirement using any number of techniques to fulfill the `find_package` call.

commit d7d8104087cf272bdd08f6330f27734754f0d71d
Author: Chris Bate <[email protected]>

[compiler] Harden `stablehlo.constant` to `arith.constant` conversion

A utility pass that runs in the stablehlo-to-executable pipeline converts `stablehlo.constant` to `arith.constant`. This pass can temporarily create invalid IR because `arith.constant` does not support signful integer types. If the "verify-each" option is off, the issue goes uncaught since it happens to be self-correcting, but it can still cause verification failures while debugging. This change fixes the issue by adding a `builtin.unrealized_conversion_cast` operation to bridge the type change between signless and signful integer types.

commit a500de82a7bd70d6bfe32234719b4daa7cf32a8a
Author: Chris Bate <[email protected]>

Integrate LLVM at f137c3d592e96330e450a8fd63ef7e8877fc1908

commit cd56aa6a511e2091fcd86106f20d27ff3673db75
Author: Christopher Bate <[email protected]>

Fix build with BUILD_SHARED_LIBS=ON

The new InferTensorValueRangeInterface was used without correctly specifying the library dependency for the PlanIR and StablehloExtIR libraries.
commit cf1aff0ad0997947ab87485cfeec4595cb0285d7
Author: Sagar Shelke <[email protected]>

[compiler] Maintain output order in TensorRT engine

For TensorRT engine conversion, the first step in lowering a cluster containing TensorRT ops is creating an inline group op. The operands of the inline group op's yield op (i.e. its terminator) are the values from the cluster that are used outside the cluster. These values are collected by getting the uses of each op (with `op->getUses()`) and checking whether they are outside the cluster. However, this use order is not deterministic, and sometimes it is desirable to get yield results in a certain order. This MR makes the following changes:

1. Add a function callback option named `ReorderRegionOpYieldValues` to the `mlir::createRegionOpFromCluster` method. This callback has the signature `std::function<void(SetVector<Value> &yieldValues, SmallVectorImpl<Type> &yieldTypes)>`, taking the cluster values used outside the cluster (in a SetVector) and their types. By default it is set to nullptr.
2. The TensorRTToExecutable task is used in cases where a single `func.func` represents a single TensorRT engine. In this case, the `ReorderRegionOpYieldValues` callback is implemented to make sure the inline group op's yield value order matches the `func.func` return value order.

A valid MLIR test is added.

GitOrigin-RevId: 630a69d8e14506db43cfefe4be2c790f9352da4f
1 parent 12995a1 commit 915e6fd

Files changed:
163 files changed, +5829 -1637 lines


mlir-tensorrt/CMakeLists.txt

Lines changed: 2 additions & 0 deletions
@@ -190,6 +190,8 @@ if(MLIR_TRT_ENABLE_PYTHON)
   mlir_tensorrt_find_dlpack()
 endif()
 
+find_package(MLIRTensorRTCommon REQUIRED)
+
 #--------------------------------------------------
 # Diagnostics
 #--------------------------------------------------

mlir-tensorrt/DependencyProvider.cmake

Lines changed: 52 additions & 3 deletions
@@ -17,7 +17,7 @@ if("${MLIR_TRT_USE_LLVM}" STREQUAL "prebuilt")
   set(MTRT_BUILD_LLVM_FROM_SOURCE OFF)
 endif()
 
-set(MLIR_TRT_LLVM_COMMIT "729416e586fba71b4f63d71b1b5c765aefbf200b")
+set(MLIR_TRT_LLVM_COMMIT "f137c3d592e96330e450a8fd63ef7e8877fc1908")
 
 set(mlir_patch_dir "${CMAKE_CURRENT_LIST_DIR}/build_tools/patches/mlir")
 
@@ -123,6 +123,54 @@ nv_register_package(
   ]]
 )
 
+#-------------------------------------------------------------------------------------
+# MLIRTensorRTCommon
+#
+# MLIRTensorRTCommon is a sub-project that contains components used across the
+# other sub-projects like MLIRExecutor and MLIRTensorRTDialect.
+#-------------------------------------------------------------------------------------
+
+nv_register_package(
+  NAME MLIRTensorRTCommon
+  SOURCE_DIR "${CMAKE_SOURCE_DIR}/common"
+)
+
+# -----------------------------------------------------------------------------
+# NVTX
+# -----------------------------------------------------------------------------
+
+nv_register_package(
+  NAME NVTX
+  GIT_REPOSITORY https://github.com/NVIDIA/NVTX.git
+  GIT_TAG v3.1.0
+  GIT_SHALLOW TRUE
+  SOURCE_SUBDIR c
+  EXCLUDE_FROM_ALL TRUE
+  DOWNLOAD_ONLY TRUE
+  POST_ADD_HOOK [[
+  if(NOT TARGET nvtx3-cpp)
+    add_library(nvtx3-cpp INTERFACE IMPORTED)
+    target_include_directories(nvtx3-cpp INTERFACE
+      "$<BUILD_INTERFACE:${NVTX_SOURCE_DIR}/c/include>")
+    # Ignore some warnings due to NVTX3 code style.
+    target_compile_options(nvtx3-cpp INTERFACE
+      -Wno-missing-braces)
+  endif()
+  ]]
+)
+
+#-------------------------------------------------------------------------------------
+# MLIRTensorRTCommon
+#
+# MLIRTensorRTCommon is a sub-project that contains components used across the
+# other sub-projects like MLIRExecutor and MLIRTensorRTDialect.
+#-------------------------------------------------------------------------------------
+
+nv_register_package(
+  NAME MLIRTensorRTCommon
+  SOURCE_DIR "${CMAKE_SOURCE_DIR}/common"
+)
+
 #-------------------------------------------------------------------------------------
 # MLIR-Executor
 #
@@ -164,7 +212,7 @@ nv_register_package(
   NAME torch_mlir
   GIT_REPOSITORY https://github.com/llvm/torch-mlir.git
   GIT_TAG 0bb263e99415d43255350d29263097b4980303bf
-  PATCHES
+  PATCHES 
   "build_tools/patches/torch_mlir/0001-cmake-Allow-finding-Stablehlo-via-find_package.patch"
   "build_tools/patches/torch_mlir/0002-Make-compatible-with-more-recent-Stablehlo-version.patch"
   "build_tools/patches/torch_mlir/0003-Fix-some-configuration-paths-in-LIT-cfg.patch"
@@ -202,7 +250,7 @@ macro(mtrt_provide_dependency method dep_name)
   endif()
 
   if("${dep_name}" MATCHES
-      "^(MLIRExecutor|MLIRTensorRTDialect|Stablehlo|torch_mlir)$")
+      "^(MLIRExecutor|MLIRTensorRTDialect|Stablehlo|torch_mlir|NVTX|MLIRTensorRTCommon)$")
     nv_add_package("${dep_name}")
     set("${dep_name}_FOUND" TRUE)
   endif()
@@ -230,6 +278,7 @@ macro(mtrt_provide_dependency method dep_name)
     find_package(LLVM ${ARGN} BYPASS_PROVIDER)
   endif()
 endif()
+
 endmacro()
 
 cmake_language(

mlir-tensorrt/build_tools/cmake/Dependencies.cmake

Lines changed: 11 additions & 35 deletions
@@ -6,10 +6,16 @@ include(${CMAKE_CURRENT_LIST_DIR}/TensorRTDownloadURL.cmake)
 # expected version.
 #-------------------------------------------------------------------------------------
 macro(get_tensorrt_version nvinfer_version_file out_var)
-  file(STRINGS "${nvinfer_version_file}" VERSION_STRINGS REGEX "#define NV_TENSORRT_.*")
+  file(STRINGS "${nvinfer_version_file}" VERSION_STRINGS REGEX "#define (TRT_.+|NV_TENSORRT_.+) [0-9]+")
   foreach(TYPE MAJOR MINOR PATCH BUILD)
-    string(REGEX MATCH "NV_TENSORRT_${TYPE} [0-9]+" TRT_TYPE_STRING ${VERSION_STRINGS})
-    string(REGEX MATCH "[0-9]+" TRT_${TYPE} ${TRT_TYPE_STRING})
+    string(REGEX MATCH "(TRT_${TYPE}_ENTERPRISE|NV_TENSORRT_${TYPE}) [0-9]+" TRT_TYPE_STRING ${VERSION_STRINGS})
+    if("${TRT_TYPE_STRING}" STREQUAL "")
+      message(FATAL_ERROR "Failed to extract TensorRT ${TYPE} version from ${nvinfer_version_file}")
+    endif()
+    string(REGEX MATCH "[0-9]+" "TRT_${TYPE}" "${TRT_TYPE_STRING}")
+    if("TRT_${TYPE}" STREQUAL "")
+      message(FATAL_ERROR "Failed to extract TensorRT ${TYPE} version from ${nvinfer_version_file}")
+    endif()
   endforeach(TYPE)
   set("${out_var}" "${TRT_MAJOR}.${TRT_MINOR}.${TRT_PATCH}.${TRT_BUILD}")
 endmacro()
@@ -50,7 +56,7 @@ macro(configure_tensorrt_python_plugin_header)
   if(ARG_INSTALL_DIR)
     find_file(
       trt_python_plugin_header
-      NAMES plugin.h
+      NAMES NvInferPythonPlugin.h plugin.h
       HINTS ${ARG_INSTALL_DIR} ${ARG_INSTALL_DIR}/python/include/impl
       PATHS ${ARG_INSTALL_DIR} ${ARG_INSTALL_DIR}/python/include/impl
       REQUIRED
@@ -60,7 +66,7 @@ macro(configure_tensorrt_python_plugin_header)
   else()
     find_path(
       trt_python_plugin_header
-      NAMES plugin.h
+      NAMES NvInferPythonPlugin.h plugin.h
       REQUIRED
       NO_CACHE
     )
@@ -173,36 +179,6 @@ function(find_tensorrt)
   )
 endfunction()
 
-macro(configure_tensorrt_python_plugin_header)
-  if(ARG_INSTALL_DIR)
-    find_file(
-      trt_python_plugin_header
-      NAMES plugin.h
-      HINTS ${ARG_INSTALL_DIR} ${ARG_INSTALL_DIR}/python/include/impl
-      PATHS ${ARG_INSTALL_DIR} ${ARG_INSTALL_DIR}/python/include/impl
-      REQUIRED
-      NO_CMAKE_PATH NO_DEFAULT_PATH
-      NO_CACHE
-    )
-  else()
-    find_path(
-      trt_python_plugin_header
-      NAMES plugin.h
-      REQUIRED
-      NO_CACHE
-    )
-  endif()
-  file(MAKE_DIRECTORY "${CMAKE_BINARY_DIR}/include/nvinfer")
-  file(COPY_FILE "${trt_python_plugin_header}"
-    "${CMAKE_BINARY_DIR}/include/nvinfer/trt_plugin_python.h"
-    ONLY_IF_DIFFERENT
-    RESULT copy_result
-  )
-  if(copy_result)
-    message(FATAL_ERROR "failed to copy TensorRT QDP plugin header: ${copy_result}")
-  endif()
-endmacro()
-
 #-------------------------------------------------------------------------------------
 # Download and add DLPack to the build (header only)
 #-------------------------------------------------------------------------------------
Lines changed: 38 additions & 0 deletions
@@ -0,0 +1,38 @@
+cmake_minimum_required(VERSION 3.25)
+project(mlir-tensorrt-common LANGUAGES CXX)
+
+# Depdendencies
+find_package(LLVM REQUIRED CONFIG)
+find_package(MLIR REQUIRED CONFIG)
+include(HandleLLVMOptions)
+include_directories(${LLVM_INCLUDE_DIRS})
+include_directories(${MLIR_INCLUDE_DIRS})
+
+if(MLIR_TRT_TARGET_TENSORRT)
+  find_package(TensorRT REQUIRED)
+endif()
+
+find_package(CUDAToolkit REQUIRED)
+
+set(MLIR_TENSORRT_COMMON_SOURCE_DIR ${CMAKE_CURRENT_SOURCE_DIR})
+set(MLIR_TENSORRT_COMMON_BINARY_DIR ${CMAKE_CURRENT_BINARY_DIR})
+
+include_directories(include ${CMAKE_CURRENT_BINARY_DIR}/include)
+
+add_library(MLIRTensorRTCommonIncludes INTERFACE)
+target_include_directories(MLIRTensorRTCommonIncludes INTERFACE
+  "$<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/include>"
+  "$<BUILD_INTERFACE:${CMAKE_CURRENT_BINARY_DIR}/include>"
+)
+
+add_subdirectory(include/mlir-tensorrt-common)
+add_subdirectory(lib)
+
+install(TARGETS MLIRTensorRTCommonIncludes
+  EXPORT MLIRTensorRTCommonTargets
+  RUNTIME DESTINATION ${CMAKE_INSTALL_BINDIR}
+  LIBRARY DESTINATION ${CMAKE_INSTALL_LIBDIR}
+  ARCHIVE DESTINATION ${CMAKE_INSTALL_LIBDIR}
+  INCLUDES DESTINATION ${CMAKE_INSTALL_INCLUDEDIR}
+)

mlir-tensorrt/common/include/mlir-tensorrt-common/CMakeLists.txt

Whitespace-only changes.
@@ -1,6 +1,6 @@
-//===- LuaRegistration.h ----------------------------------------*- C++ -*-===//
+//===- Passes.h -------------------------------------------------*- C++ -*-===//
 //
-// SPDX-FileCopyrightText: Copyright 2024 NVIDIA CORPORATION & AFFILIATES.
+// SPDX-FileCopyrightText: Copyright 2025 NVIDIA CORPORATION & AFFILIATES.
 // All rights reserved.
 // SPDX-License-Identifier: Apache-2.0
 //
@@ -18,22 +18,22 @@
 //
 //===----------------------------------------------------------------------===//
 ///
-/// Registration for the Lua runtime methods.
+/// This file contains the declarations for the common conversion passes.
 ///
 //===----------------------------------------------------------------------===//
+#ifndef MLIR_TENSORRT_COMMON_CONVERSION_PASSES
+#define MLIR_TENSORRT_COMMON_CONVERSION_PASSES
 
-#include "mlir-executor/Runtime/API/API.h"
+#include "mlir/Pass/Pass.h"
+#include <memory>
 
-struct lua_State;
-
-namespace mlirtrt::runtime {
-/// Register various external functions with the given Lua state using a
-/// directly specified device number, total device count, and a pre-determined
-/// NCCL uuid.
-void registerLuaRuntimeMethods(lua_State *state,
-                               const RuntimeSessionOptions &options,
-                               PinnedMemoryAllocator *pinnedMemoryAllocator,
-                               AllocTracker *allocTracker,
-                               ResourceTracker *resourceTracker);
+//===----------------------------------------------------------------------===//
+// Add Tablegen'd pass declarations and registration methods.
+//===----------------------------------------------------------------------===//
+namespace mlir {
+#define GEN_PASS_DECL
+#define GEN_PASS_REGISTRATION
+#include "mlir-tensorrt-common/Conversion/Passes.h.inc"
+} // namespace mlir
 
-} // namespace mlirtrt::runtime
+#endif // MLIR_TENSORRT_COMMON_CONVERSION_PASSES
Lines changed: 18 additions & 0 deletions
@@ -0,0 +1,18 @@
+#ifndef MLIR_TENSORRT_COMMON_CONVERSION_PASSES
+#define MLIR_TENSORRT_COMMON_CONVERSION_PASSES
+
+include "mlir/Pass/PassBase.td"
+
+def ConvertToLoops : Pass<"convert-to-loops"> {
+  let summary = "Convert a LoopLikeOpInterface to loops";
+  let description = [{
+    This pass converts a LoopLikeOpInterface to loops.
+  }];
+
+  let dependentDialects = [
+    "::mlir::tensor::TensorDialect",
+    "::mlir::scf::SCFDialect",
+  ];
+}
+
+#endif // MLIR_TENSORRT_COMMON_CONVERSION_PASSES
Lines changed: 36 additions & 0 deletions
@@ -0,0 +1,36 @@
+//===- DataLayoutImpl.h -----------------------------------------*- C++ -*-===//
+//
+// SPDX-FileCopyrightText: Copyright 2025 NVIDIA CORPORATION & AFFILIATES.
+// All rights reserved.
+// SPDX-License-Identifier: Apache-2.0
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+//
+//===----------------------------------------------------------------------===//
+///
+/// This file contains the declarations for the DataLayout extensions to
+/// the EmitC dialect.
+/// TODO: These interfaces should be upstreamed to the EmitC dialect so that
+/// external models are not required.
+///
+//===----------------------------------------------------------------------===//
+#ifndef MLIR_TENSORRT_COMMON_DIALECT_EMITCEXT_IR_DATALAYOUTIMPL_H
+#define MLIR_TENSORRT_COMMON_DIALECT_EMITCEXT_IR_DATALAYOUTIMPL_H
+
+#include "mlir/IR/DialectRegistry.h"
+
+namespace mlir::emitc_ext {
+void registerDataLayoutInterfaceExternalModels(DialectRegistry &registry);
+}
+
+#endif // MLIR_TENSORRT_COMMON_DIALECT_EMITCEXT_IR_DATALAYOUTIMPL_H
Lines changed: 53 additions & 0 deletions
@@ -0,0 +1,53 @@
+//===- ToLoopsOpInterfaceImpl.h ---------------------------------*- C++ -*-===//
+//
+// SPDX-FileCopyrightText: Copyright 2025 NVIDIA CORPORATION & AFFILIATES.
+// All rights reserved.
+// SPDX-License-Identifier: Apache-2.0
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+//
+//===----------------------------------------------------------------------===//
+///
+/// This file contains the declarations for the ToLoopsOpInterface extensions to
+/// the Linalg dialect.
+///
+//===----------------------------------------------------------------------===//
+#ifndef MLIR_TENSORRT_COMMON_DIALECT_LINALGEXT_TRANSFORMS_TOLOOPSOPINTERFACEIMPL
+#define MLIR_TENSORRT_COMMON_DIALECT_LINALGEXT_TRANSFORMS_TOLOOPSOPINTERFACEIMPL
+
+#include "mlir-tensorrt-common/Interfaces/ToLoopsOpInterface.h"
+#include "mlir/IR/DialectRegistry.h"
+
+namespace mlir::linalg {
+class LinalgOp;
+}
+
+namespace mlir::scf {
+class ForOp;
+}
+
+namespace mlir::linalg_ext {
+
+/// Register the ToLoopsOpInterface external models for GenericOp. For other
+/// kinds of operations that are LinalgOps, we don't register an external model
+/// because there are so many; instead use the below function to perform
+/// conversion.
+void registerToLoopsOpInterfaceExternalModels(DialectRegistry &registry);
+
+/// Convert a LinalgOp (on tensors) to SCF loops.
+FailureOr<SmallVector<Operation *>>
+convertLinalgOpToLoops(RewriterBase &rewriter, linalg::LinalgOp op);
+
+} // namespace mlir::linalg_ext
+
+#endif // MLIR_TENSORRT_COMMON_DIALECT_LINALGEXT_TRANSFORMS_TOLOOPSOPINTERFACEIMPL
