
Conversation

@yizhuoz004 (Collaborator)

No description provided.

@yizhuoz004 (Collaborator, Author)

yizhuoz004 commented Oct 2, 2024

@christopherbate The canonicalizer was not triggered in this case:

// -----// IR Dump Before StablehloClusteringPass (stablehlo-clustering) ('builtin.module' operation: @outs_t8_1) //----- //
module @outs_t8_1 {
  func.func @main() -> tensor<1x1x8x8xf32> {
    %false = arith.constant false
    %true = arith.constant true
    %c8_i32 = arith.constant 8 : i32
    %c1_i32 = arith.constant 1 : i32
    %cst = stablehlo.constant dense<1.000000e+00> : tensor<1x1x1x1xf32>
    %c = stablehlo.constant dense<1> : tensor<4xi32>
    %cst_0 = stablehlo.constant dense<[[[[0.000000e+00, 1.000000e+00, 2.000000e+00, 3.000000e+00], [4.000000e+00, 5.000000e+00, 6.000000e+00, 7.000000e+00], [8.000000e+00, 9.000000e+00, 1.000000e+01, 1.100000e+01], [1.200000e+01, 1.300000e+01, 1.400000e+01, 1.500000e+01]]]]> : tensor<1x1x4x4xf32>
    %c_1 = stablehlo.constant dense<[1, 1, 8, 8]> : tensor<4xi32>
    %0 = tensorrt.resize_linear {coordinateTransformation = #tensorrt.resize_coordinate_transformation<kHALF_PIXEL>, selectorForSinglePixel = #tensorrt.resize_selector<kFORMULA>} %cst_0, %c_1 : (tensor<1x1x4x4xf32>, tensor<4xi32>) -> tensor<?x?x?x?xf32>
    %1 = plan.with_shape %0(%c1_i32, %c1_i32, %c8_i32, %c8_i32) : (tensor<?x?x?x?xf32>, i32, i32, i32, i32) -> tensor<?x?x?x?xf32>
    %2 = stablehlo.get_dimension_size %1, dim = 0 : (tensor<?x?x?x?xf32>) -> tensor<i32>
    %3 = plan.with_values %2(%c1_i32) : tensor<i32>
    %4 = stablehlo.reshape %3 : (tensor<i32>) -> tensor<1xi32>
    %5 = plan.with_values %4(%c1_i32) : tensor<1xi32>
    %6 = stablehlo.get_dimension_size %1, dim = 1 : (tensor<?x?x?x?xf32>) -> tensor<i32>
    %7 = plan.with_values %6(%c1_i32) : tensor<i32>
    %8 = stablehlo.reshape %7 : (tensor<i32>) -> tensor<1xi32>
    %9 = plan.with_values %8(%c1_i32) : tensor<1xi32>
    %10 = stablehlo.get_dimension_size %1, dim = 2 : (tensor<?x?x?x?xf32>) -> tensor<i32>
    %11 = plan.with_values %10(%c8_i32) : tensor<i32>
    %12 = stablehlo.reshape %11 : (tensor<i32>) -> tensor<1xi32>
    %13 = plan.with_values %12(%c8_i32) : tensor<1xi32>
    %14 = stablehlo.get_dimension_size %1, dim = 3 : (tensor<?x?x?x?xf32>) -> tensor<i32>
    %15 = plan.with_values %14(%c8_i32) : tensor<i32>
    %16 = stablehlo.reshape %15 : (tensor<i32>) -> tensor<1xi32>
    %17 = plan.with_values %16(%c8_i32) : tensor<1xi32>
    %18 = stablehlo.concatenate %5, %9, %13, %17, dim = 0 : (tensor<1xi32>, tensor<1xi32>, tensor<1xi32>, tensor<1xi32>) -> tensor<4xi32>
    %19 = plan.with_values %18(%c1_i32, %c1_i32, %c8_i32, %c8_i32) : tensor<4xi32>
    %20 = stablehlo.compare  EQ, %19, %c : (tensor<4xi32>, tensor<4xi32>) -> tensor<4xi1>
    %21 = plan.with_values %20(%true, %true, %false, %false) : tensor<4xi1>
    %22 = stablehlo.select %21, %c, %19 : tensor<4xi1>, tensor<4xi32>
    %23 = plan.with_values %22(%c1_i32, %c1_i32, %c8_i32, %c8_i32) : tensor<4xi32>
    %cast = tensor.cast %0 : tensor<?x?x?x?xf32> to tensor<1x1x8x8xf32>
    %24 = stablehlo.dynamic_broadcast_in_dim %cst, %23, dims = [0, 1, 2, 3] : (tensor<1x1x1x1xf32>, tensor<4xi32>) -> tensor<1x1x8x8xf32>
    %25 = stablehlo.add %cast, %24 : tensor<1x1x8x8xf32>
    return %25 : tensor<1x1x8x8xf32>
  }
}

Is there anything else that needs to be fixed?
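For context, here is a hypothetical sketch of what the IR above could reduce to once the result type of tensorrt.resize_linear is refined to the static shape carried by plan.with_shape. This is illustrative only, not output from the actual pass pipeline; op names and attributes are taken from the dump above, and the large weight constant is elided:

```mlir
// Hypothetical post-refinement IR (illustrative only). Once the result of
// tensorrt.resize_linear is refined to the static type tensor<1x1x8x8xf32>,
// the get_dimension_size / concatenate / compare / select shape computation
// and the tensor.cast become dead and could fold away, leaving a static
// broadcast and add.
module @outs_t8_1 {
  func.func @main() -> tensor<1x1x8x8xf32> {
    %cst = stablehlo.constant dense<1.000000e+00> : tensor<1x1x1x1xf32>
    %cst_0 = stablehlo.constant dense<...> : tensor<1x1x4x4xf32>  // weights elided
    %c_1 = stablehlo.constant dense<[1, 1, 8, 8]> : tensor<4xi32>
    %0 = tensorrt.resize_linear {coordinateTransformation = #tensorrt.resize_coordinate_transformation<kHALF_PIXEL>, selectorForSinglePixel = #tensorrt.resize_selector<kFORMULA>} %cst_0, %c_1 : (tensor<1x1x4x4xf32>, tensor<4xi32>) -> tensor<1x1x8x8xf32>
    %1 = stablehlo.broadcast_in_dim %cst, dims = [0, 1, 2, 3] : (tensor<1x1x1x1xf32>) -> tensor<1x1x8x8xf32>
    %2 = stablehlo.add %0, %1 : tensor<1x1x8x8xf32>
    return %2 : tensor<1x1x8x8xf32>
  }
}
```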

@yizhuoz004 force-pushed the trt-resize-refine-types branch from 95e1c8a to 2950dd1 on October 2, 2024 23:30
@yizhuoz004 marked this pull request as ready for review on October 4, 2024 18:17
@christopherbate (Collaborator)

Description is inaccurate. A canonicalizer refers to something very specific: https://mlir.llvm.org/docs/DefiningDialects/Operations/#hascanonicalizer
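Per the linked MLIR documentation, a canonicalizer is opted into on an op via the hasCanonicalizer flag in its ODS definition, with the patterns supplied in C++. A minimal illustrative sketch (MyOp, MyDialect_Op, and SimplifyMyOpPattern are placeholder names, not ops from this repository):

```tablegen
// ODS: declare that the op provides canonicalization patterns.
def MyOp : MyDialect_Op<"my_op"> {
  let hasCanonicalizer = 1;
}
```

```cpp
// C++: supply the patterns that the canonicalizer driver will run.
void MyOp::getCanonicalizationPatterns(RewritePatternSet &results,
                                       MLIRContext *context) {
  results.add<SimplifyMyOpPattern>(context);
}
```

Type refinement driven by plan.with_shape, as done in this PR, lives in a separate pass rather than in an op's canonicalization patterns, hence the title change below.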

@yizhuoz004 force-pushed the trt-resize-refine-types branch from 9e7f578 to 87c5d75 on October 7, 2024 22:03
@yizhuoz004 changed the title from "Add type canonicalizer for tensorrt ops" to "[tensorrt] Refine types for tensorrt ops with plan.with_shape" on Oct 7, 2024
@yizhuoz004 force-pushed the trt-resize-refine-types branch from 87c5d75 to 7524556 on October 7, 2024 22:12
@christopherbate (Collaborator) left a comment

Looks like there's an issue with the commit message format, but otherwise LGTM.

@christopherbate (Collaborator)

I'll have to check the CI issue; it looks unrelated to the commit message.

@yizhuoz004 force-pushed the trt-resize-refine-types branch from 7524556 to 1968710 on October 7, 2024 22:50
@yizhuoz004 force-pushed the trt-resize-refine-types branch 4 times, most recently from 956fbe7 to 20c2e1a, on October 8, 2024 00:28
Commit message:
Adds the TensorRTRefineTypeFromWithShapeGeneric pattern to refine
the types of tensorrt ops in PlanRefineTypesPass.
@yizhuoz004 force-pushed the trt-resize-refine-types branch from 20c2e1a to 4cf26c5 on October 8, 2024 17:14
@yizhuoz004 merged commit 223cc67 into main on Oct 8, 2024
1 check passed
@yizhuoz004 deleted the trt-resize-refine-types branch on October 8, 2024 17:27