[Mlir-commits] [mlir] [mlir][linalg] Add pattern to clean unused results after fusion (PR #158627)

Thomas Preud'homme llvmlistbot at llvm.org
Tue Sep 23 02:32:42 PDT 2025


================
@@ -1079,4 +1079,49 @@ module {
 // CHECK-NOT:     linalg.generic
 // CHECK:         tensor.expand_shape
 // CHECK:         linalg.generic {{.*}}, iterator_types = ["parallel", "parallel", "parallel", "parallel", "parallel", "parallel", "reduction"]}
-// CHECK-SAME:     ins(%[[ARG0]], %[[FUSED]]#1 : tensor<1x1x2x1xf32>, tensor<4x1x1x1xf32>)
\ No newline at end of file
+// CHECK-SAME:     ins(%[[ARG0]], %[[FUSED]]#1 : tensor<1x1x2x1xf32>, tensor<4x1x1x1xf32>)
+
+// -----
+
+// CHECK-LABEL: @drop_unused_results
+// CHECK-SAME:   [[ARG0:%[a-zA-Z0-9]+]]: tensor<64xf32>, [[ARG1:%[a-zA-Z0-9]+]]: tensor<1x56x56x64xf32>
+func.func @drop_unused_results(%arg0: tensor<64xf32>, %arg1: tensor<1x56x56x64xf32>) -> tensor<1x56x56x64xf32> {
+  %cst = arith.constant 3.40282347E+38 : f32
+  %cst_0 = arith.constant 0.000000e+00 : f32
+  // CHECK: [[OUT:%[a-zA-Z0-9]+]] = tensor.empty() : tensor<1x56x56x64xf32>
+  %0 = tensor.empty() : tensor<1x56x56x64xf32>
+  // CHECK: [[RES:%[0-9]+]] = linalg.generic {{.*}} ins([[ARG0]], [[ARG1]] : tensor<64xf32>, tensor<1x56x56x64xf32>) outs([[OUT]] : tensor<1x56x56x64xf32>)
+  %1:2 = linalg.generic {indexing_maps = [affine_map<(d0, d1, d2, d3) -> (d3)>, affine_map<(d0, d1, d2, d3) -> (d0, d1, d2, d3)>, affine_map<(d0, d1, d2, d3) -> (d0, d1, d2, d3)>], iterator_types = ["parallel", "parallel", "parallel", "parallel"]} ins(%arg0 : tensor<64xf32>) outs(%arg1, %0 : tensor<1x56x56x64xf32>, tensor<1x56x56x64xf32>) {
----------------
RoboTux wrote:

[taking over from Pavel, whose internship has now finished]

Ack. This IR came out of some tensor fusion pattern, but I still need to determine whether that pattern lives upstream. I've put this PR into draft for the time being while I check whether an upstream pattern is what produced the invalid IR. All I know so far is that we seem to call populateMoveInitOperandsToInput implicitly via LinalgFoldUnitExtentDimsPass, and that removing the pattern added by this patch leads to worse code generation. I'll update once we've found the root cause. Thanks for the review so far!
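For reference, here is a minimal sketch (separate from the test quoted above, with hypothetical function and value names) of the kind of IR the cleanup pattern targets, assuming the usual dead-result semantics: the generic yields two values but only the first result is used, so the second result, its init operand, and the corresponding yield operand should be droppable.

```mlir
func.func @sketch(%in: tensor<8xf32>, %init: tensor<8xf32>) -> tensor<8xf32> {
  %empty = tensor.empty() : tensor<8xf32>
  %r:2 = linalg.generic {
      indexing_maps = [affine_map<(d0) -> (d0)>,
                       affine_map<(d0) -> (d0)>,
                       affine_map<(d0) -> (d0)>],
      iterator_types = ["parallel"]}
      ins(%in : tensor<8xf32>)
      outs(%init, %empty : tensor<8xf32>, tensor<8xf32>) {
  ^bb0(%a: f32, %b: f32, %c: f32):
    %s = arith.addf %a, %b : f32
    linalg.yield %s, %s : f32, f32
  } -> (tensor<8xf32>, tensor<8xf32>)
  // Only %r#0 is used; %r#1 is dead, so the second output and its
  // tensor.empty init could be removed by a cleanup pattern like this one.
  return %r#0 : tensor<8xf32>
}
```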

https://github.com/llvm/llvm-project/pull/158627


More information about the Mlir-commits mailing list