[Mlir-commits] [mlir] [mlir][math] expand-math pass assumes the static shaped type (PR #128299)

Thomas Raoux llvmlistbot at llvm.org
Sat Feb 22 00:32:57 PST 2025


================
@@ -761,3 +761,25 @@ func.func @rsqrt_tns(%float: tensor<5x8xf32>) -> (tensor<5x8xf32>)  {
   %float_result = math.rsqrt %float : tensor<5x8xf32>
   return %float_result : tensor<5x8xf32>
 }
+
+// -----
+
+// CHECK-LABEL: func.func @non_static_shape_ceil_op
+// CHECK:     %[[IDX:.*]] = index.constant 0
+// CHECK:     %[[CST:.*]] = arith.constant dense<1.000000e+00> : tensor<2xf32>
+// CHECK:     %[[CAST:.*]] = tensor.cast %[[CST]] : tensor<2xf32> to tensor<?xf32>
+// CHECK:     %[[CEIL:.*]] = math.ceil %[[CAST]] : tensor<?xf32>
+// CHECK:     %[[DIM:.*]] = tensor.dim %[[CEIL]], %[[IDX]] : tensor<?xf32>
+// CHECK:     vector.print %[[DIM]] : index
+// CHECK:         return
+
+func.func @non_static_shape_ceil_op() {
+  %idx0 = index.constant 0
+  %cst_90 = arith.constant 1.000000e+00 : f32
+  %from_elements_92 = tensor.from_elements %cst_90, %cst_90 : tensor<2xf32>
+  %cast_93 = tensor.cast %from_elements_92 : tensor<2xf32> to tensor<?xf32>
----------------
ThomasRaoux wrote:

nit: are those ops necessary? It would be better to minimize the test (same for the tensor.dim).
You can do:
```
func.func @non_static_shape_ceil_op(%arg: tensor<?xf32>) -> tensor<?xf32> {
  %a = math.ceil %arg : tensor<?xf32>
  return %a : tensor<?xf32>
}
```
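For reference, a minimal sketch of how that slimmed-down test could look with FileCheck annotations; this assumes the pass simply leaves the dynamically shaped `math.ceil` unexpanded (as the CHECK lines in the current test suggest), and the exact directives here are illustrative rather than taken from the PR:
```
// CHECK-LABEL: func.func @non_static_shape_ceil_op
// CHECK-SAME:  (%[[ARG:.*]]: tensor<?xf32>)
// CHECK:       %[[CEIL:.*]] = math.ceil %[[ARG]] : tensor<?xf32>
// CHECK:       return %[[CEIL]] : tensor<?xf32>
func.func @non_static_shape_ceil_op(%arg: tensor<?xf32>) -> tensor<?xf32> {
  %a = math.ceil %arg : tensor<?xf32>
  return %a : tensor<?xf32>
}
```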

https://github.com/llvm/llvm-project/pull/128299