[Mlir-commits] [mlir] [MLIR][Linalg] Harden parsing Linalg named ops (PR #145337)

Mehdi Amini llvmlistbot at llvm.org
Tue Jun 24 09:57:27 PDT 2025


================
@@ -1868,9 +1868,49 @@ func.func @unpack_static_inner_tile_size_and_dynamic_output_shape(
 
 // -----
 
+//===----------------------------------------------------------------------===//
+// linalg.reduce
+//===----------------------------------------------------------------------===//
+
+
 func.func @reduce_non_operation_name(%arg0: tensor<4xf32>, %arg1: tensor<f32>) -> tensor<f32> {
   // expected-error @below {{expected bare identifier or keyword}}
   %0 = linalg.reduce {@reduce_fusion_elementwise} ins(
     %arg0: tensor<4xf32>) outs(%arg1: tensor<f32>) dimensions = [0]
   return %0 : tensor<f32>
 }
+
+// -----
+
+
+//===----------------------------------------------------------------------===//
+// Named op error checking.
+//===----------------------------------------------------------------------===//
+
+module {
+  func.func @add_invalid_mixed_types(%in_f32: memref<3xf32>, %in_i32 : memref<3xi32>, %out_f32: memref<3xf32>, %arg3: memref<3xf32>) {
+    // expected-error @below {{Cannot build binary Linalg operation: expects allComplex, allFloatingPoint, or allInteger, got 'f32' and 'i32'}}
+    linalg.add ins(%in_f32, %in_i32 : memref<3xf32>, memref<3xi32>) outs(%out_f32 : memref<3xf32>)
+    return
+  }
+}
+
+// -----
+
+func.func @elemwise_unary_invalid_mixed_types(%arg0 : tensor<?xi32>) -> tensor<?xi32> {
+  // expected-error @below {{unsupported non numeric type}}
+  %0 = linalg.elemwise_unary ins(%arg0 : tensor<?xi32>) outs(%arg0 : tensor<?xi32>) -> tensor<?xi32>
+  return %0 : tensor<?xi32>
+}
+
+// -----
+
+func.func @matmul_invalid_mixed_types(%t: tensor<?xf16>, %f: vector<4xf16>)
+  -> (tensor<?xf16>, vector<4xf16>)
+{
+  // expected-warning @unknown {{could not cast operand of type 'f16' to 'vector<4xf16>'}}
+  // expected-error @below {{Cannot build binary Linalg operation: expects allComplex, allFloatingPoint, or allInteger, got 'vector<4xf16>' and 'f16'}}
+  %0 = linalg.matmul ins(%t, %t : tensor<?xf16>, tensor<?xf16>)
+                                outs(%f : vector<4xf16>) -> tensor<?xf16>
+  func.return %0, %f : tensor<?xf16>, vector<4xf16>
+}
----------------
joker-eph wrote:

I'm not sure about this: it's not really testing matmul; rather, matmul is used as a proxy to reach some of the generic infrastructure behind named ops.

For example, if the matmul op were ever removed we would lose this coverage, so we should switch to another op to exercise this diagnostic.
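
As a rough sketch of what that could look like (assuming another binary named op such as linalg.div goes through the same generic binary builder and would emit the same diagnostics; I have not verified the exact messages), the test could be rewritten along these lines:

func.func @div_invalid_mixed_types(%t: tensor<?xf16>, %f: vector<4xf16>)
  -> (tensor<?xf16>, vector<4xf16>)
{
  // Hypothetical replacement test: linalg.div is assumed to reach the same
  // binary region builder as matmul here, so the exact warning/error text
  // below may need adjusting.
  // expected-warning @unknown {{could not cast operand of type 'f16' to 'vector<4xf16>'}}
  // expected-error @below {{Cannot build binary Linalg operation: expects allComplex, allFloatingPoint, or allInteger, got 'vector<4xf16>' and 'f16'}}
  %0 = linalg.div ins(%t, %t : tensor<?xf16>, tensor<?xf16>)
                  outs(%f : vector<4xf16>) -> tensor<?xf16>
  func.return %0, %f : tensor<?xf16>, vector<4xf16>
}

That way the diagnostic coverage would no longer be tied to matmul specifically.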

https://github.com/llvm/llvm-project/pull/145337

