[Mlir-commits] [mlir] [mlir][sparse] check test on code coming from torch-mlir and linalg (PR #80369)

Aart Bik llvmlistbot at llvm.org
Thu Feb 1 17:19:51 PST 2024


https://github.com/aartbik created https://github.com/llvm/llvm-project/pull/80369

Ensures that the sparse assembler composes with other passes when providing the public API.
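For reference, the --sparse-assembler pass is expected to synthesize a public entry point that assembles the sparse operand from plain buffers and then calls the original function, which is what the CHECK-HI lines in the test below verify. A minimal sketch of that wrapper, assuming positions/coordinates/values buffers for the CSC operand (names and exact types here are illustrative, not the pass's verbatim output):

#csc = #sparse_tensor.encoding<{ map = (d0, d1) -> (d1 : dense, d0 : compressed) }>
// Hypothetical wrapper shape; the actual pass output may differ in detail.
func.func @spiface_main(%pos: tensor<?xindex>, %crd: tensor<?xindex>,
                        %val: tensor<?xf32>,
                        %dense: tensor<64x64xf32>) -> tensor<64x64xf32> {
  // Reassemble the CSC operand from its position/coordinate/value buffers.
  %sparse = sparse_tensor.assemble (%pos, %crd), %val
      : (tensor<?xindex>, tensor<?xindex>), tensor<?xf32> to tensor<64x64xf32, #csc>
  // Delegate to the original function.
  %res = call @main(%sparse, %dense)
      : (tensor<64x64xf32, #csc>, tensor<64x64xf32>) -> tensor<64x64xf32>
  return %res : tensor<64x64xf32>
}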

From ab31e2c42efeba6d50bb39e9a1cf7359e1b8086b Mon Sep 17 00:00:00 2001
From: Aart Bik <ajcbik at google.com>
Date: Thu, 1 Feb 2024 17:17:48 -0800
Subject: [PATCH] [mlir][sparse] check test on code coming from torch-mlir and
 linalg

Ensures that the sparse assembler composes with other passes when providing the public API.
---
 .../Dialect/SparseTensor/torch_linalg.mlir    | 54 +++++++++++++++++++
 1 file changed, 54 insertions(+)
 create mode 100644 mlir/test/Dialect/SparseTensor/torch_linalg.mlir

diff --git a/mlir/test/Dialect/SparseTensor/torch_linalg.mlir b/mlir/test/Dialect/SparseTensor/torch_linalg.mlir
new file mode 100644
index 0000000000000..1f0cdf8a79b02
--- /dev/null
+++ b/mlir/test/Dialect/SparseTensor/torch_linalg.mlir
@@ -0,0 +1,54 @@
+// RUN: mlir-opt %s --sparse-assembler                 | FileCheck %s --check-prefix=CHECK-HI
+// RUN: mlir-opt %s --sparse-assembler \
+// RUN:             --linalg-generalize-named-ops \
+// RUN:             --linalg-fuse-elementwise-ops \
+// RUN:             --sparsification-and-bufferization | FileCheck %s --check-prefix=CHECK-MID
+// RUN: mlir-opt %s --sparse-assembler \
+// RUN:             --sparsifier                       | FileCheck %s --check-prefix=CHECK-LOW
+
+//
+// An example of a module generated by torch-mlir with a sparse tensor from
+// torch.sparse. The MLIR sparsifier should be able to provide the external
+// API through wrapper methods (spiface and ciface). The various passes
+// should compose without trouble.
+//
+
+// CHECK-HI-LABEL: func.func @spiface_main
+// CHECK-HI:         sparse_tensor.assemble
+// CHECK-HI:         call @main
+// CHECK-HI:         return
+// CHECK-HI:       func.func @main
+// CHECK-HI:         linalg.matmul
+// CHECK-HI:         return
+//
+// CHECK-MID-LABEL: func.func @spiface_main
+// CHECK-MID:          memref.load
+// CHECK-MID:          call @main
+// CHECK-MID:          return
+// CHECK-MID:       func.func @main
+// CHECK-MID:          scf.for
+// CHECK-MID:            scf.for
+// CHECK-MID:          return
+
+// CHECK-LOW-LABEL: llvm.func @spiface_main
+// CHECK-LOW:         llvm.call @main
+// CHECK-LOW:         llvm.return
+// CHECK-LOW:       llvm.func @_mlir_ciface_spiface_main
+// CHECK-LOW:         llvm.call @spiface_main
+// CHECK-LOW:         llvm.return
+// CHECK-LOW:       llvm.func @main
+// CHECK-LOW:         llvm.return
+
+#csc = #sparse_tensor.encoding<{ map = (d0, d1) -> (d1 : dense, d0 : compressed) }>
+module {
+  func.func @main(%arg0: tensor<64x64xf32, #csc>,
+                  %arg1: tensor<64x64xf32>) -> tensor<64x64xf32> attributes {llvm.emit_c_interface} {
+    %cst = arith.constant 0.000000e+00 : f32
+    %0 = tensor.empty() : tensor<64x64xf32>
+    %1 = linalg.fill ins(%cst : f32) outs(%0 : tensor<64x64xf32>) -> tensor<64x64xf32>
+    %2 = linalg.matmul
+      ins(%arg0, %arg1 : tensor<64x64xf32, #csc>, tensor<64x64xf32>)
+      outs(%1 : tensor<64x64xf32>) -> tensor<64x64xf32>
+    return %2 : tensor<64x64xf32>
+  }
+}
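To reproduce one of the RUN lines outside of lit (with %s expanded to the test path; assumes mlir-opt and FileCheck from an llvm-project build are on PATH):

mlir-opt mlir/test/Dialect/SparseTensor/torch_linalg.mlir --sparse-assembler | \
  FileCheck mlir/test/Dialect/SparseTensor/torch_linalg.mlir --check-prefix=CHECK-HI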


