[Mlir-commits] [mlir] [mlir][sparse] Migrate tests to use new syntax (PR #66543)

llvmlistbot at llvm.org llvmlistbot at llvm.org
Fri Sep 15 12:59:09 PDT 2023


llvmbot wrote:



@llvm/pr-subscribers-mlir-gpu

**Changes**

**COO**
`lvlTypes = [ "compressed_nu", "singleton" ]` to `map = (d0, d1) -> (d0 : compressed(nonunique), d1 : singleton)` 
`lvlTypes = [ "compressed_nu_no", "singleton_no" ]` to `map = (d0, d1) -> (d0 : compressed(nonunique, nonordered), d1 : singleton(nonordered))`

**SortedCOO**
`lvlTypes = [ "compressed_nu", "singleton" ]` to `map = (d0, d1) -> (d0 : compressed(nonunique), d1 : singleton)`

**BCOO**
`lvlTypes = [ "dense", "compressed_hi_nu", "singleton" ]` to `map = (d0, d1, d2) -> (d0 : dense, d1 : compressed(nonunique, high), d2 : singleton)`

**BCSR**
`lvlTypes = [ "compressed", "compressed", "dense", "dense" ], dimToLvl = affine_map<(d0, d1) -> (d0 floordiv 2, d1 floordiv 3, d0 mod 2, d1 mod 3)>` to
`map = ( i, j ) ->
      ( i floordiv 2 : compressed,
        j floordiv 3 : compressed,
        i mod 2 : dense,
        j mod 3 : dense
      )`
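
For context, this is the shape the attribute takes in the updated `roundtrip_encoding.mlir` test; the tensor shape is borrowed from the `SparseTensorAttrDefs.td` example and the declaration is just a use site:

```mlir
// Block sparse row storage (2x3 blocks).
#BCSR = #sparse_tensor.encoding<{
  map = ( i, j ) ->
      ( i floordiv 2 : compressed,
        j floordiv 3 : compressed,
        i mod 2      : dense,
        j mod 3      : dense
      )
}>

func.func private @sparse_bcsr(tensor<20x30xf32, #BCSR>)
```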

**Tensor and other supported formats (e.g. CCC, CDC, CCCC)**
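
These formats are migrated in the same way. For instance, an all-compressed 3-D (CCC) encoding changes as sketched below (the `#CCC` alias is illustrative; the test files use various names):

```mlir
// Before (old syntax):
//   #CCC = #sparse_tensor.encoding<{
//     lvlTypes = [ "compressed", "compressed", "compressed" ]
//   }>

// After (new syntax):
#CCC = #sparse_tensor.encoding<{
  map = (d0, d1, d2) -> (d0 : compressed, d1 : compressed, d2 : compressed)
}>
```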

ELL and slice encodings are not yet supported in the new syntax; the CHECK tests will be updated once printing is switched to emit the new syntax.

---

Patch is 51.38 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/66543.diff


58 Files Affected:

- (modified) mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensorAttrDefs.td (+7-3) 
- (modified) mlir/test/Dialect/SparseTensor/GPU/gpu_matvec_lib.mlir (+1-1) 
- (modified) mlir/test/Dialect/SparseTensor/codegen.mlir (+5-7) 
- (modified) mlir/test/Dialect/SparseTensor/codegen_sparse_alloc.mlir (+1-1) 
- (modified) mlir/test/Dialect/SparseTensor/conversion.mlir (+1-2) 
- (modified) mlir/test/Dialect/SparseTensor/convert_dense2sparse.mlir (+1-2) 
- (modified) mlir/test/Dialect/SparseTensor/convert_sparse2dense.mlir (+1-2) 
- (modified) mlir/test/Dialect/SparseTensor/convert_sparse2sparse.mlir (+5-6) 
- (modified) mlir/test/Dialect/SparseTensor/invalid.mlir (+4-4) 
- (modified) mlir/test/Dialect/SparseTensor/pre_rewriting.mlir (+1-1) 
- (modified) mlir/test/Dialect/SparseTensor/rewriting_for_codegen.mlir (+1-1) 
- (modified) mlir/test/Dialect/SparseTensor/roundtrip.mlir (+2-2) 
- (modified) mlir/test/Dialect/SparseTensor/roundtrip_encoding.mlir (+6-2) 
- (modified) mlir/test/Dialect/SparseTensor/sorted_coo.mlir (+1-1) 
- (modified) mlir/test/Dialect/SparseTensor/sparse_2d.mlir (+1-1) 
- (modified) mlir/test/Dialect/SparseTensor/sparse_3d.mlir (+8-8) 
- (modified) mlir/test/Dialect/SparseTensor/sparse_broadcast.mlir (+1-1) 
- (modified) mlir/test/Dialect/SparseTensor/sparse_foreach.mlir (+1-1) 
- (modified) mlir/test/Dialect/SparseTensor/sparse_nd.mlir (+4-2) 
- (modified) mlir/test/Dialect/SparseTensor/sparse_out.mlir (+1-1) 
- (modified) mlir/test/Dialect/SparseTensor/sparse_pack.mlir (+1-1) 
- (modified) mlir/test/Dialect/SparseTensor/sparse_perm.mlir (+1-2) 
- (modified) mlir/test/Dialect/SparseTensor/sparse_perm_lower.mlir (+1-2) 
- (modified) mlir/test/Dialect/SparseTensor/sparse_reshape_dot.mlir (+2-2) 
- (modified) mlir/test/Dialect/SparseTensor/unsparsifiable_dense_op.mlir (+2-2) 
- (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/reshape_dot.mlir (+2-2) 
- (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_codegen_foreach.mlir (+3-5) 
- (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_collapse_shape.mlir (+2-2) 
- (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_conv_1d_nwc_wcf.mlir (+3-2) 
- (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_conv_2d_nchw_fchw.mlir (+2-2) 
- (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_conv_2d_nhwc_hwcf.mlir (+3-3) 
- (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_conv_3d.mlir (+3-3) 
- (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_conv_3d_ndhwc_dhwcf.mlir (+2-2) 
- (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_conversion.mlir (+3-6) 
- (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_conversion_element.mlir (+3-4) 
- (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_conversion_sparse2dense.mlir (+6-12) 
- (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_conversion_sparse2sparse.mlir (+12-7) 
- (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_coo_test.mlir (+1-1) 
- (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_expand_shape.mlir (+2-2) 
- (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_flatten.mlir (+5-3) 
- (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_foreach_slices.mlir (+1-1) 
- (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_insert_2d.mlir (+1-1) 
- (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_insert_3d.mlir (+4-4) 
- (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_matmul_slice.mlir (+1-1) 
- (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_mttkrp.mlir (+1-1) 
- (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_out_reduction.mlir (+1-1) 
- (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_pack.mlir (+3-3) 
- (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_pack_libgen.mlir (+3-3) 
- (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_pooling_nhwc.mlir (+1-1) 
- (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_reshape.mlir (+1-1) 
- (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_scf_nested.mlir (+1-1) 
- (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_sorted_coo.mlir (+5-6) 
- (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_strided_conv_2d_nhwc_hwcf.mlir (+2-2) 
- (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_tensor_mul.mlir (+1-1) 
- (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_tensor_ops.mlir (+2-2) 
- (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_transpose_coo.mlir (+1-1) 
- (modified) mlir/test/Integration/Dialect/SparseTensor/GPU/CUDA/sparse-matmul-lib.mlir (+1-1) 
- (modified) mlir/test/Integration/Dialect/SparseTensor/GPU/CUDA/sparse-matvec-lib.mlir (+1-1) 


``````````diff
diff --git a/mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensorAttrDefs.td b/mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensorAttrDefs.td
index 8b79fbf726495ad..e671c5e323a09d9 100644
--- a/mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensorAttrDefs.td
+++ b/mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensorAttrDefs.td
@@ -200,7 +200,7 @@ def SparseTensorEncodingAttr : SparseTensor_Attr<"SparseTensorEncoding",
 
     // Sorted Coordinate Scheme.
     #SortedCOO = #sparse_tensor.encoding<{
-      lvlTypes = [ "compressed_nu", "singleton" ]
+      map = (d0, d1) -> (d0 : compressed(nonunique), d1 : singleton)
     }>
     ... tensor<?x?xf64, #SortedCOO> ...
 
@@ -214,8 +214,12 @@ def SparseTensorEncodingAttr : SparseTensor_Attr<"SparseTensorEncoding",
 
     // Block sparse row storage (2x3 blocks).
     #BCSR = #sparse_tensor.encoding<{
-      lvlTypes = [ "compressed", "compressed", "dense", "dense" ],
-      dimToLvl = affine_map<(i, j) -> (i floordiv 2, j floordiv 3, i mod 2, j mod 3)>
+      map = ( i, j ) ->
+      ( i floordiv 2 : compressed,
+        j floordiv 3 : compressed,
+        i mod 2      : dense,
+        j mod 3      : dense
+      )
     }>
     ... tensor<20x30xf32, #BCSR> ...
 
diff --git a/mlir/test/Dialect/SparseTensor/GPU/gpu_matvec_lib.mlir b/mlir/test/Dialect/SparseTensor/GPU/gpu_matvec_lib.mlir
index 4f25322df2c90dd..50ff81cb6ecd0a6 100644
--- a/mlir/test/Dialect/SparseTensor/GPU/gpu_matvec_lib.mlir
+++ b/mlir/test/Dialect/SparseTensor/GPU/gpu_matvec_lib.mlir
@@ -2,7 +2,7 @@
 // RUN:             --sparsification="enable-gpu-libgen" | FileCheck %s
 
 #SortedCOO = #sparse_tensor.encoding<{
-  lvlTypes = [ "compressed_nu", "singleton" ]
+  map = (d0, d1) -> (d0 : compressed(nonunique), d1 : singleton)
 }>
 
 module {
diff --git a/mlir/test/Dialect/SparseTensor/codegen.mlir b/mlir/test/Dialect/SparseTensor/codegen.mlir
index c5061c40eb0b16b..f1317f23d656848 100644
--- a/mlir/test/Dialect/SparseTensor/codegen.mlir
+++ b/mlir/test/Dialect/SparseTensor/codegen.mlir
@@ -27,7 +27,7 @@
 }>
 
 #UCSR = #sparse_tensor.encoding<{
-  lvlTypes = [ "dense", "compressed_no" ]
+  map = (d0, d1) -> (d0 : dense, d1 : compressed(nonordered))
 }>
 
 #CSC = #sparse_tensor.encoding<{
@@ -41,21 +41,19 @@
 }>
 
 #Dense3D = #sparse_tensor.encoding<{
-  lvlTypes = [ "dense", "dense", "dense" ],
-  dimToLvl = affine_map<(i, j, k) -> (k, i, j)>
+  map = (d0, d1, d2) -> (d2 : dense, d0 : dense, d1 : dense)
 }>
 
 #Coo = #sparse_tensor.encoding<{
-  lvlTypes = [ "compressed_nu", "singleton" ]
+  map = (d0, d1) -> (d0 : compressed(nonunique), d1 : singleton)
 }>
 
 #CooPNo = #sparse_tensor.encoding<{
-  lvlTypes = [ "compressed_nu", "singleton_no" ],
-  dimToLvl = affine_map<(i, j) -> (j, i)>
+  map = (d0, d1) -> (d1 : compressed(nonunique), d0 : singleton(nonordered))
 }>
 
 #ccoo = #sparse_tensor.encoding<{
-  lvlTypes = [ "compressed", "compressed_nu", "singleton" ]
+  map = (d0, d1, d2) -> (d0 : compressed, d1 : compressed(nonunique), d2 : singleton)
 }>
 
 // CHECK-LABEL: func @sparse_nop(
diff --git a/mlir/test/Dialect/SparseTensor/codegen_sparse_alloc.mlir b/mlir/test/Dialect/SparseTensor/codegen_sparse_alloc.mlir
index 479642e5db4ed1e..50ad4a0a1ea247d 100644
--- a/mlir/test/Dialect/SparseTensor/codegen_sparse_alloc.mlir
+++ b/mlir/test/Dialect/SparseTensor/codegen_sparse_alloc.mlir
@@ -1,7 +1,7 @@
 // RUN: mlir-opt %s --sparse-tensor-codegen --canonicalize --cse | FileCheck %s
 
 #CSR = #sparse_tensor.encoding<{ map = (d0, d1) -> (d0 : dense, d1 : compressed)}>
-#COO = #sparse_tensor.encoding<{ lvlTypes = ["compressed_nu", "singleton"]}>
+#COO = #sparse_tensor.encoding<{ map = (d0, d1) -> (d0 : compressed(nonunique), d1 : singleton)}>
 
 // CHECK-LABEL:   func.func @sparse_alloc_copy_CSR(
 // CHECK-SAME:      %[[VAL_0:.*0]]: memref<?xindex>,
diff --git a/mlir/test/Dialect/SparseTensor/conversion.mlir b/mlir/test/Dialect/SparseTensor/conversion.mlir
index f8e30872a0756c7..64f66c5cb65eba9 100644
--- a/mlir/test/Dialect/SparseTensor/conversion.mlir
+++ b/mlir/test/Dialect/SparseTensor/conversion.mlir
@@ -25,8 +25,7 @@
 }>
 
 #SparseTensor = #sparse_tensor.encoding<{
-  lvlTypes = ["dense", "compressed", "compressed"],
-  dimToLvl = affine_map<(i,j,k) -> (k,i,j)>
+  map = (d0, d1, d2) -> (d2 : dense, d0 : compressed, d1 : compressed)
 }>
 
 // CHECK-LABEL: func @sparse_nop(
diff --git a/mlir/test/Dialect/SparseTensor/convert_dense2sparse.mlir b/mlir/test/Dialect/SparseTensor/convert_dense2sparse.mlir
index 4707b199222ad49..9fb1946d56263db 100644
--- a/mlir/test/Dialect/SparseTensor/convert_dense2sparse.mlir
+++ b/mlir/test/Dialect/SparseTensor/convert_dense2sparse.mlir
@@ -15,8 +15,7 @@
 }>
 
 #SparseTensor = #sparse_tensor.encoding<{
-  lvlTypes = ["dense", "compressed", "compressed"],
-  dimToLvl = affine_map<(i,j,k) -> (k,i,j)>
+  map = (d0, d1, d2) -> (d2 : dense, d0 : compressed, d1 : compressed)
 }>
 
 // CHECK-LABEL: func @sparse_convert_1d(
diff --git a/mlir/test/Dialect/SparseTensor/convert_sparse2dense.mlir b/mlir/test/Dialect/SparseTensor/convert_sparse2dense.mlir
index 363a63eb8ed1eca..621235182c9a840 100644
--- a/mlir/test/Dialect/SparseTensor/convert_sparse2dense.mlir
+++ b/mlir/test/Dialect/SparseTensor/convert_sparse2dense.mlir
@@ -12,8 +12,7 @@
 }>
 
 #SparseTensor = #sparse_tensor.encoding<{
-  lvlTypes = ["dense", "compressed", "compressed"],
-  dimToLvl = affine_map<(i,j,k) -> (k,i,j)>
+  map = (d0, d1, d2) -> (d2 : dense, d0 : compressed, d1 : compressed)
 }>
 
 // CHECK-LABEL: func @sparse_convert_1d(
diff --git a/mlir/test/Dialect/SparseTensor/convert_sparse2sparse.mlir b/mlir/test/Dialect/SparseTensor/convert_sparse2sparse.mlir
index 296e1bf9030c624..b3eb50f1755dace 100644
--- a/mlir/test/Dialect/SparseTensor/convert_sparse2sparse.mlir
+++ b/mlir/test/Dialect/SparseTensor/convert_sparse2sparse.mlir
@@ -26,17 +26,16 @@
 }>
 
 #SortedCOO2D = #sparse_tensor.encoding<{
-  lvlTypes = [ "compressed_nu", "singleton" ],
+  map = (d0, d1) -> (d0 : compressed(nonunique), d1 : singleton),
 }>
 
 #SortedCOO3D = #sparse_tensor.encoding<{
-  lvlTypes = [ "compressed_nu", "singleton_nu", "singleton" ]
+  map = (d0, d1, d2) -> (d0 : compressed(nonunique), d1 : singleton(nonunique), d2 : singleton)
 
 }>
 
 #TsssPermuted = #sparse_tensor.encoding<{
-  lvlTypes = [ "compressed", "compressed", "compressed" ],
-  dimToLvl = affine_map<(i,j,k) -> (k,i,j)>
+  map = (d0, d1, d2) -> (d2 : compressed, d0 : compressed, d1 : compressed)
 }>
 
 #COOSlice = #sparse_tensor.encoding<{
@@ -115,13 +114,13 @@ func.func @sparse_convert(%arg0: tensor<?xf32, #SparseVector64>) -> tensor<?xf32
 }
 
 #SparseSingleton64 = #sparse_tensor.encoding<{
-  lvlTypes = ["singleton"],
+  map = (d0) -> (d0 : singleton),
   posWidth = 64,
   crdWidth = 64
 }>
 
 #SparseSingleton32 = #sparse_tensor.encoding<{
-  lvlTypes = ["singleton"],
+  map = (d0) -> (d0 : singleton),
   posWidth = 32,
   crdWidth = 32
 }>
diff --git a/mlir/test/Dialect/SparseTensor/invalid.mlir b/mlir/test/Dialect/SparseTensor/invalid.mlir
index 8e25cf06bcb6212..71e6eebb30261c8 100644
--- a/mlir/test/Dialect/SparseTensor/invalid.mlir
+++ b/mlir/test/Dialect/SparseTensor/invalid.mlir
@@ -32,7 +32,7 @@ func.func @invalid_pack_type(%values: tensor<6xf64>, %pos: tensor<2xi32>, %coord
 
 // -----
 
-#SparseVector = #sparse_tensor.encoding<{lvlTypes = ["compressed_nu", "singleton"], posWidth=32, crdWidth=32}>
+#SparseVector = #sparse_tensor.encoding<{map = (d0, d1) -> (d0 : compressed(nonunique), d1 : singleton), posWidth=32, crdWidth=32}>
 
 func.func @invalid_pack_type(%values: tensor<6xf64>, %pos: tensor<2xi32>, %coordinates: tensor<6x3xi32>)
                             -> tensor<100x2xf64, #SparseVector> {
@@ -68,7 +68,7 @@ func.func @invalid_unpack_type(%sp: tensor<100xf32, #SparseVector>, %values: ten
 
 // -----
 
-#SparseVector = #sparse_tensor.encoding<{lvlTypes = ["compressed_nu", "singleton"], posWidth=32, crdWidth=32}>
+#SparseVector = #sparse_tensor.encoding<{map = (d0, d1) -> (d0 : compressed(nonunique), d1 : singleton), posWidth=32, crdWidth=32}>
 
 func.func @invalid_unpack_type(%sp: tensor<100x2xf64, #SparseVector>, %values: tensor<6xf64>, %pos: tensor<2xi32>, %coordinates: tensor<6x3xi32>) {
   // expected-error at +1 {{input/output trailing COO level-ranks don't match}}
@@ -270,7 +270,7 @@ func.func @sparse_get_md(%arg0: !sparse_tensor.storage_specifier<#SparseVector>)
 
 // -----
 
-#COO = #sparse_tensor.encoding<{lvlTypes = ["compressed_nu", "singleton"]}>
+#COO = #sparse_tensor.encoding<{map = (d0, d1) -> (d0 : compressed(nonunique), d1 : singleton)}>
 
 func.func @sparse_get_md(%arg0: !sparse_tensor.storage_specifier<#COO>) -> index {
   // expected-error at +1 {{requested position memory size on a singleton level}}
@@ -658,7 +658,7 @@ func.func @invalid_concat_dim(%arg0: tensor<2x4xf64, #DC>,
 
 #C = #sparse_tensor.encoding<{map = (d0) -> (d0 : compressed)}>
 #DC = #sparse_tensor.encoding<{map = (d0, d1) -> (d0 : dense, d1 : compressed)}>
-#DCC = #sparse_tensor.encoding<{lvlTypes = ["dense", "compressed", "compressed"]}>
+#DCC = #sparse_tensor.encoding<{map = (d0, d1, d2) -> (d0 : dense, d1 : compressed, d2 : compressed)}>
 func.func @invalid_concat_rank_mismatch(%arg0: tensor<2xf64, #C>,
                                         %arg1: tensor<3x4xf64, #DC>,
                                         %arg2: tensor<4x4x4xf64, #DCC>) -> tensor<9x4xf64, #DC> {
diff --git a/mlir/test/Dialect/SparseTensor/pre_rewriting.mlir b/mlir/test/Dialect/SparseTensor/pre_rewriting.mlir
index 1245cb0eeed3c55..5aa4acaf863937f 100644
--- a/mlir/test/Dialect/SparseTensor/pre_rewriting.mlir
+++ b/mlir/test/Dialect/SparseTensor/pre_rewriting.mlir
@@ -5,7 +5,7 @@
 }>
 
 #SortedCOO = #sparse_tensor.encoding<{
-  lvlTypes = [ "compressed_nu", "singleton" ]
+  map = (d0, d1) -> (d0 : compressed(nonunique), d1 : singleton)
 }>
 
 #DCSR = #sparse_tensor.encoding<{
diff --git a/mlir/test/Dialect/SparseTensor/rewriting_for_codegen.mlir b/mlir/test/Dialect/SparseTensor/rewriting_for_codegen.mlir
index 0312758722bea82..913059e2197f12d 100644
--- a/mlir/test/Dialect/SparseTensor/rewriting_for_codegen.mlir
+++ b/mlir/test/Dialect/SparseTensor/rewriting_for_codegen.mlir
@@ -10,7 +10,7 @@
 }>
 
 #COO = #sparse_tensor.encoding<{
-  lvlTypes = [ "compressed_nu", "singleton" ]
+  map = (d0, d1) -> (d0 : compressed(nonunique), d1 : singleton)
 }>
 
 // CHECK-LABEL:   func.func @sparse_new(
diff --git a/mlir/test/Dialect/SparseTensor/roundtrip.mlir b/mlir/test/Dialect/SparseTensor/roundtrip.mlir
index d3f07fd298d72e9..d1262cb7aea02df 100644
--- a/mlir/test/Dialect/SparseTensor/roundtrip.mlir
+++ b/mlir/test/Dialect/SparseTensor/roundtrip.mlir
@@ -77,7 +77,7 @@ func.func @sparse_convert_1d_to_sparse(%arg0: tensor<64xf32>) -> tensor<64xf32,
 
 // -----
 
-#SparseTensor = #sparse_tensor.encoding<{ lvlTypes = [ "dense", "dense", "compressed" ] }>
+#SparseTensor = #sparse_tensor.encoding<{ map = (d0, d1, d2) -> (d0 : dense, d1 : dense, d2 : compressed) }>
 
 // CHECK-LABEL: func @sparse_convert_3d_from_sparse(
 // CHECK-SAME: %[[A:.*]]: tensor<8x8x8xf64, #{{.*}}>)
@@ -103,7 +103,7 @@ func.func @sparse_positions(%arg0: tensor<128xf64, #SparseVector>) -> memref<?xi
 
 // -----
 
-#COO = #sparse_tensor.encoding<{lvlTypes = ["compressed_nu", "singleton"]}>
+#COO = #sparse_tensor.encoding<{map = (d0, d1) -> (d0 : compressed(nonunique), d1 : singleton)}>
 
 // CHECK-LABEL: func @sparse_indices_buffer(
 //  CHECK-SAME: %[[A:.*]]: tensor<?x?xf64, #{{.*}}>)
diff --git a/mlir/test/Dialect/SparseTensor/roundtrip_encoding.mlir b/mlir/test/Dialect/SparseTensor/roundtrip_encoding.mlir
index 1cc5c0e3f61527e..60367b43a6ee0e4 100644
--- a/mlir/test/Dialect/SparseTensor/roundtrip_encoding.mlir
+++ b/mlir/test/Dialect/SparseTensor/roundtrip_encoding.mlir
@@ -85,8 +85,12 @@ func.func private @sparse_sorted_coo(tensor<10x10xf64, #SortedCOO>)
 // -----
 
 #BCSR = #sparse_tensor.encoding<{
-   lvlTypes = [ "compressed", "compressed", "dense", "dense" ],
-   dimToLvl  = affine_map<(i, j) -> (i floordiv 2, j floordiv 3, i mod 2, j mod 3)>
+   map = ( i, j ) ->
+      ( i floordiv 2 : compressed,
+        j floordiv 3 : compressed,
+        i mod 2      : dense,
+        j mod 3      : dense
+      )
 }>
 
 // CHECK-LABEL: func private @sparse_bcsr(
diff --git a/mlir/test/Dialect/SparseTensor/sorted_coo.mlir b/mlir/test/Dialect/SparseTensor/sorted_coo.mlir
index e4bc55fcdb06298..95c410a08b0e408 100644
--- a/mlir/test/Dialect/SparseTensor/sorted_coo.mlir
+++ b/mlir/test/Dialect/SparseTensor/sorted_coo.mlir
@@ -1,7 +1,7 @@
 // RUN: mlir-opt %s -sparsification --canonicalize | FileCheck %s
 
 #SortedCOO = #sparse_tensor.encoding<{
-  lvlTypes = [ "compressed_nu", "singleton" ]
+  map = (d0, d1) -> (d0 : compressed(nonunique), d1 : singleton)
 }>
 
 #trait_scale = {
diff --git a/mlir/test/Dialect/SparseTensor/sparse_2d.mlir b/mlir/test/Dialect/SparseTensor/sparse_2d.mlir
index 44c731e9302749a..00bf4ea440628cd 100644
--- a/mlir/test/Dialect/SparseTensor/sparse_2d.mlir
+++ b/mlir/test/Dialect/SparseTensor/sparse_2d.mlir
@@ -1050,7 +1050,7 @@ func.func @cmp_ss_ss(%arga: tensor<32x16xf32, #Tss>, %argb: tensor<32x16xf32, #T
 }
 
 #BatchedVector = #sparse_tensor.encoding<{
-  lvlTypes = [ "dense", "compressed_hi" ],
+  map = (d0, d1) -> (d0 : dense, d1 : compressed(high))
 }>
 // CHECK-LABEL:   func.func @sub_ss_batched(
 // CHECK-SAME:      %[[VAL_0:.*]]: tensor<2x3xf64, #{{.*}}>>,
diff --git a/mlir/test/Dialect/SparseTensor/sparse_3d.mlir b/mlir/test/Dialect/SparseTensor/sparse_3d.mlir
index 9019b9984d8f680..3513bc48ffc0e38 100644
--- a/mlir/test/Dialect/SparseTensor/sparse_3d.mlir
+++ b/mlir/test/Dialect/SparseTensor/sparse_3d.mlir
@@ -3,14 +3,14 @@
 
 #Td = #sparse_tensor.encoding<{ map = (d0) -> (d0 : dense) }>
 
-#Tddd = #sparse_tensor.encoding<{ lvlTypes = [ "dense",      "dense",      "dense"      ] }>
-#Tdds = #sparse_tensor.encoding<{ lvlTypes = [ "dense",      "dense",      "compressed" ] }>
-#Tdsd = #sparse_tensor.encoding<{ lvlTypes = [ "dense",      "compressed", "dense"      ] }>
-#Tdss = #sparse_tensor.encoding<{ lvlTypes = [ "dense",      "compressed", "compressed" ] }>
-#Tsdd = #sparse_tensor.encoding<{ lvlTypes = [ "compressed", "dense",      "dense"      ] }>
-#Tsds = #sparse_tensor.encoding<{ lvlTypes = [ "compressed", "dense",      "compressed" ] }>
-#Tssd = #sparse_tensor.encoding<{ lvlTypes = [ "compressed", "compressed", "dense"      ] }>
-#Tsss = #sparse_tensor.encoding<{ lvlTypes = [ "compressed", "compressed", "compressed" ] }>
+#Tddd = #sparse_tensor.encoding<{ map = (d0, d1, d2) -> (d0 : dense, d1 : dense, d2 : dense) }>
+#Tdds = #sparse_tensor.encoding<{ map = (d0, d1, d2) -> (d0 : dense, d1 : dense, d2 : compressed) }>
+#Tdsd = #sparse_tensor.encoding<{ map = (d0, d1, d2) -> (d0 : dense, d1 : compressed, d2 : dense) }>
+#Tdss = #sparse_tensor.encoding<{ map = (d0, d1, d2) -> (d0 : dense, d1 : compressed, d2 : compressed) }>
+#Tsdd = #sparse_tensor.encoding<{ map = (d0, d1, d2) -> (d0 : compressed, d1 : dense, d2 : dense) }>
+#Tsds = #sparse_tensor.encoding<{ map = (d0, d1, d2) -> (d0 : compressed, d1 : dense, d2 : compressed) }>
+#Tssd = #sparse_tensor.encoding<{ map = (d0, d1, d2) -> (d0 : compressed, d1 : compressed, d2 : dense) }>
+#Tsss = #sparse_tensor.encoding<{ map = (d0, d1, d2) -> (d0 : compressed, d1 : compressed, d2 : compressed) }>
 
 #trait3 = {
   indexing_maps = [
diff --git a/mlir/test/Dialect/SparseTensor/sparse_broadcast.mlir b/mlir/test/Dialect/SparseTensor/sparse_broadcast.mlir
index 3af4614dfe93e6a..ae5b941259f6515 100644
--- a/mlir/test/Dialect/SparseTensor/sparse_broadcast.mlir
+++ b/mlir/test/Dialect/SparseTensor/sparse_broadcast.mlir
@@ -1,7 +1,7 @@
 // RUN: mlir-opt %s --sparsification --canonicalize --cse | FileCheck %s
 
 #DCSR = #sparse_tensor.encoding<{ map = (d0, d1) -> (d0 : compressed, d1 : compressed) }>
-#SparseTensor = #sparse_tensor.encoding<{ lvlTypes = [ "compressed", "compressed", "compressed" ] }>
+#SparseTensor = #sparse_tensor.encoding<{ map = (d0, d1, d2) -> (d0 : compressed, d1 : compressed, d2 : compressed) }>
 
 #trait = {
   indexing_maps = [
diff --git a/mlir/test/Dialect/SparseTensor/sparse_foreach.mlir b/mlir/test/Dialect/SparseTensor/sparse_foreach.mlir
index baa30c9457d2dce..822cfb0148f249a 100644
--- a/mlir/test/Dialect/SparseTensor/sparse_foreach.mlir
+++ b/mlir/test/Dialect/SparseTensor/sparse_foreach.mlir
@@ -141,7 +141,7 @@ func.func @foreach_print_slice(%A: tensor<4x4xf64, #CSR_SLICE>) {
 }
 
 #BCOO = #sparse_tensor.encoding<{
-  lvlTypes = [ "dense", "compressed_hi_nu", "singleton" ],
+  map = (d0, d1, d2) -> (d0 : dense, d1 : compressed(nonunique, high), d2 : singleton)
 }>
 
 // CHECK-LABEL:   func.func @foreach_bcoo(
diff --git a/mlir/test/Dialect/SparseTensor/sparse_nd.mlir b/mlir/test/Dialect/SparseTensor/sparse_nd.mlir
index 742d42be3f8c584..d05cebd55ce3128 100644
--- a/mlir/test/Dialect/SparseTensor/sparse_nd.mlir
+++ b/mlir/test/Dialect/SparseTensor/sparse_nd.mlir
@@ -5,8 +5,10 @@
 // but an acyclic iteration graph using sparse constraints only.
 
 #SparseTensor = #sparse_tensor.encoding<{
-  lvlTypes = [ "dense", "dense", "dense", "compressed",
-                   "compressed", "dense", "dense", "dense" ]
+  map = (d0, d1, d2, d3,
+         d4, d5, d6, d7) -> (d0 : dense, d1 : dense, d2 : dense,
+                             d3 : compressed, d4 : compressed, d5 : dense,
+                             d6 : dense, d7 : dense)
 }>
 
 #trait_mul = {
diff --git a/mlir/test/Dialect/SparseTensor/sparse_out.mlir b/mlir/test/Dialect/SparseTensor/sparse_out.mlir
index 128c290e966b92d..97d8da213423d9e 100644
--- a/mlir/test/Dialect/SparseTensor/sparse_out.mlir
+++ b/mlir/test/Dialect/SparseTensor/sparse_out.mlir
@@ -9,7 +9,7 @@
 }>
 
 #SparseTensor = #sparse_tensor.encoding<{
-  lvlTypes = [ "compressed", "compressed", "compressed" ]
+  map = (d0, d1, d2) -> (d0 : compressed, d1 : compressed, d2 : compressed)
 }>
 
 #trait_scale_inpl = {
diff --git a/mlir/test/Dialect/SparseTensor/sparse_pack.mlir b/mlir/test/Dialect/SparseTensor/sparse_pack.mlir
index 45906f498356700..8caf75636277762 100644
--- a/mlir/test/Dialect/SparseTensor/sparse_pack.mlir
+++ b/mlir/test/Dialect/SparseTensor/sparse_pack.mlir
@@ -1,7 +1,7 @@
 // RUN: mlir-opt %s --canonicalize --post-sparsification-rewrite="enable-runtime-library=false" --sparse-tensor-codegen -cse --canonicalize | FileCheck %s
 
 #COO = #sparse_tensor.encoding<{
-  lvlTypes = ["compressed_nu", "singleton"],
+  map = (d0, d1) -> (d0 : compressed(nonunique), d1 : singleton),
   crdWidth=32
 }>
 
diff --git a/mlir/test/Dialect/SparseTensor/sparse_perm.mlir b/mlir/test/Dialect/SparseTensor/sparse_perm.mlir
index 438f2c496d89157..26712f2c7b001b3 100644
--- a/mlir/test/Dialect/SparseTensor/sparse_perm.mlir
+++ b/mlir/test/Dialect/SparseTensor/sparse_perm.mlir
@@ -2,8 +2,7 @@
 // RUN: mlir-opt %s -sparsification | FileCheck %s
 
 #X = #sparse_tensor.encoding<{
- lvlTypes = [ "dense", "dense", "dense" ],
- dimToLvl = affine_map<(i,j,k) -> (k,i,j)>
+  map = (d0, d1, d2) -> (d2 : dense, d0 : dense, d1 : dense)
 }>
 
 #trait = {
diff --git a/mlir/test/Dialect/SparseTensor/sparse_perm_lower.mlir b/mlir/test/Dialect/SparseTensor/sparse_perm_lower.mlir
index 2e3d723889cdd76..fd8aaf28697e42d 100644
--- a/mlir/test/Dialect/SparseTensor/sparse_perm_lower.mlir
+++ b/mlir/test/Dialect/SparseTensor/sparse_perm_lower.mlir
@@ -4,8 +4,7 @@
 // RUN: FileCheck %s --check-prefix=CHECK-MIR
 
 #X = #sparse_tensor.encoding<{
- lvlTypes = [ "dense", "dense", "dense" ],
- dimToLvl = affine_map<(i,j,k) -> (k,i,j)>
+  map = (d0, d1, d2) -> (d2 : dense, d0 : dense, d1 : dense)
 }>
 
 #trait = {
diff --git a/mlir/test/Dialect/SparseTensor/sparse_reshape_dot.mlir b/mlir/test/Dialect/SparseTensor/sparse_reshape_dot.mlir
index 43cf36fc0ac72aa..395a915f32806e8 100644
--- a/mlir/test/Dialect/SparseTensor/sparse_reshape_dot.mlir
+++ b/mlir/test/Dialect/SparseTensor/sparse_reshape_dot.mlir
@@ -4,8 +4,8 @@
 //
 // RUN: mlir-opt %s --linalg-generalize-named-ops --sparsification --cse --canonicalize | FileCheck %s
 
-#COO_2D = #sparse_tensor.encoding<{ lvlTypes = [ "compressed_nu", "singleton" ], posWidth = 32, crdWidth = 32 }>
-#COO_3D = #sparse_tensor.encoding<{ lvlTypes = [ "compressed_nu", "singleton_nu", "singleton" ], posWidth = 32, crdWidth = 32 }>
+#COO_2D = #sparse_tensor.encoding<{ map = (d0, d1) -> (d0 : compressed(nonunique), d1 : singleton), posWidth = 32, crdWidth = 32 }>
+#COO_3D = #sparse_tensor.enco...
[truncated]

``````````



https://github.com/llvm/llvm-project/pull/66543


More information about the Mlir-commits mailing list