[Mlir-commits] [mlir] [mlir][Linalg] Preserve encodings in static shape inference. (PR #132311)

Andrzej Warzyński llvmlistbot at llvm.org
Fri Mar 21 10:17:01 PDT 2025


================
@@ -649,6 +649,29 @@ func.func @cast_dest(%arg0: tensor<?x?x?xf32>, %arg1: tensor<1x?x?xf32>, %arg2:
 
 // -----
 
+#map = affine_map<(d0, d1) -> (d0, d1)>
+#sparse = #sparse_tensor.encoding<{ map = (d0, d1) -> (d0 : dense, d1 : compressed) }>
----------------
banach-space wrote:

You could use `linalg.generic` above as a source of inspiration:
```mlir
  %2 = linalg.generic {
    indexing_maps = [#map, #map, #map],
    iterator_types = ["parallel", "parallel", "parallel"]
  } ins(%arg0, %arg1 : tensor<?x?x?xf32>, tensor<1x?x?xf32>)
    outs(%0 : tensor<?x?x?xf32>) {
  ^bb0(%arg5: f32, %arg6: f32, %arg7: f32):
    %3 = arith.subf %arg5, %arg6 : f32
    linalg.yield %3 : f32
  } -> tensor<?x?x?xf32>
```
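
For reference, combining that pattern with the `#map` and `#sparse` attributes from the diff might look roughly like this (a hypothetical sketch, not the PR's actual test — the operand names and shapes are assumptions):

```mlir
// Hypothetical 2-D elementwise op carrying the sparse encoding from the diff.
// The encoding appears in the tensor types, so shape inference must preserve it.
%0 = linalg.generic {
  indexing_maps = [#map, #map],
  iterator_types = ["parallel", "parallel"]
} ins(%arg0 : tensor<?x?xf32, #sparse>)
  outs(%arg1 : tensor<?x?xf32, #sparse>) {
^bb0(%in: f32, %out: f32):
  linalg.yield %in : f32
} -> tensor<?x?xf32, #sparse>
```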

IMHO, we should strive to have the official docs suggest a good reference point and then use that in tests. For example, for `linalg.elementwise`, you get this:
* https://mlir.llvm.org/docs/Dialects/Linalg/#linalgelementwise-linalgelementwiseop

And, for `linalg.generic`:
* https://mlir.llvm.org/docs/Dialects/Linalg/#linalggeneric-linalggenericop

But I am bike-shedding a bit 😅 My main motivation was to follow the example immediately above yours.

Whatever you decide will be great; I will be happy and MLIR will be in a better place :)

https://github.com/llvm/llvm-project/pull/132311
