[Mlir-commits] [mlir] [mlir][Linalg] use linalg.reduce to simplify the mergeReductions in partialReductionInterface (PR #94579)
zhicong zhong
llvmlistbot at llvm.org
Thu Jun 6 18:34:31 PDT 2024
zhczhong wrote:
> I just have a question from downstream use of this. If I run generalization of the `linalg.reduce` op do we get back the same `linalg.generic` generated?
Yes, linalg generalization converts the `linalg.reduce` into a `linalg.generic` of the same form as the original implementation produced. For example,
```mlir
func.func @test(%input: tensor<16x32x64xf32>,
                %init: tensor<16x64xf32>) -> tensor<16x64xf32> {
  %reduce = linalg.reduce
      ins(%input : tensor<16x32x64xf32>)
      outs(%init : tensor<16x64xf32>)
      dimensions = [1]
      (%in: f32, %out: f32) {
        %0 = arith.addf %out, %in : f32
        linalg.yield %0 : f32
      }
  func.return %reduce : tensor<16x64xf32>
}

module attributes {transform.with_named_sequence} {
  transform.named_sequence @__transform_main(%arg1: !transform.any_op {transform.readonly}) {
    %0 = transform.structured.match interface{LinalgOp} in %arg1 : (!transform.any_op) -> !transform.any_op
    %1 = transform.structured.generalize %0 : (!transform.any_op) -> !transform.any_op
    transform.yield
  }
}
```
will be converted to
```mlir
#map1 = affine_map<(d0, d1, d2) -> (d0, d1, d2)>
#map2 = affine_map<(d0, d1, d2) -> (d0, d2)>
func.func @test(%arg0: tensor<16x32x64xf32>, %arg1: tensor<16x64xf32>) -> tensor<16x64xf32> {
  %0 = linalg.generic {indexing_maps = [#map1, #map2], iterator_types = ["parallel", "reduction", "parallel"]} ins(%arg0 : tensor<16x32x64xf32>) outs(%arg1 : tensor<16x64xf32>) {
  ^bb0(%in: f32, %out: f32):
    %1 = arith.addf %out, %in : f32
    linalg.yield %1 : f32
  } -> tensor<16x64xf32>
  return %0 : tensor<16x64xf32>
}
```
https://github.com/llvm/llvm-project/pull/94579