[Mlir-commits] [mlir] [mlir] Fix #93973 - linalg::ReduceOp verifier crash (PR #107005)

Clément Fournier llvmlistbot at llvm.org
Fri Sep 20 03:12:21 PDT 2024


oowekyala wrote:

Thanks for taking a look :) 

> Not sure I follow this change. Shouldn't we just combine the two tensors before the linalg.reduce with a mul and then just add reduce the result of that?

It is probably possible, but I think it would be much more involved. How would you combine the two tensors into one `16x32x64` tensor whose elements are the products, short of using a `linalg.generic`? And if we have to use a generic, then we could just fuse the reduce into it directly, but then we no longer have nice structured ops. Concretely, I understand the suggestion as something like the sketch below.
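For illustration, here is roughly the two-op sequence I understand you are suggesting: a `linalg.generic` that materializes the products, followed by an add-reduce. The `tensor<16x32x64xf32>` shapes, the reduced dimension, and the SSA names are just placeholders I picked for the sketch:

```mlir
#id = affine_map<(d0, d1, d2) -> (d0, d1, d2)>

// Step 1: materialize the elementwise products in a temporary tensor.
%prod = linalg.generic
    {indexing_maps = [#id, #id, #id],
     iterator_types = ["parallel", "parallel", "parallel"]}
    ins(%a, %b : tensor<16x32x64xf32>, tensor<16x32x64xf32>)
    outs(%prod_init : tensor<16x32x64xf32>) {
  ^bb0(%x: f32, %y: f32, %out: f32):
    %m = arith.mulf %x, %y : f32
    linalg.yield %m : f32
} -> tensor<16x32x64xf32>

// Step 2: add-reduce the products along dimension 1.
%sum = linalg.reduce
    ins(%prod : tensor<16x32x64xf32>)
    outs(%acc_init : tensor<16x64xf32>)
    dimensions = [1]
    (%in : f32, %acc : f32) {
      %s = arith.addf %acc, %in : f32
      linalg.yield %s : f32
    }
```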

For a compiler that has to generate this code, it is simpler to emit a single `reduce` op, and the generated code stays at a higher level than a generic, or a generic + reduce sequence. To me at least it is very natural to think about the `reduce` operator this way; see the sketch below.
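Under the semantics this PR proposes (as I understand them), that single op would look roughly like this, with the combiner reading one element from each input and folding both into the accumulator. Again, the shapes and the exact block-argument order are only illustrative:

```mlir
// One reduce with two inputs: the body multiplies the two input elements
// and accumulates the product, i.e. a fused mul + add-reduce.
%dot = linalg.reduce
    ins(%a, %b : tensor<16x32x64xf32>, tensor<16x32x64xf32>)
    outs(%acc_init : tensor<16x64xf32>)
    dimensions = [1]
    (%x : f32, %y : f32, %acc : f32) {
      %m = arith.mulf %x, %y : f32
      %s = arith.addf %acc, %m : f32
      linalg.yield %s : f32
    }
```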

I did a usage search for `linalg.reduce` on GH: https://github.com/search?q=linalg.reduce+language%3AMLIR&type=code

It seems nobody is using `reduce` with several operands (except for our compiler, with the semantics that this PR gives it ^^). This suggests the current behavior is not useful to anyone so far. OTOH, the updated semantics are more general and would still support the older behavior just fine, so I don't see the harm in them :)

https://github.com/llvm/llvm-project/pull/107005

