[PATCH] D136377: [mlir][MemRefToLLVM] Reuse existing subview lowering

Alex Zinenko via Phabricator via llvm-commits <llvm-commits at lists.llvm.org>
Thu Oct 27 02:25:41 PDT 2022


ftynse added a comment.

> To give a little bit of context, Nicolas realized that some code that currently lives in memref-to-llvm has to be reimplemented when doing memref-to-something-else (e.g., memref to vmvx in the IREE compiler: https://github.com/iree-org/iree/blob/main/compiler/src/iree/compiler/Dialect/VMVX/Transforms/ResolveBufferDescriptors.cpp#L21), in particular all the code that figures out the strides, sizes, and offset for view-like operations. The goal was to move the common part of this lowering into core MLIR, i.e., what we do in populateSimplifyExtractStridedMetadataOpPatterns, but that common part obviously cannot target the final dialect (LLVM or something else). We settled on affine for the intermediate step (using arith instead would have the exact same problems), hence we need an additional lowering step.

Thanks. The implementation of "extract metadata" was a bit rushed, and this explanation wasn't sufficiently clear before. This approach to factoring out the common code may be overusing the pattern idea; surely there are other ways of sharing the implementation. IMO, we could have just introduced a helper function along the lines of "emit arith dialect IR equivalent to applying the given affine map to the given operands" and called it from several patterns, instead of emitting an "affine.apply" (with all the baggage it brings) and then applying another pattern to lower it.
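
To make the suggestion concrete, here is a minimal sketch of what such a helper could look like. The name `expandToArith` and the structure are mine, not from the patch; it mirrors in spirit the expression expansion that affine-to-std already performs, and header paths are for a recent MLIR revision.

  // Hypothetical helper (not from the patch): expand a single affine
  // expression directly into arith dialect ops, so view-like lowerings can
  // share the stride/size/offset arithmetic without going through
  // affine.apply.
  #include "mlir/Dialect/Arith/IR/Arith.h"
  #include "mlir/IR/AffineExpr.h"
  #include "mlir/IR/Builders.h"

  using namespace mlir;

  static Value expandToArith(OpBuilder &b, Location loc, AffineExpr expr,
                             ValueRange dims, ValueRange symbols) {
    if (auto cst = expr.dyn_cast<AffineConstantExpr>())
      return b.create<arith::ConstantIndexOp>(loc, cst.getValue());
    if (auto dim = expr.dyn_cast<AffineDimExpr>())
      return dims[dim.getPosition()];
    if (auto sym = expr.dyn_cast<AffineSymbolExpr>())
      return symbols[sym.getPosition()];

    // Binary expressions: expand both sides, then emit the matching arith op.
    auto bin = expr.cast<AffineBinaryOpExpr>();
    Value lhs = expandToArith(b, loc, bin.getLHS(), dims, symbols);
    Value rhs = expandToArith(b, loc, bin.getRHS(), dims, symbols);
    switch (expr.getKind()) {
    case AffineExprKind::Add:
      return b.create<arith::AddIOp>(loc, lhs, rhs);
    case AffineExprKind::Mul:
      return b.create<arith::MulIOp>(loc, lhs, rhs);
    default:
      // floordiv/ceildiv/mod need the same rounding-aware expansion as the
      // existing affine-to-std lowering; omitted in this sketch.
      return Value();
    }
  }

Each view-like lowering pattern could then call this directly on the map describing the resulting strides/sizes/offset, instead of materializing an affine.apply and relying on a later pass to clean it up.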

> I didn't get that part. populateMemRefToLLVMConversionPatterns is augmented to include all the required patterns in that patch.

My bad, I hadn't noticed what exactly is modified. But you are right that it doesn't matter for the actual issue.

> I don't think that's possible in the current framework, but to be NFC(-ish) here we would need to apply the affine-to-std and arith-to-llvm patterns only to the view-like operations, not to all operations.

Yeah, applying some additional patterns only to the operations produced by specific other patterns. I'd also like such a feature to run "local cleanups" instead of calling the global canonicalizer, but we don't have this, and the driver code is already highly complex.

> It looks like I am missing only one thing: how do you connect the pipeline so that it works for all the users of memref-to-llvm? We cannot expect all of them to call expand-memref-ops before calling memref-to-llvm, can we?
> I didn't see how it works for the math dialect: I see the expand pass, but I don't see how it is connected with the math-to-llvm conversion.

The user is expected to run `expand-math` or `convert-math-to-libm` in their pass pipeline, or otherwise get rid of those operations, before calling `convert-math-to-llvm`. So I guess we can expect users to call `expand-memref-ops`; we just have to announce this requirement on the usual communication channels.
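
For what it's worth, a rough sketch (my own, using the upstream pass factories as I remember them; exact names may differ between revisions) of what that ordering looks like on the math side; the memref story would similarly run the expansion step before the memref-to-llvm conversion:

  #include "mlir/Conversion/MathToLLVM/MathToLLVM.h"
  #include "mlir/Conversion/MathToLibm/MathToLibm.h"
  #include "mlir/Pass/PassManager.h"

  // `pm` is assumed to be a module-level pass manager.
  void buildMathLoweringPipeline(mlir::PassManager &pm) {
    // First rewrite math ops that have no direct LLVM equivalent (here via
    // libm calls; an expansion pass would play the same role).
    pm.addPass(mlir::createConvertMathToLibmPass());
    // The remaining math ops can then be converted to LLVM intrinsics.
    pm.addPass(mlir::createConvertMathToLLVMPass());
  }
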
In some distant future, we could have some abstract description of the state of the IR before and after applying a group of patterns and then have some logic figure out the order of groups to reach a certain state...


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D136377/new/

https://reviews.llvm.org/D136377


