[Mlir-commits] [mlir] [mlir][Linalg]: Optimize any structured linalg operation in transform::PromoteOp to avoid unnecessary copies (PR #69876)
Aviad Cohen
llvmlistbot at llvm.org
Mon Oct 30 22:47:02 PDT 2023
================
@@ -177,10 +177,8 @@ LinalgOpInstancePromotionOptions::LinalgOpInstancePromotionOptions(
Operation *op = opOperand.get().getDefiningOp();
if (auto sv = dyn_cast_or_null<memref::SubViewOp>(op)) {
subViews[operandNumber] = sv;
- // In case of linalg generic, copy in only if subview is used in linalg
- // payload.
- if (!isa<linalg::GenericOp>(linalgOp) ||
- linalgOp.payloadUsesValueFromOperand(&opOperand))
+ // Copy in only if subview is being used by the linalg operation.
+ if (linalgOp.isDpsInput(&opOperand) || !linalgOp.isInitTensor(&opOperand))
----------------
AviadCo wrote:
@nicolasvasilache thanks for the response.
Unfortunately, we can't rely on `payloadUsesValueFromOperand` for inputs.
In the general case (as with `CopyOp`), the input doesn't appear in the payload, but we still need to promote it.
I believe that, as it stands now, we must promote every input and all outputs that are considered `InitTensors`.
For an unused input of a `GenericOp`, the user may first apply `populateEraseUnnecessaryInputsPatterns` (maybe worth a dedicated transform? I can add one if you think it is useful). What do you think?
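For illustration, a minimal hypothetical IR sketch (not from the PR) of the unused-input `GenericOp` case: the block argument for the second input is never read in the payload, so `payloadUsesValueFromOperand` returns false for that operand, and `populateEraseUnnecessaryInputsPatterns` can drop it before promotion runs:

```mlir
#map = affine_map<(d0) -> (d0)>
// Hypothetical example: %b1 (the block argument for %unused) has no uses
// in the payload, so the operand can be erased by
// populateEraseUnnecessaryInputsPatterns ahead of promotion.
linalg.generic {indexing_maps = [#map, #map, #map],
                iterator_types = ["parallel"]}
    ins(%in, %unused : memref<8xf32>, memref<8xf32>)
    outs(%out : memref<8xf32>) {
^bb0(%b0: f32, %b1: f32, %b2: f32):
  linalg.yield %b0 : f32
}
```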
https://github.com/llvm/llvm-project/pull/69876