[Mlir-commits] [mlir] [mlir][tensor] Fix bug when having multiple results (PR #93374)

Prashant Kumar llvmlistbot at llvm.org
Sat May 25 01:19:19 PDT 2024


https://github.com/pashu123 created https://github.com/llvm/llvm-project/pull/93374

For ops that have results beyond those tied to dpsInits, the `FoldTensorCastProducerOp` pattern fails.
E.g.:
```
%13:2 = iree_codegen.ukernel.generic "iree_uk_unpack"
    ins(%extracted_slice : tensor<?x1x16x16xf32>)
    outs(%11 : tensor<?x16xf32>) ...
``` 
The op above has results beyond its dpsInit and hence fails to fold. This PR assumes that the op's results are ordered with the dpsInit-tied results first, followed by the non-dpsInit results.

>From aeadd8437ad09688fe03c2a03411b5c7d0a54f94 Mon Sep 17 00:00:00 2001
From: Prashant Kumar <pk5561 at gmail.com>
Date: Sat, 25 May 2024 13:45:07 +0530
Subject: [PATCH] [mlir][tensor] Fix bug when having multiple result

For ops that have results beyond those tied to dpsInits, the
FoldTensorCastProducerOp pattern fails.
E.g.:
```
%13:2 = iree_codegen.ukernel.generic "iree_uk_unpack"
    ins(%extracted_slice : tensor<?x1x16x16xf32>)
    outs(%11 : tensor<?x16xf32>) ...
```
The op above has results beyond its dpsInit and hence fails to fold.
This PR assumes that the op's results are ordered with the dpsInit-tied
results first, followed by the non-dpsInit results.
---
 mlir/lib/Dialect/Tensor/IR/TensorOps.cpp | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/mlir/lib/Dialect/Tensor/IR/TensorOps.cpp b/mlir/lib/Dialect/Tensor/IR/TensorOps.cpp
index 8545c7b9af8f7..986008b9d379d 100644
--- a/mlir/lib/Dialect/Tensor/IR/TensorOps.cpp
+++ b/mlir/lib/Dialect/Tensor/IR/TensorOps.cpp
@@ -4531,17 +4531,17 @@ struct FoldTensorCastProducerOp
     if (!hasTensorCastOperand)
       return failure();
 
-    SmallVector<Type, 4> newResultTypes;
-    newResultTypes.reserve(op->getNumResults());
+    SmallVector<Type, 4> newResultTypes(op->getResultTypes());
     SmallVector<Value, 4> newOperands;
     newOperands.reserve(op->getNumOperands());
+    int64_t dpsInitIdx = 0;
     for (OpOperand &opOperand : op->getOpOperands()) {
       auto tensorCastOp = opOperand.get().getDefiningOp<tensor::CastOp>();
       bool fold = canFoldIntoConsumerOp(tensorCastOp);
       newOperands.push_back(fold ? tensorCastOp.getOperand() : opOperand.get());
       if (op.isDpsInit(&opOperand) &&
           !llvm::isa<MemRefType>(newOperands.back().getType()))
-        newResultTypes.push_back(newOperands.back().getType());
+        newResultTypes[dpsInitIdx++] = newOperands.back().getType();
     }
 
     // Clone op.



More information about the Mlir-commits mailing list