[Mlir-commits] [mlir] Fixes in 'tosa.reshape' lowering and folder (PR #85798)
Rafael Ubal
llvmlistbot at llvm.org
Fri Mar 22 09:42:02 PDT 2024
================
@@ -19,24 +19,77 @@
#include "mlir/IR/PatternMatch.h"
#include "mlir/Transforms/DialectConversion.h"
+#include <numeric>
+
using namespace mlir;
using namespace tosa;
-static bool findIntermediateShape(ArrayRef<int64_t> lhsShape,
-                                  ArrayRef<int64_t> rhsShape,
-                                  SmallVector<int64_t> &intermediateShape,
-                                  bool isDynamic) {
-  if (isDynamic) {
-    // TODO (natashaknk): Make dynamic intermediate shape not always be rank-1
-    intermediateShape = {ShapedType::kDynamic};
-    return true;
-  }
+namespace {
-  if (lhsShape.empty() || rhsShape.empty()) {
-    intermediateShape = {};
-    return true;
-  }
+// Infer the result type of 'tensor.expand_shape' in the collapse-expand
+// pair emitted for a 'tosa.reshape' op.
+TensorType inferReshapedType(TypedValue<TensorType> input,
+                             ArrayRef<int64_t> newShape) {
+  // Check if the input is static, and if so, get its total size
+  bool inputIsStatic = input.getType().hasStaticShape();
+  int64_t totalSize = inputIsStatic ? input.getType().getNumElements() : -1;
+
+  // Compute result shape
+  bool resultIsStatic = true;
+  auto resultShape = llvm::map_to_vector(newShape, [&](int64_t size) -> int64_t {
+    // If this is not a placeholder, do not change it
+    if (size >= 0)
+      return size;
+
+    // If we do not know the total size of the tensor, keep this dimension
+    // dynamic in the result shape.
+    if (!inputIsStatic) {
+      resultIsStatic = false;
+      return ShapedType::kDynamic;
+    }
+
+    // Calculate the product of all elements in 'newShape' except for the -1
+    // placeholder, which we discard by negating the result.
+    int64_t totalSizeNoPlaceholder = -std::accumulate(
+        newShape.begin(), newShape.end(), 1, std::multiplies());
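The negation trick above works because the product over `newShape` contains exactly one -1 factor. A minimal standalone sketch of the same arithmetic (assuming a single -1 placeholder and a statically known total size; `resolvePlaceholder` is a hypothetical helper written for illustration, not part of the patch):

```cpp
#include <cassert>
#include <cstdint>
#include <functional>
#include <numeric>
#include <vector>

// Sketch only: resolve a single -1 placeholder in a target shape, given the
// statically known total element count of the input tensor.
int64_t resolvePlaceholder(const std::vector<int64_t> &newShape,
                           int64_t totalSize) {
  // The product over 'newShape' includes the single -1 placeholder, so it is
  // minus the product of the known dimensions; negating recovers that product.
  int64_t knownProduct =
      -std::accumulate(newShape.begin(), newShape.end(), int64_t{1},
                       std::multiplies<int64_t>());
  // The placeholder dimension is the remaining factor of the total size.
  return totalSize / knownProduct;
}

int main() {
  // A 24-element tensor reshaped to {2, -1, 3} resolves the placeholder to 4.
  assert(resolvePlaceholder({2, -1, 3}, 24) == 4);
}
```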
----------------
rafaelubalmw wrote:
This is interesting. The TOSA standard doesn't even mention the possibility of using the -1 placeholder for a target dimension.
https://www.mlplatform.org/tosa/tosa_spec.html#_reshape
I suspect this behavior is inherited from NumPy or TensorFlow, both of which impose the restriction that at most one value in the target shape can be -1.
https://numpy.org/doc/stable/reference/generated/numpy.reshape.html
https://www.tensorflow.org/api_docs/python/tf/reshape
The only way a `tosa.reshape` op could make sense with more than one occurrence of -1 in the `new_shape` attribute would be if the corresponding target dimension sizes were obtained from a statically shaped result type. But that would be a very strange use of `tosa.reshape`.
I think the most reasonable option is to introduce this additional check in the op verifier. I have updated this PR to include that check in `tosa::ReshapeOp::verify()`.
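A minimal sketch of what that verifier check could look like (assuming `getNewShape()` is the generated accessor for the `new_shape` attribute; the diagnostic wording and the surrounding checks in the actual PR may differ):

```cpp
// Sketch of the verifier check; the PR's actual verify() also performs
// other shape and element-count checks, elided here.
mlir::LogicalResult tosa::ReshapeOp::verify() {
  llvm::ArrayRef<int64_t> newShape = getNewShape();
  // Reject more than one -1 placeholder in the target shape.
  if (llvm::count(newShape, -1) > 1)
    return emitOpError() << "expected at most one target dimension to be -1";
  // ... remaining checks ...
  return mlir::success();
}
```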
https://github.com/llvm/llvm-project/pull/85798
More information about the Mlir-commits mailing list