[Mlir-commits] [mlir] [mlir][tensor] Add a tensor.concat operation (PR #72779)
llvmlistbot at llvm.org
Mon Nov 20 09:11:05 PST 2023
================
@@ -121,6 +121,66 @@ def Tensor_CastOp : Tensor_Op<"cast", [
let hasCanonicalizer = 1;
}
+//===----------------------------------------------------------------------===//
+// ConcatOp
+//===----------------------------------------------------------------------===//
+
+def Tensor_ConcatOp : Tensor_Op<"concat",
+ [Pure,
+ DeclareOpInterfaceMethods<OpAsmOpInterface, ["getAsmResultNames"]>,
+ DeclareOpInterfaceMethods<ReifyRankedShapedTypeOpInterface>]> {
+ let summary = "tensor concatenation operation";
+ let description = [{
+ The "concat" operation constructs a tensor out of a variadic list of input
+ tensors, concatenated along a static dimension. All inputs and the result
+ type must share the same rank.
+
+ `dim` specifies the dimension along which to concatenate. The size of the
+ concatenated dimension in the result must be equal to the sum of the sizes
+ of the inputs along that dimension. All other dimensions in both the inputs
+ and result must be the same size.
+
+ Example:
+
+ ```mlir
+ %0 = tensor.concat dim(0) %0, %1, %2 :
----------------
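For context, the quoted example is truncated at the review anchor; a complete form might look like the following (the shapes and element type here are illustrative assumptions, not taken from the PR):

```mlir
// Concatenate two tensors along dimension 0: the result size along dim 0
// is the sum of the input sizes (3 + 1 = 4); all other dims must match.
%concat = tensor.concat dim(0) %a, %b
    : (tensor<3x6xf32>, tensor<1x6xf32>) -> tensor<4x6xf32>
```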
MaheshRavishankar wrote:
I don't think `memref.concat` makes sense. The norm is that `memref` operations don't have copy semantics, so `memref.concat` might not be expressible that way (without introducing some pointer arithmetic). I think at the `memref` level, `memref.subview` + `memref.copy` are a better representation of concat, with something else converting the `tensor.concat` into as much of an in-place update as possible. Given that, I think not having DPS (destination-passing style) for `concat` might be fine...
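The subview-plus-copy representation the comment suggests might look roughly like this at the `memref` level (a hedged sketch with assumed static shapes, mirroring a dim-0 concat of a 3x6 and a 1x6 memref):

```mlir
// Allocate the destination buffer for the concatenated result.
%alloc = memref.alloc() : memref<4x6xf32>

// View of rows [0, 3) of the destination; copy the first input into it.
%dst0 = memref.subview %alloc[0, 0] [3, 6] [1, 1]
    : memref<4x6xf32> to memref<3x6xf32, strided<[6, 1]>>
memref.copy %a, %dst0
    : memref<3x6xf32> to memref<3x6xf32, strided<[6, 1]>>

// View of row [3, 4) of the destination; copy the second input into it.
%dst1 = memref.subview %alloc[3, 0] [1, 6] [1, 1]
    : memref<4x6xf32> to memref<1x6xf32, strided<[6, 1], offset: 18>>
memref.copy %b, %dst1
    : memref<1x6xf32> to memref<1x6xf32, strided<[6, 1], offset: 18>>
```

This keeps each op copy-free or an explicit copy, which is the `memref`-level norm the comment describes.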
https://github.com/llvm/llvm-project/pull/72779