[Mlir-commits] [mlir] [mlir][tensor] Add a tensor.concat operation (PR #72779)
Quinn Dawkins
llvmlistbot at llvm.org
Mon Nov 20 09:49:42 PST 2023
================
@@ -121,6 +121,66 @@ def Tensor_CastOp : Tensor_Op<"cast", [
let hasCanonicalizer = 1;
}
+//===----------------------------------------------------------------------===//
+// ConcatOp
+//===----------------------------------------------------------------------===//
+
+def Tensor_ConcatOp : Tensor_Op<"concat",
+ [Pure,
+ DeclareOpInterfaceMethods<OpAsmOpInterface, ["getAsmResultNames"]>,
+ DeclareOpInterfaceMethods<ReifyRankedShapedTypeOpInterface>]> {
+ let summary = "tensor concatenation operation";
+ let description = [{
+ The "concat" operation constructs a tensor out of a variadic list of input
+ tensors, concatenated along a static dimension. All inputs and the result
+ type must share the same rank.
+
+ `dim` specifies the dimension along which to concatenate. The size of the
+ concatenated dimension in the result must be equal to the sum of the sizes
+ of the inputs along that dimension. All other dimensions in both the inputs
+ and result must be the same size.
+
+ Example:
+
+ ```mlir
+    %concat = tensor.concat dim(0) %0, %1, %2 :
----------------
qedawkins wrote:
I was assuming a `memref.concat` would take the destination and implicitly just be that sequence of subviews + copies. Maybe that is just a convenience op and not really worth adding, or `linalg.concat` would be preferred. The reason I'm thinking about wanting DPS (destination-passing style) now is the case where we want to tile and then bufferize: without a destination, I would expect concat to turn into an alloc plus a sequence of copies; with a destination, we're concatenating into a slice.
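For context, the quoted hunk cuts off mid-example. A minimal sketch of a complete `tensor.concat` use, with value names and shapes chosen here purely for illustration (not taken from the patch), could look like:

```mlir
// Concatenate three tensors along dimension 0: 3 + 3 + 1 = 7 rows.
// %a, %b, %c and the shapes are illustrative placeholders.
%concat = tensor.concat dim(0) %a, %b, %c :
    (tensor<3x6xf32>, tensor<3x6xf32>, tensor<1x6xf32>) -> tensor<7x6xf32>
```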
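To illustrate the trade-off being discussed, here is a hedged sketch of what the destination-less form could decompose into, assuming a lowering through `tensor.empty` and `tensor.insert_slice` (names and shapes are illustrative, not part of the patch):

```mlir
// Without a destination: materialize a fresh tensor (which bufferizes to an
// allocation) and copy each input into the appropriate slice of it.
%init = tensor.empty() : tensor<7x6xf32>
%0 = tensor.insert_slice %a into %init[0, 0] [3, 6] [1, 1]
    : tensor<3x6xf32> into tensor<7x6xf32>
%1 = tensor.insert_slice %b into %0[3, 0] [3, 6] [1, 1]
    : tensor<3x6xf32> into tensor<7x6xf32>
%2 = tensor.insert_slice %c into %1[6, 0] [1, 6] [1, 1]
    : tensor<1x6xf32> into tensor<7x6xf32>
```

With a DPS form, the `tensor.empty` above would be replaced by the supplied destination, so after bufferization the copies would target a slice of an existing buffer rather than a new allocation.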
https://github.com/llvm/llvm-project/pull/72779