[Mlir-commits] [mlir] [mlir][tensor] Add a tensor.concat operation (PR #72779)

Quinn Dawkins llvmlistbot at llvm.org
Mon Nov 20 06:08:01 PST 2023


================
@@ -121,6 +121,66 @@ def Tensor_CastOp : Tensor_Op<"cast", [
   let hasCanonicalizer = 1;
 }
 
+//===----------------------------------------------------------------------===//
+// ConcatOp
+//===----------------------------------------------------------------------===//
+
+def Tensor_ConcatOp : Tensor_Op<"concat",
+    [Pure,
+     DeclareOpInterfaceMethods<OpAsmOpInterface, ["getAsmResultNames"]>,
+     DeclareOpInterfaceMethods<ReifyRankedShapedTypeOpInterface>]> {
+  let summary = "tensor concatenation operation";
+  let description = [{
+    The "concat" operation constructs a tensor out of a variadic list of input
+    tensors, concatenated along a static dimension. All inputs and the result
+    type must share the same rank.
+
+    `dim` specifies the dimension along which to concatenate. The size of the
+    concatenated dimension in the result must be equal to the sum of the sizes
+    of the inputs along that dimension. All other dimensions in both the inputs
+    and result must be the same size.
+
+    Example:
+
+    ```mlir
+    %concat = tensor.concat dim(0) %0, %1, %2 :
----------------
qedawkins wrote:

I was expecting the same as `tensor.pad`, where there are ways to rewrite it as an equivalent set of ops in DPS form (either a single linalg op or a group of linalg ops). Bufferization would then either require the DPS rewrite in advance (although that linalg op is quite ugly), or allocate a buffer based on the result shape and copy the inputs into it one by one.
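The allocate-and-copy lowering described above can be sketched with existing tensor-dialect ops (hypothetical SSA names and static shapes, purely for illustration): materialize the result with `tensor.empty`, then insert each input at its running offset along the concatenated dimension.

```mlir
// Sketch: concat of %a : tensor<2x4xf32> and %b : tensor<3x4xf32> along dim 0.
// Allocate the destination, then insert each input at its offset.
%empty = tensor.empty() : tensor<5x4xf32>
%s0 = tensor.insert_slice %a into %empty[0, 0] [2, 4] [1, 1]
    : tensor<2x4xf32> into tensor<5x4xf32>
%s1 = tensor.insert_slice %b into %s0[2, 0] [3, 4] [1, 1]
    : tensor<3x4xf32> into tensor<5x4xf32>
```

Since `tensor.insert_slice` is already destination-passing style, a chain like this bufferizes without the ugly single linalg op.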

I'm not married to leaving it as non-DPS. It's just more consistent with other input dialects and might make higher-level transformations easier.

https://github.com/llvm/llvm-project/pull/72779
