[llvm] [NVPTX] Add TMA bulk tensor reduction intrinsics (PR #116854)

Durgadoss R via llvm-commits llvm-commits at lists.llvm.org
Tue Nov 26 11:48:50 PST 2024


================
@@ -596,11 +598,37 @@ multiclass CP_ASYNC_BULK_TENSOR_S2G_INTR<int dim, bit shared32, string mode> {
            Requires<[hasPTX<80>, hasSM<90>]>;
 }
 
+def TMAReductionFlags : Operand<i32> {
+  let PrintMethod = "printTmaReductionMode";
+}
+
+// TMA Copy from Shared to Global memory with Reduction
+multiclass CP_ASYNC_BULK_TENSOR_REDUCE_INTR<int dim, bit shared32, string mode> {
+  defvar dims_dag = !dag(ins, !listsplat(Int32Regs, dim), !foreach(i, !range(dim), "d" # i));
+  defvar dims_str = !interleave(!foreach(i, !range(dim), "$d" # i), ", ");
+  defvar asm_str = " [$tmap, {{" # dims_str # "}}], [$src]";
+  defvar rc = !if(shared32, Int32Regs, Int64Regs);
+
+  defvar prefix = "cp.reduce.async.bulk.tensor" # "." # dim # "d" # ".global.shared::cta";
+  defvar suffix = "." # mode # ".bulk_group";
+
+  def "": NVPTXInst<(outs),
----------------
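The quoted hunk builds the instruction's asm string piece by piece with TableGen string operators (`!interleave`, `!foreach`, `#` concatenation). As a rough illustration, the same construction can be sketched in Python; note the hunk is truncated before the `NVPTXInst` body, so joining the pieces as `prefix # suffix # asm_str` is an assumption, and `tma_reduce_asm` is a hypothetical helper name, not part of the patch.

```python
def tma_reduce_asm(dim: int, mode: str) -> str:
    """Sketch of the asm-string construction in CP_ASYNC_BULK_TENSOR_REDUCE_INTR.

    Variable names (dims_str, asm_str, prefix, suffix) mirror the defvars
    in the quoted TableGen hunk.
    """
    # dims_str: "$d0, $d1, ..." for the tensor coordinate operands.
    dims_str = ", ".join(f"$d{i}" for i in range(dim))
    # In TableGen asm strings, "{{" escapes to a literal "{" in the output.
    asm_str = " [$tmap, {" + dims_str + "}], [$src]"
    prefix = f"cp.reduce.async.bulk.tensor.{dim}d.global.shared::cta"
    suffix = f".{mode}.bulk_group"
    # Assumed final ordering; the diff cuts off before this point.
    return prefix + suffix + asm_str

print(tma_reduce_asm(2, "add"))
# cp.reduce.async.bulk.tensor.2d.global.shared::cta.add.bulk_group [$tmap, {$d0, $d1}], [$src]
```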
durga4github wrote:

Sure, I fixed it in the latest revision, but only for the set of intrinsics that this PR adds.
I will send a separate NFC PR to make the same change for the other TMA intrinsics.

And thanks for the approval, Artem!

I will merge once the build results are clean.
 

https://github.com/llvm/llvm-project/pull/116854
