[Mlir-commits] [mlir] [mlir][ArmSME] Add rudimentary support for tile spills to the stack (PR #76086)

Cullen Rhodes llvmlistbot at llvm.org
Thu Dec 21 07:02:11 PST 2023


================
@@ -0,0 +1,153 @@
+// RUN: mlir-opt %s -allocate-arm-sme-tiles -split-input-file -verify-diagnostics | \
+// RUN: FileCheck %s  --check-prefix=AFTER-TILE-ALLOC
+// RUN: mlir-opt %s -allocate-arm-sme-tiles -convert-arm-sme-to-llvm -canonicalize -cse \
+// RUN:   -split-input-file -verify-diagnostics | \
+// RUN: FileCheck %s  --check-prefix=AFTER-LLVM-LOWERING
+
+/// Checks that tile spills/reloads are inserted around in-memory tiles (i.e.
+/// tiles that were not assigned a physical SME tile).
+///
+/// The current spills are very naive and conservative: entire tiles are
+/// spilled/reloaded around every ArmSME op that uses an in-memory tile.
+///
+/// The general pattern is:
+///
+/// During tile allocation, if no physical tile ID is available, an op is
+/// assigned an in-memory tile ID (i.e. a tile ID >= 16).
+///
+/// Example:
+///
+///   arm_sme.zero : vector<[8]x[8]xi16>
+///
+/// Becomes:
+///
+///   arm_sme.zero { tile_id = 16 } : vector<[8]x[8]xi16>
+///
+/// Everything works as normal until the final lowering to LLVM, where spills
+/// and reloads are inserted around uses of in-memory tiles.
+///
+/// So the above example becomes:
+///
+/// // Placed at the top of the function:
+/// %tileAlloca = memref.alloca(%svl_h, %svl_h) : memref<?x?xi16>
+///
+/// Then around the op:
+///
+/// // Swap contents of %tileAlloca and tile 0 (slice by slice):
+/// scf.for %sliceIdx ... {
+///   // Read the current slice of tile 0:
+///   %currentSlice = arm_sme.intr.read.horiz {tile_id = 0}
+///   // Load from %tileAlloca into the corresponding slice of tile 0:
+///   arm_sme.intr.ld1h.horiz %tileAlloca[%sliceIdx, %c0] {tile_id = 0}
+///   // Store the previous contents of that slice of tile 0 to %tileAlloca:
+///   vector.store %currentSlice, %tileAlloca[%sliceIdx, %c0]
----------------
c-rhodes wrote:

just to clarify, there's nothing in `%tileAlloca` at the point of the load here?

https://github.com/llvm/llvm-project/pull/76086
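
For context, the overall structure described in the quoted comments looks roughly
like the sketch below. Operand details are elided as in the quote, and the
restore loop after the op is an assumption based on the "around ArmSME ops"
wording; it is not shown in the quoted hunk.

    // Placed at the top of the function (one alloca per in-memory tile):
    %tileAlloca = memref.alloca(%svl_h, %svl_h) : memref<?x?xi16>

    // Before the op: swap contents of %tileAlloca and tile 0.
    scf.for %sliceIdx ... {
      %currentSlice = arm_sme.intr.read.horiz {tile_id = 0}
      arm_sme.intr.ld1h.horiz %tileAlloca[%sliceIdx, %c0] {tile_id = 0}
      vector.store %currentSlice, %tileAlloca[%sliceIdx, %c0]
    }

    // The op itself then runs on physical tile 0 (here, the lowered arm_sme.zero).

    // After the op: presumably the same swap loop again, writing the op's
    // result back to %tileAlloca and restoring tile 0's previous contents.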


More information about the Mlir-commits mailing list