[Mlir-commits] [mlir] [mlir][memref-to-spirv]: Reverse Image Load Coordinates (PR #160495)

Jack Frankland llvmlistbot at llvm.org
Mon Sep 29 09:14:53 PDT 2025


================
@@ -699,6 +699,36 @@ LoadOpPattern::matchAndRewrite(memref::LoadOp loadOp, OpAdaptor adaptor,
   return success();
 }
 
+template <typename OpAdaptor>
+static FailureOr<SmallVector<Value>>
+extractLoadCoordsForComposite(memref::LoadOp loadOp, OpAdaptor adaptor,
+                              ConversionPatternRewriter &rewriter) {
+  // At present we only support linear "tiling" as specified in Vulkan; this
+  // means that texels are assumed to be laid out in memory in row-major
+  // order. This allows us to support any memref layout that is a permutation
+  // of the dimensions. Future work will pass an optional image layout to the
+  // rewrite pattern so that we can support optimized target-specific tilings.
+  //
+  // The memref's layout determines the dimension ordering, so we need to
+  // invert the map to recover that ordering.
+  SmallVector<Value> indices = adaptor.getIndices();
+  auto map = loadOp.getMemRefType().getLayout().getAffineMap();
+  if (!map.isPermutation())
+    return rewriter.notifyMatchFailure(
+        loadOp,
+        "Cannot lower memrefs with memory layout which is not a permutation");
+
+  const unsigned dimCount = map.getNumDims();
+  SmallVector<Value, 3> coords(dimCount);
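+  // Scatter each loaded index to its position in the memref's physical
+  // (row-major) layout, as given by the layout permutation.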
+  for (unsigned dim = 0; dim < dimCount; ++dim)
+    coords[map.getDimPosition(dim)] = indices[dim];
----------------
FranklandJack wrote:

Yeah sure, sorry I probably should have added a clearer comment in the source, will do that as well. 

We are assuming here that the memory layout is "linear tiling" from the Vulkan spec: https://registry.khronos.org/vulkan/specs/latest/man/html/VkImageTiling.html
```
VK_IMAGE_TILING_LINEAR specifies linear tiling (texels are laid out in memory in row-major order, possibly with some padding on each row).
```
so the first coordinate in the `coords` vector is the width, the second the height, and the third the depth. (Generalizing this to arbitrary image layouts is possible, but would require the rewrite pattern to take something like an affine map as a constructor argument so that the coordinate mapping can be done in a target-specific way; for now we are just going to support linear layouts.)

Since the memref can have any layout that can be expressed as an affine map, we need to map the coordinates from the index space of the memref to the index space of the image. Any memref layout that is just a permutation of the dimensions is row-, column- or depth-major, all of which we can support with a row-major image; we just need to make sure we map the coordinates appropriately.

Consider the permuted memref layout `(d0, d1, d2) -> (d0, d2, d1)` and let's say we are loading at `indices = [%a, %b, %c]`. Since the fetch operation is row-major, we'd want to give it the coordinate vector `<%b, %c, %a>`, so this isn't just a case of reversing the indices. Instead, for each dimension we look up that dimension's resulting position in the permutation and scatter the loaded index to that position, which for our example gives:
`dim = 0 -> dim position = 0, indices[0] = %a so coords[0] = %a`
`dim = 1 -> dim position = 2, indices[1] = %b so coords[2] = %b`
`dim = 2 -> dim position = 1, indices[2] = %c so coords[1] = %c`
or in other words `coords = [%a, %c, %b]` which we then need to reverse to get the vector `<%b, %c, %a>`.

Basically, I think that before the reversal the `coords` vector is giving us the indices in the memref's physical (row-major) order, i.e. slowest moving on the left through fastest moving on the right. The image op expects its coordinate vector the other way round, with the fastest-moving coordinate (the width) first, so we need to reverse it.
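To make the mapping concrete, here is a tiny standalone sketch (plain C++, not the actual MLIR pattern) of the scatter-then-reverse step, with `perm` standing in for `map.getDimPosition` and strings standing in for the SSA index values; the names are purely illustrative:
```cpp
// Illustrative only: mirrors coords[map.getDimPosition(dim)] = indices[dim]
// followed by the reversal, for the layout (d0, d1, d2) -> (d0, d2, d1).
#include <algorithm>
#include <array>
#include <cstdio>
#include <string>

int main() {
  // perm[dim] plays the role of map.getDimPosition(dim).
  const std::array<unsigned, 3> perm = {0, 2, 1};
  const std::array<std::string, 3> indices = {"%a", "%b", "%c"};

  // Scatter: place each loaded index at its position in the memref's
  // physical (row-major) layout.
  std::array<std::string, 3> coords;
  for (unsigned dim = 0; dim < perm.size(); ++dim)
    coords[perm[dim]] = indices[dim];
  // coords is now [%a, %c, %b]: slowest moving to fastest moving.

  // Reverse so the fastest-moving index comes first, matching the
  // (width, height, depth) order the image fetch expects.
  std::reverse(coords.begin(), coords.end());

  std::printf("%s %s %s\n", coords[0].c_str(), coords[1].c_str(),
              coords[2].c_str()); // prints: %b %c %a
  return 0;
}
```
The reversed vector is exactly the `<%b, %c, %a>` coordinate we wanted above.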

https://github.com/llvm/llvm-project/pull/160495


More information about the Mlir-commits mailing list