[Mlir-commits] [mlir] [mlir][vector] Hoist transfer pairs when the source is AssumeAlignmentOp (PR #144809)

Andrzej Warzyński llvmlistbot at llvm.org
Fri Jun 20 07:57:33 PDT 2025


banach-space wrote:

> What alias-analysis is needed for doing the hoisting?

(waving hands here a bit…)

* If there are **no aliases**, it's safe to analyse the underlying `memref`s in **isolation**.
* But with aliasing (e.g., via `memref.subview`), **all** "references" to a given memref need to be considered together.
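To illustrate the second point, here's a minimal sketch (made-up function and op names for the "use") of the kind of aliasing that makes per-memref analysis unsound: a write through the subview also updates the parent memref, so the two values cannot be analysed separately:

```mlir
// Hypothetical sketch: %view aliases the top-left element of %parent.
func.func @alias_example(%parent: memref<4x4xf32>, %v: vector<1xf32>) {
  %c0 = arith.constant 0 : index
  %cst = arith.constant 0.0 : f32
  %view = memref.subview %parent[0, 0][1, 1][1, 1]
      : memref<4x4xf32> to memref<1x1xf32, strided<[4, 1]>>
  // This write through %view also changes %parent[0][0] ...
  vector.transfer_write %v, %view[%c0, %c0]
      : vector<1xf32>, memref<1x1xf32, strided<[4, 1]>>
  // ... so this read of %parent must not be analysed in isolation.
  %r = vector.transfer_read %parent[%c0, %c0], %cst
      : memref<4x4xf32>, vector<1xf32>
  "some_use"(%r) : (vector<1xf32>) -> ()
  return
}
```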

---

I've been experimenting with some simpler repros (the existing ones in "hoisting.mlir" are a bit too dense), and honestly, I'm seeing some confusing results. I’ve added a repro below with comments inline.

Even with the original code, I'm puzzled: the reads/writes of `%mem` are hoisted, but the reads/writes of `%sv` are not. I would have expected neither pair to be hoisted:

```mlir
func.func @example(
    %mem: memref<?x?xf32>,
    %lb : index, %ub : index, %step: index, %in: vector<1xf32>) {
  %c0 = arith.constant 0 : index
  %cst = arith.constant 0.0 : f32
  %sv = memref.subview %mem[0, 0][1, 1][1, 1] : memref<?x?xf32> to memref<1x1xf32, strided<[?, 1]>>


  scf.for %i = %lb to %ub step %step {
      %r0 = vector.transfer_read %mem[%c0, %c0], %cst: memref<?x?xf32>, vector<1xf32>
      %r1 = vector.transfer_read %sv[%c0, %c0], %cst: memref<1x1xf32, strided<[?, 1]>>, vector<1xf32>

      %u0 = "some_use"(%r0) : (vector<1xf32>) -> vector<1xf32>
      %u1 = "some_use"(%r1) : (vector<1xf32>) -> vector<1xf32>

      vector.transfer_write %u0, %mem[%c0, %c0] : vector<1xf32>, memref<?x?xf32>
      vector.transfer_write %u1, %sv[%c0, %c0] : vector<1xf32>, memref<1x1xf32, strided<[?, 1]>>

      // NOTE: Using %in - some input argument. 
      // 1. Hoisting still happens with this uncommented
      // vector.transfer_write %in, %sv[%c0, %c0] : vector<1xf32>, memref<1x1xf32, strided<[?, 1]>>
      // 2. Hoisting does not happen with this uncommented
      // vector.transfer_write %in, %mem[%c0, %c0] : vector<1xf32>, memref<?x?xf32>
  }
  "unrelated_use"(%mem) : (memref<?x?xf32>) -> ()
  return
}

module attributes {transform.with_named_sequence} {
  transform.named_sequence @__transform_main(%arg1: !transform.any_op {transform.readonly}) {
    %0 = transform.structured.match ops{["func.func"]} in %arg1
      : (!transform.any_op) -> !transform.any_op
    transform.structured.hoist_redundant_vector_transfers %0
      : (!transform.any_op) -> !transform.any_op
    transform.yield
  }
}
```

And this is without even introducing `memref.assume_alignment`. So yes - maybe it's just me being slow on a Friday, but I think something isn't adding up here.

---

> (...) [ffb9bbf](https://github.com/llvm/llvm-project/commit/ffb9bbfd0745dc22e1fd6edd7b62f72b91f4f6de) breaks a set of tests in our downstream project. The reason is that it disables hoisting on memrefs when memref.assume_alignment is involved. It means that we'll never be able to hoist vectors on memrefs without a fix.

Totally agree - it feels like the consequences of that change weren’t fully evaluated before merging.

In short:
* **Before** that change: no alias was created.
* **After** that change: an alias is created, and that introduces knock-on effects, just like we're seeing here.
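Concretely, if I read the change right, the difference is roughly this (a sketch from memory, not copied from the PR):

```mlir
// Before: memref.assume_alignment had no result, so there was no
// new SSA value for alias analysis to worry about.
memref.assume_alignment %mem, 64 : memref<?x?xf32>

// After: the op returns a value that aliases %mem, and downstream
// users go through %aligned instead of %mem directly.
%aligned = memref.assume_alignment %mem, 64 : memref<?x?xf32>
```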

---

> The author mentioned that it's fine to remove ViewLikeOpInterface from the op, but I agree with you that it creates an alias. So maybe we want to keep the interface? https://github.com/llvm/llvm-project/pull/139521#issuecomment-2886458733

To me the interface makes sense, yes. At least after adding a return value.

> The intention of memref.assume_alignment is allowing the compiler to generate more efficient code based on the alignment assumption; i

Right, but aren't we missing some kind of explicit "assume" mechanism here?

Diego raised something similar (though in the context of vector masking):
* https://github.com/llvm/llvm-project/issues/143920#issuecomment-2968376118

That’s a topic I’m actively exploring, though it'll take time - and it may or may not translate cleanly to the `memref` case.

---

My overall feeling is this:
* The change in [PR #139521](https://github.com/llvm/llvm-project/pull/139521) seems to have introduced a real perf regression. We should consider reverting and re-evaluating the design.
* Before reverting, though, it would be good to have minimal repros. I’ve tried creating some (see above), but I’m still a bit stuck.

In general, I don't see an obvious fix - and it feels like `memref.assume_alignment` shouldn’t cause this kind of trouble.

For context, this isn’t really my area of deep expertise, and I’ve only skimmed through the original PR. That said, happy to keep digging and discussing!

https://github.com/llvm/llvm-project/pull/144809

