[Mlir-commits] [mlir] [mlir][vector] Deal with special patterns when emulating masked load/store (PR #75587)
Jakub Kuderski
llvmlistbot at llvm.org
Tue Dec 19 09:09:02 PST 2023
kuhar wrote:
I would expect the backend compiler to coalesce such memory accesses into wider ones. This definitely happens on AMDVLK. And separately, I'd think it's preferable to emit memref memory accesses to benefit from the existing emulation patterns there. If we emit wider vector memory accesses, won't they get broken down anyway once we go to memrefs?
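For context, the kind of lowering being discussed can be sketched roughly as follows. This is illustrative only (the operand names, shapes, and the per-lane expansion are assumptions, not taken from the PR): a `vector.maskedload` emulated as guarded scalar memref accesses, which a backend such as AMDVLK may then coalesce back into a wider access.

```mlir
// A masked vector load: lanes where %mask is false take %pass_thru.
%v = vector.maskedload %base[%i], %mask, %pass_thru
    : memref<16xf32>, vector<4xi1>, vector<4xf32> into vector<4xf32>

// A naive per-lane emulation (sketch): each lane becomes a scalar
// memref.load guarded by its mask bit.
%m0 = vector.extract %mask[0] : vector<4xi1>
%r0 = scf.if %m0 -> (f32) {
  %l = memref.load %base[%i] : memref<16xf32>
  scf.yield %l : f32
} else {
  %p = vector.extract %pass_thru[0] : vector<4xf32>
  scf.yield %p : f32
}
%v0 = vector.insert %r0, %pass_thru [0] : f32 into vector<4xf32>
// ... lanes 1..3 follow the same pattern ...
```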
https://github.com/llvm/llvm-project/pull/75587