[all-commits] [llvm/llvm-project] ea03bd: [MLIR][AMDGPU] Adding Vector transfer_read to load...

Zhuoran Yin via All-commits all-commits at lists.llvm.org
Fri Mar 21 05:42:26 PDT 2025


  Branch: refs/heads/main
  Home:   https://github.com/llvm/llvm-project
  Commit: ea03bdee7081f25c652d581e274b10cb7c552357
      https://github.com/llvm/llvm-project/commit/ea03bdee7081f25c652d581e274b10cb7c552357
  Author: Zhuoran Yin <zhuoryin at amd.com>
  Date:   2025-03-21 (Fri, 21 Mar 2025)

  Changed paths:
    M mlir/include/mlir/Dialect/AMDGPU/Transforms/Passes.h
    M mlir/include/mlir/Dialect/AMDGPU/Transforms/Passes.td
    M mlir/lib/Dialect/AMDGPU/Transforms/CMakeLists.txt
    A mlir/lib/Dialect/AMDGPU/Transforms/TransferReadToLoad.cpp
    A mlir/test/Dialect/AMDGPU/transfer-read-to-load.mlir

  Log Message:
  -----------
  [MLIR][AMDGPU] Adding Vector transfer_read to load rewrite pattern (#131803)

This PR adds a rewrite pattern that lowers Vector transfer_read ops to
loads. A vector transfer_read op will be lowered to a combination of
`vector.load`, `arith.select` and `vector.broadcast` if:
 - The transfer op is masked.
 - The memref is in the buffer address space.
 - The other conditions introduced by `TransferReadToVectorLoadLowering`
   are met.
A rough sketch of the rewrite is shown after this list.
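
The following is a minimal, illustrative sketch of the transformation, not the exact IR the pattern emits. The memref type, indices, and names are assumptions; in particular, the real pattern additionally requires the memref to be in the buffer address space, which is omitted here for brevity.

```mlir
// Illustrative input: a masked 1-D transfer_read with padding value %pad.
%read = vector.transfer_read %mem[%idx], %pad, %mask {in_bounds = [true]}
    : memref<8xf32>, vector<4xf32>

// Conceptual output: an unmasked vector.load, with the padding value
// broadcast and selected in for the masked-off lanes.
%loaded  = vector.load %mem[%idx] : memref<8xf32>, vector<4xf32>
%pad_vec = vector.broadcast %pad : f32 to vector<4xf32>
%result  = arith.select %mask, %loaded, %pad_vec : vector<4xi1>, vector<4xf32>
```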

The motivation for this PR is the lack of masked-load support in the
AMDGPU backend: `llvm.intr.masked.load` is lowered to a series of
conditional scalar loads (see the `scalarize-masked-mem-intrin` pass).
This PR makes it possible to lower a masked transfer_read to a buffer
load with a bounds check, giving a more efficient global-load access
pattern than the existing lowering of `llvm.intr.masked.load` on
vectors.


