[all-commits] [llvm/llvm-project] cafb62: [mlir][VectorToGPU] Update memref stride preconditions on `nvgpu.mma.sync` path

Chris via All-commits all-commits at lists.llvm.org
Thu Sep 14 12:51:57 PDT 2023


  Branch: refs/heads/main
  Home:   https://github.com/llvm/llvm-project
  Commit: cafb6284d18bbdb952ae6d5e4aa97912d57dbfb8
      https://github.com/llvm/llvm-project/commit/cafb6284d18bbdb952ae6d5e4aa97912d57dbfb8
  Author: Christopher Bate <cbate at nvidia.com>
  Date:   2023-09-14 (Thu, 14 Sep 2023)

  Changed paths:
    M mlir/include/mlir/Dialect/NVGPU/Utils/MMAUtils.h
    M mlir/lib/Conversion/VectorToGPU/VectorToGPU.cpp
    M mlir/lib/Dialect/NVGPU/Utils/MMAUtils.cpp
    M mlir/test/Conversion/VectorToGPU/vector-to-mma-ops-mma-sync.mlir

  Log Message:
  -----------
  [mlir][VectorToGPU] Update memref stride preconditions on `nvgpu.mma.sync` path

This change removes the requirement that the row stride be statically known when
converting `vector.transfer_read` and `vector.transfer_write` to distributed
SIMT operations in the `nvgpu` lowering path. It also adds a check verifying
that the last dimension of the source memref has a statically known stride of
1, since the conversion logic assumes this. No other change should be required,
since the generated `vector.load` operations only ever span the last dimension.
The routines that check preconditions on `vector.transfer_read/write` are moved
into the NVGPU dialect utilities.
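
As an illustrative sketch (not taken from the patch's test file; the function
name and shapes here are made up), the kind of layout now accepted on this path
is a memref whose leading stride is dynamic but whose last-dimension stride is
statically 1:

    func.func @read_dynamic_row_stride(%src: memref<?x?xf16, strided<[?, 1], offset: ?>>,
                                       %i: index, %j: index) -> vector<16x16xf16> {
      %pad = arith.constant 0.0 : f16
      // OK after this change: the row stride is dynamic, but the innermost
      // stride is statically 1, which the conversion logic relies on.
      %v = vector.transfer_read %src[%i, %j], %pad {in_bounds = [true, true]}
          : memref<?x?xf16, strided<[?, 1], offset: ?>>, vector<16x16xf16>
      return %v : vector<16x16xf16>
    }

A layout such as `strided<[?, ?]>`, where the innermost stride is not
statically known to be 1, would be rejected by the new check.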

The change is NFC with respect to the GPU dialect lowering path.

Reviewed By: ThomasRaoux

Differential Revision: https://reviews.llvm.org/D155753



