[all-commits] [llvm/llvm-project] 5caae7: [mlir][gpu] Productize `test-lower-to-nvvm` as `gp...
Guray Ozen via All-commits
all-commits at lists.llvm.org
Mon Dec 18 23:40:59 PST 2023
Branch: refs/heads/main
Home: https://github.com/llvm/llvm-project
Commit: 5caae72d1a4f58c9525977a93d86c3c833da4b34
https://github.com/llvm/llvm-project/commit/5caae72d1a4f58c9525977a93d86c3c833da4b34
Author: Guray Ozen <guray.ozen at gmail.com>
Date: 2023-12-19 (Tue, 19 Dec 2023)
Changed paths:
A mlir/include/mlir/Dialect/GPU/Pipelines/Passes.h
M mlir/include/mlir/InitAllPasses.h
M mlir/lib/Dialect/GPU/CMakeLists.txt
A mlir/lib/Dialect/GPU/Pipelines/CMakeLists.txt
A mlir/lib/Dialect/GPU/Pipelines/GPUToNVVMPipeline.cpp
M mlir/test/Integration/Dialect/SparseTensor/GPU/CUDA/dump-ptx.mlir
M mlir/test/Integration/Dialect/SparseTensor/GPU/CUDA/sparse-mma-2-4-f16.mlir
M mlir/test/Integration/Dialect/Vector/GPU/CUDA/test-reduction-distribute.mlir
M mlir/test/Integration/Dialect/Vector/GPU/CUDA/test-warp-distribute.mlir
M mlir/test/Integration/GPU/CUDA/TensorCore/sm80/transform-mma-sync-matmul-f16-f16-accum.mlir
M mlir/test/Integration/GPU/CUDA/TensorCore/sm80/transform-mma-sync-matmul-f32.mlir
M mlir/test/Integration/GPU/CUDA/TensorCore/wmma-matmul-f16.mlir
M mlir/test/Integration/GPU/CUDA/TensorCore/wmma-matmul-f32-bare-ptr.mlir
M mlir/test/Integration/GPU/CUDA/TensorCore/wmma-matmul-f32.mlir
M mlir/test/Integration/GPU/CUDA/all-reduce-and.mlir
M mlir/test/Integration/GPU/CUDA/all-reduce-maxsi.mlir
M mlir/test/Integration/GPU/CUDA/all-reduce-minsi.mlir
M mlir/test/Integration/GPU/CUDA/all-reduce-op.mlir
M mlir/test/Integration/GPU/CUDA/all-reduce-or.mlir
M mlir/test/Integration/GPU/CUDA/all-reduce-region.mlir
M mlir/test/Integration/GPU/CUDA/all-reduce-xor.mlir
M mlir/test/Integration/GPU/CUDA/gpu-to-cubin.mlir
M mlir/test/Integration/GPU/CUDA/multiple-all-reduce.mlir
M mlir/test/Integration/GPU/CUDA/printf.mlir
M mlir/test/Integration/GPU/CUDA/shuffle.mlir
M mlir/test/Integration/GPU/CUDA/sm90/cga_cluster.mlir
M mlir/test/Integration/GPU/CUDA/sm90/gemm_f32_f16_f16_128x128x128.mlir
M mlir/test/Integration/GPU/CUDA/sm90/gemm_pred_f32_f16_f16_128x128x128.mlir
M mlir/test/Integration/GPU/CUDA/sm90/tma_load_128x64_swizzle128b.mlir
M mlir/test/Integration/GPU/CUDA/sm90/tma_load_64x64_swizzle128b.mlir
M mlir/test/Integration/GPU/CUDA/sm90/tma_load_64x8_8x128_noswizzle.mlir
M mlir/test/Integration/GPU/CUDA/two-modules.mlir
M mlir/test/lib/Dialect/GPU/CMakeLists.txt
R mlir/test/lib/Dialect/GPU/TestLowerToNVVM.cpp
M mlir/tools/mlir-opt/mlir-opt.cpp
Log Message:
-----------
[mlir][gpu] Productize `test-lower-to-nvvm` as `gpu-lower-to-nvvm` (#75775)
The `test-lower-to-nvvm` pipeline serves as the common, proper pipeline for NVVM + host compilation and is used across our CUDA integration tests.
This PR renames the `test-lower-to-nvvm` pipeline to `gpu-lower-to-nvvm`, moves it out of the test library, and registers it in `InitAllPasses.h`. The aim is to make the pipeline callable from Python and to provide a standardized compilation process for NVVM.
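As a rough sketch (not the exact upstream code), registering a productized pipeline by name could look like the following; the helper names `buildLowerToNVVMPassPipeline` and `registerGPUToNVVMPipeline` and the empty pass list are illustrative assumptions, while `PassPipelineRegistration` and the `gpu-lower-to-nvvm` pipeline name come from MLIR and this commit's title:

```cpp
// Minimal sketch of registering a named pass pipeline in MLIR so that tools
// such as mlir-opt (and the Python bindings) can invoke it by name.
// Helper names below are assumptions for illustration only.
#include "mlir/Pass/PassManager.h"
#include "mlir/Pass/PassRegistry.h"

using namespace mlir;

// Assembles the GPU-to-NVVM lowering on the given pass manager
// (the concrete pass list lives in GPUToNVVMPipeline.cpp).
static void buildLowerToNVVMPassPipeline(OpPassManager &pm) {
  // pm.addPass(...); // kernel outlining, NVVM conversion, host lowering, ...
}

// Intended to be called from InitAllPasses.h so the pipeline is always
// available, rather than only linked into mlir-opt's test library.
void registerGPUToNVVMPipeline() {
  PassPipelineRegistration<>(
      "gpu-lower-to-nvvm",
      "Lowers GPU dialect code and the host program down to NVVM/LLVM.",
      buildLowerToNVVMPassPipeline);
}
```

With a registration like this pulled in through `InitAllPasses.h`, the CUDA integration tests can invoke the pipeline directly, for example as `mlir-opt --gpu-lower-to-nvvm ...`, and the same entry point becomes reachable from the Python bindings.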