[llvm-branch-commits] [clang] [compiler-rt] [flang] [libc] [llvm] [mlir] AMDGPU: Create pseudo to real mapping for flat/buffer atomic fmin/fmax (PR #95591)

Matt Arsenault via llvm-branch-commits llvm-branch-commits at lists.llvm.org
Mon Jun 17 23:40:52 PDT 2024


Hana Dusíková <hanicka at hanicka.net>,Shivam Gupta
 <shivam98.tkg at gmail.com>,Matt Arsenault <Matthew.Arsenault at amd.com>,Matt
 Arsenault <Matthew.Arsenault at amd.com>,Joseph Huber <huberjn at outlook.com>,Alexander
 Shaposhnikov,Abid Qadeer <haqadeer at amd.com>,Louis Dionne
 <ldionne.2 at gmail.com>,Florian Mayer <fmayer at google.com>,Peiming Liu
 <peiming at google.com>,Fabian Mora <fmora.dev at gmail.com>,Joseph Huber
 <huberjn at outlook.com>,lntue <35648136+lntue at users.noreply.github.com>,Joshua
 Baehring <98630690+JoshuaMBa at users.noreply.github.com>,Mircea Trofin
 <mtrofin at google.com>,Fabian Mora <fmora.dev at gmail.com>,Matt Arsenault
 <Matthew.Arsenault at amd.com>
Message-ID:
In-Reply-To: <llvm.org/llvm/llvm-project/pull/95591 at github.com>


https://github.com/arsenm updated https://github.com/llvm/llvm-project/pull/95591

From 13d983e730297ad454d53a0a97e1f72499b489f1 Mon Sep 17 00:00:00 2001
From: Matthias Springer <me at m-sp.org>
Date: Mon, 17 Jun 2024 19:56:40 +0200
Subject: [PATCH 01/26] [mlir][Transforms][NFC] Dialect Conversion: Resolve
 insertion point TODO (#95653)

Remove a TODO in the dialect conversion code base when materializing
unresolved conversions:
```
// FIXME: Determine a suitable insertion location when there are multiple
// inputs.
```

The implementation used to select an insertion point as follows:
- If the cast has exactly one operand: right after the definition of the
SSA value.
- Otherwise: right before the cast op.

However, changing the insertion point is not necessary. Unresolved
materializations (`UnrealizedConversionCastOp`) are built by
`buildUnresolvedArgumentMaterialization` or
`buildUnresolvedTargetMaterialization`. In the former case, the op is
inserted at the beginning of the block. In the latter case, the dialect
conversion supports only one operand, and the op is inserted right
after the definition of the SSA value. That is, the
`UnrealizedConversionCastOp` is already in the right place, and the
insertion point does not need to be adjusted again when building the
resolved materialization op.
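
For illustration, here is a minimal hypothetical snippet (not taken
from this patch): an argument materialization for a converted block
argument already lands at the beginning of the block, which is the
desired insertion point.

```
func.func @example(%arg0: i64) {
  // Inserted by the argument materialization, at the start of the block.
  %0 = builtin.unrealized_conversion_cast %arg0 : i64 to index
  %1 = arith.addi %0, %0 : index
  return
}
```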

Note: The generated IR changes slightly because the
`unrealized_conversion_cast` ops at the beginning of a block are no
longer doubly-inverted (previously, the insertion point was set to the
beginning of the block once when inserting the
`unrealized_conversion_cast` and again when inserting the resolved
conversion op). All affected test cases were fixed by using `CHECK-DAG`
instead of `CHECK`.
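
As a reminder of the FileCheck semantics relied on here (an aside, not
part of the patch): consecutive `CHECK-DAG` directives may match in any
order, while plain `CHECK` directives must match in sequence. For
example:

```
// CHECK-DAG: %[[A:.*]] = builtin.unrealized_conversion_cast %[[X]]
// CHECK-DAG: %[[B:.*]] = builtin.unrealized_conversion_cast %[[Y]]
// CHECK:     llvm.fcmp "olt" %[[A]], %[[B]]
```

The two cast ops are accepted in either order, as long as both appear
before the `llvm.fcmp`.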

Also strengthen several test cases that did not check for the correct
operands.

Note: This commit is in preparation for decoupling the
argument/source/target materialization logic of the type converter from
the dialect conversion (to reduce its complexity and make that
functionality usable from a new dialect conversion driver).
---
 .../Transforms/Utils/DialectConversion.cpp    |  8 +---
 .../Conversion/ArithToLLVM/arith-to-llvm.mlir | 15 ++++---
 .../convert-nd-vector-to-llvmir.mlir          |  6 +--
 .../ArithToSPIRV/arith-to-spirv.mlir          |  4 +-
 .../Conversion/IndexToLLVM/index-to-llvm.mlir | 18 ++++----
 .../IndexToSPIRV/index-to-spirv.mlir          | 12 ++---
 .../MathToSPIRV/math-to-core-spirv.mlir       |  4 +-
 .../convert-dynamic-memref-ops.mlir           |  7 +--
 .../convert-static-memref-ops.mlir            | 24 +++++-----
 .../MemRefToLLVM/memref-to-llvm.mlir          | 10 ++---
 .../MemRefToSPIRV/bitwidth-emulation.mlir     |  4 +-
 .../MemRefToSPIRV/memref-to-spirv.mlir        | 44 +++++++++++--------
 .../Conversion/NVGPUToNVVM/nvgpu-to-nvvm.mlir | 30 +++++++------
 .../VectorToLLVM/vector-scalable-memcpy.mlir  |  4 +-
 .../VectorToLLVM/vector-to-llvm.mlir          |  8 ++--
 .../VectorToSPIRV/vector-to-spirv.mlir        | 16 +++----
 mlir/test/Dialect/SCF/bufferize.mlir          |  4 +-
 mlir/test/Dialect/Vector/linearize.mlir       | 20 ++++-----
 18 files changed, 122 insertions(+), 116 deletions(-)

diff --git a/mlir/lib/Transforms/Utils/DialectConversion.cpp b/mlir/lib/Transforms/Utils/DialectConversion.cpp
index 2f0efe1b1e454..1c0cb128aeabe 100644
--- a/mlir/lib/Transforms/Utils/DialectConversion.cpp
+++ b/mlir/lib/Transforms/Utils/DialectConversion.cpp
@@ -2857,13 +2857,7 @@ static LogicalResult legalizeUnresolvedMaterialization(
 
   // Try to materialize the conversion.
   if (const TypeConverter *converter = mat.getConverter()) {
-    // FIXME: Determine a suitable insertion location when there are multiple
-    // inputs.
-    if (inputOperands.size() == 1)
-      rewriter.setInsertionPointAfterValue(inputOperands.front());
-    else
-      rewriter.setInsertionPoint(op);
-
+    rewriter.setInsertionPoint(op);
     Value newMaterialization;
     switch (mat.getMaterializationKind()) {
     case MaterializationKind::Argument:
diff --git a/mlir/test/Conversion/ArithToLLVM/arith-to-llvm.mlir b/mlir/test/Conversion/ArithToLLVM/arith-to-llvm.mlir
index 56ae930e6d627..d3bdbe89a5487 100644
--- a/mlir/test/Conversion/ArithToLLVM/arith-to-llvm.mlir
+++ b/mlir/test/Conversion/ArithToLLVM/arith-to-llvm.mlir
@@ -478,9 +478,10 @@ func.func @mului_extended_vector1d(%arg0: vector<3xi64>, %arg1: vector<3xi64>) -
 // -----
 
 // CHECK-LABEL: func @cmpf_2dvector(
+//  CHECK-SAME:     %[[OARG0:.*]]: vector<4x3xf32>, %[[OARG1:.*]]: vector<4x3xf32>)
 func.func @cmpf_2dvector(%arg0 : vector<4x3xf32>, %arg1 : vector<4x3xf32>) {
-  // CHECK: %[[ARG0:.*]] = builtin.unrealized_conversion_cast
-  // CHECK: %[[ARG1:.*]] = builtin.unrealized_conversion_cast
+  // CHECK-DAG: %[[ARG0:.*]] = builtin.unrealized_conversion_cast %[[OARG0]]
+  // CHECK-DAG: %[[ARG1:.*]] = builtin.unrealized_conversion_cast %[[OARG1]]
   // CHECK: %[[EXTRACT1:.*]] = llvm.extractvalue %[[ARG0]][0] : !llvm.array<4 x vector<3xf32>>
   // CHECK: %[[EXTRACT2:.*]] = llvm.extractvalue %[[ARG1]][0] : !llvm.array<4 x vector<3xf32>>
   // CHECK: %[[CMP:.*]] = llvm.fcmp "olt" %[[EXTRACT1]], %[[EXTRACT2]] : vector<3xf32>
@@ -492,9 +493,10 @@ func.func @cmpf_2dvector(%arg0 : vector<4x3xf32>, %arg1 : vector<4x3xf32>) {
 // -----
 
 // CHECK-LABEL: func @cmpi_0dvector(
+//  CHECK-SAME:     %[[OARG0:.*]]: vector<i32>, %[[OARG1:.*]]: vector<i32>)
 func.func @cmpi_0dvector(%arg0 : vector<i32>, %arg1 : vector<i32>) {
-  // CHECK: %[[ARG0:.*]] = builtin.unrealized_conversion_cast
-  // CHECK: %[[ARG1:.*]] = builtin.unrealized_conversion_cast
+  // CHECK-DAG: %[[ARG0:.*]] = builtin.unrealized_conversion_cast %[[OARG0]]
+  // CHECK-DAG: %[[ARG1:.*]] = builtin.unrealized_conversion_cast %[[OARG1]]
   // CHECK: %[[CMP:.*]] = llvm.icmp "ult" %[[ARG0]], %[[ARG1]] : vector<1xi32>
   %0 = arith.cmpi ult, %arg0, %arg1 : vector<i32>
   func.return
@@ -503,9 +505,10 @@ func.func @cmpi_0dvector(%arg0 : vector<i32>, %arg1 : vector<i32>) {
 // -----
 
 // CHECK-LABEL: func @cmpi_2dvector(
+//  CHECK-SAME:     %[[OARG0:.*]]: vector<4x3xi32>, %[[OARG1:.*]]: vector<4x3xi32>)
 func.func @cmpi_2dvector(%arg0 : vector<4x3xi32>, %arg1 : vector<4x3xi32>) {
-  // CHECK: %[[ARG0:.*]] = builtin.unrealized_conversion_cast
-  // CHECK: %[[ARG1:.*]] = builtin.unrealized_conversion_cast
+  // CHECK-DAG: %[[ARG0:.*]] = builtin.unrealized_conversion_cast %[[OARG0]]
+  // CHECK-DAG: %[[ARG1:.*]] = builtin.unrealized_conversion_cast %[[OARG1]]
   // CHECK: %[[EXTRACT1:.*]] = llvm.extractvalue %[[ARG0]][0] : !llvm.array<4 x vector<3xi32>>
   // CHECK: %[[EXTRACT2:.*]] = llvm.extractvalue %[[ARG1]][0] : !llvm.array<4 x vector<3xi32>>
   // CHECK: %[[CMP:.*]] = llvm.icmp "ult" %[[EXTRACT1]], %[[EXTRACT2]] : vector<3xi32>
diff --git a/mlir/test/Conversion/ArithToLLVM/convert-nd-vector-to-llvmir.mlir b/mlir/test/Conversion/ArithToLLVM/convert-nd-vector-to-llvmir.mlir
index 63989347567b5..b234cbbb35f32 100644
--- a/mlir/test/Conversion/ArithToLLVM/convert-nd-vector-to-llvmir.mlir
+++ b/mlir/test/Conversion/ArithToLLVM/convert-nd-vector-to-llvmir.mlir
@@ -199,9 +199,9 @@ func.func @bitcast_2d(%arg0: vector<2x4xf32>) {
 
 // CHECK-LABEL: func @select_2d(
 func.func @select_2d(%arg0 : vector<4x3xi1>, %arg1 : vector<4x3xi32>, %arg2 : vector<4x3xi32>) {
-  // CHECK: %[[ARG0:.*]] = builtin.unrealized_conversion_cast %arg0
-  // CHECK: %[[ARG1:.*]] = builtin.unrealized_conversion_cast %arg1
-  // CHECK: %[[ARG2:.*]] = builtin.unrealized_conversion_cast %arg2
+  // CHECK-DAG: %[[ARG0:.*]] = builtin.unrealized_conversion_cast %arg0
+  // CHECK-DAG: %[[ARG1:.*]] = builtin.unrealized_conversion_cast %arg1
+  // CHECK-DAG: %[[ARG2:.*]] = builtin.unrealized_conversion_cast %arg2
   // CHECK: %[[EXTRACT1:.*]] = llvm.extractvalue %[[ARG0]][0] : !llvm.array<4 x vector<3xi1>>
   // CHECK: %[[EXTRACT2:.*]] = llvm.extractvalue %[[ARG1]][0] : !llvm.array<4 x vector<3xi32>>
   // CHECK: %[[EXTRACT3:.*]] = llvm.extractvalue %[[ARG2]][0] : !llvm.array<4 x vector<3xi32>>
diff --git a/mlir/test/Conversion/ArithToSPIRV/arith-to-spirv.mlir b/mlir/test/Conversion/ArithToSPIRV/arith-to-spirv.mlir
index ae47ae36ca51c..beb2c8d2d242c 100644
--- a/mlir/test/Conversion/ArithToSPIRV/arith-to-spirv.mlir
+++ b/mlir/test/Conversion/ArithToSPIRV/arith-to-spirv.mlir
@@ -60,8 +60,8 @@ func.func @index_scalar(%lhs: index, %rhs: index) {
 // CHECK-LABEL: @index_scalar_srem
 // CHECK-SAME: (%[[A:.+]]: index, %[[B:.+]]: index)
 func.func @index_scalar_srem(%lhs: index, %rhs: index) {
-  // CHECK: %[[LHS:.+]] = builtin.unrealized_conversion_cast %[[A]] : index to i32
-  // CHECK: %[[RHS:.+]] = builtin.unrealized_conversion_cast %[[B]] : index to i32
+  // CHECK-DAG: %[[LHS:.+]] = builtin.unrealized_conversion_cast %[[A]] : index to i32
+  // CHECK-DAG: %[[RHS:.+]] = builtin.unrealized_conversion_cast %[[B]] : index to i32
   // CHECK: %[[LABS:.+]] = spirv.GL.SAbs %[[LHS]] : i32
   // CHECK: %[[RABS:.+]] = spirv.GL.SAbs %[[RHS]] : i32
   // CHECK:  %[[ABS:.+]] = spirv.UMod %[[LABS]], %[[RABS]] : i32
diff --git a/mlir/test/Conversion/IndexToLLVM/index-to-llvm.mlir b/mlir/test/Conversion/IndexToLLVM/index-to-llvm.mlir
index 1b13ebb38dc9e..26abb3bdc23a1 100644
--- a/mlir/test/Conversion/IndexToLLVM/index-to-llvm.mlir
+++ b/mlir/test/Conversion/IndexToLLVM/index-to-llvm.mlir
@@ -50,8 +50,8 @@ func.func @trivial_ops(%a: index, %b: index) {
 // CHECK-LABEL: @ceildivs
 // CHECK-SAME: %[[NI:.*]]: index, %[[MI:.*]]: index
 func.func @ceildivs(%n: index, %m: index) -> index {
-  // CHECK: %[[N:.*]] = builtin.unrealized_conversion_cast %[[NI]]
-  // CHECK: %[[M:.*]] = builtin.unrealized_conversion_cast %[[MI]]
+  // CHECK-DAG: %[[N:.*]] = builtin.unrealized_conversion_cast %[[NI]]
+  // CHECK-DAG: %[[M:.*]] = builtin.unrealized_conversion_cast %[[MI]]
   // CHECK: %[[ZERO:.*]] = llvm.mlir.constant(0 :
   // CHECK: %[[POS_ONE:.*]] = llvm.mlir.constant(1 :
   // CHECK: %[[NEG_ONE:.*]] = llvm.mlir.constant(-1 :
@@ -82,8 +82,8 @@ func.func @ceildivs(%n: index, %m: index) -> index {
 // CHECK-LABEL: @ceildivu
 // CHECK-SAME: %[[NI:.*]]: index, %[[MI:.*]]: index
 func.func @ceildivu(%n: index, %m: index) -> index {
-  // CHECK: %[[N:.*]] = builtin.unrealized_conversion_cast %[[NI]]
-  // CHECK: %[[M:.*]] = builtin.unrealized_conversion_cast %[[MI]]
+  // CHECK-DAG: %[[N:.*]] = builtin.unrealized_conversion_cast %[[NI]]
+  // CHECK-DAG: %[[M:.*]] = builtin.unrealized_conversion_cast %[[MI]]
   // CHECK: %[[ZERO:.*]] = llvm.mlir.constant(0 :
   // CHECK: %[[ONE:.*]] = llvm.mlir.constant(1 :
 
@@ -103,11 +103,11 @@ func.func @ceildivu(%n: index, %m: index) -> index {
 // CHECK-LABEL: @floordivs
 // CHECK-SAME: %[[NI:.*]]: index, %[[MI:.*]]: index
 func.func @floordivs(%n: index, %m: index) -> index {
-  // CHECK: %[[N:.*]] = builtin.unrealized_conversion_cast %[[NI]]
-  // CHECK: %[[M:.*]] = builtin.unrealized_conversion_cast %[[MI]]
-  // CHECK: %[[ZERO:.*]] = llvm.mlir.constant(0 :
-  // CHECK: %[[POS_ONE:.*]] = llvm.mlir.constant(1 :
-  // CHECK: %[[NEG_ONE:.*]] = llvm.mlir.constant(-1 :
+  // CHECK-DAG: %[[N:.*]] = builtin.unrealized_conversion_cast %[[NI]]
+  // CHECK-DAG: %[[M:.*]] = builtin.unrealized_conversion_cast %[[MI]]
+  // CHECK-DAG: %[[ZERO:.*]] = llvm.mlir.constant(0 :
+  // CHECK-DAG: %[[POS_ONE:.*]] = llvm.mlir.constant(1 :
+  // CHECK-DAG: %[[NEG_ONE:.*]] = llvm.mlir.constant(-1 :
 
   // CHECK: %[[M_NEG:.*]] = llvm.icmp "slt" %[[M]], %[[ZERO]]
   // CHECK: %[[X:.*]] = llvm.select %[[M_NEG]], %[[POS_ONE]], %[[NEG_ONE]]
diff --git a/mlir/test/Conversion/IndexToSPIRV/index-to-spirv.mlir b/mlir/test/Conversion/IndexToSPIRV/index-to-spirv.mlir
index 53dc896e98c7d..7b26f4fc13696 100644
--- a/mlir/test/Conversion/IndexToSPIRV/index-to-spirv.mlir
+++ b/mlir/test/Conversion/IndexToSPIRV/index-to-spirv.mlir
@@ -67,8 +67,8 @@ func.func @constant_ops() {
 // CHECK-LABEL: @ceildivs
 // CHECK-SAME: %[[NI:.*]]: index, %[[MI:.*]]: index
 func.func @ceildivs(%n: index, %m: index) -> index {
-  // CHECK: %[[N:.*]] = builtin.unrealized_conversion_cast %[[NI]]
-  // CHECK: %[[M:.*]] = builtin.unrealized_conversion_cast %[[MI]]
+  // CHECK-DAG: %[[N:.*]] = builtin.unrealized_conversion_cast %[[NI]]
+  // CHECK-DAG: %[[M:.*]] = builtin.unrealized_conversion_cast %[[MI]]
   // CHECK: %[[ZERO:.*]] = spirv.Constant 0
   // CHECK: %[[POS_ONE:.*]] = spirv.Constant 1
   // CHECK: %[[NEG_ONE:.*]] = spirv.Constant -1
@@ -99,8 +99,8 @@ func.func @ceildivs(%n: index, %m: index) -> index {
 // CHECK-LABEL: @ceildivu
 // CHECK-SAME: %[[NI:.*]]: index, %[[MI:.*]]: index
 func.func @ceildivu(%n: index, %m: index) -> index {
-  // CHECK: %[[N:.*]] = builtin.unrealized_conversion_cast %[[NI]]
-  // CHECK: %[[M:.*]] = builtin.unrealized_conversion_cast %[[MI]]
+  // CHECK-DAG: %[[N:.*]] = builtin.unrealized_conversion_cast %[[NI]]
+  // CHECK-DAG: %[[M:.*]] = builtin.unrealized_conversion_cast %[[MI]]
   // CHECK: %[[ZERO:.*]] = spirv.Constant 0
   // CHECK: %[[ONE:.*]] = spirv.Constant 1
 
@@ -120,8 +120,8 @@ func.func @ceildivu(%n: index, %m: index) -> index {
 // CHECK-LABEL: @floordivs
 // CHECK-SAME: %[[NI:.*]]: index, %[[MI:.*]]: index
 func.func @floordivs(%n: index, %m: index) -> index {
-  // CHECK: %[[N:.*]] = builtin.unrealized_conversion_cast %[[NI]]
-  // CHECK: %[[M:.*]] = builtin.unrealized_conversion_cast %[[MI]]
+  // CHECK-DAG: %[[N:.*]] = builtin.unrealized_conversion_cast %[[NI]]
+  // CHECK-DAG: %[[M:.*]] = builtin.unrealized_conversion_cast %[[MI]]
   // CHECK: %[[ZERO:.*]] = spirv.Constant 0
   // CHECK: %[[POS_ONE:.*]] = spirv.Constant 1
   // CHECK: %[[NEG_ONE:.*]] = spirv.Constant -1
diff --git a/mlir/test/Conversion/MathToSPIRV/math-to-core-spirv.mlir b/mlir/test/Conversion/MathToSPIRV/math-to-core-spirv.mlir
index f0119afa42f69..586460fd58180 100644
--- a/mlir/test/Conversion/MathToSPIRV/math-to-core-spirv.mlir
+++ b/mlir/test/Conversion/MathToSPIRV/math-to-core-spirv.mlir
@@ -78,8 +78,8 @@ func.func @copy_sign_vector_0D(%value: vector<1xf16>, %sign: vector<1xf16>) -> v
 
 // CHECK-LABEL: func @copy_sign_vector_0D
 //  CHECK-SAME: (%[[VALUE:.+]]: vector<1xf16>, %[[SIGN:.+]]: vector<1xf16>)
-//       CHECK:   %[[CASTVAL:.+]] = builtin.unrealized_conversion_cast %[[VALUE]] : vector<1xf16> to f16
-//       CHECK:   %[[CASTSIGN:.+]] = builtin.unrealized_conversion_cast %[[SIGN]] : vector<1xf16> to f16
+//   CHECK-DAG:   %[[CASTVAL:.+]] = builtin.unrealized_conversion_cast %[[VALUE]] : vector<1xf16> to f16
+//   CHECK-DAG:   %[[CASTSIGN:.+]] = builtin.unrealized_conversion_cast %[[SIGN]] : vector<1xf16> to f16
 //       CHECK:   %[[SMASK:.+]] = spirv.Constant -32768 : i16
 //       CHECK:   %[[VMASK:.+]] = spirv.Constant 32767 : i16
 //       CHECK:   %[[VCAST:.+]] = spirv.Bitcast %[[CASTVAL]] : f16 to i16
diff --git a/mlir/test/Conversion/MemRefToLLVM/convert-dynamic-memref-ops.mlir b/mlir/test/Conversion/MemRefToLLVM/convert-dynamic-memref-ops.mlir
index 9d8f4266adf27..ebfc3b95d6eff 100644
--- a/mlir/test/Conversion/MemRefToLLVM/convert-dynamic-memref-ops.mlir
+++ b/mlir/test/Conversion/MemRefToLLVM/convert-dynamic-memref-ops.mlir
@@ -506,14 +506,15 @@ func.func @memref_reinterpret_cast_unranked_to_dynamic_shape(%offset: index,
 
 // -----
 
-// CHECK-LABEL: @memref_reshape
+// CHECK-LABEL: @memref_reshape(
+//  CHECK-SAME:     %[[ARG0:.*]]: memref<2x3xf32>, %[[ARG1:.*]]: memref<?xindex>)
 func.func @memref_reshape(%input : memref<2x3xf32>, %shape : memref<?xindex>) {
   %output = memref.reshape %input(%shape)
                 : (memref<2x3xf32>, memref<?xindex>) -> memref<*xf32>
   return
 }
-// CHECK: [[INPUT:%.*]] = builtin.unrealized_conversion_cast %{{.*}} to [[INPUT_TY:!.*]]
-// CHECK: [[SHAPE:%.*]] = builtin.unrealized_conversion_cast %{{.*}} to [[SHAPE_TY:!.*]]
+// CHECK-DAG: [[INPUT:%.*]] = builtin.unrealized_conversion_cast %[[ARG0]] : {{.*}} to [[INPUT_TY:!.*]]
+// CHECK-DAG: [[SHAPE:%.*]] = builtin.unrealized_conversion_cast %[[ARG1]] : {{.*}} to [[SHAPE_TY:!.*]]
 // CHECK: [[RANK:%.*]] = llvm.extractvalue [[SHAPE]][3, 0] : [[SHAPE_TY]]
 // CHECK: [[UNRANKED_OUT_O:%.*]] = llvm.mlir.undef : !llvm.struct<(i64, ptr)>
 // CHECK: [[UNRANKED_OUT_1:%.*]] = llvm.insertvalue [[RANK]], [[UNRANKED_OUT_O]][0] : !llvm.struct<(i64, ptr)>
diff --git a/mlir/test/Conversion/MemRefToLLVM/convert-static-memref-ops.mlir b/mlir/test/Conversion/MemRefToLLVM/convert-static-memref-ops.mlir
index f1600d43e7bfb..96cf2264e9c2f 100644
--- a/mlir/test/Conversion/MemRefToLLVM/convert-static-memref-ops.mlir
+++ b/mlir/test/Conversion/MemRefToLLVM/convert-static-memref-ops.mlir
@@ -115,13 +115,11 @@ func.func @zero_d_load(%arg0: memref<f32>) -> f32 {
 
 // -----
 
-// CHECK-LABEL: func @static_load
-// CHECK:         %[[MEMREF:.*]]: memref<10x42xf32>,
-// CHECK:         %[[I:.*]]: index,
-// CHECK:         %[[J:.*]]: index)
+// CHECK-LABEL: func @static_load(
+// CHECK-SAME:    %[[MEMREF:.*]]: memref<10x42xf32>, %[[I:.*]]: index, %[[J:.*]]: index)
 func.func @static_load(%static : memref<10x42xf32>, %i : index, %j : index) {
-// CHECK:  %[[II:.*]] = builtin.unrealized_conversion_cast %[[I]]
-// CHECK:  %[[JJ:.*]] = builtin.unrealized_conversion_cast %[[J]]
+// CHECK-DAG:  %[[II:.*]] = builtin.unrealized_conversion_cast %[[I]]
+// CHECK-DAG:  %[[JJ:.*]] = builtin.unrealized_conversion_cast %[[J]]
 // CHECK:  %[[ptr:.*]] = llvm.extractvalue %{{.*}}[1] : !llvm.struct<(ptr, ptr, i64, array<2 x i64>, array<2 x i64>)>
 // CHECK:  %[[st0:.*]] = llvm.mlir.constant(42 : index) : i64
 // CHECK:  %[[offI:.*]] = llvm.mul %[[II]], %[[st0]] : i64
@@ -148,8 +146,8 @@ func.func @zero_d_store(%arg0: memref<f32>, %arg1: f32) {
 // CHECK:         %[[MEMREF:.*]]: memref<10x42xf32>,
 // CHECK-SAME:    %[[I:.*]]: index, %[[J:.*]]: index,
 func.func @static_store(%static : memref<10x42xf32>, %i : index, %j : index, %val : f32) {
-// CHECK: %[[II:.*]] = builtin.unrealized_conversion_cast %[[I]]
-// CHECK: %[[JJ:.*]] = builtin.unrealized_conversion_cast %[[J]]
+// CHECK-DAG: %[[II:.*]] = builtin.unrealized_conversion_cast %[[I]]
+// CHECK-DAG: %[[JJ:.*]] = builtin.unrealized_conversion_cast %[[J]]
 // CHECK: %[[ptr:.*]] = llvm.extractvalue %{{.*}}[1] : !llvm.struct<(ptr, ptr, i64, array<2 x i64>, array<2 x i64>)>
 // CHECK: %[[st0:.*]] = llvm.mlir.constant(42 : index) : i64
 // CHECK: %[[offI:.*]] = llvm.mul %[[II]], %[[st0]] : i64
@@ -205,7 +203,7 @@ module attributes { dlti.dl_spec = #dlti.dl_spec<#dlti.dl_entry<index, 32>> } {
   func.func @address() {
     %c1 = arith.constant 1 : index
     %0 = memref.alloc(%c1) : memref<? x vector<2xf32>>
-    // CHECK: %[[CST_S:.*]] = arith.constant 1 : index
+    // CHECK-DAG: %[[CST_S:.*]] = arith.constant 1 : index
     // CHECK: %[[CST:.*]] = builtin.unrealized_conversion_cast
     // CHECK: llvm.mlir.zero
     // CHECK: llvm.getelementptr %{{.*}}[[CST]]
@@ -269,8 +267,8 @@ func.func @memref.reshape(%arg0: memref<4x5x6xf32>) -> memref<2x6x20xf32> {
 // CHECK-LABEL: func @memref.reshape.dynamic.dim
 // CHECK-SAME:    %[[arg:.*]]: memref<?x?x?xf32>, %[[shape:.*]]: memref<4xi64>) -> memref<?x?x12x32xf32>
 func.func @memref.reshape.dynamic.dim(%arg: memref<?x?x?xf32>, %shape: memref<4xi64>) -> memref<?x?x12x32xf32> {
-  // CHECK: %[[arg_cast:.*]] = builtin.unrealized_conversion_cast %[[arg]] : memref<?x?x?xf32> to !llvm.struct<(ptr, ptr, i64, array<3 x i64>, array<3 x i64>)>
-  // CHECK: %[[shape_cast:.*]] = builtin.unrealized_conversion_cast %[[shape]] : memref<4xi64> to !llvm.struct<(ptr, ptr, i64, array<1 x i64>, array<1 x i64>)>
+  // CHECK-DAG: %[[arg_cast:.*]] = builtin.unrealized_conversion_cast %[[arg]] : memref<?x?x?xf32> to !llvm.struct<(ptr, ptr, i64, array<3 x i64>, array<3 x i64>)>
+  // CHECK-DAG: %[[shape_cast:.*]] = builtin.unrealized_conversion_cast %[[shape]] : memref<4xi64> to !llvm.struct<(ptr, ptr, i64, array<1 x i64>, array<1 x i64>)>
   // CHECK: %[[undef:.*]] = llvm.mlir.undef : !llvm.struct<(ptr, ptr, i64, array<4 x i64>, array<4 x i64>)>
   // CHECK: %[[alloc_ptr:.*]] = llvm.extractvalue %[[arg_cast]][0] : !llvm.struct<(ptr, ptr, i64, array<3 x i64>, array<3 x i64>)>
   // CHECK: %[[align_ptr:.*]] = llvm.extractvalue %[[arg_cast]][1] : !llvm.struct<(ptr, ptr, i64, array<3 x i64>, array<3 x i64>)>
@@ -318,8 +316,8 @@ func.func @memref.reshape.dynamic.dim(%arg: memref<?x?x?xf32>, %shape: memref<4x
 // CHECK-LABEL: func @memref.reshape_index
 // CHECK-SAME:    %[[arg:.*]]: memref<?x?xi32>, %[[shape:.*]]: memref<1xindex>
 func.func @memref.reshape_index(%arg0: memref<?x?xi32>, %shape: memref<1xindex>) ->  memref<?xi32> {
-  // CHECK: %[[arg_cast:.*]] = builtin.unrealized_conversion_cast %[[arg]] : memref<?x?xi32> to !llvm.struct<(ptr, ptr, i64, array<2 x i64>, array<2 x i64>)>
-  // CHECK: %[[shape_cast:.*]] = builtin.unrealized_conversion_cast %[[shape]] : memref<1xindex> to !llvm.struct<(ptr, ptr, i64, array<1 x i64>, array<1 x i64>)>
+  // CHECK-DAG: %[[arg_cast:.*]] = builtin.unrealized_conversion_cast %[[arg]] : memref<?x?xi32> to !llvm.struct<(ptr, ptr, i64, array<2 x i64>, array<2 x i64>)>
+  // CHECK-DAG: %[[shape_cast:.*]] = builtin.unrealized_conversion_cast %[[shape]] : memref<1xindex> to !llvm.struct<(ptr, ptr, i64, array<1 x i64>, array<1 x i64>)>
   // CHECK: %[[undef:.*]] = llvm.mlir.undef : !llvm.struct<(ptr, ptr, i64, array<1 x i64>, array<1 x i64>)>
   // CHECK: %[[alloc_ptr:.*]] = llvm.extractvalue %[[arg_cast]][0] : !llvm.struct<(ptr, ptr, i64, array<2 x i64>, array<2 x i64>)>
   // CHECK: %[[align_ptr:.*]] = llvm.extractvalue %[[arg_cast]][1] : !llvm.struct<(ptr, ptr, i64, array<2 x i64>, array<2 x i64>)>
diff --git a/mlir/test/Conversion/MemRefToLLVM/memref-to-llvm.mlir b/mlir/test/Conversion/MemRefToLLVM/memref-to-llvm.mlir
index 882804132e66d..9dc22abf143bf 100644
--- a/mlir/test/Conversion/MemRefToLLVM/memref-to-llvm.mlir
+++ b/mlir/test/Conversion/MemRefToLLVM/memref-to-llvm.mlir
@@ -10,9 +10,9 @@
 // CHECK-LABEL: func @view(
 // CHECK: %[[ARG0F:.*]]: index, %[[ARG1F:.*]]: index, %[[ARG2F:.*]]: index
 func.func @view(%arg0 : index, %arg1 : index, %arg2 : index) {
-  // CHECK: %[[ARG2:.*]] = builtin.unrealized_conversion_cast %[[ARG2F:.*]]
-  // CHECK: %[[ARG0:.*]] = builtin.unrealized_conversion_cast %[[ARG0F:.*]]
-  // CHECK: %[[ARG1:.*]] = builtin.unrealized_conversion_cast %[[ARG1F:.*]]
+  // CHECK-DAG: %[[ARG2:.*]] = builtin.unrealized_conversion_cast %[[ARG2F]]
+  // CHECK-DAG: %[[ARG0:.*]] = builtin.unrealized_conversion_cast %[[ARG0F]]
+  // CHECK-DAG: %[[ARG1:.*]] = builtin.unrealized_conversion_cast %[[ARG1F]]
   // CHECK: llvm.mlir.constant(2048 : index) : i64
   // CHECK: llvm.mlir.undef : !llvm.struct<(ptr, ptr, i64, array<1 x i64>, array<1 x i64>)>
   %0 = memref.alloc() : memref<2048xi8>
@@ -408,8 +408,8 @@ func.func @atomic_rmw_with_offset(%I : memref<10xi32, strided<[1], offset: 5>>,
 // CHECK-SAME:   %[[ARG0:.+]]: memref<10xi32, strided<[1], offset: 5>>
 // CHECK-SAME:   %[[ARG1:.+]]: i32
 // CHECK-SAME:   %[[ARG2:.+]]: index
-// CHECK:        %[[MEMREF_STRUCT:.+]] = builtin.unrealized_conversion_cast %[[ARG0]] : memref<10xi32, strided<[1], offset: 5>> to !llvm.struct<(ptr, ptr, i64, array<1 x i64>, array<1 x i64>)>
-// CHECK:        %[[INDEX:.+]] = builtin.unrealized_conversion_cast %[[ARG2]] : index to i64
+// CHECK-DAG:    %[[MEMREF_STRUCT:.+]] = builtin.unrealized_conversion_cast %[[ARG0]] : memref<10xi32, strided<[1], offset: 5>> to !llvm.struct<(ptr, ptr, i64, array<1 x i64>, array<1 x i64>)>
+// CHECK-DAG:    %[[INDEX:.+]] = builtin.unrealized_conversion_cast %[[ARG2]] : index to i64
 // CHECK:        %[[BASE_PTR:.+]] = llvm.extractvalue %[[MEMREF_STRUCT]][1] : !llvm.struct<(ptr, ptr, i64, array<1 x i64>, array<1 x i64>)> 
 // CHECK:        %[[OFFSET:.+]] = llvm.mlir.constant(5 : index) : i64
 // CHECK:        %[[OFFSET_PTR:.+]] = llvm.getelementptr %[[BASE_PTR]][%[[OFFSET]]] : (!llvm.ptr, i64) -> !llvm.ptr, i32
diff --git a/mlir/test/Conversion/MemRefToSPIRV/bitwidth-emulation.mlir b/mlir/test/Conversion/MemRefToSPIRV/bitwidth-emulation.mlir
index 52ed14e8cce23..bb003e11993f5 100644
--- a/mlir/test/Conversion/MemRefToSPIRV/bitwidth-emulation.mlir
+++ b/mlir/test/Conversion/MemRefToSPIRV/bitwidth-emulation.mlir
@@ -196,8 +196,8 @@ func.func @load_i4(%arg0: memref<?xi4, #spirv.storage_class<StorageBuffer>>, %i:
 
 // CHECK-LABEL: @store_i4
 func.func @store_i4(%arg0: memref<?xi4, #spirv.storage_class<StorageBuffer>>, %value: i4, %i: index) {
-  // CHECK: %[[VAL:.+]] = builtin.unrealized_conversion_cast %{{.+}} : i4 to i32
-  // CHECK: %[[INDEX:.+]] = builtin.unrealized_conversion_cast %{{.+}} : index to i32
+  // CHECK-DAG: %[[VAL:.+]] = builtin.unrealized_conversion_cast %{{.+}} : i4 to i32
+  // CHECK-DAG: %[[INDEX:.+]] = builtin.unrealized_conversion_cast %{{.+}} : index to i32
   // CHECK: %[[ZERO:.+]] = spirv.Constant 0 : i32
   // CHECK: %[[EIGHT:.+]] = spirv.Constant 8 : i32
   // CHECK: %[[FOUR:.+]] = spirv.Constant 4 : i32
diff --git a/mlir/test/Conversion/MemRefToSPIRV/memref-to-spirv.mlir b/mlir/test/Conversion/MemRefToSPIRV/memref-to-spirv.mlir
index 10c03a270005f..6dd5b1988e2a2 100644
--- a/mlir/test/Conversion/MemRefToSPIRV/memref-to-spirv.mlir
+++ b/mlir/test/Conversion/MemRefToSPIRV/memref-to-spirv.mlir
@@ -16,10 +16,11 @@ module attributes {
       #spirv.resource_limits<>>
 } {
 
-// CHECK-LABEL: @load_store_zero_rank_float
+// CHECK-LABEL: @load_store_zero_rank_float(
+//  CHECK-SAME:     %[[OARG0:.*]]: memref{{.*}}, %[[OARG1:.*]]: memref
 func.func @load_store_zero_rank_float(%arg0: memref<f32, #spirv.storage_class<StorageBuffer>>, %arg1: memref<f32, #spirv.storage_class<StorageBuffer>>) {
-  //      CHECK: [[ARG0:%.*]] = builtin.unrealized_conversion_cast {{.+}} : memref<f32, #spirv.storage_class<StorageBuffer>> to !spirv.ptr<!spirv.struct<(!spirv.array<1 x f32, stride=4> [0])>, StorageBuffer>
-  //      CHECK: [[ARG1:%.*]] = builtin.unrealized_conversion_cast {{.+}} : memref<f32, #spirv.storage_class<StorageBuffer>> to !spirv.ptr<!spirv.struct<(!spirv.array<1 x f32, stride=4> [0])>, StorageBuffer>
+  //  CHECK-DAG: [[ARG0:%.*]] = builtin.unrealized_conversion_cast %[[OARG0]] : memref<f32, #spirv.storage_class<StorageBuffer>> to !spirv.ptr<!spirv.struct<(!spirv.array<1 x f32, stride=4> [0])>, StorageBuffer>
+  //  CHECK-DAG: [[ARG1:%.*]] = builtin.unrealized_conversion_cast %[[OARG1]] : memref<f32, #spirv.storage_class<StorageBuffer>> to !spirv.ptr<!spirv.struct<(!spirv.array<1 x f32, stride=4> [0])>, StorageBuffer>
   //      CHECK: [[ZERO:%.*]] = spirv.Constant 0 : i32
   //      CHECK: spirv.AccessChain [[ARG0]][
   // CHECK-SAME: [[ZERO]], [[ZERO]]
@@ -35,9 +36,10 @@ func.func @load_store_zero_rank_float(%arg0: memref<f32, #spirv.storage_class<St
 }
 
 // CHECK-LABEL: @load_store_zero_rank_int
+//  CHECK-SAME:     %[[OARG0:.*]]: memref{{.*}}, %[[OARG1:.*]]: memref
 func.func @load_store_zero_rank_int(%arg0: memref<i32, #spirv.storage_class<StorageBuffer>>, %arg1: memref<i32, #spirv.storage_class<StorageBuffer>>) {
-  //      CHECK: [[ARG0:%.*]] = builtin.unrealized_conversion_cast {{.+}} : memref<i32, #spirv.storage_class<StorageBuffer>> to !spirv.ptr<!spirv.struct<(!spirv.array<1 x i32, stride=4> [0])>, StorageBuffer>
-  //      CHECK: [[ARG1:%.*]] = builtin.unrealized_conversion_cast {{.+}} : memref<i32, #spirv.storage_class<StorageBuffer>> to !spirv.ptr<!spirv.struct<(!spirv.array<1 x i32, stride=4> [0])>, StorageBuffer>
+  //  CHECK-DAG: [[ARG0:%.*]] = builtin.unrealized_conversion_cast %[[OARG0]] : memref<i32, #spirv.storage_class<StorageBuffer>> to !spirv.ptr<!spirv.struct<(!spirv.array<1 x i32, stride=4> [0])>, StorageBuffer>
+  //  CHECK-DAG: [[ARG1:%.*]] = builtin.unrealized_conversion_cast %[[OARG1]] : memref<i32, #spirv.storage_class<StorageBuffer>> to !spirv.ptr<!spirv.struct<(!spirv.array<1 x i32, stride=4> [0])>, StorageBuffer>
   //      CHECK: [[ZERO:%.*]] = spirv.Constant 0 : i32
   //      CHECK: spirv.AccessChain [[ARG0]][
   // CHECK-SAME: [[ZERO]], [[ZERO]]
@@ -53,9 +55,10 @@ func.func @load_store_zero_rank_int(%arg0: memref<i32, #spirv.storage_class<Stor
 }
 
 // CHECK-LABEL: func @load_store_unknown_dim
+//  CHECK-SAME:     %[[OARG0:.*]]: index, %[[OARG1:.*]]: memref{{.*}}, %[[OARG2:.*]]: memref
 func.func @load_store_unknown_dim(%i: index, %source: memref<?xi32, #spirv.storage_class<StorageBuffer>>, %dest: memref<?xi32, #spirv.storage_class<StorageBuffer>>) {
-  // CHECK: %[[SRC:.+]] = builtin.unrealized_conversion_cast {{.+}} : memref<?xi32, #spirv.storage_class<StorageBuffer>> to !spirv.ptr<!spirv.struct<(!spirv.rtarray<i32, stride=4> [0])>, StorageBuffer>
-  // CHECK: %[[DST:.+]] = builtin.unrealized_conversion_cast {{.+}} : memref<?xi32, #spirv.storage_class<StorageBuffer>> to !spirv.ptr<!spirv.struct<(!spirv.rtarray<i32, stride=4> [0])>, StorageBuffer>
+  // CHECK-DAG: %[[SRC:.+]] = builtin.unrealized_conversion_cast %[[OARG1]] : memref<?xi32, #spirv.storage_class<StorageBuffer>> to !spirv.ptr<!spirv.struct<(!spirv.rtarray<i32, stride=4> [0])>, StorageBuffer>
+  // CHECK-DAG: %[[DST:.+]] = builtin.unrealized_conversion_cast %[[OARG2]] : memref<?xi32, #spirv.storage_class<StorageBuffer>> to !spirv.ptr<!spirv.struct<(!spirv.rtarray<i32, stride=4> [0])>, StorageBuffer>
   // CHECK: %[[AC0:.+]] = spirv.AccessChain %[[SRC]]
   // CHECK: spirv.Load "StorageBuffer" %[[AC0]]
   %0 = memref.load %source[%i] : memref<?xi32, #spirv.storage_class<StorageBuffer>>
@@ -173,9 +176,10 @@ module attributes {
 } {
 
 // CHECK-LABEL: @load_store_zero_rank_float
+//  CHECK-SAME:     %[[OARG0:.*]]: memref{{.*}}, %[[OARG1:.*]]: memref
 func.func @load_store_zero_rank_float(%arg0: memref<f32, #spirv.storage_class<CrossWorkgroup>>, %arg1: memref<f32, #spirv.storage_class<CrossWorkgroup>>) {
-  //      CHECK: [[ARG0:%.*]] = builtin.unrealized_conversion_cast {{.+}} : memref<f32, #spirv.storage_class<CrossWorkgroup>> to !spirv.ptr<!spirv.array<1 x f32>, CrossWorkgroup>
-  //      CHECK: [[ARG1:%.*]] = builtin.unrealized_conversion_cast {{.+}} : memref<f32, #spirv.storage_class<CrossWorkgroup>> to !spirv.ptr<!spirv.array<1 x f32>, CrossWorkgroup>
+  //  CHECK-DAG: [[ARG0:%.*]] = builtin.unrealized_conversion_cast %[[OARG0]] : memref<f32, #spirv.storage_class<CrossWorkgroup>> to !spirv.ptr<!spirv.array<1 x f32>, CrossWorkgroup>
+  //  CHECK-DAG: [[ARG1:%.*]] = builtin.unrealized_conversion_cast %[[OARG1]] : memref<f32, #spirv.storage_class<CrossWorkgroup>> to !spirv.ptr<!spirv.array<1 x f32>, CrossWorkgroup>
   //      CHECK: [[ZERO:%.*]] = spirv.Constant 0 : i32
   //      CHECK: spirv.AccessChain [[ARG0]][
   // CHECK-SAME: [[ZERO]]
@@ -191,9 +195,10 @@ func.func @load_store_zero_rank_float(%arg0: memref<f32, #spirv.storage_class<Cr
 }
 
 // CHECK-LABEL: @load_store_zero_rank_int
+//  CHECK-SAME:     %[[OARG0:.*]]: memref{{.*}}, %[[OARG1:.*]]: memref
 func.func @load_store_zero_rank_int(%arg0: memref<i32, #spirv.storage_class<CrossWorkgroup>>, %arg1: memref<i32, #spirv.storage_class<CrossWorkgroup>>) {
-  //      CHECK: [[ARG0:%.*]] = builtin.unrealized_conversion_cast {{.+}} : memref<i32, #spirv.storage_class<CrossWorkgroup>> to !spirv.ptr<!spirv.array<1 x i32>, CrossWorkgroup>
-  //      CHECK: [[ARG1:%.*]] = builtin.unrealized_conversion_cast {{.+}} : memref<i32, #spirv.storage_class<CrossWorkgroup>> to !spirv.ptr<!spirv.array<1 x i32>, CrossWorkgroup>
+  //  CHECK-DAG: [[ARG0:%.*]] = builtin.unrealized_conversion_cast %[[OARG0]] : memref<i32, #spirv.storage_class<CrossWorkgroup>> to !spirv.ptr<!spirv.array<1 x i32>, CrossWorkgroup>
+  //  CHECK-DAG: [[ARG1:%.*]] = builtin.unrealized_conversion_cast %[[OARG1]] : memref<i32, #spirv.storage_class<CrossWorkgroup>> to !spirv.ptr<!spirv.array<1 x i32>, CrossWorkgroup>
   //      CHECK: [[ZERO:%.*]] = spirv.Constant 0 : i32
   //      CHECK: spirv.AccessChain [[ARG0]][
   // CHECK-SAME: [[ZERO]]
@@ -209,9 +214,10 @@ func.func @load_store_zero_rank_int(%arg0: memref<i32, #spirv.storage_class<Cros
 }
 
 // CHECK-LABEL: func @load_store_unknown_dim
+//  CHECK-SAME:     %[[OARG0:.*]]: index, %[[OARG1:.*]]: memref{{.*}}, %[[OARG2:.*]]: memref
 func.func @load_store_unknown_dim(%i: index, %source: memref<?xi32, #spirv.storage_class<CrossWorkgroup>>, %dest: memref<?xi32, #spirv.storage_class<CrossWorkgroup>>) {
-  // CHECK: %[[SRC:.+]] = builtin.unrealized_conversion_cast {{.+}} : memref<?xi32, #spirv.storage_class<CrossWorkgroup>> to !spirv.ptr<i32, CrossWorkgroup>
-  // CHECK: %[[DST:.+]] = builtin.unrealized_conversion_cast {{.+}} : memref<?xi32, #spirv.storage_class<CrossWorkgroup>> to !spirv.ptr<i32, CrossWorkgroup>
+  // CHECK-DAG: %[[SRC:.+]] = builtin.unrealized_conversion_cast %[[OARG1]] : memref<?xi32, #spirv.storage_class<CrossWorkgroup>> to !spirv.ptr<i32, CrossWorkgroup>
+  // CHECK-DAG: %[[DST:.+]] = builtin.unrealized_conversion_cast %[[OARG2]] : memref<?xi32, #spirv.storage_class<CrossWorkgroup>> to !spirv.ptr<i32, CrossWorkgroup>
   // CHECK: %[[AC0:.+]] = spirv.PtrAccessChain %[[SRC]]
   // CHECK: spirv.Load "CrossWorkgroup" %[[AC0]]
   %0 = memref.load %source[%i] : memref<?xi32, #spirv.storage_class<CrossWorkgroup>>
@@ -328,8 +334,8 @@ module attributes {
 // CHECK-LABEL: func.func @reinterpret_cast
 //  CHECK-SAME:  (%[[MEM:.*]]: memref<?xf32, #spirv.storage_class<CrossWorkgroup>>, %[[OFF:.*]]: index)
 func.func @reinterpret_cast(%arg: memref<?xf32, #spirv.storage_class<CrossWorkgroup>>, %arg1: index) -> memref<?xf32, strided<[1], offset: ?>, #spirv.storage_class<CrossWorkgroup>> {
-//       CHECK:  %[[MEM1:.*]] = builtin.unrealized_conversion_cast %[[MEM]] : memref<?xf32, #spirv.storage_class<CrossWorkgroup>> to !spirv.ptr<f32, CrossWorkgroup>
-//       CHECK:  %[[OFF1:.*]] = builtin.unrealized_conversion_cast %[[OFF]] : index to i32
+//   CHECK-DAG:  %[[MEM1:.*]] = builtin.unrealized_conversion_cast %[[MEM]] : memref<?xf32, #spirv.storage_class<CrossWorkgroup>> to !spirv.ptr<f32, CrossWorkgroup>
+//   CHECK-DAG:  %[[OFF1:.*]] = builtin.unrealized_conversion_cast %[[OFF]] : index to i32
 //       CHECK:  %[[RET:.*]] = spirv.InBoundsPtrAccessChain %[[MEM1]][%[[OFF1]]] : !spirv.ptr<f32, CrossWorkgroup>, i32
 //       CHECK:  %[[RET1:.*]] = builtin.unrealized_conversion_cast %[[RET]] : !spirv.ptr<f32, CrossWorkgroup> to memref<?xf32, strided<[1], offset: ?>, #spirv.storage_class<CrossWorkgroup>>
 //       CHECK:  return %[[RET1]]
@@ -340,8 +346,8 @@ func.func @reinterpret_cast(%arg: memref<?xf32, #spirv.storage_class<CrossWorkgr
 // CHECK-LABEL: func.func @reinterpret_cast_0
 //  CHECK-SAME:  (%[[MEM:.*]]: memref<?xf32, #spirv.storage_class<CrossWorkgroup>>)
 func.func @reinterpret_cast_0(%arg: memref<?xf32, #spirv.storage_class<CrossWorkgroup>>) -> memref<?xf32, strided<[1], offset: ?>, #spirv.storage_class<CrossWorkgroup>> {
-//       CHECK:  %[[MEM1:.*]] = builtin.unrealized_conversion_cast %[[MEM]] : memref<?xf32, #spirv.storage_class<CrossWorkgroup>> to !spirv.ptr<f32, CrossWorkgroup>
-//       CHECK:  %[[RET:.*]] = builtin.unrealized_conversion_cast %[[MEM1]] : !spirv.ptr<f32, CrossWorkgroup> to memref<?xf32, strided<[1], offset: ?>, #spirv.storage_class<CrossWorkgroup>>
+//   CHECK-DAG:  %[[MEM1:.*]] = builtin.unrealized_conversion_cast %[[MEM]] : memref<?xf32, #spirv.storage_class<CrossWorkgroup>> to !spirv.ptr<f32, CrossWorkgroup>
+//   CHECK-DAG:  %[[RET:.*]] = builtin.unrealized_conversion_cast %[[MEM1]] : !spirv.ptr<f32, CrossWorkgroup> to memref<?xf32, strided<[1], offset: ?>, #spirv.storage_class<CrossWorkgroup>>
 //       CHECK:  return %[[RET]]
   %ret = memref.reinterpret_cast %arg to offset: [0], sizes: [10], strides: [1] : memref<?xf32, #spirv.storage_class<CrossWorkgroup>> to memref<?xf32, strided<[1], offset: ?>, #spirv.storage_class<CrossWorkgroup>>
   return %ret : memref<?xf32, strided<[1], offset: ?>, #spirv.storage_class<CrossWorkgroup>>
@@ -375,8 +381,8 @@ module attributes {
 // CHECK-LABEL: func.func @cast
 //  CHECK-SAME:  (%[[MEM:.*]]: memref<4x?xf32, #spirv.storage_class<CrossWorkgroup>>)
 func.func @cast(%arg: memref<4x?xf32, #spirv.storage_class<CrossWorkgroup>>) -> memref<?x4xf32, #spirv.storage_class<CrossWorkgroup>> {
-//       CHECK:  %[[MEM1:.*]] = builtin.unrealized_conversion_cast %[[MEM]] : memref<4x?xf32, #spirv.storage_class<CrossWorkgroup>> to !spirv.ptr<f32, CrossWorkgroup>
-//       CHECK:  %[[MEM2:.*]] = builtin.unrealized_conversion_cast %[[MEM1]] : !spirv.ptr<f32, CrossWorkgroup> to memref<?x4xf32, #spirv.storage_class<CrossWorkgroup>>
+//   CHECK-DAG:  %[[MEM1:.*]] = builtin.unrealized_conversion_cast %[[MEM]] : memref<4x?xf32, #spirv.storage_class<CrossWorkgroup>> to !spirv.ptr<f32, CrossWorkgroup>
+//   CHECK-DAG:  %[[MEM2:.*]] = builtin.unrealized_conversion_cast %[[MEM1]] : !spirv.ptr<f32, CrossWorkgroup> to memref<?x4xf32, #spirv.storage_class<CrossWorkgroup>>
 //       CHECK:  return %[[MEM2]]
   %ret = memref.cast %arg : memref<4x?xf32, #spirv.storage_class<CrossWorkgroup>> to memref<?x4xf32, #spirv.storage_class<CrossWorkgroup>>
   return %ret : memref<?x4xf32, #spirv.storage_class<CrossWorkgroup>>
diff --git a/mlir/test/Conversion/NVGPUToNVVM/nvgpu-to-nvvm.mlir b/mlir/test/Conversion/NVGPUToNVVM/nvgpu-to-nvvm.mlir
index 73d2367915284..86a552c03a473 100644
--- a/mlir/test/Conversion/NVGPUToNVVM/nvgpu-to-nvvm.mlir
+++ b/mlir/test/Conversion/NVGPUToNVVM/nvgpu-to-nvvm.mlir
@@ -275,8 +275,8 @@ func.func @async_cp_i4(
 // CHECK-SAME: %[[IDX:[a-zA-Z0-9_]+]]: index, %[[SRCELEMENTS:[a-zA-Z0-9_]+]]: index
 func.func @async_cp_zfill_f32_align4(
   %src: memref<128x128xf32>, %dst: memref<3x16x128xf32, 3>, %i : index, %srcElements : index) {
-  // CHECK: %[[IDX1:.*]] = builtin.unrealized_conversion_cast %[[IDX]] : index to i64
-  // CHECK: %[[SRC1:.*]] = builtin.unrealized_conversion_cast %[[SRCELEMENTS]] : index to i64
+  // CHECK-DAG: %[[IDX1:.*]] = builtin.unrealized_conversion_cast %[[IDX]] : index to i64
+  // CHECK-DAG: %[[SRC1:.*]] = builtin.unrealized_conversion_cast %[[SRCELEMENTS]] : index to i64
   // CHECK-DAG: %[[BASEDST:.*]] = llvm.extractvalue %{{.*}}[1] : !llvm.struct<(ptr<3>, ptr<3>, i64, array<3 x i64>, array<3 x i64>)>                                   
   // CHECK-DAG: %[[S2048:.*]] = llvm.mlir.constant(2048 : index) : i64
   // CHECK-DAG: %[[LI1:.*]] = llvm.mul %[[IDX1]], %[[S2048]] : i64
@@ -310,8 +310,8 @@ func.func @async_cp_zfill_f32_align4(
 // CHECK-SAME: %[[IDX:[a-zA-Z0-9_]+]]: index, %[[SRCELEMENTS:[a-zA-Z0-9_]+]]: index)
 func.func @async_cp_zfill_f32_align1(
   %src: memref<128x128xf32>, %dst: memref<3x16x128xf32, 3>, %i : index, %srcElements : index) {
-    // CHECK: %[[IDX1:.*]] = builtin.unrealized_conversion_cast %[[IDX]] : index to i64
-  // CHECK: %[[SRC1:.*]] = builtin.unrealized_conversion_cast %[[SRCELEMENTS]] : index to i64
+  // CHECK-DAG: %[[IDX1:.*]] = builtin.unrealized_conversion_cast %[[IDX]] : index to i64
+  // CHECK-DAG: %[[SRC1:.*]] = builtin.unrealized_conversion_cast %[[SRCELEMENTS]] : index to i64
   // CHECK-DAG: %[[BASEDST:.*]] = llvm.extractvalue %{{.*}}[1] : !llvm.struct<(ptr<3>, ptr<3>, i64, array<3 x i64>, array<3 x i64>)>                                   
   // CHECK-DAG: %[[S2048:.*]] = llvm.mlir.constant(2048 : index) : i64
   // CHECK-DAG: %[[LI1:.*]] = llvm.mul %[[IDX1]], %[[S2048]] : i64
@@ -532,8 +532,11 @@ func.func @mbarrier_nocomplete() {
   func.return 
 }
 
-// CHECK-LABEL: func @mbarrier_wait
+// CHECK-LABEL: func @mbarrier_wait(
+//  CHECK-SAME:     %[[ARG0:.*]]: !nvgpu.mbarrier.group{{.*}}, %[[ARG1:.*]]: !nvgpu.mbarrier.token)
 func.func @mbarrier_wait(%barriers : !nvgpu.mbarrier.group<memorySpace = #gpu.address_space<workgroup>, num_barriers = 5>, %token : !tokenType) {
+// CHECK-DAG: %[[CARG0:.*]] = builtin.unrealized_conversion_cast %[[ARG0]]
+// CHECK-DAG: %[[CARG1:.*]] = builtin.unrealized_conversion_cast %[[ARG1]]
   %c0 = arith.constant 0 : index
   %c1 = arith.constant 1 : index
   %n = arith.constant 100 : index
@@ -545,8 +548,9 @@ func.func @mbarrier_wait(%barriers : !nvgpu.mbarrier.group<memorySpace = #gpu.ad
 // CHECK: scf.for %[[i:.*]] =
 // CHECK: %[[S2:.+]] = arith.remui %[[i]], %[[c5]] : index
 // CHECK: %[[S3:.+]] = builtin.unrealized_conversion_cast %[[S2]] : index to i64
-// CHECK: %[[S4:.+]] = llvm.extractvalue %0[1] : !llvm.struct<(ptr<3>, ptr<3>, i64, array<1 x i64>, array<1 x i64>)> 
+// CHECK: %[[S4:.+]] = llvm.extractvalue %[[CARG0]][1] : !llvm.struct<(ptr<3>, ptr<3>, i64, array<1 x i64>, array<1 x i64>)> 
 // CHECK: %[[S5:.+]] = llvm.getelementptr %[[S4]][%[[S3]]] : (!llvm.ptr<3>, i64) -> !llvm.ptr<3>, i64
+// CHECK: nvvm.mbarrier.test.wait.shared {{.*}}, %[[CARG1]]
     %mbarId = arith.remui %i, %numBarriers : index
     %isDone = nvgpu.mbarrier.test.wait %barriers[%mbarId], %token : !nvgpu.mbarrier.group<memorySpace = #gpu.address_space<workgroup>, num_barriers = 5>, !tokenType
   }
@@ -871,9 +875,9 @@ func.func @warpgroup_mma_128_128_64(
       %descB: !nvgpu.warpgroup.descriptor<tensor = memref<64x128xf16, 3>>, 
       %acc: !nvgpu.warpgroup.accumulator<fragmented = vector<128x128xf32>>) 
 {
-// CHECK: %[[S0:.+]] = builtin.unrealized_conversion_cast %[[arg0]] : !nvgpu.warpgroup.descriptor<tensor = memref<128x64xf16, 3>> to i64
-// CHECK: %[[S1:.+]] = builtin.unrealized_conversion_cast %[[arg1]] : !nvgpu.warpgroup.descriptor<tensor = memref<64x128xf16, 3>> to i64
-// CHECK: %[[ARG:.+]] = builtin.unrealized_conversion_cast %[[arg2]] : !nvgpu.warpgroup.accumulator<fragmented = vector<128x128xf32>> to !llvm.struct<(struct<(f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32)>, struct<(f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32)>)>
+// CHECK-DAG: %[[S0:.+]] = builtin.unrealized_conversion_cast %[[arg0]] : !nvgpu.warpgroup.descriptor<tensor = memref<128x64xf16, 3>> to i64
+// CHECK-DAG: %[[S1:.+]] = builtin.unrealized_conversion_cast %[[arg1]] : !nvgpu.warpgroup.descriptor<tensor = memref<64x128xf16, 3>> to i64
+// CHECK-DAG: %[[ARG:.+]] = builtin.unrealized_conversion_cast %[[arg2]] : !nvgpu.warpgroup.accumulator<fragmented = vector<128x128xf32>> to !llvm.struct<(struct<(f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32)>, struct<(f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32)>)>
 // CHECK: nvvm.wgmma.fence.aligned
 // CHECK: %[[UD:.+]] =  llvm.mlir.undef : !llvm.struct<(struct<(f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32)>, struct<(f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32)>)>
 // CHECK: %[[S2:.+]] = llvm.extractvalue %[[ARG]][0] : !llvm.struct<(struct<(f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32)>, struct<(f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32)>)> 
@@ -1280,9 +1284,9 @@ func.func @warpgroup_matrix_multiply_m128n128k64(
     to memref<128x128xf32,3>
 
 
-// CHECK: %[[S0:.+]] = builtin.unrealized_conversion_cast %[[arg0]] : !nvgpu.warpgroup.descriptor<tensor = memref<128x64xf16, 3>> to i64
-// CHECK: %[[S1:.+]] = builtin.unrealized_conversion_cast %[[arg1]] : !nvgpu.warpgroup.descriptor<tensor = memref<64x128xf16, 3>> to i64
-// CHECK: %[[S2:.+]] = builtin.unrealized_conversion_cast %[[arg2]] : memref<128x128xf32, 3> to !llvm.struct<(ptr<3>, ptr<3>, i64, array<2 x i64>, array<2 x i64>)>
+// CHECK-DAG: %[[S0:.+]] = builtin.unrealized_conversion_cast %[[arg0]] : !nvgpu.warpgroup.descriptor<tensor = memref<128x64xf16, 3>> to i64
+// CHECK-DAG: %[[S1:.+]] = builtin.unrealized_conversion_cast %[[arg1]] : !nvgpu.warpgroup.descriptor<tensor = memref<64x128xf16, 3>> to i64
+// CHECK-DAG: %[[S2:.+]] = builtin.unrealized_conversion_cast %[[arg2]] : memref<128x128xf32, 3> to !llvm.struct<(ptr<3>, ptr<3>, i64, array<2 x i64>, array<2 x i64>)>
 // CHECK: %[[S3:.+]] = llvm.mlir.constant(0.000000e+00 : f32) : f32
 // CHECK: %[[S4:.+]] = llvm.mlir.undef : !llvm.struct<(struct<(f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32)>, struct<(f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32)>)>
 // CHECK: %[[S5:.+]] = llvm.extractvalue %[[S4]][0] : !llvm.struct<(struct<(f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32)>, struct<(f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32)>)> 
@@ -1296,7 +1300,7 @@ func.func @warpgroup_matrix_multiply_m128n128k64(
 // CHECK: nvvm.wgmma.fence.aligned
 // CHECK: %[[S137:.+]] = llvm.mlir.undef : !llvm.struct<(struct<(f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32)>, struct<(f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32)>)>
 // CHECK: %[[S138:.+]] = llvm.extractvalue %136[0] : !llvm.struct<(struct<(f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32)>, struct<(f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32)>)> 
-// CHECK: %[[S139:.+]] = nvvm.wgmma.mma_async %0, %1, %[[S138]], <m = 64, n = 128, k = 16>, D[<f32>, <one>, <wrapped>], A[<f16>, <one>, <row>], B[<f16>, <one>, <row>] : !llvm.struct<(f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32)> -> !llvm.struct<(f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32)>
+// CHECK: %[[S139:.+]] = nvvm.wgmma.mma_async %[[S0]], %1, %[[S138]], <m = 64, n = 128, k = 16>, D[<f32>, <one>, <wrapped>], A[<f16>, <one>, <row>], B[<f16>, <one>, <row>] : !llvm.struct<(f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32)> -> !llvm.struct<(f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32, f32)>
 // CHECK: nvvm.wgmma.mma_async
 // CHECK: nvvm.wgmma.mma_async
 // CHECK: %[[S154:.+]] = nvvm.wgmma.mma_async
diff --git a/mlir/test/Conversion/VectorToLLVM/vector-scalable-memcpy.mlir b/mlir/test/Conversion/VectorToLLVM/vector-scalable-memcpy.mlir
index 811b10721bf28..80e6caa05db5e 100644
--- a/mlir/test/Conversion/VectorToLLVM/vector-scalable-memcpy.mlir
+++ b/mlir/test/Conversion/VectorToLLVM/vector-scalable-memcpy.mlir
@@ -6,8 +6,8 @@ func.func @vector_scalable_memcopy(%src : memref<?xf32>, %dst : memref<?xf32>, %
   %c4 = arith.constant 4 : index
   %vs = vector.vscale
   %step = arith.muli %c4, %vs : index
-  // CHECK: [[SRCMRS:%[0-9]+]] = builtin.unrealized_conversion_cast [[SRC]] : memref<?xf32> to !llvm.struct<(ptr
-  // CHECK: [[DSTMRS:%[0-9]+]] = builtin.unrealized_conversion_cast [[DST]] : memref<?xf32> to !llvm.struct<(ptr
+  // CHECK-DAG: [[SRCMRS:%[0-9]+]] = builtin.unrealized_conversion_cast [[SRC]] : memref<?xf32> to !llvm.struct<(ptr
+  // CHECK-DAG: [[DSTMRS:%[0-9]+]] = builtin.unrealized_conversion_cast [[DST]] : memref<?xf32> to !llvm.struct<(ptr
   // CHECK: scf.for [[LOOPIDX:%arg[0-9]+]] = {{.*}}
   scf.for %i0 = %c0 to %size step %step {
     // CHECK: [[DATAIDX:%[0-9]+]] = builtin.unrealized_conversion_cast [[LOOPIDX]] : index to i64
diff --git a/mlir/test/Conversion/VectorToLLVM/vector-to-llvm.mlir b/mlir/test/Conversion/VectorToLLVM/vector-to-llvm.mlir
index 54dcf07053906..bf4281ebcdec9 100644
--- a/mlir/test/Conversion/VectorToLLVM/vector-to-llvm.mlir
+++ b/mlir/test/Conversion/VectorToLLVM/vector-to-llvm.mlir
@@ -1262,8 +1262,8 @@ func.func @insert_strided_slice_scalable(%arg0 : vector<1x1x[4]xi32>, %arg1: vec
 // CHECK-SAME:      %[[ARG_0:.*]]: vector<1x1x[4]xi32>,
 // CHECK-SAME:      %[[ARG_1:.*]]: vector<1x4x[4]xi32>) -> vector<1x4x[4]xi32> {
 
-//      CHECK:      %[[CAST_1:.*]] = builtin.unrealized_conversion_cast %[[ARG_0]] : vector<1x1x[4]xi32> to !llvm.array<1 x array<1 x vector<[4]xi32>>>
-//      CHECK:      %[[CAST_2:.*]] = builtin.unrealized_conversion_cast %[[ARG_1]] : vector<1x4x[4]xi32> to !llvm.array<1 x array<4 x vector<[4]xi32>>>
+//  CHECK-DAG:      %[[CAST_1:.*]] = builtin.unrealized_conversion_cast %[[ARG_0]] : vector<1x1x[4]xi32> to !llvm.array<1 x array<1 x vector<[4]xi32>>>
+//  CHECK-DAG:      %[[CAST_2:.*]] = builtin.unrealized_conversion_cast %[[ARG_1]] : vector<1x4x[4]xi32> to !llvm.array<1 x array<4 x vector<[4]xi32>>>
 
 //      CHECK:      %[[EXT_1:.*]] = llvm.extractvalue %[[CAST_2]][0] : !llvm.array<1 x array<4 x vector<[4]xi32>>>
 //      CHECK:      %[[EXT_2:.*]] = llvm.extractvalue %[[CAST_1]][0, 0] : !llvm.array<1 x array<1 x vector<[4]xi32>>>
@@ -2491,8 +2491,8 @@ func.func @make_fixed_vector_of_scalable_vector(%f : f64) -> vector<3x[2]xf64>
 // CHECK-LABEL: @vector_interleave_0d
 //  CHECK-SAME:     %[[LHS:.*]]: vector<i8>, %[[RHS:.*]]: vector<i8>)
 func.func @vector_interleave_0d(%a: vector<i8>, %b: vector<i8>) -> vector<2xi8> {
-  // CHECK: %[[LHS_RANK1:.*]] = builtin.unrealized_conversion_cast %[[LHS]] : vector<i8> to vector<1xi8>
-  // CHECK: %[[RHS_RANK1:.*]] = builtin.unrealized_conversion_cast %[[RHS]] : vector<i8> to vector<1xi8>
+  // CHECK-DAG: %[[LHS_RANK1:.*]] = builtin.unrealized_conversion_cast %[[LHS]] : vector<i8> to vector<1xi8>
+  // CHECK-DAG: %[[RHS_RANK1:.*]] = builtin.unrealized_conversion_cast %[[RHS]] : vector<i8> to vector<1xi8>
   // CHECK: %[[ZIP:.*]] = llvm.shufflevector %[[LHS_RANK1]], %[[RHS_RANK1]] [0, 1] : vector<1xi8>
   // CHECK: return %[[ZIP]]
   %0 = vector.interleave %a, %b : vector<i8> -> vector<2xi8>
diff --git a/mlir/test/Conversion/VectorToSPIRV/vector-to-spirv.mlir b/mlir/test/Conversion/VectorToSPIRV/vector-to-spirv.mlir
index 6c6a9a1d0c6c5..0d67851dfe41d 100644
--- a/mlir/test/Conversion/VectorToSPIRV/vector-to-spirv.mlir
+++ b/mlir/test/Conversion/VectorToSPIRV/vector-to-spirv.mlir
@@ -401,8 +401,8 @@ func.func @splat_size1_vector(%f : f32) -> vector<1xf32> {
 
 // CHECK-LABEL:  func @shuffle
 //  CHECK-SAME:  %[[ARG0:.+]]: vector<1xf32>, %[[ARG1:.+]]: vector<1xf32>
-//       CHECK:    %[[V0:.+]] = builtin.unrealized_conversion_cast %[[ARG0]]
-//       CHECK:    %[[V1:.+]] = builtin.unrealized_conversion_cast %[[ARG1]]
+//   CHECK-DAG:    %[[V0:.+]] = builtin.unrealized_conversion_cast %[[ARG0]]
+//   CHECK-DAG:    %[[V1:.+]] = builtin.unrealized_conversion_cast %[[ARG1]]
 //       CHECK:    spirv.CompositeConstruct %[[V0]], %[[V1]], %[[V1]], %[[V0]] : (f32, f32, f32, f32) -> vector<4xf32>
 func.func @shuffle(%v0 : vector<1xf32>, %v1: vector<1xf32>) -> vector<4xf32> {
   %shuffle = vector.shuffle %v0, %v1 [0, 1, 1, 0] : vector<1xf32>, vector<1xf32>
@@ -413,8 +413,8 @@ func.func @shuffle(%v0 : vector<1xf32>, %v1: vector<1xf32>) -> vector<4xf32> {
 
 // CHECK-LABEL:  func @shuffle_index_vector
 //  CHECK-SAME:  %[[ARG0:.+]]: vector<1xindex>, %[[ARG1:.+]]: vector<1xindex>
-//       CHECK:    %[[V0:.+]] = builtin.unrealized_conversion_cast %[[ARG0]]
-//       CHECK:    %[[V1:.+]] = builtin.unrealized_conversion_cast %[[ARG1]]
+//   CHECK-DAG:    %[[V0:.+]] = builtin.unrealized_conversion_cast %[[ARG0]]
+//   CHECK-DAG:    %[[V1:.+]] = builtin.unrealized_conversion_cast %[[ARG1]]
 //       CHECK:    spirv.CompositeConstruct %[[V0]], %[[V1]], %[[V1]], %[[V0]] : (i32, i32, i32, i32) -> vector<4xi32>
 func.func @shuffle_index_vector(%v0 : vector<1xindex>, %v1: vector<1xindex>) -> vector<4xindex> {
   %shuffle = vector.shuffle %v0, %v1 [0, 1, 1, 0] : vector<1xindex>, vector<1xindex>
@@ -472,8 +472,8 @@ func.func @shuffle(%v0 : vector<3xi32>, %v1: vector<1xi32>) -> vector<3xi32> {
 
 // CHECK-LABEL:  func @shuffle
 //  CHECK-SAME:  %[[ARG0:.+]]: vector<1xi32>, %[[ARG1:.+]]: vector<1xi32>
-//       CHECK:    %[[V0:.+]] = builtin.unrealized_conversion_cast %[[ARG0]] : vector<1xi32> to i32
-//       CHECK:    %[[V1:.+]] = builtin.unrealized_conversion_cast %[[ARG1]] : vector<1xi32> to i32
+//   CHECK-DAG:    %[[V0:.+]] = builtin.unrealized_conversion_cast %[[ARG0]] : vector<1xi32> to i32
+//   CHECK-DAG:    %[[V1:.+]] = builtin.unrealized_conversion_cast %[[ARG1]] : vector<1xi32> to i32
 //       CHECK:    %[[RES:.+]] = spirv.CompositeConstruct %[[V0]], %[[V1]] : (i32, i32) -> vector<2xi32>
 //       CHECK:    return %[[RES]]
 func.func @shuffle(%v0 : vector<1xi32>, %v1: vector<1xi32>) -> vector<2xi32> {
@@ -496,8 +496,8 @@ func.func @interleave(%a: vector<2xf32>, %b: vector<2xf32>) -> vector<4xf32> {
 
 // CHECK-LABEL: func @interleave_size1
 // CHECK-SAME: (%[[ARG0:.+]]: vector<1xf32>, %[[ARG1:.+]]: vector<1xf32>)
-//       CHECK: %[[V0:.*]] = builtin.unrealized_conversion_cast %[[ARG0]] : vector<1xf32> to f32
-//       CHECK: %[[V1:.*]] = builtin.unrealized_conversion_cast %[[ARG1]] : vector<1xf32> to f32
+//  CHECK-DAG: %[[V0:.*]] = builtin.unrealized_conversion_cast %[[ARG0]] : vector<1xf32> to f32
+//  CHECK-DAG: %[[V1:.*]] = builtin.unrealized_conversion_cast %[[ARG1]] : vector<1xf32> to f32
 //       CHECK: %[[RES:.*]] = spirv.CompositeConstruct %[[V0]], %[[V1]] : (f32, f32) -> vector<2xf32>
 //       CHECK: return %[[RES]]
 func.func @interleave_size1(%a: vector<1xf32>, %b: vector<1xf32>) -> vector<2xf32> {
diff --git a/mlir/test/Dialect/SCF/bufferize.mlir b/mlir/test/Dialect/SCF/bufferize.mlir
index 6193101e9264d..ff1612310255a 100644
--- a/mlir/test/Dialect/SCF/bufferize.mlir
+++ b/mlir/test/Dialect/SCF/bufferize.mlir
@@ -4,8 +4,8 @@
 // CHECK-SAME:             %[[PRED:.*]]: i1,
 // CHECK-SAME:             %[[TRUE_TENSOR:.*]]: tensor<?xf32>,
 // CHECK-SAME:             %[[FALSE_TENSOR:.*]]: tensor<?xf32>) -> tensor<?xf32> {
-// CHECK:           %[[TRUE_MEMREF:.*]] = bufferization.to_memref %[[TRUE_TENSOR]] : memref<?xf32>
-// CHECK:           %[[FALSE_MEMREF:.*]] = bufferization.to_memref %[[FALSE_TENSOR]] : memref<?xf32>
+// CHECK-DAG:       %[[TRUE_MEMREF:.*]] = bufferization.to_memref %[[TRUE_TENSOR]] : memref<?xf32>
+// CHECK-DAG:       %[[FALSE_MEMREF:.*]] = bufferization.to_memref %[[FALSE_TENSOR]] : memref<?xf32>
 // CHECK:           %[[RESULT_MEMREF:.*]] = scf.if %[[PRED]] -> (memref<?xf32>) {
 // CHECK:             scf.yield %[[TRUE_MEMREF]] : memref<?xf32>
 // CHECK:           } else {
diff --git a/mlir/test/Dialect/Vector/linearize.mlir b/mlir/test/Dialect/Vector/linearize.mlir
index 31a59b809a74b..ec3806ca0f204 100644
--- a/mlir/test/Dialect/Vector/linearize.mlir
+++ b/mlir/test/Dialect/Vector/linearize.mlir
@@ -35,8 +35,8 @@ func.func @test_linearize(%arg0: vector<2x2xf32>) -> vector<2x2xf32> {
 // ALL-LABEL: test_partial_linearize
 // ALL-SAME: (%[[ORIG_ARG:.*]]: vector<2x2xf32>, %[[ORIG_ARG2:.*]]: vector<4x4xf32>)
 func.func @test_partial_linearize(%arg0: vector<2x2xf32>, %arg1: vector<4x4xf32>) -> vector<2x2xf32> {
-  // DEFAULT: %[[ARG:.*]] = vector.shape_cast %[[ORIG_ARG]] : vector<2x2xf32> to vector<4xf32>
-  // DEFAULT: %[[ARG2:.*]] = vector.shape_cast %[[ORIG_ARG2]] : vector<4x4xf32> to vector<16xf32>
+  // DEFAULT-DAG: %[[ARG:.*]] = vector.shape_cast %[[ORIG_ARG]] : vector<2x2xf32> to vector<4xf32>
+  // DEFAULT-DAG: %[[ARG2:.*]] = vector.shape_cast %[[ORIG_ARG2]] : vector<4x4xf32> to vector<16xf32>
   // DEFAULT: %[[CST:.*]] = arith.constant dense<{{.*}}> : vector<4xf32>
   // DEFAULT: %[[RES:.*]] = vector.shape_cast %[[CST]] : vector<4xf32> to vector<2x2xf32>
 
@@ -204,15 +204,15 @@ func.func @test_extract_strided_slice_2(%arg0 : vector<2x8x2xf32>) -> vector<1x4
 // ALL-LABEL: test_vector_shuffle
 // ALL-SAME: (%[[ORIG_ARG0:.*]]: vector<4x2xf32>, %[[ORIG_ARG1:.*]]: vector<4x2xf32>) -> vector<8x2xf32> {
 func.func @test_vector_shuffle(%arg0: vector<4x2xf32>, %arg1: vector<4x2xf32>) -> vector<8x2xf32> {
-  // DEFAULT: %[[ARG0:.*]] = vector.shape_cast %[[ORIG_ARG0]] : vector<4x2xf32> to vector<8xf32>
-  // DEFAULT: %[[ARG1:.*]] = vector.shape_cast %[[ORIG_ARG1]] : vector<4x2xf32> to vector<8xf32>
+  // DEFAULT-DAG: %[[ARG0:.*]] = vector.shape_cast %[[ORIG_ARG0]] : vector<4x2xf32> to vector<8xf32>
+  // DEFAULT-DAG: %[[ARG1:.*]] = vector.shape_cast %[[ORIG_ARG1]] : vector<4x2xf32> to vector<8xf32>
   // DEFAULT: %[[SHUFFLE:.*]] = vector.shuffle %[[ARG0]], %[[ARG1]]
   // DEFAULT-SAME: [0, 1, 8, 9, 2, 3, 10, 11, 4, 5, 12, 13, 6, 7, 14, 15] : vector<8xf32>, vector<8xf32>
   // DEFAULT: %[[RES:.*]] = vector.shape_cast %[[SHUFFLE]] : vector<16xf32> to vector<8x2xf32>
   // DEFAULT: return %[[RES]] : vector<8x2xf32>
 
-  // BW-128: %[[ARG0:.*]] = vector.shape_cast %[[ORIG_ARG0]] : vector<4x2xf32> to vector<8xf32>
-  // BW-128: %[[ARG1:.*]] = vector.shape_cast %[[ORIG_ARG1]] : vector<4x2xf32> to vector<8xf32>
+  // BW-128-DAG: %[[ARG0:.*]] = vector.shape_cast %[[ORIG_ARG0]] : vector<4x2xf32> to vector<8xf32>
+  // BW-128-DAG: %[[ARG1:.*]] = vector.shape_cast %[[ORIG_ARG1]] : vector<4x2xf32> to vector<8xf32>
   // BW-128: %[[SHUFFLE:.*]] = vector.shuffle %[[ARG0]], %[[ARG1]]
   // BW-128-SAME: [0, 1, 8, 9, 2, 3, 10, 11, 4, 5, 12, 13, 6, 7, 14, 15] : vector<8xf32>, vector<8xf32>
   // BW-128: %[[RES:.*]] = vector.shape_cast %[[SHUFFLE]] : vector<16xf32> to vector<8x2xf32>
@@ -250,8 +250,8 @@ func.func @test_vector_extract(%arg0: vector<2x8x2xf32>) -> vector<8x2xf32> {
 // ALL-LABEL: test_vector_insert
 // ALL-SAME: (%[[DEST:.*]]: vector<2x8x4xf32>, %[[SRC:.*]]: vector<8x4xf32>) -> vector<2x8x4xf32> {
 func.func @test_vector_insert(%arg0: vector<2x8x4xf32>, %arg1: vector<8x4xf32>) -> vector<2x8x4xf32> {
-  // DEFAULT: %[[ARG_SRC:.*]] = vector.shape_cast %[[SRC]] : vector<8x4xf32> to vector<32xf32>
-  // DEFAULT: %[[ARG_DEST:.*]] = vector.shape_cast %[[DEST]] : vector<2x8x4xf32> to vector<64xf32>
+  // DEFAULT-DAG: %[[ARG_SRC:.*]] = vector.shape_cast %[[SRC]] : vector<8x4xf32> to vector<32xf32>
+  // DEFAULT-DAG: %[[ARG_DEST:.*]] = vector.shape_cast %[[DEST]] : vector<2x8x4xf32> to vector<64xf32>
   // DEFAULT: %[[SHUFFLE:.*]] = vector.shuffle %[[ARG_DEST]], %[[ARG_SRC]]
   // DEFAULT-SAME: [64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87,
   // DEFAULT-SAME: 88, 89, 90, 91, 92, 93, 94, 95, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48,
@@ -259,8 +259,8 @@ func.func @test_vector_insert(%arg0: vector<2x8x4xf32>, %arg1: vector<8x4xf32>)
   // DEFAULT: %[[RES:.*]] = vector.shape_cast %[[SHUFFLE]] : vector<64xf32> to vector<2x8x4xf32>
   // DEFAULT: return %[[RES]] : vector<2x8x4xf32>
 
-  // BW-128: %[[ARG_SRC:.*]] = vector.shape_cast %[[SRC]] : vector<8x4xf32> to vector<32xf32>
-  // BW-128: %[[ARG_DEST:.*]] = vector.shape_cast %[[DEST]] : vector<2x8x4xf32> to vector<64xf32>
+  // BW-128-DAG: %[[ARG_SRC:.*]] = vector.shape_cast %[[SRC]] : vector<8x4xf32> to vector<32xf32>
+  // BW-128-DAG: %[[ARG_DEST:.*]] = vector.shape_cast %[[DEST]] : vector<2x8x4xf32> to vector<64xf32>
   // BW-128: %[[SHUFFLE:.*]] = vector.shuffle %[[ARG_DEST]], %[[ARG_SRC]]
   // BW-128-SAME: [64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87,
   // BW-128-SAME: 88, 89, 90, 91, 92, 93, 94, 95, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48,

>From 15399890beb69f622ad0f04a544369fa7947d50b Mon Sep 17 00:00:00 2001
From: Angel Zhang <angel.zhang at amd.com>
Date: Mon, 17 Jun 2024 14:08:56 -0400
Subject: [PATCH 02/26] [mlir][spirv] Remove debug option from the `RUN`
 command in `vector-deinterleave.mlir` (#95820)

This PR is based on #95800. It removes a debug option from the `RUN`
command in `vector-deinterleave.mlir`.
---
 mlir/test/mlir-vulkan-runner/vector-deinterleave.mlir | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mlir/test/mlir-vulkan-runner/vector-deinterleave.mlir b/mlir/test/mlir-vulkan-runner/vector-deinterleave.mlir
index b4d4b9aa64b24..f2214ebc10262 100644
--- a/mlir/test/mlir-vulkan-runner/vector-deinterleave.mlir
+++ b/mlir/test/mlir-vulkan-runner/vector-deinterleave.mlir
@@ -1,6 +1,6 @@
 // RUN: mlir-vulkan-runner %s \
 // RUN:  --shared-libs=%vulkan-runtime-wrappers,%mlir_runner_utils \
-// RUN:  --entry-point-result=void --debug-only=dialect-conversion | FileCheck %s
+// RUN:  --entry-point-result=void | FileCheck %s
 
 // CHECK: [0, 2]
 // CHECK: [1, 3]

>From 996905d8152def16ca2fa1322367e493ac6eef5e Mon Sep 17 00:00:00 2001
From: Peiming Liu <peiming at google.com>
Date: Mon, 17 Jun 2024 11:35:23 -0700
Subject: [PATCH 03/26] Revert "[mlir][sparse] implement lowering rules for
 IterateOp." (#95826)

Reverts llvm/llvm-project#95286
---
 .../Transforms/SparseIterationToScf.cpp       | 121 +-----------------
 .../Transforms/Utils/SparseTensorIterator.cpp |  40 ------
 .../Transforms/Utils/SparseTensorIterator.h   |  26 +---
 .../SparseTensor/sparse_iteration_to_scf.mlir |  54 ++------
 4 files changed, 17 insertions(+), 224 deletions(-)

diff --git a/mlir/lib/Dialect/SparseTensor/Transforms/SparseIterationToScf.cpp b/mlir/lib/Dialect/SparseTensor/Transforms/SparseIterationToScf.cpp
index f57be49f21b8c..62887c75c872b 100644
--- a/mlir/lib/Dialect/SparseTensor/Transforms/SparseIterationToScf.cpp
+++ b/mlir/lib/Dialect/SparseTensor/Transforms/SparseIterationToScf.cpp
@@ -34,20 +34,6 @@ convertIterSpaceType(IterSpaceType itSp, SmallVectorImpl<Type> &fields) {
   return success();
 }
 
-static std::optional<LogicalResult>
-convertIteratorType(IteratorType itTp, SmallVectorImpl<Type> &fields) {
-  // The actually Iterator Values (that are updated every iteration).
-  auto idxTp = IndexType::get(itTp.getContext());
-  // TODO: handle batch dimension.
-  assert(itTp.getEncoding().getBatchLvlRank() == 0);
-  if (!itTp.isUnique()) {
-    // Segment high for non-unique iterator.
-    fields.push_back(idxTp);
-  }
-  fields.push_back(idxTp);
-  return success();
-}
-
 namespace {
 
 /// Sparse codegen rule for number of entries operator.
@@ -71,114 +57,10 @@ class ExtractIterSpaceConverter
   }
 };
 
-class SparseIterateOpConverter : public OneToNOpConversionPattern<IterateOp> {
-public:
-  using OneToNOpConversionPattern::OneToNOpConversionPattern;
-  LogicalResult
-  matchAndRewrite(IterateOp op, OpAdaptor adaptor,
-                  OneToNPatternRewriter &rewriter) const override {
-    if (!op.getCrdUsedLvls().empty())
-      return rewriter.notifyMatchFailure(
-          op, "non-empty coordinates list not implemented.");
-
-    Location loc = op.getLoc();
-
-    auto iterSpace = SparseIterationSpace::fromValues(
-        op.getIterSpace().getType(), adaptor.getIterSpace(), 0);
-
-    std::unique_ptr<SparseIterator> it =
-        iterSpace.extractIterator(rewriter, loc);
-
-    if (it->iteratableByFor()) {
-      auto [lo, hi] = it->genForCond(rewriter, loc);
-      Value step = constantIndex(rewriter, loc, 1);
-      SmallVector<Value> ivs;
-      for (ValueRange inits : adaptor.getInitArgs())
-        llvm::append_range(ivs, inits);
-      scf::ForOp forOp = rewriter.create<scf::ForOp>(loc, lo, hi, step, ivs);
-
-      Block *loopBody = op.getBody();
-      OneToNTypeMapping bodyTypeMapping(loopBody->getArgumentTypes());
-      if (failed(typeConverter->convertSignatureArgs(
-              loopBody->getArgumentTypes(), bodyTypeMapping)))
-        return failure();
-      rewriter.applySignatureConversion(loopBody, bodyTypeMapping);
-
-      forOp.getBody()->erase();
-      Region &dstRegion = forOp.getRegion();
-      rewriter.inlineRegionBefore(op.getRegion(), dstRegion, dstRegion.end());
-
-      auto yieldOp =
-          llvm::cast<sparse_tensor::YieldOp>(forOp.getBody()->getTerminator());
-
-      rewriter.setInsertionPointToEnd(forOp.getBody());
-      // replace sparse_tensor.yield with scf.yield.
-      rewriter.create<scf::YieldOp>(loc, yieldOp.getResults());
-      yieldOp.erase();
-
-      const OneToNTypeMapping &resultMapping = adaptor.getResultMapping();
-      rewriter.replaceOp(op, forOp.getResults(), resultMapping);
-    } else {
-      SmallVector<Value> ivs;
-      llvm::append_range(ivs, it->getCursor());
-      for (ValueRange inits : adaptor.getInitArgs())
-        llvm::append_range(ivs, inits);
-
-      assert(llvm::all_of(ivs, [](Value v) { return v != nullptr; }));
-
-      TypeRange types = ValueRange(ivs).getTypes();
-      auto whileOp = rewriter.create<scf::WhileOp>(loc, types, ivs);
-      SmallVector<Location> l(types.size(), op.getIterator().getLoc());
-
-      // Generates loop conditions.
-      Block *before = rewriter.createBlock(&whileOp.getBefore(), {}, types, l);
-      rewriter.setInsertionPointToStart(before);
-      ValueRange bArgs = before->getArguments();
-      auto [whileCond, remArgs] = it->genWhileCond(rewriter, loc, bArgs);
-      assert(remArgs.size() == adaptor.getInitArgs().size());
-      rewriter.create<scf::ConditionOp>(loc, whileCond, before->getArguments());
-
-      // Generates loop body.
-      Block *loopBody = op.getBody();
-      OneToNTypeMapping bodyTypeMapping(loopBody->getArgumentTypes());
-      if (failed(typeConverter->convertSignatureArgs(
-              loopBody->getArgumentTypes(), bodyTypeMapping)))
-        return failure();
-      rewriter.applySignatureConversion(loopBody, bodyTypeMapping);
-
-      Region &dstRegion = whileOp.getAfter();
-      // TODO: handle uses of coordinate!
-      rewriter.inlineRegionBefore(op.getRegion(), dstRegion, dstRegion.end());
-      ValueRange aArgs = whileOp.getAfterArguments();
-      auto yieldOp = llvm::cast<sparse_tensor::YieldOp>(
-          whileOp.getAfterBody()->getTerminator());
-
-      rewriter.setInsertionPointToEnd(whileOp.getAfterBody());
-
-      aArgs = it->linkNewScope(aArgs);
-      ValueRange nx = it->forward(rewriter, loc);
-      SmallVector<Value> yields;
-      llvm::append_range(yields, nx);
-      llvm::append_range(yields, yieldOp.getResults());
-
-      // replace sparse_tensor.yield with scf.yield.
-      yieldOp->erase();
-      rewriter.create<scf::YieldOp>(loc, yields);
-
-      const OneToNTypeMapping &resultMapping = adaptor.getResultMapping();
-      rewriter.replaceOp(
-          op, whileOp.getResults().drop_front(it->getCursor().size()),
-          resultMapping);
-    }
-    return success();
-  }
-};
-
 } // namespace
 
 mlir::SparseIterationTypeConverter::SparseIterationTypeConverter() {
   addConversion([](Type type) { return type; });
-  addConversion(convertIteratorType);
   addConversion(convertIterSpaceType);
 
   addSourceMaterialization([](OpBuilder &builder, IterSpaceType spTp,
@@ -192,6 +74,5 @@ mlir::SparseIterationTypeConverter::SparseIterationTypeConverter() {
 
 void mlir::populateLowerSparseIterationToSCFPatterns(
     TypeConverter &converter, RewritePatternSet &patterns) {
-  patterns.add<ExtractIterSpaceConverter, SparseIterateOpConverter>(
-      converter, patterns.getContext());
+  patterns.add<ExtractIterSpaceConverter>(converter, patterns.getContext());
 }
diff --git a/mlir/lib/Dialect/SparseTensor/Transforms/Utils/SparseTensorIterator.cpp b/mlir/lib/Dialect/SparseTensor/Transforms/Utils/SparseTensorIterator.cpp
index ef95fcc84bd90..be8e15d6ae6f4 100644
--- a/mlir/lib/Dialect/SparseTensor/Transforms/Utils/SparseTensorIterator.cpp
+++ b/mlir/lib/Dialect/SparseTensor/Transforms/Utils/SparseTensorIterator.cpp
@@ -331,13 +331,6 @@ class TrivialIterator : public ConcreteIterator {
   TrivialIterator(const SparseTensorLevel &stl)
       : ConcreteIterator(stl, IterKind::kTrivial, /*itValCnt=*/1) {}
 
-  TrivialIterator(OpBuilder &b, Location l, const SparseTensorLevel &stl,
-                  Value posLo, Value posHi)
-      : ConcreteIterator(stl, IterKind::kTrivial, /*itValCnt=*/1), posLo(posLo),
-        posHi(posHi) {
-    seek(posLo);
-  }
-
   std::string getDebugInterfacePrefix() const override {
     return std::string("trivial<") + stl.toString() + ">";
   }
@@ -427,14 +420,6 @@ class DedupIterator : public ConcreteIterator {
       : ConcreteIterator(stl, IterKind::kDedup, /*itValCnt=*/2) {
     assert(!stl.isUnique());
   }
-
-  DedupIterator(OpBuilder &b, Location l, const SparseTensorLevel &stl,
-                Value posLo, Value posHi)
-      : ConcreteIterator(stl, IterKind::kDedup, /*itValCnt=*/2), posHi(posHi) {
-    assert(!stl.isUnique());
-    seek({posLo, genSegmentHigh(b, l, posLo)});
-  }
-
   // For LLVM-style RTTI.
   static bool classof(const SparseIterator *from) {
     return from->kind == IterKind::kDedup;
@@ -1547,11 +1532,6 @@ SparseIterationSpace mlir::sparse_tensor::SparseIterationSpace::fromValues(
   return space;
 }
 
-std::unique_ptr<SparseIterator>
-SparseIterationSpace::extractIterator(OpBuilder &b, Location l) const {
-  return makeSimpleIterator(b, l, *this);
-}
-
 //===----------------------------------------------------------------------===//
 // SparseIterator factory functions.
 //===----------------------------------------------------------------------===//
@@ -1610,26 +1590,6 @@ sparse_tensor::makeSynLevelAndIterator(Value sz, unsigned tid, unsigned lvl,
   return std::make_pair(std::move(stl), std::move(it));
 }
 
-std::unique_ptr<SparseIterator>
-sparse_tensor::makeSimpleIterator(OpBuilder &b, Location l,
-                                  const SparseIterationSpace &iterSpace) {
-  // assert(iterSpace.getSpaceDim() == 1);
-  std::unique_ptr<SparseIterator> ret;
-  if (!iterSpace.isUnique()) {
-    // We always dedupliate the non-unique level, but we should optimize it away
-    // if possible.
-    ret = std::make_unique<DedupIterator>(b, l, iterSpace.getLastLvl(),
-                                          iterSpace.getBoundLo(),
-                                          iterSpace.getBoundHi());
-  } else {
-    ret = std::make_unique<TrivialIterator>(b, l, iterSpace.getLastLvl(),
-                                            iterSpace.getBoundLo(),
-                                            iterSpace.getBoundHi());
-  }
-  ret->setSparseEmitStrategy(SparseEmitStrategy::kFunctional);
-  return ret;
-}
-
 std::unique_ptr<SparseIterator>
 sparse_tensor::makeSimpleIterator(const SparseTensorLevel &stl,
                                   SparseEmitStrategy strategy) {
diff --git a/mlir/lib/Dialect/SparseTensor/Transforms/Utils/SparseTensorIterator.h b/mlir/lib/Dialect/SparseTensor/Transforms/Utils/SparseTensorIterator.h
index 91f363db93f1d..17636af2b2f9d 100644
--- a/mlir/lib/Dialect/SparseTensor/Transforms/Utils/SparseTensorIterator.h
+++ b/mlir/lib/Dialect/SparseTensor/Transforms/Utils/SparseTensorIterator.h
@@ -132,10 +132,6 @@ class SparseIterationSpace {
   Value getBoundLo() const { return bound.first; }
   Value getBoundHi() const { return bound.second; }
 
-  // Extract an iterator to iterate over the sparse iteration space.
-  std::unique_ptr<SparseIterator> extractIterator(OpBuilder &b,
-                                                  Location l) const;
-
 private:
   SmallVector<std::unique_ptr<SparseTensorLevel>> lvls;
   std::pair<Value, Value> bound;
@@ -196,13 +192,6 @@ class SparseIterator {
     crd = nullptr;
   }
 
-  // Reconstructs a iteration space directly from the provided ValueRange.
-  static std::unique_ptr<SparseIterator>
-  fromValues(IteratorType dstTp, ValueRange values, unsigned tid);
-
-  // The inverse operation of `fromValues`.
-  SmallVector<Value> toValues() const { llvm_unreachable("Not implemented"); }
-
   //
   // Iterator properties.
   //
@@ -356,21 +345,12 @@ std::unique_ptr<SparseTensorLevel> makeSparseTensorLevel(OpBuilder &b,
                                                          unsigned tid,
                                                          Level lvl);
 
-/// Helper function to create a TensorLevel object from given ValueRange.
+/// Helper function to create a TensorLevel object from given `tensor`.
 std::unique_ptr<SparseTensorLevel> makeSparseTensorLevel(LevelType lt, Value sz,
                                                          ValueRange buffers,
                                                          unsigned tid, Level l);
-
-/// Helper function to create a simple SparseIterator object that iterate
-/// over the entire iteration space.
-std::unique_ptr<SparseIterator>
-makeSimpleIterator(OpBuilder &b, Location l,
-                   const SparseIterationSpace &iterSpace);
-
-/// Helper function to create a simple SparseIterator object that iterate
-/// over the sparse tensor level.
-/// TODO: switch to `SparseIterationSpace` (which support N-D iterator) when
-/// feature complete.
+/// Helper function to create a simple SparseIterator object that iterates
+/// over the SparseTensorLevel.
 std::unique_ptr<SparseIterator> makeSimpleIterator(
     const SparseTensorLevel &stl,
     SparseEmitStrategy strategy = SparseEmitStrategy::kFunctional);
diff --git a/mlir/test/Dialect/SparseTensor/sparse_iteration_to_scf.mlir b/mlir/test/Dialect/SparseTensor/sparse_iteration_to_scf.mlir
index 77a0e89dc7c81..5fcd661bb69b2 100644
--- a/mlir/test/Dialect/SparseTensor/sparse_iteration_to_scf.mlir
+++ b/mlir/test/Dialect/SparseTensor/sparse_iteration_to_scf.mlir
@@ -1,5 +1,4 @@
 // RUN: mlir-opt %s --lower-sparse-iteration-to-scf | FileCheck %s
-// RUN: mlir-opt %s --sparse-space-collapse --lower-sparse-iteration-to-scf | FileCheck %s --check-prefix COLLAPSED
 
 #COO = #sparse_tensor.encoding<{
   map = (i, j) -> (
@@ -8,44 +7,17 @@
   )
 }>
 
-// CHECK-LABEL:   @sparse_iteration_to_scf
-//                  // deduplication
-// CHECK:           scf.while {{.*}} {
-// CHECK:           } do {
-// CHECK:           }
-// CHECK:           scf.while {{.*}} {
-// CHECK:           } do {
-//                    // actual computation
-// CHECK:             scf.for {{.*}} {
-// CHECK:               arith.addi
-// CHECK:             }
-//                    // deduplication
-// CHECK:             scf.while {{.*}} {
-// CHECK:             } do {
-// CHECK:             }
-// CHECK:             scf.yield
-// CHECK:           }
-// CHECK:           return
-
-// COLLAPSED-LABEL:   @sparse_iteration_to_scf
-// COLLAPSED:           %[[RET:.*]] = scf.for {{.*}} {
-// COLLAPSED:             %[[VAL:.*]] = arith.addi
-// COLLAPSED:             scf.yield %[[VAL]] : index
-// COLLAPSED:           }
-// COLLAPSED:           return %[[RET]] : index
-func.func @sparse_iteration_to_scf(%sp : tensor<4x8xf32, #COO>) -> index {
-  %i = arith.constant 0 : index
-  %c1 = arith.constant 1 : index
-  %l1 = sparse_tensor.extract_iteration_space %sp lvls = 0
-      : tensor<4x8xf32, #COO> -> !sparse_tensor.iter_space<#COO, lvls = 0>
-  %r1 = sparse_tensor.iterate %it1 in %l1 iter_args(%outer = %i): !sparse_tensor.iter_space<#COO, lvls = 0 to 1> -> index {
-    %l2 = sparse_tensor.extract_iteration_space %sp at %it1 lvls = 1
-        : tensor<4x8xf32, #COO>, !sparse_tensor.iterator<#COO, lvls = 0 to 1> -> !sparse_tensor.iter_space<#COO, lvls = 1>
-    %r2 = sparse_tensor.iterate %it2 in %l2 iter_args(%inner = %outer): !sparse_tensor.iter_space<#COO, lvls = 1 to 2> -> index {
-      %k = arith.addi %inner, %c1 : index
-      sparse_tensor.yield %k : index
-    }
-    sparse_tensor.yield %r2 : index
-  }
-  return %r1 : index
+// CHECK-LABEL:   func.func @sparse_1D_space(
+// CHECK-SAME:      %[[VAL_0:.*]]: tensor<?x?xf32, #sparse{{[0-9]*}}>) -> !sparse_tensor.iter_space<#sparse{{[0-9]*}}, lvls = 0> {
+// CHECK-DAG:       %[[C0:.*]] = arith.constant 0 : index
+// CHECK-DAG:       %[[C1:.*]] = arith.constant 1 : index
+// CHECK-DAG:       %[[LVL_SIZE:.*]] = sparse_tensor.lvl %[[VAL_0]], %[[C0]] : tensor<?x?xf32, #sparse{{[0-9]*}}>
+// CHECK:           %[[POS_MEM:.*]] = sparse_tensor.positions %[[VAL_0]] {level = 0 : index} : tensor<?x?xf32, #sparse{{[0-9]*}}> to memref<?xindex>
+// CHECK:           %[[CRD_MEM:.*]] = sparse_tensor.coordinates %[[VAL_0]] {level = 0 : index} : tensor<?x?xf32, #sparse{{[0-9]*}}> to memref<?xindex>
+// CHECK:           %[[POS_LO:.*]] = memref.load %[[POS_MEM]]{{\[}}%[[C0]]] : memref<?xindex>
+// CHECK:           %[[POS_HI:.*]] = memref.load %[[POS_MEM]]{{\[}}%[[C1]]] : memref<?xindex>
+// CHECK:           %[[ITER_SPACE:.*]] = builtin.unrealized_conversion_cast %[[POS_MEM]], %[[CRD_MEM]], %[[LVL_SIZE]], %[[POS_LO]], %[[POS_HI]]
+func.func @sparse_1D_space(%sp : tensor<?x?xf32, #COO>) -> !sparse_tensor.iter_space<#COO, lvls = 0> {
+  %l1 = sparse_tensor.extract_iteration_space %sp lvls = 0 : tensor<?x?xf32, #COO> -> !sparse_tensor.iter_space<#COO, lvls = 0>
+  return %l1 : !sparse_tensor.iter_space<#COO, lvls = 0>
 }

>From 7439072289f41d46d563763509d5d0d0cb62f3a0 Mon Sep 17 00:00:00 2001
From: Petr Hosek <phosek at google.com>
Date: Mon, 17 Jun 2024 11:41:00 -0700
Subject: [PATCH 04/26] [Driver][Fuchsia] Support multilib for C++ include dir
 (#95815)

We generate a separate `__config_site` for each multilib and thus need
to add the additional include dir if it exists.
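
For illustration, with `--target=x86_64-unknown-fuchsia -fsanitize=address`
and the `asan` multilib selected, the header search order becomes the
following (a sketch; the concrete paths mirror what the updated
`fuchsia.cpp` test below checks via `-internal-isystem`):

```
// Probe order, new multilib dir first:
//   include/x86_64-unknown-fuchsia/asan/c++/v1   <- per-target multilib (new)
//   include/x86_64-unknown-fuchsia/c++/v1        <- per-target
//   include/c++/v1                               <- generic
```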
---
 clang/lib/Driver/ToolChains/Fuchsia.cpp         | 14 ++++++++++++--
 .../asan+noexcept/c++/v1/.keep                  |  0
 .../aarch64-unknown-fuchsia/asan/c++/v1/.keep   |  0
 .../aarch64-unknown-fuchsia/compat/c++/v1/.keep |  0
 .../hwasan+noexcept/c++/v1/.keep                |  0
 .../aarch64-unknown-fuchsia/hwasan/c++/v1/.keep |  0
 .../noexcept/c++/v1/.keep                       |  0
 .../asan+noexcept/c++/v1/.keep                  |  0
 .../riscv64-unknown-fuchsia/asan/c++/v1/.keep   |  0
 .../riscv64-unknown-fuchsia/compat/c++/v1/.keep |  0
 .../hwasan+noexcept/c++/v1/.keep                |  0
 .../riscv64-unknown-fuchsia/hwasan/c++/v1/.keep |  0
 .../noexcept/c++/v1/.keep                       |  0
 .../asan+noexcept/c++/v1/.keep                  |  0
 .../x86_64-unknown-fuchsia/asan/c++/v1/.keep    |  0
 .../x86_64-unknown-fuchsia/compat/c++/v1/.keep  |  0
 .../hwasan+noexcept/c++/v1/.keep                |  0
 .../x86_64-unknown-fuchsia/hwasan/c++/v1/.keep  |  0
 .../noexcept/c++/v1/.keep                       |  0
 .../ubsan+noexcept/libc++.so                    |  0
 .../lib/aarch64-unknown-fuchsia/ubsan/libc++.so |  0
 .../ubsan+noexcept/libc++.so                    |  0
 .../lib/x86_64-unknown-fuchsia/ubsan/libc++.so  |  0
 clang/test/Driver/fuchsia.cpp                   | 17 ++++++++++++-----
 24 files changed, 24 insertions(+), 7 deletions(-)
 create mode 100644 clang/test/Driver/Inputs/basic_fuchsia_tree/include/aarch64-unknown-fuchsia/asan+noexcept/c++/v1/.keep
 create mode 100644 clang/test/Driver/Inputs/basic_fuchsia_tree/include/aarch64-unknown-fuchsia/asan/c++/v1/.keep
 create mode 100644 clang/test/Driver/Inputs/basic_fuchsia_tree/include/aarch64-unknown-fuchsia/compat/c++/v1/.keep
 create mode 100644 clang/test/Driver/Inputs/basic_fuchsia_tree/include/aarch64-unknown-fuchsia/hwasan+noexcept/c++/v1/.keep
 create mode 100644 clang/test/Driver/Inputs/basic_fuchsia_tree/include/aarch64-unknown-fuchsia/hwasan/c++/v1/.keep
 create mode 100644 clang/test/Driver/Inputs/basic_fuchsia_tree/include/aarch64-unknown-fuchsia/noexcept/c++/v1/.keep
 create mode 100644 clang/test/Driver/Inputs/basic_fuchsia_tree/include/riscv64-unknown-fuchsia/asan+noexcept/c++/v1/.keep
 create mode 100644 clang/test/Driver/Inputs/basic_fuchsia_tree/include/riscv64-unknown-fuchsia/asan/c++/v1/.keep
 create mode 100644 clang/test/Driver/Inputs/basic_fuchsia_tree/include/riscv64-unknown-fuchsia/compat/c++/v1/.keep
 create mode 100644 clang/test/Driver/Inputs/basic_fuchsia_tree/include/riscv64-unknown-fuchsia/hwasan+noexcept/c++/v1/.keep
 create mode 100644 clang/test/Driver/Inputs/basic_fuchsia_tree/include/riscv64-unknown-fuchsia/hwasan/c++/v1/.keep
 create mode 100644 clang/test/Driver/Inputs/basic_fuchsia_tree/include/riscv64-unknown-fuchsia/noexcept/c++/v1/.keep
 create mode 100644 clang/test/Driver/Inputs/basic_fuchsia_tree/include/x86_64-unknown-fuchsia/asan+noexcept/c++/v1/.keep
 create mode 100644 clang/test/Driver/Inputs/basic_fuchsia_tree/include/x86_64-unknown-fuchsia/asan/c++/v1/.keep
 create mode 100644 clang/test/Driver/Inputs/basic_fuchsia_tree/include/x86_64-unknown-fuchsia/compat/c++/v1/.keep
 create mode 100644 clang/test/Driver/Inputs/basic_fuchsia_tree/include/x86_64-unknown-fuchsia/hwasan+noexcept/c++/v1/.keep
 create mode 100644 clang/test/Driver/Inputs/basic_fuchsia_tree/include/x86_64-unknown-fuchsia/hwasan/c++/v1/.keep
 create mode 100644 clang/test/Driver/Inputs/basic_fuchsia_tree/include/x86_64-unknown-fuchsia/noexcept/c++/v1/.keep
 create mode 100644 clang/test/Driver/Inputs/basic_fuchsia_tree/lib/aarch64-unknown-fuchsia/ubsan+noexcept/libc++.so
 create mode 100644 clang/test/Driver/Inputs/basic_fuchsia_tree/lib/aarch64-unknown-fuchsia/ubsan/libc++.so
 create mode 100644 clang/test/Driver/Inputs/basic_fuchsia_tree/lib/x86_64-unknown-fuchsia/ubsan+noexcept/libc++.so
 create mode 100644 clang/test/Driver/Inputs/basic_fuchsia_tree/lib/x86_64-unknown-fuchsia/ubsan/libc++.so

diff --git a/clang/lib/Driver/ToolChains/Fuchsia.cpp b/clang/lib/Driver/ToolChains/Fuchsia.cpp
index 598289f48ff43..6daa73c7a54c8 100644
--- a/clang/lib/Driver/ToolChains/Fuchsia.cpp
+++ b/clang/lib/Driver/ToolChains/Fuchsia.cpp
@@ -433,13 +433,23 @@ void Fuchsia::AddClangCXXStdlibIncludeArgs(const ArgList &DriverArgs,
     if (Version.empty())
       return;
 
-    // First add the per-target include path.
+    // First add the per-target multilib include dir.
+    if (!SelectedMultilibs.empty() && !SelectedMultilibs.back().isDefault()) {
+      const Multilib &M = SelectedMultilibs.back();
+      SmallString<128> TargetDir(Path);
+      llvm::sys::path::append(TargetDir, Target, M.gccSuffix(), "c++", Version);
+      if (getVFS().exists(TargetDir)) {
+        addSystemInclude(DriverArgs, CC1Args, TargetDir);
+      }
+    }
+
+    // Second add the per-target include dir.
     SmallString<128> TargetDir(Path);
     llvm::sys::path::append(TargetDir, Target, "c++", Version);
     if (getVFS().exists(TargetDir))
       addSystemInclude(DriverArgs, CC1Args, TargetDir);
 
-    // Second add the generic one.
+    // Third the generic one.
     SmallString<128> Dir(Path);
     llvm::sys::path::append(Dir, "c++", Version);
     addSystemInclude(DriverArgs, CC1Args, Dir);
diff --git a/clang/test/Driver/Inputs/basic_fuchsia_tree/include/aarch64-unknown-fuchsia/asan+noexcept/c++/v1/.keep b/clang/test/Driver/Inputs/basic_fuchsia_tree/include/aarch64-unknown-fuchsia/asan+noexcept/c++/v1/.keep
new file mode 100644
index 0000000000000..e69de29bb2d1d
diff --git a/clang/test/Driver/Inputs/basic_fuchsia_tree/include/aarch64-unknown-fuchsia/asan/c++/v1/.keep b/clang/test/Driver/Inputs/basic_fuchsia_tree/include/aarch64-unknown-fuchsia/asan/c++/v1/.keep
new file mode 100644
index 0000000000000..e69de29bb2d1d
diff --git a/clang/test/Driver/Inputs/basic_fuchsia_tree/include/aarch64-unknown-fuchsia/compat/c++/v1/.keep b/clang/test/Driver/Inputs/basic_fuchsia_tree/include/aarch64-unknown-fuchsia/compat/c++/v1/.keep
new file mode 100644
index 0000000000000..e69de29bb2d1d
diff --git a/clang/test/Driver/Inputs/basic_fuchsia_tree/include/aarch64-unknown-fuchsia/hwasan+noexcept/c++/v1/.keep b/clang/test/Driver/Inputs/basic_fuchsia_tree/include/aarch64-unknown-fuchsia/hwasan+noexcept/c++/v1/.keep
new file mode 100644
index 0000000000000..e69de29bb2d1d
diff --git a/clang/test/Driver/Inputs/basic_fuchsia_tree/include/aarch64-unknown-fuchsia/hwasan/c++/v1/.keep b/clang/test/Driver/Inputs/basic_fuchsia_tree/include/aarch64-unknown-fuchsia/hwasan/c++/v1/.keep
new file mode 100644
index 0000000000000..e69de29bb2d1d
diff --git a/clang/test/Driver/Inputs/basic_fuchsia_tree/include/aarch64-unknown-fuchsia/noexcept/c++/v1/.keep b/clang/test/Driver/Inputs/basic_fuchsia_tree/include/aarch64-unknown-fuchsia/noexcept/c++/v1/.keep
new file mode 100644
index 0000000000000..e69de29bb2d1d
diff --git a/clang/test/Driver/Inputs/basic_fuchsia_tree/include/riscv64-unknown-fuchsia/asan+noexcept/c++/v1/.keep b/clang/test/Driver/Inputs/basic_fuchsia_tree/include/riscv64-unknown-fuchsia/asan+noexcept/c++/v1/.keep
new file mode 100644
index 0000000000000..e69de29bb2d1d
diff --git a/clang/test/Driver/Inputs/basic_fuchsia_tree/include/riscv64-unknown-fuchsia/asan/c++/v1/.keep b/clang/test/Driver/Inputs/basic_fuchsia_tree/include/riscv64-unknown-fuchsia/asan/c++/v1/.keep
new file mode 100644
index 0000000000000..e69de29bb2d1d
diff --git a/clang/test/Driver/Inputs/basic_fuchsia_tree/include/riscv64-unknown-fuchsia/compat/c++/v1/.keep b/clang/test/Driver/Inputs/basic_fuchsia_tree/include/riscv64-unknown-fuchsia/compat/c++/v1/.keep
new file mode 100644
index 0000000000000..e69de29bb2d1d
diff --git a/clang/test/Driver/Inputs/basic_fuchsia_tree/include/riscv64-unknown-fuchsia/hwasan+noexcept/c++/v1/.keep b/clang/test/Driver/Inputs/basic_fuchsia_tree/include/riscv64-unknown-fuchsia/hwasan+noexcept/c++/v1/.keep
new file mode 100644
index 0000000000000..e69de29bb2d1d
diff --git a/clang/test/Driver/Inputs/basic_fuchsia_tree/include/riscv64-unknown-fuchsia/hwasan/c++/v1/.keep b/clang/test/Driver/Inputs/basic_fuchsia_tree/include/riscv64-unknown-fuchsia/hwasan/c++/v1/.keep
new file mode 100644
index 0000000000000..e69de29bb2d1d
diff --git a/clang/test/Driver/Inputs/basic_fuchsia_tree/include/riscv64-unknown-fuchsia/noexcept/c++/v1/.keep b/clang/test/Driver/Inputs/basic_fuchsia_tree/include/riscv64-unknown-fuchsia/noexcept/c++/v1/.keep
new file mode 100644
index 0000000000000..e69de29bb2d1d
diff --git a/clang/test/Driver/Inputs/basic_fuchsia_tree/include/x86_64-unknown-fuchsia/asan+noexcept/c++/v1/.keep b/clang/test/Driver/Inputs/basic_fuchsia_tree/include/x86_64-unknown-fuchsia/asan+noexcept/c++/v1/.keep
new file mode 100644
index 0000000000000..e69de29bb2d1d
diff --git a/clang/test/Driver/Inputs/basic_fuchsia_tree/include/x86_64-unknown-fuchsia/asan/c++/v1/.keep b/clang/test/Driver/Inputs/basic_fuchsia_tree/include/x86_64-unknown-fuchsia/asan/c++/v1/.keep
new file mode 100644
index 0000000000000..e69de29bb2d1d
diff --git a/clang/test/Driver/Inputs/basic_fuchsia_tree/include/x86_64-unknown-fuchsia/compat/c++/v1/.keep b/clang/test/Driver/Inputs/basic_fuchsia_tree/include/x86_64-unknown-fuchsia/compat/c++/v1/.keep
new file mode 100644
index 0000000000000..e69de29bb2d1d
diff --git a/clang/test/Driver/Inputs/basic_fuchsia_tree/include/x86_64-unknown-fuchsia/hwasan+noexcept/c++/v1/.keep b/clang/test/Driver/Inputs/basic_fuchsia_tree/include/x86_64-unknown-fuchsia/hwasan+noexcept/c++/v1/.keep
new file mode 100644
index 0000000000000..e69de29bb2d1d
diff --git a/clang/test/Driver/Inputs/basic_fuchsia_tree/include/x86_64-unknown-fuchsia/hwasan/c++/v1/.keep b/clang/test/Driver/Inputs/basic_fuchsia_tree/include/x86_64-unknown-fuchsia/hwasan/c++/v1/.keep
new file mode 100644
index 0000000000000..e69de29bb2d1d
diff --git a/clang/test/Driver/Inputs/basic_fuchsia_tree/include/x86_64-unknown-fuchsia/noexcept/c++/v1/.keep b/clang/test/Driver/Inputs/basic_fuchsia_tree/include/x86_64-unknown-fuchsia/noexcept/c++/v1/.keep
new file mode 100644
index 0000000000000..e69de29bb2d1d
diff --git a/clang/test/Driver/Inputs/basic_fuchsia_tree/lib/aarch64-unknown-fuchsia/ubsan+noexcept/libc++.so b/clang/test/Driver/Inputs/basic_fuchsia_tree/lib/aarch64-unknown-fuchsia/ubsan+noexcept/libc++.so
new file mode 100644
index 0000000000000..e69de29bb2d1d
diff --git a/clang/test/Driver/Inputs/basic_fuchsia_tree/lib/aarch64-unknown-fuchsia/ubsan/libc++.so b/clang/test/Driver/Inputs/basic_fuchsia_tree/lib/aarch64-unknown-fuchsia/ubsan/libc++.so
new file mode 100644
index 0000000000000..e69de29bb2d1d
diff --git a/clang/test/Driver/Inputs/basic_fuchsia_tree/lib/x86_64-unknown-fuchsia/ubsan+noexcept/libc++.so b/clang/test/Driver/Inputs/basic_fuchsia_tree/lib/x86_64-unknown-fuchsia/ubsan+noexcept/libc++.so
new file mode 100644
index 0000000000000..e69de29bb2d1d
diff --git a/clang/test/Driver/Inputs/basic_fuchsia_tree/lib/x86_64-unknown-fuchsia/ubsan/libc++.so b/clang/test/Driver/Inputs/basic_fuchsia_tree/lib/x86_64-unknown-fuchsia/ubsan/libc++.so
new file mode 100644
index 0000000000000..e69de29bb2d1d
diff --git a/clang/test/Driver/fuchsia.cpp b/clang/test/Driver/fuchsia.cpp
index 69d5cb19041a7..540c1e655a3be 100644
--- a/clang/test/Driver/fuchsia.cpp
+++ b/clang/test/Driver/fuchsia.cpp
@@ -88,16 +88,16 @@
 // RUN:     -resource-dir=%S/Inputs/resource_dir_with_per_target_subdir \
 // RUN:     -fuse-ld=ld 2>&1\
 // RUN:     | FileCheck %s -check-prefixes=CHECK-MULTILIB-X86
-// RUN: %clangxx -### %s --target=x86_64-unknown-fuchsia -fsanitize=address \
+// RUN: %clangxx -### %s --target=x86_64-unknown-fuchsia -fno-exceptions \
 // RUN:     -ccc-install-dir %S/Inputs/basic_fuchsia_tree/bin \
 // RUN:     -resource-dir=%S/Inputs/resource_dir_with_per_target_subdir \
 // RUN:     -fuse-ld=ld 2>&1\
-// RUN:     | FileCheck %s -check-prefixes=CHECK-MULTILIB-X86,CHECK-MULTILIB-ASAN-X86
-// RUN: %clangxx -### %s --target=x86_64-unknown-fuchsia -fno-exceptions \
+// RUN:     | FileCheck %s -check-prefixes=CHECK-MULTILIB-X86,CHECK-MULTILIB-NOEXCEPT-X86
+// RUN: %clangxx -### %s --target=x86_64-unknown-fuchsia -fsanitize=address \
 // RUN:     -ccc-install-dir %S/Inputs/basic_fuchsia_tree/bin \
 // RUN:     -resource-dir=%S/Inputs/resource_dir_with_per_target_subdir \
 // RUN:     -fuse-ld=ld 2>&1\
-// RUN:     | FileCheck %s -check-prefixes=CHECK-MULTILIB-X86,CHECK-MULTILIB-NOEXCEPT-X86
+// RUN:     | FileCheck %s -check-prefixes=CHECK-MULTILIB-X86,CHECK-MULTILIB-ASAN-X86
 // RUN: %clangxx -### %s --target=x86_64-unknown-fuchsia -fsanitize=address -fno-exceptions \
 // RUN:     -ccc-install-dir %S/Inputs/basic_fuchsia_tree/bin \
 // RUN:     -resource-dir=%S/Inputs/resource_dir_with_per_target_subdir \
@@ -131,8 +131,15 @@
 // RUN:     -fuse-ld=ld 2>&1\
 // RUN:     | FileCheck %s -check-prefixes=CHECK-MULTILIB-X86,CHECK-MULTILIB-COMPAT-X86
 // CHECK-MULTILIB-X86: "-resource-dir" "[[RESOURCE_DIR:[^"]+]]"
-// CHECK-MULTILIB-ASAN-X86: "-L{{.*}}{{/|\\\\}}..{{/|\\\\}}lib{{/|\\\\}}x86_64-unknown-fuchsia{{/|\\\\}}asan"
+// CHECK-MULTILIB-NOEXCEPT-X86: "-internal-isystem" "{{.*[/\\]}}include{{/|\\\\}}x86_64-unknown-fuchsia{{/|\\\\}}noexcept{{/|\\\\}}c++{{/|\\\\}}v1"
+// CHECK-MULTILIB-ASAN-X86: "-internal-isystem" "{{.*[/\\]}}include{{/|\\\\}}x86_64-unknown-fuchsia{{/|\\\\}}asan{{/|\\\\}}c++{{/|\\\\}}v1"
+// CHECK-MULTILIB-ASAN-NOEXCEPT-X86: "-internal-isystem" "{{.*[/\\]}}include{{/|\\\\}}x86_64-unknown-fuchsia{{/|\\\\}}asan+noexcept{{/|\\\\}}c++{{/|\\\\}}v1"
+// CHECK-MULTILIB-HWASAN-X86: "-internal-isystem" "{{.*[/\\]}}include{{/|\\\\}}x86_64-unknown-fuchsia{{/|\\\\}}hwasan{{/|\\\\}}c++{{/|\\\\}}v1"
+// CHECK-MULTILIB-HWASAN-NOEXCEPT-X86: "-internal-isystem" "{{.*[/\\]}}include{{/|\\\\}}x86_64-unknown-fuchsia{{/|\\\\}}hwasan+noexcept{{/|\\\\}}c++{{/|\\\\}}v1"
+// CHECK-MULTILIB-COMPAT-X86: "-internal-isystem" "{{.*[/\\]}}include{{/|\\\\}}x86_64-unknown-fuchsia{{/|\\\\}}compat{{/|\\\\}}c++{{/|\\\\}}v1"
+// CHECK-MULTILIB-X86: "-internal-isystem" "{{.*[/\\]}}include{{/|\\\\}}x86_64-unknown-fuchsia{{/|\\\\}}c++{{/|\\\\}}v1"
 // CHECK-MULTILIB-NOEXCEPT-X86: "-L{{.*}}{{/|\\\\}}..{{/|\\\\}}lib{{/|\\\\}}x86_64-unknown-fuchsia{{/|\\\\}}noexcept"
+// CHECK-MULTILIB-ASAN-X86: "-L{{.*}}{{/|\\\\}}..{{/|\\\\}}lib{{/|\\\\}}x86_64-unknown-fuchsia{{/|\\\\}}asan"
 // CHECK-MULTILIB-ASAN-NOEXCEPT-X86: "-L{{.*}}{{/|\\\\}}..{{/|\\\\}}lib{{/|\\\\}}x86_64-unknown-fuchsia{{/|\\\\}}asan+noexcept"
 // CHECK-MULTILIB-HWASAN-X86: "-L{{.*}}{{/|\\\\}}..{{/|\\\\}}lib{{/|\\\\}}x86_64-unknown-fuchsia{{/|\\\\}}hwasan"
 // CHECK-MULTILIB-HWASAN-NOEXCEPT-X86: "-L{{.*}}{{/|\\\\}}..{{/|\\\\}}lib{{/|\\\\}}x86_64-unknown-fuchsia{{/|\\\\}}hwasan+noexcept"

>From 111507ed4ce49bbb8cfbf36a3e143bb25f0f13c0 Mon Sep 17 00:00:00 2001
From: Philip Reames <preames at rivosinc.com>
Date: Mon, 17 Jun 2024 12:01:51 -0700
Subject: [PATCH 05/26] [RISCV] Teach RISCVInsertVSETVLI to work without
 LiveIntervals (#94686)

Stacked on https://github.com/llvm/llvm-project/pull/94658.

We recently moved RISCVInsertVSETVLI from before vector register
allocation to after vector register allocation. When doing so, we added
an unconditional dependency on LiveIntervals - even at O0 where
LiveIntervals hadn't previously run. As reported in #93587, this was
apparently not safe to do.

This change makes LiveIntervals optional, and adjusts all the update
code to run only when LiveIntervals is present. The only really tricky
part of this change is the abstract state tracking in the dataflow. We
need to represent a "register w/unknown definition" state - but only
when we don't have LiveIntervals.

This adjusts the abstract state definition so that the AVLIsReg state can
represent either a register + valno, or a register + unknown definition.
With LiveIntervals, we have an exact definition for each AVL use.
Without LiveIntervals, we treat the definition of a register AVL as
being unknown.
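
A minimal sketch of the adjusted representation (simplified from the
`AVLDef` change in the patch below; `unsigned` stands in for
`llvm::Register`):

```
struct VNInfo; // LLVM's value-number info; only a pointer is needed here

// AVL defined by a register: the definition is exact when LiveIntervals
// is available (ValNo != nullptr) and unknown otherwise.
struct AVLDef {
  const VNInfo *ValNo; // nullptr <=> running without LiveIntervals
  unsigned DefReg;
};
```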

The key semantic change is that we now have a state in the lattice for
which something is known about the AVL value, but for which two
identical lattice elements do *not* necessarily represent the same AVL
value at runtime. Previously, the only case which could result in such
an unknown AVL was the fully unknown state (where VTYPE is also fully
unknown). This requires a small adjustment to hasSameAVL and lattice
state equality to draw this important distinction.
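
As a toy model of that distinction (not the patch itself; names and types
are simplified), two register-AVL states without value numbers compare
equal as lattice elements yet are never provably the same AVL at runtime:

```
#include <cassert>

struct VNInfo { unsigned id; };

struct AVLState {
  const VNInfo *ValNo; // nullptr when LiveIntervals is unavailable
  unsigned Reg;

  // Dataflow equality: same lattice element.
  bool sameLatticeValue(const AVLState &O) const {
    if (!ValNo || !O.ValNo)
      return Reg == O.Reg;
    return ValNo->id == O.ValNo->id && Reg == O.Reg;
  }

  // Provable runtime equality: requires known definitions, since the
  // register may have been redefined between the two program points.
  bool sameRuntimeAVL(const AVLState &O) const {
    if (!ValNo || !O.ValNo)
      return false;
    return sameLatticeValue(O);
  }
};

int main() {
  AVLState A{nullptr, 5}, B{nullptr, 5};
  assert(A.sameLatticeValue(B)); // equal lattice elements...
  assert(!A.sameRuntimeAVL(B));  // ...but not provably the same AVL
}
```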

The net effect of this patch is that we remove the LiveIntervals
dependency at O0, and O0 code quality will regress for cases involving
register AVL values.

This patch is an alternative to
https://github.com/llvm/llvm-project/pull/93796 and
https://github.com/llvm/llvm-project/pull/94340. It is very directly
inspired by review conversation around them, and thus should be
considered coauthored by Luke.
---
 llvm/lib/Target/RISCV/RISCVInsertVSETVLI.cpp  | 157 ++++++++++++------
 llvm/test/CodeGen/RISCV/O0-pipeline.ll        |   3 -
 llvm/test/CodeGen/RISCV/rvv/pr93587.ll        |  37 +++++
 .../CodeGen/RISCV/rvv/vsetvli-insert-O0.ll    |  18 +-
 .../test/CodeGen/RISCV/rvv/vsetvli-insert.mir |   2 +-
 5 files changed, 155 insertions(+), 62 deletions(-)
 create mode 100644 llvm/test/CodeGen/RISCV/rvv/pr93587.ll

diff --git a/llvm/lib/Target/RISCV/RISCVInsertVSETVLI.cpp b/llvm/lib/Target/RISCV/RISCVInsertVSETVLI.cpp
index 151a7821f835d..b7dad160a7f64 100644
--- a/llvm/lib/Target/RISCV/RISCVInsertVSETVLI.cpp
+++ b/llvm/lib/Target/RISCV/RISCVInsertVSETVLI.cpp
@@ -48,10 +48,13 @@ static cl::opt<bool> DisableInsertVSETVLPHIOpt(
 namespace {
 
 /// Given a virtual register \p Reg, return the corresponding VNInfo for it.
-/// This will return nullptr if the virtual register is an implicit_def.
+/// This will return nullptr if the virtual register is an implicit_def or
+/// if LiveIntervals is not available.
 static VNInfo *getVNInfoFromReg(Register Reg, const MachineInstr &MI,
                                 const LiveIntervals *LIS) {
   assert(Reg.isVirtual());
+  if (!LIS)
+    return nullptr;
   auto &LI = LIS->getInterval(Reg);
   SlotIndex SI = LIS->getSlotIndexes()->getInstructionIndex(MI);
   return LI.getVNInfoBefore(SI);
@@ -512,7 +515,8 @@ DemandedFields getDemanded(const MachineInstr &MI, const RISCVSubtarget *ST) {
 /// values of the VL and VTYPE registers after insertion.
 class VSETVLIInfo {
   struct AVLDef {
-    // Every AVLDef should have a VNInfo.
+    // Every AVLDef should have a VNInfo, unless we're running without
+    // LiveIntervals in which case this will be nullptr.
     const VNInfo *ValNo;
     Register DefReg;
   };
@@ -526,7 +530,7 @@ class VSETVLIInfo {
     AVLIsReg,
     AVLIsImm,
     AVLIsVLMAX,
-    Unknown,
+    Unknown, // AVL and VTYPE are fully unknown
   } State = Uninitialized;
 
   // Fields from VTYPE.
@@ -552,7 +556,7 @@ class VSETVLIInfo {
   bool isUnknown() const { return State == Unknown; }
 
   void setAVLRegDef(const VNInfo *VNInfo, Register AVLReg) {
-    assert(VNInfo && AVLReg.isVirtual());
+    assert(AVLReg.isVirtual());
     AVLRegDef.ValNo = VNInfo;
     AVLRegDef.DefReg = AVLReg;
     State = AVLIsReg;
@@ -582,9 +586,11 @@ class VSETVLIInfo {
   }
   // Most AVLIsReg infos will have a single defining MachineInstr, unless it was
   // a PHI node. In that case getAVLVNInfo()->def will point to the block
-  // boundary slot.
+  // boundary slot.  If LiveIntervals isn't available, then nullptr is returned.
   const MachineInstr *getAVLDefMI(const LiveIntervals *LIS) const {
     assert(hasAVLReg());
+    if (!LIS)
+      return nullptr;
     auto *MI = LIS->getInstructionFromIndex(getAVLVNInfo()->def);
     assert(!(getAVLVNInfo()->isPHIDef() && MI));
     return MI;
@@ -628,10 +634,15 @@ class VSETVLIInfo {
     return (hasNonZeroAVL(LIS) && Other.hasNonZeroAVL(LIS));
   }
 
-  bool hasSameAVL(const VSETVLIInfo &Other) const {
-    if (hasAVLReg() && Other.hasAVLReg())
+  bool hasSameAVLLatticeValue(const VSETVLIInfo &Other) const {
+    if (hasAVLReg() && Other.hasAVLReg()) {
+      assert(!getAVLVNInfo() == !Other.getAVLVNInfo() &&
+             "we either have intervals or we don't");
+      if (!getAVLVNInfo())
+        return getAVLReg() == Other.getAVLReg();
       return getAVLVNInfo()->id == Other.getAVLVNInfo()->id &&
              getAVLReg() == Other.getAVLReg();
+    }
 
     if (hasAVLImm() && Other.hasAVLImm())
       return getAVLImm() == Other.getAVLImm();
@@ -642,6 +653,21 @@ class VSETVLIInfo {
     return false;
   }
 
+  // Return true if the two lattice values are guaranteed to have
+  // the same AVL value at runtime.
+  bool hasSameAVL(const VSETVLIInfo &Other) const {
+    // Without LiveIntervals, we don't know which instruction defines a
+    // register.  Since a register may be redefined, this means all AVLIsReg
+    // states must be treated as possibly distinct.
+    if (hasAVLReg() && Other.hasAVLReg()) {
+      assert(!getAVLVNInfo() == !Other.getAVLVNInfo() &&
+             "we either have intervals or we don't");
+      if (!getAVLVNInfo())
+        return false;
+    }
+    return hasSameAVLLatticeValue(Other);
+  }
+
   void setVTYPE(unsigned VType) {
     assert(isValid() && !isUnknown() &&
            "Can't set VTYPE for uninitialized or unknown");
@@ -741,7 +767,7 @@ class VSETVLIInfo {
     if (Other.isUnknown())
       return isUnknown();
 
-    if (!hasSameAVL(Other))
+    if (!hasSameAVLLatticeValue(Other))
       return false;
 
     // If the SEWLMULRatioOnly bits are different, then they aren't equal.
@@ -849,6 +875,7 @@ class RISCVInsertVSETVLI : public MachineFunctionPass {
   const RISCVSubtarget *ST;
   const TargetInstrInfo *TII;
   MachineRegisterInfo *MRI;
+  // Possibly null!
   LiveIntervals *LIS;
 
   std::vector<BlockData> BlockInfo;
@@ -863,9 +890,9 @@ class RISCVInsertVSETVLI : public MachineFunctionPass {
   void getAnalysisUsage(AnalysisUsage &AU) const override {
     AU.setPreservesCFG();
 
-    AU.addRequired<LiveIntervals>();
+    AU.addUsedIfAvailable<LiveIntervals>();
     AU.addPreserved<LiveIntervals>();
-    AU.addRequired<SlotIndexes>();
+    AU.addUsedIfAvailable<SlotIndexes>();
     AU.addPreserved<SlotIndexes>();
     AU.addPreserved<LiveDebugVariables>();
     AU.addPreserved<LiveStacks>();
@@ -1061,7 +1088,8 @@ void RISCVInsertVSETVLI::insertVSETVLI(MachineBasicBlock &MBB,
                     .addReg(RISCV::X0, RegState::Kill)
                     .addImm(Info.encodeVTYPE())
                     .addReg(RISCV::VL, RegState::Implicit);
-      LIS->InsertMachineInstrInMaps(*MI);
+      if (LIS)
+        LIS->InsertMachineInstrInMaps(*MI);
       return;
     }
 
@@ -1078,7 +1106,8 @@ void RISCVInsertVSETVLI::insertVSETVLI(MachineBasicBlock &MBB,
                         .addReg(RISCV::X0, RegState::Kill)
                         .addImm(Info.encodeVTYPE())
                         .addReg(RISCV::VL, RegState::Implicit);
-          LIS->InsertMachineInstrInMaps(*MI);
+          if (LIS)
+            LIS->InsertMachineInstrInMaps(*MI);
           return;
         }
       }
@@ -1090,7 +1119,8 @@ void RISCVInsertVSETVLI::insertVSETVLI(MachineBasicBlock &MBB,
                   .addReg(RISCV::X0, RegState::Define | RegState::Dead)
                   .addImm(Info.getAVLImm())
                   .addImm(Info.encodeVTYPE());
-    LIS->InsertMachineInstrInMaps(*MI);
+    if (LIS)
+      LIS->InsertMachineInstrInMaps(*MI);
     return;
   }
 
@@ -1100,8 +1130,10 @@ void RISCVInsertVSETVLI::insertVSETVLI(MachineBasicBlock &MBB,
                   .addReg(DestReg, RegState::Define | RegState::Dead)
                   .addReg(RISCV::X0, RegState::Kill)
                   .addImm(Info.encodeVTYPE());
-    LIS->InsertMachineInstrInMaps(*MI);
-    LIS->createAndComputeVirtRegInterval(DestReg);
+    if (LIS) {
+      LIS->InsertMachineInstrInMaps(*MI);
+      LIS->createAndComputeVirtRegInterval(DestReg);
+    }
     return;
   }
 
@@ -1111,12 +1143,14 @@ void RISCVInsertVSETVLI::insertVSETVLI(MachineBasicBlock &MBB,
                 .addReg(RISCV::X0, RegState::Define | RegState::Dead)
                 .addReg(AVLReg)
                 .addImm(Info.encodeVTYPE());
-  LIS->InsertMachineInstrInMaps(*MI);
-  // Normally the AVL's live range will already extend past the inserted vsetvli
-  // because the pseudos below will already use the AVL. But this isn't always
-  // the case, e.g. PseudoVMV_X_S doesn't have an AVL operand.
-  LIS->getInterval(AVLReg).extendInBlock(
-      LIS->getMBBStartIdx(&MBB), LIS->getInstructionIndex(*MI).getRegSlot());
+  if (LIS) {
+    LIS->InsertMachineInstrInMaps(*MI);
+    // Normally the AVL's live range will already extend past the inserted
+    // vsetvli because the pseudos below will already use the AVL. But this
+    // isn't always the case, e.g. PseudoVMV_X_S doesn't have an AVL operand.
+    LIS->getInterval(AVLReg).extendInBlock(
+        LIS->getMBBStartIdx(&MBB), LIS->getInstructionIndex(*MI).getRegSlot());
+  }
 }
 
 /// Return true if a VSETVLI is required to transition from CurInfo to Require
@@ -1230,10 +1264,14 @@ void RISCVInsertVSETVLI::transferAfter(VSETVLIInfo &Info,
   if (RISCV::isFaultFirstLoad(MI)) {
     // Update AVL to vl-output of the fault first load.
     assert(MI.getOperand(1).getReg().isVirtual());
-    auto &LI = LIS->getInterval(MI.getOperand(1).getReg());
-    SlotIndex SI = LIS->getSlotIndexes()->getInstructionIndex(MI).getRegSlot();
-    VNInfo *VNI = LI.getVNInfoAt(SI);
-    Info.setAVLRegDef(VNI, MI.getOperand(1).getReg());
+    if (LIS) {
+      auto &LI = LIS->getInterval(MI.getOperand(1).getReg());
+      SlotIndex SI =
+          LIS->getSlotIndexes()->getInstructionIndex(MI).getRegSlot();
+      VNInfo *VNI = LI.getVNInfoAt(SI);
+      Info.setAVLRegDef(VNI, MI.getOperand(1).getReg());
+    } else
+      Info.setAVLRegDef(nullptr, MI.getOperand(1).getReg());
     return;
   }
 
@@ -1327,6 +1365,9 @@ bool RISCVInsertVSETVLI::needVSETVLIPHI(const VSETVLIInfo &Require,
   if (!Require.hasAVLReg())
     return true;
 
+  if (!LIS)
+    return true;
+
   // We need the AVL to have been produced by a PHI node in this basic block.
   const VNInfo *Valno = Require.getAVLVNInfo();
   if (!Valno->isPHIDef() || LIS->getMBBFromIndex(Valno->def) != &MBB)
@@ -1402,27 +1443,29 @@ void RISCVInsertVSETVLI::emitVSETVLIs(MachineBasicBlock &MBB) {
         MachineOperand &VLOp = MI.getOperand(getVLOpNum(MI));
         if (VLOp.isReg()) {
           Register Reg = VLOp.getReg();
-          LiveInterval &LI = LIS->getInterval(Reg);
 
           // Erase the AVL operand from the instruction.
           VLOp.setReg(RISCV::NoRegister);
           VLOp.setIsKill(false);
-          SmallVector<MachineInstr *> DeadMIs;
-          LIS->shrinkToUses(&LI, &DeadMIs);
-          // We might have separate components that need split due to
-          // needVSETVLIPHI causing us to skip inserting a new VL def.
-          SmallVector<LiveInterval *> SplitLIs;
-          LIS->splitSeparateComponents(LI, SplitLIs);
-
-          // If the AVL was an immediate > 31, then it would have been emitted
-          // as an ADDI. However, the ADDI might not have been used in the
-          // vsetvli, or a vsetvli might not have been emitted, so it may be
-          // dead now.
-          for (MachineInstr *DeadMI : DeadMIs) {
-            if (!TII->isAddImmediate(*DeadMI, Reg))
-              continue;
-            LIS->RemoveMachineInstrFromMaps(*DeadMI);
-            DeadMI->eraseFromParent();
+          if (LIS) {
+            LiveInterval &LI = LIS->getInterval(Reg);
+            SmallVector<MachineInstr *> DeadMIs;
+            LIS->shrinkToUses(&LI, &DeadMIs);
+            // We might have separate components that need split due to
+            // needVSETVLIPHI causing us to skip inserting a new VL def.
+            SmallVector<LiveInterval *> SplitLIs;
+            LIS->splitSeparateComponents(LI, SplitLIs);
+
+            // If the AVL was an immediate > 31, then it would have been emitted
+            // as an ADDI. However, the ADDI might not have been used in the
+            // vsetvli, or a vsetvli might not have been emitted, so it may be
+            // dead now.
+            for (MachineInstr *DeadMI : DeadMIs) {
+              if (!TII->isAddImmediate(*DeadMI, Reg))
+                continue;
+              LIS->RemoveMachineInstrFromMaps(*DeadMI);
+              DeadMI->eraseFromParent();
+            }
           }
         }
         MI.addOperand(MachineOperand::CreateReg(RISCV::VL, /*isDef*/ false,
@@ -1479,6 +1522,9 @@ void RISCVInsertVSETVLI::doPRE(MachineBasicBlock &MBB) {
   if (!UnavailablePred || !AvailableInfo.isValid())
     return;
 
+  if (!LIS)
+    return;
+
   // If we don't know the exact VTYPE, we can't copy the vsetvli to the exit of
   // the unavailable pred.
   if (AvailableInfo.hasSEWLMULRatioOnly())
@@ -1625,7 +1671,7 @@ void RISCVInsertVSETVLI::coalesceVSETVLIs(MachineBasicBlock &MBB) const {
 
           // The def of DefReg moved to MI, so extend the LiveInterval up to
           // it.
-          if (DefReg.isVirtual()) {
+          if (DefReg.isVirtual() && LIS) {
             LiveInterval &DefLI = LIS->getInterval(DefReg);
             SlotIndex MISlot = LIS->getInstructionIndex(MI).getRegSlot();
             VNInfo *DefVNI = DefLI.getVNInfoAt(DefLI.beginIndex());
@@ -1654,13 +1700,15 @@ void RISCVInsertVSETVLI::coalesceVSETVLIs(MachineBasicBlock &MBB) const {
 
           if (OldVLReg && OldVLReg.isVirtual()) {
             // NextMI no longer uses OldVLReg so shrink its LiveInterval.
-            LIS->shrinkToUses(&LIS->getInterval(OldVLReg));
+            if (LIS)
+              LIS->shrinkToUses(&LIS->getInterval(OldVLReg));
 
             MachineInstr *VLOpDef = MRI->getUniqueVRegDef(OldVLReg);
             if (VLOpDef && TII->isAddImmediate(*VLOpDef, OldVLReg) &&
                 MRI->use_nodbg_empty(OldVLReg)) {
               VLOpDef->eraseFromParent();
-              LIS->removeInterval(OldVLReg);
+              if (LIS)
+                LIS->removeInterval(OldVLReg);
             }
           }
           MI.setDesc(NextMI->getDesc());
@@ -1676,7 +1724,8 @@ void RISCVInsertVSETVLI::coalesceVSETVLIs(MachineBasicBlock &MBB) const {
 
   NumCoalescedVSETVL += ToDelete.size();
   for (auto *MI : ToDelete) {
-    LIS->RemoveMachineInstrFromMaps(*MI);
+    if (LIS)
+      LIS->RemoveMachineInstrFromMaps(*MI);
     MI->eraseFromParent();
   }
 }
@@ -1691,12 +1740,14 @@ void RISCVInsertVSETVLI::insertReadVL(MachineBasicBlock &MBB) {
         auto ReadVLMI = BuildMI(MBB, I, MI.getDebugLoc(),
                                 TII->get(RISCV::PseudoReadVL), VLOutput);
         // Move the LiveInterval's definition down to PseudoReadVL.
-        SlotIndex NewDefSI =
-            LIS->InsertMachineInstrInMaps(*ReadVLMI).getRegSlot();
-        LiveInterval &DefLI = LIS->getInterval(VLOutput);
-        VNInfo *DefVNI = DefLI.getVNInfoAt(DefLI.beginIndex());
-        DefLI.removeSegment(DefLI.beginIndex(), NewDefSI);
-        DefVNI->def = NewDefSI;
+        if (LIS) {
+          SlotIndex NewDefSI =
+              LIS->InsertMachineInstrInMaps(*ReadVLMI).getRegSlot();
+          LiveInterval &DefLI = LIS->getInterval(VLOutput);
+          VNInfo *DefVNI = DefLI.getVNInfoAt(DefLI.beginIndex());
+          DefLI.removeSegment(DefLI.beginIndex(), NewDefSI);
+          DefVNI->def = NewDefSI;
+        }
       }
       // We don't use the vl output of the VLEFF/VLSEGFF anymore.
       MI.getOperand(1).setReg(RISCV::X0);
@@ -1714,7 +1765,7 @@ bool RISCVInsertVSETVLI::runOnMachineFunction(MachineFunction &MF) {
 
   TII = ST->getInstrInfo();
   MRI = &MF.getRegInfo();
-  LIS = &getAnalysis<LiveIntervals>();
+  LIS = getAnalysisIfAvailable<LiveIntervals>();
 
   assert(BlockInfo.empty() && "Expect empty block infos");
   BlockInfo.resize(MF.getNumBlockIDs());
diff --git a/llvm/test/CodeGen/RISCV/O0-pipeline.ll b/llvm/test/CodeGen/RISCV/O0-pipeline.ll
index ec49ed302d49d..953eb873b660b 100644
--- a/llvm/test/CodeGen/RISCV/O0-pipeline.ll
+++ b/llvm/test/CodeGen/RISCV/O0-pipeline.ll
@@ -47,9 +47,6 @@
 ; CHECK-NEXT:       Eliminate PHI nodes for register allocation
 ; CHECK-NEXT:       Two-Address instruction pass
 ; CHECK-NEXT:       Fast Register Allocator
-; CHECK-NEXT:       MachineDominator Tree Construction
-; CHECK-NEXT:       Slot index numbering
-; CHECK-NEXT:       Live Interval Analysis
 ; CHECK-NEXT:       RISC-V Insert VSETVLI pass
 ; CHECK-NEXT:       Fast Register Allocator
 ; CHECK-NEXT:       Remove Redundant DEBUG_VALUE analysis
diff --git a/llvm/test/CodeGen/RISCV/rvv/pr93587.ll b/llvm/test/CodeGen/RISCV/rvv/pr93587.ll
new file mode 100644
index 0000000000000..1c2923a2de893
--- /dev/null
+++ b/llvm/test/CodeGen/RISCV/rvv/pr93587.ll
@@ -0,0 +1,37 @@
+; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py UTC_ARGS: --version 5
+; RUN: llc -mtriple=riscv64 -O0 < %s | FileCheck %s
+
+; Make sure we don't run LiveIntervals at O0, otherwise it will crash when
+; running on this unreachable block.
+
+define i16 @f() {
+; CHECK-LABEL: f:
+; CHECK:       # %bb.0: # %BB
+; CHECK-NEXT:    addi sp, sp, -16
+; CHECK-NEXT:    .cfi_def_cfa_offset 16
+; CHECK-NEXT:    j .LBB0_1
+; CHECK-NEXT:  .LBB0_1: # %BB1
+; CHECK-NEXT:    # =>This Inner Loop Header: Depth=1
+; CHECK-NEXT:    li a0, 0
+; CHECK-NEXT:    sd a0, 8(sp) # 8-byte Folded Spill
+; CHECK-NEXT:    j .LBB0_1
+; CHECK-NEXT:  # %bb.2: # %BB1
+; CHECK-NEXT:    li a0, 0
+; CHECK-NEXT:    bnez a0, .LBB0_1
+; CHECK-NEXT:    j .LBB0_3
+; CHECK-NEXT:  .LBB0_3: # %BB2
+; CHECK-NEXT:    ld a0, 8(sp) # 8-byte Folded Reload
+; CHECK-NEXT:    addi sp, sp, 16
+; CHECK-NEXT:    ret
+BB:
+  br label %BB1
+
+BB1:
+  %A = or i16 0, 0
+  %B = fcmp true float 0.000000e+00, 0.000000e+00
+  %C = or i1 %B, false
+  br i1 %C, label %BB1, label %BB2
+
+BB2:
+  ret i16 %A
+}
diff --git a/llvm/test/CodeGen/RISCV/rvv/vsetvli-insert-O0.ll b/llvm/test/CodeGen/RISCV/rvv/vsetvli-insert-O0.ll
index aef18fcd06cd6..33acfb7dceb94 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vsetvli-insert-O0.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vsetvli-insert-O0.ll
@@ -54,10 +54,12 @@ define <vscale x 1 x double> @intrinsic_same_vlmax(<vscale x 1 x double> %a, <vs
 ; CHECK-LABEL: intrinsic_same_vlmax:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vmv1r.v v10, v9
-; CHECK-NEXT:    vsetvli a0, zero, e64, m1, tu, ma
+; CHECK-NEXT:    vsetvli a0, zero, e32, mf2, ta, ma
 ; CHECK-NEXT:    # implicit-def: $v9
+; CHECK-NEXT:    vsetvli zero, a0, e64, m1, tu, ma
 ; CHECK-NEXT:    vfadd.vv v9, v8, v10
 ; CHECK-NEXT:    # implicit-def: $v8
+; CHECK-NEXT:    vsetvli zero, a0, e64, m1, tu, ma
 ; CHECK-NEXT:    vfadd.vv v8, v9, v10
 ; CHECK-NEXT:    ret
 entry:
@@ -80,10 +82,12 @@ define <vscale x 1 x double> @intrinsic_same_avl_imm(<vscale x 1 x double> %a, <
 ; CHECK-LABEL: intrinsic_same_avl_imm:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vmv1r.v v10, v9
-; CHECK-NEXT:    vsetivli a0, 2, e64, m1, tu, ma
+; CHECK-NEXT:    vsetivli a0, 2, e32, mf2, ta, ma
 ; CHECK-NEXT:    # implicit-def: $v9
+; CHECK-NEXT:    vsetvli zero, a0, e64, m1, tu, ma
 ; CHECK-NEXT:    vfadd.vv v9, v8, v10
 ; CHECK-NEXT:    # implicit-def: $v8
+; CHECK-NEXT:    vsetvli zero, a0, e64, m1, tu, ma
 ; CHECK-NEXT:    vfadd.vv v8, v9, v10
 ; CHECK-NEXT:    ret
 entry:
@@ -105,10 +109,12 @@ define <vscale x 1 x double> @intrinsic_same_avl_reg(i64 %avl, <vscale x 1 x dou
 ; CHECK-LABEL: intrinsic_same_avl_reg:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vmv1r.v v10, v9
-; CHECK-NEXT:    vsetvli a0, a0, e64, m1, tu, ma
+; CHECK-NEXT:    vsetvli a0, a0, e32, mf2, ta, ma
 ; CHECK-NEXT:    # implicit-def: $v9
+; CHECK-NEXT:    vsetvli zero, a0, e64, m1, tu, ma
 ; CHECK-NEXT:    vfadd.vv v9, v8, v10
 ; CHECK-NEXT:    # implicit-def: $v8
+; CHECK-NEXT:    vsetvli zero, a0, e64, m1, tu, ma
 ; CHECK-NEXT:    vfadd.vv v8, v9, v10
 ; CHECK-NEXT:    ret
 entry:
@@ -130,11 +136,13 @@ define <vscale x 1 x double> @intrinsic_diff_avl_reg(i64 %avl, i64 %avl2, <vscal
 ; CHECK-LABEL: intrinsic_diff_avl_reg:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vmv1r.v v10, v9
-; CHECK-NEXT:    vsetvli a0, a0, e64, m1, tu, ma
+; CHECK-NEXT:    vsetvli a0, a0, e32, mf2, ta, ma
 ; CHECK-NEXT:    # implicit-def: $v9
+; CHECK-NEXT:    vsetvli zero, a0, e64, m1, tu, ma
 ; CHECK-NEXT:    vfadd.vv v9, v8, v10
-; CHECK-NEXT:    vsetvli a0, a1, e64, m1, tu, ma
+; CHECK-NEXT:    vsetvli a0, a1, e32, mf2, ta, ma
 ; CHECK-NEXT:    # implicit-def: $v8
+; CHECK-NEXT:    vsetvli zero, a0, e64, m1, tu, ma
 ; CHECK-NEXT:    vfadd.vv v8, v9, v10
 ; CHECK-NEXT:    ret
 entry:
diff --git a/llvm/test/CodeGen/RISCV/rvv/vsetvli-insert.mir b/llvm/test/CodeGen/RISCV/rvv/vsetvli-insert.mir
index a4b374c8bb401..681b50de5b81c 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vsetvli-insert.mir
+++ b/llvm/test/CodeGen/RISCV/rvv/vsetvli-insert.mir
@@ -1,5 +1,5 @@
 # NOTE: Assertions have been autogenerated by utils/update_mir_test_checks.py
-# RUN: llc %s -o - -mtriple=riscv64 -mattr=v -run-pass=riscv-insert-vsetvli \
+# RUN: llc %s -o - -mtriple=riscv64 -mattr=v -run-pass=liveintervals,riscv-insert-vsetvli \
 # RUN:     | FileCheck %s
 
 --- |

>From 1d028151c9cd79d76c1cda0bc3b4f10a2239d8b6 Mon Sep 17 00:00:00 2001
From: Philip Reames <preames at rivosinc.com>
Date: Mon, 17 Jun 2024 12:04:44 -0700
Subject: [PATCH 06/26] Revert "[RISCV] Teach RISCVInsertVSETVLI to work
 without LiveIntervals (#94686)"

This reverts commit 111507ed4ce49bbb8cfbf36a3e143bb25f0f13c0.  Accidentally landed with a stale commit message; will reply shortly.
---
 llvm/lib/Target/RISCV/RISCVInsertVSETVLI.cpp  | 157 ++++++------------
 llvm/test/CodeGen/RISCV/O0-pipeline.ll        |   3 +
 llvm/test/CodeGen/RISCV/rvv/pr93587.ll        |  37 -----
 .../CodeGen/RISCV/rvv/vsetvli-insert-O0.ll    |  18 +-
 .../test/CodeGen/RISCV/rvv/vsetvli-insert.mir |   2 +-
 5 files changed, 62 insertions(+), 155 deletions(-)
 delete mode 100644 llvm/test/CodeGen/RISCV/rvv/pr93587.ll

diff --git a/llvm/lib/Target/RISCV/RISCVInsertVSETVLI.cpp b/llvm/lib/Target/RISCV/RISCVInsertVSETVLI.cpp
index b7dad160a7f64..151a7821f835d 100644
--- a/llvm/lib/Target/RISCV/RISCVInsertVSETVLI.cpp
+++ b/llvm/lib/Target/RISCV/RISCVInsertVSETVLI.cpp
@@ -48,13 +48,10 @@ static cl::opt<bool> DisableInsertVSETVLPHIOpt(
 namespace {
 
 /// Given a virtual register \p Reg, return the corresponding VNInfo for it.
-/// This will return nullptr if the virtual register is an implicit_def or
-/// if LiveIntervals is not available.
+/// This will return nullptr if the virtual register is an implicit_def.
 static VNInfo *getVNInfoFromReg(Register Reg, const MachineInstr &MI,
                                 const LiveIntervals *LIS) {
   assert(Reg.isVirtual());
-  if (!LIS)
-    return nullptr;
   auto &LI = LIS->getInterval(Reg);
   SlotIndex SI = LIS->getSlotIndexes()->getInstructionIndex(MI);
   return LI.getVNInfoBefore(SI);
@@ -515,8 +512,7 @@ DemandedFields getDemanded(const MachineInstr &MI, const RISCVSubtarget *ST) {
 /// values of the VL and VTYPE registers after insertion.
 class VSETVLIInfo {
   struct AVLDef {
-    // Every AVLDef should have a VNInfo, unless we're running without
-    // LiveIntervals in which case this will be nullptr.
+    // Every AVLDef should have a VNInfo.
     const VNInfo *ValNo;
     Register DefReg;
   };
@@ -530,7 +526,7 @@ class VSETVLIInfo {
     AVLIsReg,
     AVLIsImm,
     AVLIsVLMAX,
-    Unknown, // AVL and VTYPE are fully unknown
+    Unknown,
   } State = Uninitialized;
 
   // Fields from VTYPE.
@@ -556,7 +552,7 @@ class VSETVLIInfo {
   bool isUnknown() const { return State == Unknown; }
 
   void setAVLRegDef(const VNInfo *VNInfo, Register AVLReg) {
-    assert(AVLReg.isVirtual());
+    assert(VNInfo && AVLReg.isVirtual());
     AVLRegDef.ValNo = VNInfo;
     AVLRegDef.DefReg = AVLReg;
     State = AVLIsReg;
@@ -586,11 +582,9 @@ class VSETVLIInfo {
   }
   // Most AVLIsReg infos will have a single defining MachineInstr, unless it was
   // a PHI node. In that case getAVLVNInfo()->def will point to the block
-  // boundary slot.  If LiveIntervals isn't available, then nullptr is returned.
+  // boundary slot.
   const MachineInstr *getAVLDefMI(const LiveIntervals *LIS) const {
     assert(hasAVLReg());
-    if (!LIS)
-      return nullptr;
     auto *MI = LIS->getInstructionFromIndex(getAVLVNInfo()->def);
     assert(!(getAVLVNInfo()->isPHIDef() && MI));
     return MI;
@@ -634,15 +628,10 @@ class VSETVLIInfo {
     return (hasNonZeroAVL(LIS) && Other.hasNonZeroAVL(LIS));
   }
 
-  bool hasSameAVLLatticeValue(const VSETVLIInfo &Other) const {
-    if (hasAVLReg() && Other.hasAVLReg()) {
-      assert(!getAVLVNInfo() == !Other.getAVLVNInfo() &&
-             "we either have intervals or we don't");
-      if (!getAVLVNInfo())
-        return getAVLReg() == Other.getAVLReg();
+  bool hasSameAVL(const VSETVLIInfo &Other) const {
+    if (hasAVLReg() && Other.hasAVLReg())
       return getAVLVNInfo()->id == Other.getAVLVNInfo()->id &&
              getAVLReg() == Other.getAVLReg();
-    }
 
     if (hasAVLImm() && Other.hasAVLImm())
       return getAVLImm() == Other.getAVLImm();
@@ -653,21 +642,6 @@ class VSETVLIInfo {
     return false;
   }
 
-  // Return true if the two lattice values are guaranteed to have
-  // the same AVL value at runtime.
-  bool hasSameAVL(const VSETVLIInfo &Other) const {
-    // Without LiveIntervals, we don't know which instruction defines a
-    // register.  Since a register may be redefined, this means all AVLIsReg
-    // states must be treated as possibly distinct.
-    if (hasAVLReg() && Other.hasAVLReg()) {
-      assert(!getAVLVNInfo() == !Other.getAVLVNInfo() &&
-             "we either have intervals or we don't");
-      if (!getAVLVNInfo())
-        return false;
-    }
-    return hasSameAVLLatticeValue(Other);
-  }
-
   void setVTYPE(unsigned VType) {
     assert(isValid() && !isUnknown() &&
            "Can't set VTYPE for uninitialized or unknown");
@@ -767,7 +741,7 @@ class VSETVLIInfo {
     if (Other.isUnknown())
       return isUnknown();
 
-    if (!hasSameAVLLatticeValue(Other))
+    if (!hasSameAVL(Other))
       return false;
 
     // If the SEWLMULRatioOnly bits are different, then they aren't equal.
@@ -875,7 +849,6 @@ class RISCVInsertVSETVLI : public MachineFunctionPass {
   const RISCVSubtarget *ST;
   const TargetInstrInfo *TII;
   MachineRegisterInfo *MRI;
-  // Possibly null!
   LiveIntervals *LIS;
 
   std::vector<BlockData> BlockInfo;
@@ -890,9 +863,9 @@ class RISCVInsertVSETVLI : public MachineFunctionPass {
   void getAnalysisUsage(AnalysisUsage &AU) const override {
     AU.setPreservesCFG();
 
-    AU.addUsedIfAvailable<LiveIntervals>();
+    AU.addRequired<LiveIntervals>();
     AU.addPreserved<LiveIntervals>();
-    AU.addUsedIfAvailable<SlotIndexes>();
+    AU.addRequired<SlotIndexes>();
     AU.addPreserved<SlotIndexes>();
     AU.addPreserved<LiveDebugVariables>();
     AU.addPreserved<LiveStacks>();
@@ -1088,8 +1061,7 @@ void RISCVInsertVSETVLI::insertVSETVLI(MachineBasicBlock &MBB,
                     .addReg(RISCV::X0, RegState::Kill)
                     .addImm(Info.encodeVTYPE())
                     .addReg(RISCV::VL, RegState::Implicit);
-      if (LIS)
-        LIS->InsertMachineInstrInMaps(*MI);
+      LIS->InsertMachineInstrInMaps(*MI);
       return;
     }
 
@@ -1106,8 +1078,7 @@ void RISCVInsertVSETVLI::insertVSETVLI(MachineBasicBlock &MBB,
                         .addReg(RISCV::X0, RegState::Kill)
                         .addImm(Info.encodeVTYPE())
                         .addReg(RISCV::VL, RegState::Implicit);
-          if (LIS)
-            LIS->InsertMachineInstrInMaps(*MI);
+          LIS->InsertMachineInstrInMaps(*MI);
           return;
         }
       }
@@ -1119,8 +1090,7 @@ void RISCVInsertVSETVLI::insertVSETVLI(MachineBasicBlock &MBB,
                   .addReg(RISCV::X0, RegState::Define | RegState::Dead)
                   .addImm(Info.getAVLImm())
                   .addImm(Info.encodeVTYPE());
-    if (LIS)
-      LIS->InsertMachineInstrInMaps(*MI);
+    LIS->InsertMachineInstrInMaps(*MI);
     return;
   }
 
@@ -1130,10 +1100,8 @@ void RISCVInsertVSETVLI::insertVSETVLI(MachineBasicBlock &MBB,
                   .addReg(DestReg, RegState::Define | RegState::Dead)
                   .addReg(RISCV::X0, RegState::Kill)
                   .addImm(Info.encodeVTYPE());
-    if (LIS) {
-      LIS->InsertMachineInstrInMaps(*MI);
-      LIS->createAndComputeVirtRegInterval(DestReg);
-    }
+    LIS->InsertMachineInstrInMaps(*MI);
+    LIS->createAndComputeVirtRegInterval(DestReg);
     return;
   }
 
@@ -1143,14 +1111,12 @@ void RISCVInsertVSETVLI::insertVSETVLI(MachineBasicBlock &MBB,
                 .addReg(RISCV::X0, RegState::Define | RegState::Dead)
                 .addReg(AVLReg)
                 .addImm(Info.encodeVTYPE());
-  if (LIS) {
-    LIS->InsertMachineInstrInMaps(*MI);
-    // Normally the AVL's live range will already extend past the inserted
-    // vsetvli because the pseudos below will already use the AVL. But this
-    // isn't always the case, e.g. PseudoVMV_X_S doesn't have an AVL operand.
-    LIS->getInterval(AVLReg).extendInBlock(
-        LIS->getMBBStartIdx(&MBB), LIS->getInstructionIndex(*MI).getRegSlot());
-  }
+  LIS->InsertMachineInstrInMaps(*MI);
+  // Normally the AVL's live range will already extend past the inserted vsetvli
+  // because the pseudos below will already use the AVL. But this isn't always
+  // the case, e.g. PseudoVMV_X_S doesn't have an AVL operand.
+  LIS->getInterval(AVLReg).extendInBlock(
+      LIS->getMBBStartIdx(&MBB), LIS->getInstructionIndex(*MI).getRegSlot());
 }
 
 /// Return true if a VSETVLI is required to transition from CurInfo to Require
@@ -1264,14 +1230,10 @@ void RISCVInsertVSETVLI::transferAfter(VSETVLIInfo &Info,
   if (RISCV::isFaultFirstLoad(MI)) {
     // Update AVL to vl-output of the fault first load.
     assert(MI.getOperand(1).getReg().isVirtual());
-    if (LIS) {
-      auto &LI = LIS->getInterval(MI.getOperand(1).getReg());
-      SlotIndex SI =
-          LIS->getSlotIndexes()->getInstructionIndex(MI).getRegSlot();
-      VNInfo *VNI = LI.getVNInfoAt(SI);
-      Info.setAVLRegDef(VNI, MI.getOperand(1).getReg());
-    } else
-      Info.setAVLRegDef(nullptr, MI.getOperand(1).getReg());
+    auto &LI = LIS->getInterval(MI.getOperand(1).getReg());
+    SlotIndex SI = LIS->getSlotIndexes()->getInstructionIndex(MI).getRegSlot();
+    VNInfo *VNI = LI.getVNInfoAt(SI);
+    Info.setAVLRegDef(VNI, MI.getOperand(1).getReg());
     return;
   }
 
@@ -1365,9 +1327,6 @@ bool RISCVInsertVSETVLI::needVSETVLIPHI(const VSETVLIInfo &Require,
   if (!Require.hasAVLReg())
     return true;
 
-  if (!LIS)
-    return true;
-
   // We need the AVL to have been produced by a PHI node in this basic block.
   const VNInfo *Valno = Require.getAVLVNInfo();
   if (!Valno->isPHIDef() || LIS->getMBBFromIndex(Valno->def) != &MBB)
@@ -1443,29 +1402,27 @@ void RISCVInsertVSETVLI::emitVSETVLIs(MachineBasicBlock &MBB) {
         MachineOperand &VLOp = MI.getOperand(getVLOpNum(MI));
         if (VLOp.isReg()) {
           Register Reg = VLOp.getReg();
+          LiveInterval &LI = LIS->getInterval(Reg);
 
           // Erase the AVL operand from the instruction.
           VLOp.setReg(RISCV::NoRegister);
           VLOp.setIsKill(false);
-          if (LIS) {
-            LiveInterval &LI = LIS->getInterval(Reg);
-            SmallVector<MachineInstr *> DeadMIs;
-            LIS->shrinkToUses(&LI, &DeadMIs);
-            // We might have separate components that need split due to
-            // needVSETVLIPHI causing us to skip inserting a new VL def.
-            SmallVector<LiveInterval *> SplitLIs;
-            LIS->splitSeparateComponents(LI, SplitLIs);
-
-            // If the AVL was an immediate > 31, then it would have been emitted
-            // as an ADDI. However, the ADDI might not have been used in the
-            // vsetvli, or a vsetvli might not have been emitted, so it may be
-            // dead now.
-            for (MachineInstr *DeadMI : DeadMIs) {
-              if (!TII->isAddImmediate(*DeadMI, Reg))
-                continue;
-              LIS->RemoveMachineInstrFromMaps(*DeadMI);
-              DeadMI->eraseFromParent();
-            }
+          SmallVector<MachineInstr *> DeadMIs;
+          LIS->shrinkToUses(&LI, &DeadMIs);
+          // We might have separate components that need split due to
+          // needVSETVLIPHI causing us to skip inserting a new VL def.
+          SmallVector<LiveInterval *> SplitLIs;
+          LIS->splitSeparateComponents(LI, SplitLIs);
+
+          // If the AVL was an immediate > 31, then it would have been emitted
+          // as an ADDI. However, the ADDI might not have been used in the
+          // vsetvli, or a vsetvli might not have been emitted, so it may be
+          // dead now.
+          for (MachineInstr *DeadMI : DeadMIs) {
+            if (!TII->isAddImmediate(*DeadMI, Reg))
+              continue;
+            LIS->RemoveMachineInstrFromMaps(*DeadMI);
+            DeadMI->eraseFromParent();
           }
         }
         MI.addOperand(MachineOperand::CreateReg(RISCV::VL, /*isDef*/ false,
@@ -1522,9 +1479,6 @@ void RISCVInsertVSETVLI::doPRE(MachineBasicBlock &MBB) {
   if (!UnavailablePred || !AvailableInfo.isValid())
     return;
 
-  if (!LIS)
-    return;
-
   // If we don't know the exact VTYPE, we can't copy the vsetvli to the exit of
   // the unavailable pred.
   if (AvailableInfo.hasSEWLMULRatioOnly())
@@ -1671,7 +1625,7 @@ void RISCVInsertVSETVLI::coalesceVSETVLIs(MachineBasicBlock &MBB) const {
 
           // The def of DefReg moved to MI, so extend the LiveInterval up to
           // it.
-          if (DefReg.isVirtual() && LIS) {
+          if (DefReg.isVirtual()) {
             LiveInterval &DefLI = LIS->getInterval(DefReg);
             SlotIndex MISlot = LIS->getInstructionIndex(MI).getRegSlot();
             VNInfo *DefVNI = DefLI.getVNInfoAt(DefLI.beginIndex());
@@ -1700,15 +1654,13 @@ void RISCVInsertVSETVLI::coalesceVSETVLIs(MachineBasicBlock &MBB) const {
 
           if (OldVLReg && OldVLReg.isVirtual()) {
             // NextMI no longer uses OldVLReg so shrink its LiveInterval.
-            if (LIS)
-              LIS->shrinkToUses(&LIS->getInterval(OldVLReg));
+            LIS->shrinkToUses(&LIS->getInterval(OldVLReg));
 
             MachineInstr *VLOpDef = MRI->getUniqueVRegDef(OldVLReg);
             if (VLOpDef && TII->isAddImmediate(*VLOpDef, OldVLReg) &&
                 MRI->use_nodbg_empty(OldVLReg)) {
               VLOpDef->eraseFromParent();
-              if (LIS)
-                LIS->removeInterval(OldVLReg);
+              LIS->removeInterval(OldVLReg);
             }
           }
           MI.setDesc(NextMI->getDesc());
@@ -1724,8 +1676,7 @@ void RISCVInsertVSETVLI::coalesceVSETVLIs(MachineBasicBlock &MBB) const {
 
   NumCoalescedVSETVL += ToDelete.size();
   for (auto *MI : ToDelete) {
-    if (LIS)
-      LIS->RemoveMachineInstrFromMaps(*MI);
+    LIS->RemoveMachineInstrFromMaps(*MI);
     MI->eraseFromParent();
   }
 }
@@ -1740,14 +1691,12 @@ void RISCVInsertVSETVLI::insertReadVL(MachineBasicBlock &MBB) {
         auto ReadVLMI = BuildMI(MBB, I, MI.getDebugLoc(),
                                 TII->get(RISCV::PseudoReadVL), VLOutput);
         // Move the LiveInterval's definition down to PseudoReadVL.
-        if (LIS) {
-          SlotIndex NewDefSI =
-              LIS->InsertMachineInstrInMaps(*ReadVLMI).getRegSlot();
-          LiveInterval &DefLI = LIS->getInterval(VLOutput);
-          VNInfo *DefVNI = DefLI.getVNInfoAt(DefLI.beginIndex());
-          DefLI.removeSegment(DefLI.beginIndex(), NewDefSI);
-          DefVNI->def = NewDefSI;
-        }
+        SlotIndex NewDefSI =
+            LIS->InsertMachineInstrInMaps(*ReadVLMI).getRegSlot();
+        LiveInterval &DefLI = LIS->getInterval(VLOutput);
+        VNInfo *DefVNI = DefLI.getVNInfoAt(DefLI.beginIndex());
+        DefLI.removeSegment(DefLI.beginIndex(), NewDefSI);
+        DefVNI->def = NewDefSI;
       }
       // We don't use the vl output of the VLEFF/VLSEGFF anymore.
       MI.getOperand(1).setReg(RISCV::X0);
@@ -1765,7 +1714,7 @@ bool RISCVInsertVSETVLI::runOnMachineFunction(MachineFunction &MF) {
 
   TII = ST->getInstrInfo();
   MRI = &MF.getRegInfo();
-  LIS = getAnalysisIfAvailable<LiveIntervals>();
+  LIS = &getAnalysis<LiveIntervals>();
 
   assert(BlockInfo.empty() && "Expect empty block infos");
   BlockInfo.resize(MF.getNumBlockIDs());
diff --git a/llvm/test/CodeGen/RISCV/O0-pipeline.ll b/llvm/test/CodeGen/RISCV/O0-pipeline.ll
index 953eb873b660b..ec49ed302d49d 100644
--- a/llvm/test/CodeGen/RISCV/O0-pipeline.ll
+++ b/llvm/test/CodeGen/RISCV/O0-pipeline.ll
@@ -47,6 +47,9 @@
 ; CHECK-NEXT:       Eliminate PHI nodes for register allocation
 ; CHECK-NEXT:       Two-Address instruction pass
 ; CHECK-NEXT:       Fast Register Allocator
+; CHECK-NEXT:       MachineDominator Tree Construction
+; CHECK-NEXT:       Slot index numbering
+; CHECK-NEXT:       Live Interval Analysis
 ; CHECK-NEXT:       RISC-V Insert VSETVLI pass
 ; CHECK-NEXT:       Fast Register Allocator
 ; CHECK-NEXT:       Remove Redundant DEBUG_VALUE analysis
diff --git a/llvm/test/CodeGen/RISCV/rvv/pr93587.ll b/llvm/test/CodeGen/RISCV/rvv/pr93587.ll
deleted file mode 100644
index 1c2923a2de893..0000000000000
--- a/llvm/test/CodeGen/RISCV/rvv/pr93587.ll
+++ /dev/null
@@ -1,37 +0,0 @@
-; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py UTC_ARGS: --version 5
-; RUN: llc -mtriple=riscv64 -O0 < %s | FileCheck %s
-
-; Make sure we don't run LiveIntervals at O0, otherwise it will crash when
-; running on this unreachable block.
-
-define i16 @f() {
-; CHECK-LABEL: f:
-; CHECK:       # %bb.0: # %BB
-; CHECK-NEXT:    addi sp, sp, -16
-; CHECK-NEXT:    .cfi_def_cfa_offset 16
-; CHECK-NEXT:    j .LBB0_1
-; CHECK-NEXT:  .LBB0_1: # %BB1
-; CHECK-NEXT:    # =>This Inner Loop Header: Depth=1
-; CHECK-NEXT:    li a0, 0
-; CHECK-NEXT:    sd a0, 8(sp) # 8-byte Folded Spill
-; CHECK-NEXT:    j .LBB0_1
-; CHECK-NEXT:  # %bb.2: # %BB1
-; CHECK-NEXT:    li a0, 0
-; CHECK-NEXT:    bnez a0, .LBB0_1
-; CHECK-NEXT:    j .LBB0_3
-; CHECK-NEXT:  .LBB0_3: # %BB2
-; CHECK-NEXT:    ld a0, 8(sp) # 8-byte Folded Reload
-; CHECK-NEXT:    addi sp, sp, 16
-; CHECK-NEXT:    ret
-BB:
-  br label %BB1
-
-BB1:
-  %A = or i16 0, 0
-  %B = fcmp true float 0.000000e+00, 0.000000e+00
-  %C = or i1 %B, false
-  br i1 %C, label %BB1, label %BB2
-
-BB2:
-  ret i16 %A
-}
diff --git a/llvm/test/CodeGen/RISCV/rvv/vsetvli-insert-O0.ll b/llvm/test/CodeGen/RISCV/rvv/vsetvli-insert-O0.ll
index 33acfb7dceb94..aef18fcd06cd6 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vsetvli-insert-O0.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vsetvli-insert-O0.ll
@@ -54,12 +54,10 @@ define <vscale x 1 x double> @intrinsic_same_vlmax(<vscale x 1 x double> %a, <vs
 ; CHECK-LABEL: intrinsic_same_vlmax:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vmv1r.v v10, v9
-; CHECK-NEXT:    vsetvli a0, zero, e32, mf2, ta, ma
+; CHECK-NEXT:    vsetvli a0, zero, e64, m1, tu, ma
 ; CHECK-NEXT:    # implicit-def: $v9
-; CHECK-NEXT:    vsetvli zero, a0, e64, m1, tu, ma
 ; CHECK-NEXT:    vfadd.vv v9, v8, v10
 ; CHECK-NEXT:    # implicit-def: $v8
-; CHECK-NEXT:    vsetvli zero, a0, e64, m1, tu, ma
 ; CHECK-NEXT:    vfadd.vv v8, v9, v10
 ; CHECK-NEXT:    ret
 entry:
@@ -82,12 +80,10 @@ define <vscale x 1 x double> @intrinsic_same_avl_imm(<vscale x 1 x double> %a, <
 ; CHECK-LABEL: intrinsic_same_avl_imm:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vmv1r.v v10, v9
-; CHECK-NEXT:    vsetivli a0, 2, e32, mf2, ta, ma
+; CHECK-NEXT:    vsetivli a0, 2, e64, m1, tu, ma
 ; CHECK-NEXT:    # implicit-def: $v9
-; CHECK-NEXT:    vsetvli zero, a0, e64, m1, tu, ma
 ; CHECK-NEXT:    vfadd.vv v9, v8, v10
 ; CHECK-NEXT:    # implicit-def: $v8
-; CHECK-NEXT:    vsetvli zero, a0, e64, m1, tu, ma
 ; CHECK-NEXT:    vfadd.vv v8, v9, v10
 ; CHECK-NEXT:    ret
 entry:
@@ -109,12 +105,10 @@ define <vscale x 1 x double> @intrinsic_same_avl_reg(i64 %avl, <vscale x 1 x dou
 ; CHECK-LABEL: intrinsic_same_avl_reg:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vmv1r.v v10, v9
-; CHECK-NEXT:    vsetvli a0, a0, e32, mf2, ta, ma
+; CHECK-NEXT:    vsetvli a0, a0, e64, m1, tu, ma
 ; CHECK-NEXT:    # implicit-def: $v9
-; CHECK-NEXT:    vsetvli zero, a0, e64, m1, tu, ma
 ; CHECK-NEXT:    vfadd.vv v9, v8, v10
 ; CHECK-NEXT:    # implicit-def: $v8
-; CHECK-NEXT:    vsetvli zero, a0, e64, m1, tu, ma
 ; CHECK-NEXT:    vfadd.vv v8, v9, v10
 ; CHECK-NEXT:    ret
 entry:
@@ -136,13 +130,11 @@ define <vscale x 1 x double> @intrinsic_diff_avl_reg(i64 %avl, i64 %avl2, <vscal
 ; CHECK-LABEL: intrinsic_diff_avl_reg:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vmv1r.v v10, v9
-; CHECK-NEXT:    vsetvli a0, a0, e32, mf2, ta, ma
+; CHECK-NEXT:    vsetvli a0, a0, e64, m1, tu, ma
 ; CHECK-NEXT:    # implicit-def: $v9
-; CHECK-NEXT:    vsetvli zero, a0, e64, m1, tu, ma
 ; CHECK-NEXT:    vfadd.vv v9, v8, v10
-; CHECK-NEXT:    vsetvli a0, a1, e32, mf2, ta, ma
+; CHECK-NEXT:    vsetvli a0, a1, e64, m1, tu, ma
 ; CHECK-NEXT:    # implicit-def: $v8
-; CHECK-NEXT:    vsetvli zero, a0, e64, m1, tu, ma
 ; CHECK-NEXT:    vfadd.vv v8, v9, v10
 ; CHECK-NEXT:    ret
 entry:
diff --git a/llvm/test/CodeGen/RISCV/rvv/vsetvli-insert.mir b/llvm/test/CodeGen/RISCV/rvv/vsetvli-insert.mir
index 681b50de5b81c..a4b374c8bb401 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vsetvli-insert.mir
+++ b/llvm/test/CodeGen/RISCV/rvv/vsetvli-insert.mir
@@ -1,5 +1,5 @@
 # NOTE: Assertions have been autogenerated by utils/update_mir_test_checks.py
-# RUN: llc %s -o - -mtriple=riscv64 -mattr=v -run-pass=liveintervals,riscv-insert-vsetvli \
+# RUN: llc %s -o - -mtriple=riscv64 -mattr=v -run-pass=riscv-insert-vsetvli \
 # RUN:     | FileCheck %s
 
 --- |

>From 8756043467edbc6d62efd36af9985150b5781111 Mon Sep 17 00:00:00 2001
From: Philip Reames <preames at rivosinc.com>
Date: Mon, 17 Jun 2024 12:05:14 -0700
Subject: [PATCH 07/26] [RISCV] Teach RISCVInsertVSETVLI to work without
 LiveIntervals

(Reapplying with corrected commit message)

We recently moved RISCVInsertVSETVLI from before vector register allocation
to after vector register allocation.  When doing so, we added an unconditional
dependency on LiveIntervals - even at O0 where LiveIntervals hadn't previously
run.  As reported in #93587, this was apparently not safe to do.

This change makes LiveIntervals optional, and adjusts all the update code to
only run when LiveIntervals is present.  The only real tricky part of this
change is the abstract state tracking in the dataflow.  We need to represent
a "register w/unknown definition" state - but only when we don't have
LiveIntervals.

This adjusts the abstract state definition so that the AVLIsReg state can
represent either a register + valno, or a register + unknown definition.
With LiveIntervals, we have an exact definition for each AVL use.  Without
LiveIntervals, we treat the definition of a register AVL as being unknown.

The key semantic change is that we now have a state in the lattice for which
something is known about the AVL value, but for which two identical lattice
elements do *not* necessarily represent the same AVL value at runtime.
Previously, the only case which could result in such an unknown AVL was the
fully unknown state (where VTYPE is also fully unknown).  This requires a
small adjustment to hasSameAVL and lattice state equality to draw this
important distinction.

The net effect of this patch is that we remove the LiveIntervals dependency
at O0, and O0 code quality will regress for cases involving register AVL values.
In practice, this means we pessimize code written with intrinsics at O0.

This patch is an alternative to #93796 and #94340.  It is very directly
inspired by review conversation around them, and thus should be considered
coauthored by Luke.
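
For illustration, here is a minimal standalone sketch (not the LLVM code
itself; names and types are simplified stand-ins for VSETVLIInfo, VNInfo and
Register) of the two equality notions described above: lattice equality, which
treats two null-VNInfo states naming the same register as the same lattice
element so the dataflow can converge, and runtime AVL equality, which must
conservatively fail in that case because the register may have been redefined.

```cpp
#include <cassert>
#include <cstdint>

struct AVLState {
  uint32_t DefReg = 0;         // virtual register holding the AVL
  const void *ValNo = nullptr; // value-number info; null without LiveIntervals

  // Lattice equality: used by the dataflow to detect convergence. Without
  // value numbers, two states naming the same register are the same element.
  bool hasSameAVLLatticeValue(const AVLState &O) const {
    if (!ValNo || !O.ValNo)
      return DefReg == O.DefReg;
    return ValNo == O.ValNo && DefReg == O.DefReg;
  }

  // Runtime equality: may a vsetvli be elided? Without value numbers we do
  // not know which definition reaches each use, so answer "possibly not".
  bool hasSameAVL(const AVLState &O) const {
    if (!ValNo || !O.ValNo)
      return false;
    return hasSameAVLLatticeValue(O);
  }
};

int main() {
  AVLState A{/*DefReg=*/5}, B{/*DefReg=*/5}; // same reg, no LiveIntervals
  assert(A.hasSameAVLLatticeValue(B));       // lattice: converged
  assert(!A.hasSameAVL(B));                  // runtime: possibly distinct
}
```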
---
 llvm/lib/Target/RISCV/RISCVInsertVSETVLI.cpp  | 157 ++++++++++++------
 llvm/test/CodeGen/RISCV/O0-pipeline.ll        |   3 -
 llvm/test/CodeGen/RISCV/rvv/pr93587.ll        |  37 +++++
 .../CodeGen/RISCV/rvv/vsetvli-insert-O0.ll    |  18 +-
 .../test/CodeGen/RISCV/rvv/vsetvli-insert.mir |   2 +-
 5 files changed, 155 insertions(+), 62 deletions(-)
 create mode 100644 llvm/test/CodeGen/RISCV/rvv/pr93587.ll

diff --git a/llvm/lib/Target/RISCV/RISCVInsertVSETVLI.cpp b/llvm/lib/Target/RISCV/RISCVInsertVSETVLI.cpp
index 151a7821f835d..b7dad160a7f64 100644
--- a/llvm/lib/Target/RISCV/RISCVInsertVSETVLI.cpp
+++ b/llvm/lib/Target/RISCV/RISCVInsertVSETVLI.cpp
@@ -48,10 +48,13 @@ static cl::opt<bool> DisableInsertVSETVLPHIOpt(
 namespace {
 
 /// Given a virtual register \p Reg, return the corresponding VNInfo for it.
-/// This will return nullptr if the virtual register is an implicit_def.
+/// This will return nullptr if the virtual register is an implicit_def or
+/// if LiveIntervals is not available.
 static VNInfo *getVNInfoFromReg(Register Reg, const MachineInstr &MI,
                                 const LiveIntervals *LIS) {
   assert(Reg.isVirtual());
+  if (!LIS)
+    return nullptr;
   auto &LI = LIS->getInterval(Reg);
   SlotIndex SI = LIS->getSlotIndexes()->getInstructionIndex(MI);
   return LI.getVNInfoBefore(SI);
@@ -512,7 +515,8 @@ DemandedFields getDemanded(const MachineInstr &MI, const RISCVSubtarget *ST) {
 /// values of the VL and VTYPE registers after insertion.
 class VSETVLIInfo {
   struct AVLDef {
-    // Every AVLDef should have a VNInfo.
+    // Every AVLDef should have a VNInfo, unless we're running without
+    // LiveIntervals in which case this will be nullptr.
     const VNInfo *ValNo;
     Register DefReg;
   };
@@ -526,7 +530,7 @@ class VSETVLIInfo {
     AVLIsReg,
     AVLIsImm,
     AVLIsVLMAX,
-    Unknown,
+    Unknown, // AVL and VTYPE are fully unknown
   } State = Uninitialized;
 
   // Fields from VTYPE.
@@ -552,7 +556,7 @@ class VSETVLIInfo {
   bool isUnknown() const { return State == Unknown; }
 
   void setAVLRegDef(const VNInfo *VNInfo, Register AVLReg) {
-    assert(VNInfo && AVLReg.isVirtual());
+    assert(AVLReg.isVirtual());
     AVLRegDef.ValNo = VNInfo;
     AVLRegDef.DefReg = AVLReg;
     State = AVLIsReg;
@@ -582,9 +586,11 @@ class VSETVLIInfo {
   }
   // Most AVLIsReg infos will have a single defining MachineInstr, unless it was
   // a PHI node. In that case getAVLVNInfo()->def will point to the block
-  // boundary slot.
+  // boundary slot.  If LiveIntervals isn't available, then nullptr is returned.
   const MachineInstr *getAVLDefMI(const LiveIntervals *LIS) const {
     assert(hasAVLReg());
+    if (!LIS)
+      return nullptr;
     auto *MI = LIS->getInstructionFromIndex(getAVLVNInfo()->def);
     assert(!(getAVLVNInfo()->isPHIDef() && MI));
     return MI;
@@ -628,10 +634,15 @@ class VSETVLIInfo {
     return (hasNonZeroAVL(LIS) && Other.hasNonZeroAVL(LIS));
   }
 
-  bool hasSameAVL(const VSETVLIInfo &Other) const {
-    if (hasAVLReg() && Other.hasAVLReg())
+  bool hasSameAVLLatticeValue(const VSETVLIInfo &Other) const {
+    if (hasAVLReg() && Other.hasAVLReg()) {
+      assert(!getAVLVNInfo() == !Other.getAVLVNInfo() &&
+             "we either have intervals or we don't");
+      if (!getAVLVNInfo())
+        return getAVLReg() == Other.getAVLReg();
       return getAVLVNInfo()->id == Other.getAVLVNInfo()->id &&
              getAVLReg() == Other.getAVLReg();
+    }
 
     if (hasAVLImm() && Other.hasAVLImm())
       return getAVLImm() == Other.getAVLImm();
@@ -642,6 +653,21 @@ class VSETVLIInfo {
     return false;
   }
 
+  // Return true if the two lattice values are guaranteed to have
+  // the same AVL value at runtime.
+  bool hasSameAVL(const VSETVLIInfo &Other) const {
+    // Without LiveIntervals, we don't know which instruction defines a
+    // register.  Since a register may be redefined, this means all AVLIsReg
+    // states must be treated as possibly distinct.
+    if (hasAVLReg() && Other.hasAVLReg()) {
+      assert(!getAVLVNInfo() == !Other.getAVLVNInfo() &&
+             "we either have intervals or we don't");
+      if (!getAVLVNInfo())
+        return false;
+    }
+    return hasSameAVLLatticeValue(Other);
+  }
+
   void setVTYPE(unsigned VType) {
     assert(isValid() && !isUnknown() &&
            "Can't set VTYPE for uninitialized or unknown");
@@ -741,7 +767,7 @@ class VSETVLIInfo {
     if (Other.isUnknown())
       return isUnknown();
 
-    if (!hasSameAVL(Other))
+    if (!hasSameAVLLatticeValue(Other))
       return false;
 
     // If the SEWLMULRatioOnly bits are different, then they aren't equal.
@@ -849,6 +875,7 @@ class RISCVInsertVSETVLI : public MachineFunctionPass {
   const RISCVSubtarget *ST;
   const TargetInstrInfo *TII;
   MachineRegisterInfo *MRI;
+  // Possibly null!
   LiveIntervals *LIS;
 
   std::vector<BlockData> BlockInfo;
@@ -863,9 +890,9 @@ class RISCVInsertVSETVLI : public MachineFunctionPass {
   void getAnalysisUsage(AnalysisUsage &AU) const override {
     AU.setPreservesCFG();
 
-    AU.addRequired<LiveIntervals>();
+    AU.addUsedIfAvailable<LiveIntervals>();
     AU.addPreserved<LiveIntervals>();
-    AU.addRequired<SlotIndexes>();
+    AU.addUsedIfAvailable<SlotIndexes>();
     AU.addPreserved<SlotIndexes>();
     AU.addPreserved<LiveDebugVariables>();
     AU.addPreserved<LiveStacks>();
@@ -1061,7 +1088,8 @@ void RISCVInsertVSETVLI::insertVSETVLI(MachineBasicBlock &MBB,
                     .addReg(RISCV::X0, RegState::Kill)
                     .addImm(Info.encodeVTYPE())
                     .addReg(RISCV::VL, RegState::Implicit);
-      LIS->InsertMachineInstrInMaps(*MI);
+      if (LIS)
+        LIS->InsertMachineInstrInMaps(*MI);
       return;
     }
 
@@ -1078,7 +1106,8 @@ void RISCVInsertVSETVLI::insertVSETVLI(MachineBasicBlock &MBB,
                         .addReg(RISCV::X0, RegState::Kill)
                         .addImm(Info.encodeVTYPE())
                         .addReg(RISCV::VL, RegState::Implicit);
-          LIS->InsertMachineInstrInMaps(*MI);
+          if (LIS)
+            LIS->InsertMachineInstrInMaps(*MI);
           return;
         }
       }
@@ -1090,7 +1119,8 @@ void RISCVInsertVSETVLI::insertVSETVLI(MachineBasicBlock &MBB,
                   .addReg(RISCV::X0, RegState::Define | RegState::Dead)
                   .addImm(Info.getAVLImm())
                   .addImm(Info.encodeVTYPE());
-    LIS->InsertMachineInstrInMaps(*MI);
+    if (LIS)
+      LIS->InsertMachineInstrInMaps(*MI);
     return;
   }
 
@@ -1100,8 +1130,10 @@ void RISCVInsertVSETVLI::insertVSETVLI(MachineBasicBlock &MBB,
                   .addReg(DestReg, RegState::Define | RegState::Dead)
                   .addReg(RISCV::X0, RegState::Kill)
                   .addImm(Info.encodeVTYPE());
-    LIS->InsertMachineInstrInMaps(*MI);
-    LIS->createAndComputeVirtRegInterval(DestReg);
+    if (LIS) {
+      LIS->InsertMachineInstrInMaps(*MI);
+      LIS->createAndComputeVirtRegInterval(DestReg);
+    }
     return;
   }
 
@@ -1111,12 +1143,14 @@ void RISCVInsertVSETVLI::insertVSETVLI(MachineBasicBlock &MBB,
                 .addReg(RISCV::X0, RegState::Define | RegState::Dead)
                 .addReg(AVLReg)
                 .addImm(Info.encodeVTYPE());
-  LIS->InsertMachineInstrInMaps(*MI);
-  // Normally the AVL's live range will already extend past the inserted vsetvli
-  // because the pseudos below will already use the AVL. But this isn't always
-  // the case, e.g. PseudoVMV_X_S doesn't have an AVL operand.
-  LIS->getInterval(AVLReg).extendInBlock(
-      LIS->getMBBStartIdx(&MBB), LIS->getInstructionIndex(*MI).getRegSlot());
+  if (LIS) {
+    LIS->InsertMachineInstrInMaps(*MI);
+    // Normally the AVL's live range will already extend past the inserted
+    // vsetvli because the pseudos below will already use the AVL. But this
+    // isn't always the case, e.g. PseudoVMV_X_S doesn't have an AVL operand.
+    LIS->getInterval(AVLReg).extendInBlock(
+        LIS->getMBBStartIdx(&MBB), LIS->getInstructionIndex(*MI).getRegSlot());
+  }
 }
 
 /// Return true if a VSETVLI is required to transition from CurInfo to Require
@@ -1230,10 +1264,14 @@ void RISCVInsertVSETVLI::transferAfter(VSETVLIInfo &Info,
   if (RISCV::isFaultFirstLoad(MI)) {
     // Update AVL to vl-output of the fault first load.
     assert(MI.getOperand(1).getReg().isVirtual());
-    auto &LI = LIS->getInterval(MI.getOperand(1).getReg());
-    SlotIndex SI = LIS->getSlotIndexes()->getInstructionIndex(MI).getRegSlot();
-    VNInfo *VNI = LI.getVNInfoAt(SI);
-    Info.setAVLRegDef(VNI, MI.getOperand(1).getReg());
+    if (LIS) {
+      auto &LI = LIS->getInterval(MI.getOperand(1).getReg());
+      SlotIndex SI =
+          LIS->getSlotIndexes()->getInstructionIndex(MI).getRegSlot();
+      VNInfo *VNI = LI.getVNInfoAt(SI);
+      Info.setAVLRegDef(VNI, MI.getOperand(1).getReg());
+    } else
+      Info.setAVLRegDef(nullptr, MI.getOperand(1).getReg());
     return;
   }
 
@@ -1327,6 +1365,9 @@ bool RISCVInsertVSETVLI::needVSETVLIPHI(const VSETVLIInfo &Require,
   if (!Require.hasAVLReg())
     return true;
 
+  if (!LIS)
+    return true;
+
   // We need the AVL to have been produced by a PHI node in this basic block.
   const VNInfo *Valno = Require.getAVLVNInfo();
   if (!Valno->isPHIDef() || LIS->getMBBFromIndex(Valno->def) != &MBB)
@@ -1402,27 +1443,29 @@ void RISCVInsertVSETVLI::emitVSETVLIs(MachineBasicBlock &MBB) {
         MachineOperand &VLOp = MI.getOperand(getVLOpNum(MI));
         if (VLOp.isReg()) {
           Register Reg = VLOp.getReg();
-          LiveInterval &LI = LIS->getInterval(Reg);
 
           // Erase the AVL operand from the instruction.
           VLOp.setReg(RISCV::NoRegister);
           VLOp.setIsKill(false);
-          SmallVector<MachineInstr *> DeadMIs;
-          LIS->shrinkToUses(&LI, &DeadMIs);
-          // We might have separate components that need split due to
-          // needVSETVLIPHI causing us to skip inserting a new VL def.
-          SmallVector<LiveInterval *> SplitLIs;
-          LIS->splitSeparateComponents(LI, SplitLIs);
-
-          // If the AVL was an immediate > 31, then it would have been emitted
-          // as an ADDI. However, the ADDI might not have been used in the
-          // vsetvli, or a vsetvli might not have been emitted, so it may be
-          // dead now.
-          for (MachineInstr *DeadMI : DeadMIs) {
-            if (!TII->isAddImmediate(*DeadMI, Reg))
-              continue;
-            LIS->RemoveMachineInstrFromMaps(*DeadMI);
-            DeadMI->eraseFromParent();
+          if (LIS) {
+            LiveInterval &LI = LIS->getInterval(Reg);
+            SmallVector<MachineInstr *> DeadMIs;
+            LIS->shrinkToUses(&LI, &DeadMIs);
+            // We might have separate components that need split due to
+            // needVSETVLIPHI causing us to skip inserting a new VL def.
+            SmallVector<LiveInterval *> SplitLIs;
+            LIS->splitSeparateComponents(LI, SplitLIs);
+
+            // If the AVL was an immediate > 31, then it would have been emitted
+            // as an ADDI. However, the ADDI might not have been used in the
+            // vsetvli, or a vsetvli might not have been emitted, so it may be
+            // dead now.
+            for (MachineInstr *DeadMI : DeadMIs) {
+              if (!TII->isAddImmediate(*DeadMI, Reg))
+                continue;
+              LIS->RemoveMachineInstrFromMaps(*DeadMI);
+              DeadMI->eraseFromParent();
+            }
           }
         }
         MI.addOperand(MachineOperand::CreateReg(RISCV::VL, /*isDef*/ false,
@@ -1479,6 +1522,9 @@ void RISCVInsertVSETVLI::doPRE(MachineBasicBlock &MBB) {
   if (!UnavailablePred || !AvailableInfo.isValid())
     return;
 
+  if (!LIS)
+    return;
+
   // If we don't know the exact VTYPE, we can't copy the vsetvli to the exit of
   // the unavailable pred.
   if (AvailableInfo.hasSEWLMULRatioOnly())
@@ -1625,7 +1671,7 @@ void RISCVInsertVSETVLI::coalesceVSETVLIs(MachineBasicBlock &MBB) const {
 
           // The def of DefReg moved to MI, so extend the LiveInterval up to
           // it.
-          if (DefReg.isVirtual()) {
+          if (DefReg.isVirtual() && LIS) {
             LiveInterval &DefLI = LIS->getInterval(DefReg);
             SlotIndex MISlot = LIS->getInstructionIndex(MI).getRegSlot();
             VNInfo *DefVNI = DefLI.getVNInfoAt(DefLI.beginIndex());
@@ -1654,13 +1700,15 @@ void RISCVInsertVSETVLI::coalesceVSETVLIs(MachineBasicBlock &MBB) const {
 
           if (OldVLReg && OldVLReg.isVirtual()) {
             // NextMI no longer uses OldVLReg so shrink its LiveInterval.
-            LIS->shrinkToUses(&LIS->getInterval(OldVLReg));
+            if (LIS)
+              LIS->shrinkToUses(&LIS->getInterval(OldVLReg));
 
             MachineInstr *VLOpDef = MRI->getUniqueVRegDef(OldVLReg);
             if (VLOpDef && TII->isAddImmediate(*VLOpDef, OldVLReg) &&
                 MRI->use_nodbg_empty(OldVLReg)) {
               VLOpDef->eraseFromParent();
-              LIS->removeInterval(OldVLReg);
+              if (LIS)
+                LIS->removeInterval(OldVLReg);
             }
           }
           MI.setDesc(NextMI->getDesc());
@@ -1676,7 +1724,8 @@ void RISCVInsertVSETVLI::coalesceVSETVLIs(MachineBasicBlock &MBB) const {
 
   NumCoalescedVSETVL += ToDelete.size();
   for (auto *MI : ToDelete) {
-    LIS->RemoveMachineInstrFromMaps(*MI);
+    if (LIS)
+      LIS->RemoveMachineInstrFromMaps(*MI);
     MI->eraseFromParent();
   }
 }
@@ -1691,12 +1740,14 @@ void RISCVInsertVSETVLI::insertReadVL(MachineBasicBlock &MBB) {
         auto ReadVLMI = BuildMI(MBB, I, MI.getDebugLoc(),
                                 TII->get(RISCV::PseudoReadVL), VLOutput);
         // Move the LiveInterval's definition down to PseudoReadVL.
-        SlotIndex NewDefSI =
-            LIS->InsertMachineInstrInMaps(*ReadVLMI).getRegSlot();
-        LiveInterval &DefLI = LIS->getInterval(VLOutput);
-        VNInfo *DefVNI = DefLI.getVNInfoAt(DefLI.beginIndex());
-        DefLI.removeSegment(DefLI.beginIndex(), NewDefSI);
-        DefVNI->def = NewDefSI;
+        if (LIS) {
+          SlotIndex NewDefSI =
+              LIS->InsertMachineInstrInMaps(*ReadVLMI).getRegSlot();
+          LiveInterval &DefLI = LIS->getInterval(VLOutput);
+          VNInfo *DefVNI = DefLI.getVNInfoAt(DefLI.beginIndex());
+          DefLI.removeSegment(DefLI.beginIndex(), NewDefSI);
+          DefVNI->def = NewDefSI;
+        }
       }
       // We don't use the vl output of the VLEFF/VLSEGFF anymore.
       MI.getOperand(1).setReg(RISCV::X0);
@@ -1714,7 +1765,7 @@ bool RISCVInsertVSETVLI::runOnMachineFunction(MachineFunction &MF) {
 
   TII = ST->getInstrInfo();
   MRI = &MF.getRegInfo();
-  LIS = &getAnalysis<LiveIntervals>();
+  LIS = getAnalysisIfAvailable<LiveIntervals>();
 
   assert(BlockInfo.empty() && "Expect empty block infos");
   BlockInfo.resize(MF.getNumBlockIDs());
diff --git a/llvm/test/CodeGen/RISCV/O0-pipeline.ll b/llvm/test/CodeGen/RISCV/O0-pipeline.ll
index ec49ed302d49d..953eb873b660b 100644
--- a/llvm/test/CodeGen/RISCV/O0-pipeline.ll
+++ b/llvm/test/CodeGen/RISCV/O0-pipeline.ll
@@ -47,9 +47,6 @@
 ; CHECK-NEXT:       Eliminate PHI nodes for register allocation
 ; CHECK-NEXT:       Two-Address instruction pass
 ; CHECK-NEXT:       Fast Register Allocator
-; CHECK-NEXT:       MachineDominator Tree Construction
-; CHECK-NEXT:       Slot index numbering
-; CHECK-NEXT:       Live Interval Analysis
 ; CHECK-NEXT:       RISC-V Insert VSETVLI pass
 ; CHECK-NEXT:       Fast Register Allocator
 ; CHECK-NEXT:       Remove Redundant DEBUG_VALUE analysis
diff --git a/llvm/test/CodeGen/RISCV/rvv/pr93587.ll b/llvm/test/CodeGen/RISCV/rvv/pr93587.ll
new file mode 100644
index 0000000000000..1c2923a2de893
--- /dev/null
+++ b/llvm/test/CodeGen/RISCV/rvv/pr93587.ll
@@ -0,0 +1,37 @@
+; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py UTC_ARGS: --version 5
+; RUN: llc -mtriple=riscv64 -O0 < %s | FileCheck %s
+
+; Make sure we don't run LiveIntervals at O0, otherwise it will crash when
+; running on this unreachable block.
+
+define i16 @f() {
+; CHECK-LABEL: f:
+; CHECK:       # %bb.0: # %BB
+; CHECK-NEXT:    addi sp, sp, -16
+; CHECK-NEXT:    .cfi_def_cfa_offset 16
+; CHECK-NEXT:    j .LBB0_1
+; CHECK-NEXT:  .LBB0_1: # %BB1
+; CHECK-NEXT:    # =>This Inner Loop Header: Depth=1
+; CHECK-NEXT:    li a0, 0
+; CHECK-NEXT:    sd a0, 8(sp) # 8-byte Folded Spill
+; CHECK-NEXT:    j .LBB0_1
+; CHECK-NEXT:  # %bb.2: # %BB1
+; CHECK-NEXT:    li a0, 0
+; CHECK-NEXT:    bnez a0, .LBB0_1
+; CHECK-NEXT:    j .LBB0_3
+; CHECK-NEXT:  .LBB0_3: # %BB2
+; CHECK-NEXT:    ld a0, 8(sp) # 8-byte Folded Reload
+; CHECK-NEXT:    addi sp, sp, 16
+; CHECK-NEXT:    ret
+BB:
+  br label %BB1
+
+BB1:
+  %A = or i16 0, 0
+  %B = fcmp true float 0.000000e+00, 0.000000e+00
+  %C = or i1 %B, false
+  br i1 %C, label %BB1, label %BB2
+
+BB2:
+  ret i16 %A
+}
diff --git a/llvm/test/CodeGen/RISCV/rvv/vsetvli-insert-O0.ll b/llvm/test/CodeGen/RISCV/rvv/vsetvli-insert-O0.ll
index aef18fcd06cd6..33acfb7dceb94 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vsetvli-insert-O0.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vsetvli-insert-O0.ll
@@ -54,10 +54,12 @@ define <vscale x 1 x double> @intrinsic_same_vlmax(<vscale x 1 x double> %a, <vs
 ; CHECK-LABEL: intrinsic_same_vlmax:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vmv1r.v v10, v9
-; CHECK-NEXT:    vsetvli a0, zero, e64, m1, tu, ma
+; CHECK-NEXT:    vsetvli a0, zero, e32, mf2, ta, ma
 ; CHECK-NEXT:    # implicit-def: $v9
+; CHECK-NEXT:    vsetvli zero, a0, e64, m1, tu, ma
 ; CHECK-NEXT:    vfadd.vv v9, v8, v10
 ; CHECK-NEXT:    # implicit-def: $v8
+; CHECK-NEXT:    vsetvli zero, a0, e64, m1, tu, ma
 ; CHECK-NEXT:    vfadd.vv v8, v9, v10
 ; CHECK-NEXT:    ret
 entry:
@@ -80,10 +82,12 @@ define <vscale x 1 x double> @intrinsic_same_avl_imm(<vscale x 1 x double> %a, <
 ; CHECK-LABEL: intrinsic_same_avl_imm:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vmv1r.v v10, v9
-; CHECK-NEXT:    vsetivli a0, 2, e64, m1, tu, ma
+; CHECK-NEXT:    vsetivli a0, 2, e32, mf2, ta, ma
 ; CHECK-NEXT:    # implicit-def: $v9
+; CHECK-NEXT:    vsetvli zero, a0, e64, m1, tu, ma
 ; CHECK-NEXT:    vfadd.vv v9, v8, v10
 ; CHECK-NEXT:    # implicit-def: $v8
+; CHECK-NEXT:    vsetvli zero, a0, e64, m1, tu, ma
 ; CHECK-NEXT:    vfadd.vv v8, v9, v10
 ; CHECK-NEXT:    ret
 entry:
@@ -105,10 +109,12 @@ define <vscale x 1 x double> @intrinsic_same_avl_reg(i64 %avl, <vscale x 1 x dou
 ; CHECK-LABEL: intrinsic_same_avl_reg:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vmv1r.v v10, v9
-; CHECK-NEXT:    vsetvli a0, a0, e64, m1, tu, ma
+; CHECK-NEXT:    vsetvli a0, a0, e32, mf2, ta, ma
 ; CHECK-NEXT:    # implicit-def: $v9
+; CHECK-NEXT:    vsetvli zero, a0, e64, m1, tu, ma
 ; CHECK-NEXT:    vfadd.vv v9, v8, v10
 ; CHECK-NEXT:    # implicit-def: $v8
+; CHECK-NEXT:    vsetvli zero, a0, e64, m1, tu, ma
 ; CHECK-NEXT:    vfadd.vv v8, v9, v10
 ; CHECK-NEXT:    ret
 entry:
@@ -130,11 +136,13 @@ define <vscale x 1 x double> @intrinsic_diff_avl_reg(i64 %avl, i64 %avl2, <vscal
 ; CHECK-LABEL: intrinsic_diff_avl_reg:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vmv1r.v v10, v9
-; CHECK-NEXT:    vsetvli a0, a0, e64, m1, tu, ma
+; CHECK-NEXT:    vsetvli a0, a0, e32, mf2, ta, ma
 ; CHECK-NEXT:    # implicit-def: $v9
+; CHECK-NEXT:    vsetvli zero, a0, e64, m1, tu, ma
 ; CHECK-NEXT:    vfadd.vv v9, v8, v10
-; CHECK-NEXT:    vsetvli a0, a1, e64, m1, tu, ma
+; CHECK-NEXT:    vsetvli a0, a1, e32, mf2, ta, ma
 ; CHECK-NEXT:    # implicit-def: $v8
+; CHECK-NEXT:    vsetvli zero, a0, e64, m1, tu, ma
 ; CHECK-NEXT:    vfadd.vv v8, v9, v10
 ; CHECK-NEXT:    ret
 entry:
diff --git a/llvm/test/CodeGen/RISCV/rvv/vsetvli-insert.mir b/llvm/test/CodeGen/RISCV/rvv/vsetvli-insert.mir
index a4b374c8bb401..681b50de5b81c 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vsetvli-insert.mir
+++ b/llvm/test/CodeGen/RISCV/rvv/vsetvli-insert.mir
@@ -1,5 +1,5 @@
 # NOTE: Assertions have been autogenerated by utils/update_mir_test_checks.py
-# RUN: llc %s -o - -mtriple=riscv64 -mattr=v -run-pass=riscv-insert-vsetvli \
+# RUN: llc %s -o - -mtriple=riscv64 -mattr=v -run-pass=liveintervals,riscv-insert-vsetvli \
 # RUN:     | FileCheck %s
 
 --- |

>From c9549e10e9ea70428ada80a34d15afeaf5710b2d Mon Sep 17 00:00:00 2001
From: Jacques Pienaar <jpienaar at google.com>
Date: Mon, 17 Jun 2024 12:11:49 -0700
Subject: [PATCH 08/26]  [mlirc] Add missing extern C (#95829)

This header was missing the extern "C" wrapper around its declarations.

Don't know why it didn't fail elsewhere, but it failed on the Windows build
while linking the Python libs.
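
For readers unfamiliar with the idiom, a minimal hypothetical header (not
Rewrite.h itself) showing the guard this patch adds; without it, a C++
compiler mangles the declared symbols and C or FFI clients fail to link:

```cpp
// Hypothetical C API header sketch, illustrating only the extern "C" guard.
#ifndef EXAMPLE_C_API_H
#define EXAMPLE_C_API_H

#ifdef __cplusplus
extern "C" {                        // give declarations C linkage in C++ TUs
#endif

void exampleApiFunction(int value); // exported with an unmangled C symbol

#ifdef __cplusplus
}                                   // end extern "C"
#endif

#endif // EXAMPLE_C_API_H
```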

Signed-off-by: Jacques Pienaar <jpienaar at google.com>
---
 mlir/include/mlir-c/Rewrite.h | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/mlir/include/mlir-c/Rewrite.h b/mlir/include/mlir-c/Rewrite.h
index 45218a1cd4ebd..bed93045f4b50 100644
--- a/mlir/include/mlir-c/Rewrite.h
+++ b/mlir/include/mlir-c/Rewrite.h
@@ -19,6 +19,10 @@
 #include "mlir-c/Support.h"
 #include "mlir/Config/mlir-config.h"
 
+#ifdef __cplusplus
+extern "C" {
+#endif
+
 //===----------------------------------------------------------------------===//
 /// Opaque type declarations (see mlir-c/IR.h for more details).
 //===----------------------------------------------------------------------===//
@@ -57,4 +61,8 @@ mlirRewritePatternSetFromPDLPatternModule(MlirPDLPatternModule op);
 
 #undef DEFINE_C_API_STRUCT
 
+#ifdef __cplusplus
+}
+#endif
+
 #endif // MLIR_C_REWRITE_H

>From c22d3917b93a6d54613d2e5b2ea4c97546144c46 Mon Sep 17 00:00:00 2001
From: Antonio Frighetto <me at antoniofrighetto.com>
Date: Thu, 6 Jun 2024 08:26:40 +0200
Subject: [PATCH 09/26] [LVI][ConstantRange] Generalize mask not equal
 conditions handling

Extend `V & Mask != 0` for non-zero constants if satisfiable, when
retrieving constraint value information from a non-equality comparison.

Proof: https://alive2.llvm.org/ce/z/dc5BeT.

Motivating example: https://github.com/gcc-mirror/gcc/blob/master/gcc/testsuite/gcc.dg/tree-ssa/vrp76.c.
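
To make the new range concrete: for Mask = 0b0100 and C = 0b0100 on 4 bits,
the returned range is [lowbit(Mask) + C, C) = [8, 4), a wrapped range that
excludes exactly the values C .. C + lowbit(Mask) - 1, for which (X & Mask)
== C is guaranteed. Below is a small self-contained sketch (plain C++ over
4-bit values, not the APInt implementation) that exhaustively checks this
conservativeness property:

```cpp
#include <cassert>

int main() {
  constexpr unsigned Bits = 4, Size = 1u << Bits, Wrap = Size - 1;
  for (unsigned Mask = 1; Mask < Size; ++Mask) {   // Mask == 0 -> empty set
    for (unsigned C = 0; C < Size; ++C) {
      if ((Mask & C) != C)
        continue;                                  // unsatisfiable -> full set
      unsigned LowBit = Mask & (0u - Mask);        // lowest set bit of Mask
      unsigned Lo = (LowBit + C) & Wrap, Hi = C;   // wrapped range [Lo, Hi)
      for (unsigned X = 0; X < Size; ++X) {
        bool InRange = Lo <= Hi ? (Lo <= X && X < Hi) : (X >= Lo || X < Hi);
        if ((X & Mask) != C)
          assert(InRange);                         // conservative: nothing lost
      }
    }
  }
}
```

Only the excluded window [C, C + lowbit(Mask)) can vary bits below the mask's
lowest set bit without disturbing the masked bits, which is why the result is
sound even though, as the unit tests note, it is not always the smallest set.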
---
 llvm/include/llvm/IR/ConstantRange.h          |  5 ++
 llvm/lib/Analysis/LazyValueInfo.cpp           | 11 ++---
 llvm/lib/IR/ConstantRange.cpp                 | 17 +++++++
 .../CorrelatedValuePropagation/icmp.ll        | 38 ++++++++++++++-
 llvm/unittests/IR/ConstantRangeTest.cpp       | 48 +++++++++++++++++++
 5 files changed, 111 insertions(+), 8 deletions(-)

diff --git a/llvm/include/llvm/IR/ConstantRange.h b/llvm/include/llvm/IR/ConstantRange.h
index a5e2f809ab411..7b94b9c6c6d11 100644
--- a/llvm/include/llvm/IR/ConstantRange.h
+++ b/llvm/include/llvm/IR/ConstantRange.h
@@ -176,6 +176,11 @@ class [[nodiscard]] ConstantRange {
                                              const APInt &Other,
                                              unsigned NoWrapKind);
 
+  /// Initialize a range containing all values X that satisfy `(X & Mask)
+  /// != C`. Note that the range returned may contain values where `(X & Mask)
+  /// == C` holds, making it less precise, but still conservative.
+  static ConstantRange makeMaskNotEqualRange(const APInt &Mask, const APInt &C);
+
   /// Returns true if ConstantRange calculations are supported for intrinsic
   /// with \p IntrinsicID.
   static bool isIntrinsicSupported(Intrinsic::ID IntrinsicID);
diff --git a/llvm/lib/Analysis/LazyValueInfo.cpp b/llvm/lib/Analysis/LazyValueInfo.cpp
index 8b6e56ce8113d..aaa7baa3b114a 100644
--- a/llvm/lib/Analysis/LazyValueInfo.cpp
+++ b/llvm/lib/Analysis/LazyValueInfo.cpp
@@ -1188,13 +1188,10 @@ std::optional<ValueLatticeElement> LazyValueInfoImpl::getValueFromICmpCondition(
       return ValueLatticeElement::getRange(
           ConstantRange::fromKnownBits(Known, /*IsSigned*/ false));
     }
-    // If (Val & Mask) != 0 then the value must be larger than the lowest set
-    // bit of Mask.
-    if (EdgePred == ICmpInst::ICMP_NE && !Mask->isZero() && C->isZero()) {
-      return ValueLatticeElement::getRange(ConstantRange::getNonEmpty(
-          APInt::getOneBitSet(BitWidth, Mask->countr_zero()),
-          APInt::getZero(BitWidth)));
-    }
+
+    if (EdgePred == ICmpInst::ICMP_NE)
+      return ValueLatticeElement::getRange(
+          ConstantRange::makeMaskNotEqualRange(*Mask, *C));
   }
 
   // If (X urem Modulus) >= C, then X >= C.
diff --git a/llvm/lib/IR/ConstantRange.cpp b/llvm/lib/IR/ConstantRange.cpp
index 08041c96ffe5a..19041704a40be 100644
--- a/llvm/lib/IR/ConstantRange.cpp
+++ b/llvm/lib/IR/ConstantRange.cpp
@@ -364,6 +364,23 @@ ConstantRange ConstantRange::makeExactNoWrapRegion(Instruction::BinaryOps BinOp,
   return makeGuaranteedNoWrapRegion(BinOp, ConstantRange(Other), NoWrapKind);
 }
 
+ConstantRange ConstantRange::makeMaskNotEqualRange(const APInt &Mask,
+                                                   const APInt &C) {
+  unsigned BitWidth = Mask.getBitWidth();
+
+  if ((Mask & C) != C)
+    return getFull(BitWidth);
+
+  if (Mask.isZero())
+    return getEmpty(BitWidth);
+
+  // If (Val & Mask) != C, constrained to the non-equality being
+  // satisfiable, then the value must be larger than the lowest set bit of
+  // Mask, offset by constant C.
+  return ConstantRange::getNonEmpty(
+      APInt::getOneBitSet(BitWidth, Mask.countr_zero()) + C, C);
+}
+
 bool ConstantRange::isFullSet() const {
   return Lower == Upper && Lower.isMaxValue();
 }
diff --git a/llvm/test/Transforms/CorrelatedValuePropagation/icmp.ll b/llvm/test/Transforms/CorrelatedValuePropagation/icmp.ll
index b5337b9ddc248..ca70713440219 100644
--- a/llvm/test/Transforms/CorrelatedValuePropagation/icmp.ll
+++ b/llvm/test/Transforms/CorrelatedValuePropagation/icmp.ll
@@ -595,7 +595,7 @@ define i1 @test_assume_cmp_with_offset_or(i64 %idx, i1 %other) {
 ; CHECK:       T:
 ; CHECK-NEXT:    ret i1 true
 ; CHECK:       F:
-; CHECK-NEXT:    ret i1 [[CMP2:%.*]]
+; CHECK-NEXT:    ret i1 [[OTHER:%.*]]
 ;
   %idx.off1 = or disjoint i64 %idx, 5
   %cmp1 = icmp ugt i64 %idx.off1, 10
@@ -1475,3 +1475,39 @@ entry:
   %select = select i1 %cmp1, i1 %cmp2, i1 false
   ret i1 %select
 }
+
+declare void @opaque()
+
+define void @test_icmp_ne_from_implied_range(i32 noundef %arg) {
+; CHECK-LABEL: @test_icmp_ne_from_implied_range(
+; CHECK-NEXT:    [[AND_MASK:%.*]] = and i32 [[ARG:%.*]], -8
+; CHECK-NEXT:    [[CMP:%.*]] = icmp eq i32 [[AND_MASK]], -16
+; CHECK-NEXT:    br i1 [[CMP]], label [[END:%.*]], label [[ELSE:%.*]]
+; CHECK:       else:
+; CHECK-NEXT:    br label [[END]]
+; CHECK:       sw.case:
+; CHECK-NEXT:    call void @opaque()
+; CHECK-NEXT:    br label [[END]]
+; CHECK:       end:
+; CHECK-NEXT:    ret void
+;
+  %and.mask = and i32 %arg, -8
+  %cmp = icmp eq i32 %and.mask, -16
+  br i1 %cmp, label %end, label %else
+
+else:
+  ; %arg is within [-8, -16).
+  switch i32 %arg, label %end [
+  i32 -16, label %sw.case
+  i32 -12, label %sw.case
+  i32 -9, label %sw.case
+  ]
+
+sw.case:
+  call void @opaque()
+  br label %end
+
+end:
+  ; %arg is within [-16, -8).
+  ret void
+}
diff --git a/llvm/unittests/IR/ConstantRangeTest.cpp b/llvm/unittests/IR/ConstantRangeTest.cpp
index ac2075cb4af47..392c41f74b431 100644
--- a/llvm/unittests/IR/ConstantRangeTest.cpp
+++ b/llvm/unittests/IR/ConstantRangeTest.cpp
@@ -2788,4 +2788,52 @@ TEST_F(ConstantRangeTest, isSizeLargerThan) {
   EXPECT_FALSE(One.isSizeLargerThan(1));
 }
 
+TEST_F(ConstantRangeTest, MakeMaskNotEqualRange) {
+  // Mask: 0b0001, C: 0b0001. MMNE() = [2, 1)
+  ConstantRange CR(APInt(4, 2), APInt(4, 1));
+  EXPECT_EQ(CR, ConstantRange::makeMaskNotEqualRange(APInt(4, 1), APInt(4, 1)));
+  EXPECT_NE(CR, ConstantRange::makeMaskNotEqualRange(APInt(4, 0),
+                                                     APInt(4, -1, true)));
+  EXPECT_TRUE(CR.contains(APInt(4, 7)));
+  EXPECT_TRUE(CR.contains(APInt(4, 15)));
+
+  // Mask: 0b0100, C: 0b0100. MMNE() = [-8, 4)
+  ConstantRange CR2(APInt(4, -8, true), APInt(4, 4));
+  auto MMNE = ConstantRange::makeMaskNotEqualRange(APInt(4, 4), APInt(4, 4));
+  EXPECT_EQ(CR2, MMNE);
+  EXPECT_NE(ConstantRange::getNonEmpty(APInt(4, 0), APInt(4, -4, true)), MMNE);
+
+  // CR: [-16, -8). MMNE() = [-8, -16)
+  ConstantRange CR3(APInt(8, 240), APInt(8, 248));
+  EXPECT_EQ(CR3.inverse(),
+            ConstantRange::makeMaskNotEqualRange(APInt(8, 248), APInt(8, 240)));
+
+  // Mask: 0, C: 0b1111: unsatisfiable.
+  EXPECT_EQ(ConstantRange::getFull(4),
+            ConstantRange::makeMaskNotEqualRange(APInt(4, 0), APInt(4, 15)));
+}
+
+TEST_F(ConstantRangeTest, MakeMaskNotEqualRangeExhaustive) {
+  unsigned Bits = 4;
+  unsigned Max = 1 << Bits;
+
+  EnumerateAPInts(Bits, [&](const APInt &Mask) {
+    EnumerateAPInts(Bits, [&](const APInt &C) {
+      SmallBitVector Elems(Max);
+      for (unsigned N = 0; N < Max; ++N) {
+        APInt Num(Bits, N);
+        if ((Num & Mask) == C)
+          continue;
+        Elems.set(Num.getZExtValue());
+      }
+
+      // Only test optimality with PreferSmallest. E.g., given Mask = 0b0001, C
+      // = 0b0001, a possible better range would be [0, 15) when preferring the
+      // smallest unsigned, however we conservatively return [2, 1).
+      TestRange(ConstantRange::makeMaskNotEqualRange(Mask, C), Elems,
+                PreferSmallest, {});
+    });
+  });
+}
+
 } // anonymous namespace

>From 06aa078d68380bc775f0a903204fe330d50f4f1f Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Hana=20Dus=C3=ADkov=C3=A1?= <hanicka at hanicka.net>
Date: Mon, 17 Jun 2024 21:30:50 +0200
Subject: [PATCH 10/26] [llvm-cov] Coverage report HTML UI to jump between
 uncovered parts of code (#95662)

I replaced "jump to first uncovered line" with UI buttons and keyboard
shortcut to jump between uncovered parts of code: lines (key L), branchs
(key B), regions (key R).

User can also jump in reverse direction with shift+key.
---
 llvm/docs/ReleaseNotes.rst                    |   6 +
 .../llvm-cov/Inputs/showProjectSummary.test   |   2 +-
 .../llvm-cov/showLineExecutionCounts.cpp      |   8 +-
 .../llvm-cov/showTemplateInstantiations.cpp   |  14 +-
 llvm/tools/llvm-cov/SourceCoverageView.cpp    |   3 +-
 llvm/tools/llvm-cov/SourceCoverageView.h      |   3 +-
 .../tools/llvm-cov/SourceCoverageViewHTML.cpp | 220 ++++++++++++++++--
 llvm/tools/llvm-cov/SourceCoverageViewHTML.h  |   4 +-
 .../tools/llvm-cov/SourceCoverageViewText.cpp |   3 +-
 llvm/tools/llvm-cov/SourceCoverageViewText.h  |   3 +-
 10 files changed, 223 insertions(+), 43 deletions(-)

diff --git a/llvm/docs/ReleaseNotes.rst b/llvm/docs/ReleaseNotes.rst
index 5fdbc9f349af4..97f445d498dd0 100644
--- a/llvm/docs/ReleaseNotes.rst
+++ b/llvm/docs/ReleaseNotes.rst
@@ -292,6 +292,12 @@ Changes to the LLVM tools
   now has a map for the mapped files. (`#92835
   <https://github.com/llvm/llvm-project/pull/92835>`).
 
+* llvm-cov now generates HTML reports with JavaScript code that allows
+  simple jumping between uncovered parts (lines/regions/branches) of code,
+  using buttons in the top-right corner of the page or the keys L/R/B
+  (Shift+L/R/B jumps in the reverse direction). (`#95662
+  <https://github.com/llvm/llvm-project/pull/95662>`).
+
 Changes to LLDB
 ---------------------------------
 
diff --git a/llvm/test/tools/llvm-cov/Inputs/showProjectSummary.test b/llvm/test/tools/llvm-cov/Inputs/showProjectSummary.test
index eb0222677eaa5..5eb6b4e24d536 100644
--- a/llvm/test/tools/llvm-cov/Inputs/showProjectSummary.test
+++ b/llvm/test/tools/llvm-cov/Inputs/showProjectSummary.test
@@ -11,5 +11,5 @@ HTML-FILE: <pre>{{.*}}showProjectSummary.cpp</pre>
 HTML-FUNCTION: <pre>main</pre>
 HTML-HEADER: <td><pre>Line</pre></td>
 HTML-HEADER: <td><pre>Count</pre></td>
-HTML-HEADER: <td><pre>Source (<a href='#L8'>jump to first uncovered line</a>)</pre></td>
+HTML-HEADER: <td><pre>Source</pre></td>
 HTML-FOOTER: <h5>Generated by llvm-cov{{.*}}</h5>
diff --git a/llvm/test/tools/llvm-cov/showLineExecutionCounts.cpp b/llvm/test/tools/llvm-cov/showLineExecutionCounts.cpp
index 51ac3ae1deee5..f72a9978b8a73 100644
--- a/llvm/test/tools/llvm-cov/showLineExecutionCounts.cpp
+++ b/llvm/test/tools/llvm-cov/showLineExecutionCounts.cpp
@@ -44,11 +44,11 @@ int main() {                              // TEXT: [[@LINE]]|   161|int main(
 // RUN: FileCheck -check-prefixes=HTML,HTML-WHOLE-FILE -input-file %t.html.dir/coverage/tmp/showLineExecutionCounts.cpp.html %s
 // RUN: FileCheck -check-prefixes=HTML,HTML-FILTER -input-file %t.html.filtered.dir/coverage/tmp/showLineExecutionCounts.cpp.html %s
 //
-// HTML-WHOLE-FILE: <td class='line-number'><a name='L4' href='#L4'><pre>4</pre></a></td><td class='uncovered-line'></td><td class='code'><pre>// before
-// HTML-FILTER-NOT: <td class='line-number'><a name='L4' href='#L4'><pre>4</pre></a></td><td class='uncovered-line'></td><td class='code'><pre>// before
+// HTML-WHOLE-FILE: <td class='line-number'><a name='L4' href='#L4'><pre>4</pre></a></td><td class='skipped-line'></td><td class='code'><pre>// before
+// HTML-FILTER-NOT: <td class='line-number'><a name='L4' href='#L4'><pre>4</pre></a></td><td class='skipped-line'></td><td class='code'><pre>// before
 // HTML: <td class='line-number'><a name='L6' href='#L6'><pre>6</pre></a></td><td class='covered-line'><pre>161</pre></td><td class='code'><pre>int main() {
-// HTML-WHOLE-FILE: <td class='line-number'><a name='L26' href='#L26'><pre>26</pre></a></td><td class='uncovered-line'></td><td class='code'><pre>// after
-// HTML-FILTER-NOT: <td class='line-number'><a name='L26' href='#L26'><pre>26</pre></a></td><td class='uncovered-line'></td><td class='code'><pre>// after
+// HTML-WHOLE-FILE: <td class='line-number'><a name='L26' href='#L26'><pre>26</pre></a></td><td class='skipped-line'></td><td class='code'><pre>// after
+// HTML-FILTER-NOT: <td class='line-number'><a name='L26' href='#L26'><pre>26</pre></a></td><td class='skipped-line'></td><td class='code'><pre>// after
 //
 // Test index creation.
 // RUN: FileCheck -check-prefix=TEXT-INDEX -input-file %t.dir/index.txt %s
diff --git a/llvm/test/tools/llvm-cov/showTemplateInstantiations.cpp b/llvm/test/tools/llvm-cov/showTemplateInstantiations.cpp
index 380e98b42bb8c..5d6318e3d1f31 100644
--- a/llvm/test/tools/llvm-cov/showTemplateInstantiations.cpp
+++ b/llvm/test/tools/llvm-cov/showTemplateInstantiations.cpp
@@ -45,9 +45,9 @@ int main() {         // ALL:         [[@LINE]]| 1|int main() {
 // RUN: FileCheck -check-prefixes=HTML-SHARED,HTML-ALL -input-file=%t.html.dir/coverage/tmp/showTemplateInstantiations.cpp.html %s
 // RUN: FileCheck -check-prefixes=HTML-SHARED,HTML-FILTER -input-file=%t.html.filtered.dir/coverage/tmp/showTemplateInstantiations.cpp.html %s
 
-// HTML-ALL: <td class='line-number'><a name='L4' href='#L4'><pre>4</pre></a></td><td class='uncovered-line'></td><td class='code'><pre>// before
-// HTML-FILTER-NOT: <td class='line-number'><a name='L4' href='#L4'><pre>4</pre></a></td><td class='uncovered-line'></td><td class='code'><pre>// before
-// HTML-ALL: <td class='line-number'><a name='L6' href='#L6'><pre>6</pre></a></td><td class='uncovered-line'></td><td class='code'><pre>template<typename T>
+// HTML-ALL: <td class='line-number'><a name='L4' href='#L4'><pre>4</pre></a></td><td class='skipped-line'></td><td class='code'><pre>// before
+// HTML-FILTER-NOT: <td class='line-number'><a name='L4' href='#L4'><pre>4</pre></a></td><td class='skipped-line'></td><td class='code'><pre>// before
+// HTML-ALL: <td class='line-number'><a name='L6' href='#L6'><pre>6</pre></a></td><td class='skipped-line'></td><td class='code'><pre>template<typename T>
 
 // HTML-ALL: <div class='source-name-title'><pre>_Z4funcIiEiT_</pre></div>
 // HTML-FILTER-NOT: <div class='source-name-title'><pre>_Z4funcIiEiT_</pre></div><table>
@@ -57,8 +57,12 @@ int main() {         // ALL:         [[@LINE]]| 1|int main() {
 // HTML-SHARED: <td class='line-number'><a name='L7' href='#L7'><pre>7</pre></a></td><td class='covered-line'><pre>1</pre></td><td class='code'><pre>int func(T x) {
 
 // RUN: FileCheck -check-prefix=HTML-JUMP -input-file=%t.html.dir/coverage/tmp/showTemplateInstantiations.cpp.html %s
-// HTML-JUMP: <pre>Source (<a href='#L{{[0-9]+}}'>jump to first uncovered line</a>)</pre>
-// HTML-JUMP-NOT: <pre>Source (<a href='#L{{[0-9]+}}'>jump to first uncovered line</a>)</pre>
+// HTML-JUMP: <a href='javascript:next_line()'>next uncovered line (L)</a>
+// HTML-JUMP-NOT: <a href='javascript:next_line()'>next uncovered line (L)</a>
+// HTML-JUMP: <a href='javascript:next_region()'>next uncovered region (R)</a>
+// HTML-JUMP-NOT: <a href='javascript:next_region()'>next uncovered region (R)</a>
+// HTML-JUMP: <a href='javascript:next_branch()'>next uncovered branch (B)</a>
+// HTML-JUMP-NOT: <a href='javascript:next_branch()'>next uncovered branch (B)</a>
 
 // RUN: llvm-cov show %S/Inputs/templateInstantiations.covmapping -instr-profile %S/Inputs/templateInstantiations.profdata -show-instantiations=false -path-equivalence=/tmp,%S %s | FileCheck -check-prefix=NO_INSTS %s
 // NO_INSTS-NOT: {{^ *}}| _Z4funcIbEiT_:
diff --git a/llvm/tools/llvm-cov/SourceCoverageView.cpp b/llvm/tools/llvm-cov/SourceCoverageView.cpp
index 45bddd7284461..ce55e3abf23bd 100644
--- a/llvm/tools/llvm-cov/SourceCoverageView.cpp
+++ b/llvm/tools/llvm-cov/SourceCoverageView.cpp
@@ -203,8 +203,7 @@ void SourceCoverageView::print(raw_ostream &OS, bool WholeFile,
   if (ShowSourceName)
     renderSourceName(OS, WholeFile);
 
-  renderTableHeader(OS, (ViewDepth > 0) ? 0 : getFirstUncoveredLineNo(),
-                    ViewDepth);
+  renderTableHeader(OS, ViewDepth);
 
   // We need the expansions, instantiations, and branches sorted so we can go
   // through them while we iterate lines.
diff --git a/llvm/tools/llvm-cov/SourceCoverageView.h b/llvm/tools/llvm-cov/SourceCoverageView.h
index a874f7c6820d2..d255f8c200b24 100644
--- a/llvm/tools/llvm-cov/SourceCoverageView.h
+++ b/llvm/tools/llvm-cov/SourceCoverageView.h
@@ -262,8 +262,7 @@ class SourceCoverageView {
   virtual void renderTitle(raw_ostream &OS, StringRef CellText) = 0;
 
   /// Render the table header for a given source file.
-  virtual void renderTableHeader(raw_ostream &OS, unsigned FirstUncoveredLineNo,
-                                 unsigned IndentLevel) = 0;
+  virtual void renderTableHeader(raw_ostream &OS, unsigned IndentLevel) = 0;
 
   /// @}
 
diff --git a/llvm/tools/llvm-cov/SourceCoverageViewHTML.cpp b/llvm/tools/llvm-cov/SourceCoverageViewHTML.cpp
index d4b2ea3594fc5..6f4d327679d6b 100644
--- a/llvm/tools/llvm-cov/SourceCoverageViewHTML.cpp
+++ b/llvm/tools/llvm-cov/SourceCoverageViewHTML.cpp
@@ -88,6 +88,113 @@ const char *BeginHeader =
     "<meta name='viewport' content='width=device-width,initial-scale=1'>"
     "<meta charset='UTF-8'>";
 
+const char *JSForCoverage =
+    R"javascript(
+
+function next_uncovered(selector, reverse, scroll_selector) {
+  function visit_element(element) {
+    element.classList.add("seen");
+    element.classList.add("selected");
+  
+  if (!scroll_selector) {
+    scroll_selector = "tr:has(.selected) td.line-number"
+  }
+  
+  const scroll_to = document.querySelector(scroll_selector);
+  if (scroll_to) {
+    scroll_to.scrollIntoView({behavior: "smooth", block: "center", inline: "end"});
+  }
+  
+  }
+  
+  function select_one() {
+    if (!reverse) {
+      const previously_selected = document.querySelector(".selected");
+      
+      if (previously_selected) {
+        previously_selected.classList.remove("selected");
+      }
+      
+      return document.querySelector(selector + ":not(.seen)");
+    } else {      
+      const previously_selected = document.querySelector(".selected");
+      
+      if (previously_selected) {
+        previously_selected.classList.remove("selected");
+        previously_selected.classList.remove("seen");
+      }
+      
+      const nodes = document.querySelectorAll(selector + ".seen");
+      if (nodes) {
+        const last = nodes[nodes.length - 1]; // last
+        return last;
+      } else {
+        return undefined;
+      }
+    }
+  }
+  
+  function reset_all() {
+    if (!reverse) {
+      const all_seen = document.querySelectorAll(selector + ".seen");
+  
+      if (all_seen) {
+        all_seen.forEach(e => e.classList.remove("seen"));
+      }
+    } else {
+      const all_seen = document.querySelectorAll(selector + ":not(.seen)");
+  
+      if (all_seen) {
+        all_seen.forEach(e => e.classList.add("seen"));
+      }
+    }
+    
+  }
+  
+  const uncovered = select_one();
+
+  if (uncovered) {
+    visit_element(uncovered);
+  } else {
+    reset_all();
+    
+    
+    const uncovered = select_one();
+    
+    if (uncovered) {
+      visit_element(uncovered);
+    }
+  }
+}
+
+function next_line(reverse) { 
+  next_uncovered("td.uncovered-line", reverse)
+}
+
+function next_region(reverse) { 
+  next_uncovered("span.red.region", reverse);
+}
+
+function next_branch(reverse) { 
+  next_uncovered("span.red.branch", reverse);
+}
+
+document.addEventListener("keypress", function(event) {
+  console.log(event);
+  const reverse = event.shiftKey;
+  if (event.code == "KeyL") {
+    next_line(reverse);
+  }
+  if (event.code == "KeyB") {
+    next_branch(reverse);
+  }
+  if (event.code == "KeyR") {
+    next_region(reverse);
+  }
+  
+});
+)javascript";
+
 const char *CSSForCoverage =
     R"(.red {
   background-color: #f004;
@@ -95,6 +202,9 @@ const char *CSSForCoverage =
 .cyan {
   background-color: cyan;
 }
+html {
+  scroll-behavior: smooth;
+}
 body {
   font-family: -apple-system, sans-serif;
 }
@@ -171,6 +281,18 @@ table {
   text-align: right;
   color: #d00;
 }
+.uncovered-line.selected {
+  color: #f00;
+  font-weight: bold;
+}
+.region.red.selected {
+  background-color: #f008;
+  font-weight: bold;
+}
+.branch.red.selected {
+  background-color: #f008;
+  font-weight: bold;
+}
 .tooltip {
   position: relative;
   display: inline;
@@ -231,12 +353,19 @@ tr:hover {
 tr:last-child {
   border-bottom: none;
 }
-tr:has(> td >a:target) {
-  background-color: #50f6;
+tr:has(> td >a:target), tr:has(> td.uncovered-line.selected) {
+  background-color: #8884;
 }
 a {
   color: inherit;
 }
+.control {
+  position: fixed;
+  top: 0em;
+  right: 0em;
+  padding: 1em;
+  background: #FFF8;
+}
 @media (prefers-color-scheme: dark) {
   body {
     background-color: #222;
@@ -254,6 +383,12 @@ a {
   .tooltip {
     background-color: #068;
   }
+  .control {
+    background: #2228;
+  }
+  tr:has(> td >a:target), tr:has(> td.uncovered-line.selected) {
+    background-color: #8884;
+  }
 }
 )";
 
@@ -298,8 +433,18 @@ std::string getPathToStyle(StringRef ViewPath) {
   return PathToStyle + "style.css";
 }
 
+std::string getPathToJavaScript(StringRef ViewPath) {
+  std::string PathToJavaScript;
+  std::string PathSep = std::string(sys::path::get_separator());
+  unsigned NumSeps = ViewPath.count(PathSep);
+  for (unsigned I = 0, E = NumSeps; I < E; ++I)
+    PathToJavaScript += ".." + PathSep;
+  return PathToJavaScript + "control.js";
+}
+
 void emitPrelude(raw_ostream &OS, const CoverageViewOptions &Opts,
-                 const std::string &PathToStyle = "") {
+                 const std::string &PathToStyle = "",
+                 const std::string &PathToJavaScript = "") {
   OS << "<!doctype html>"
         "<html>"
      << BeginHeader;
@@ -311,6 +456,12 @@ void emitPrelude(raw_ostream &OS, const CoverageViewOptions &Opts,
     OS << "<link rel='stylesheet' type='text/css' href='"
        << escape(PathToStyle, Opts) << "'>";
 
+  // Link to a JavaScript if one is available
+  if (PathToJavaScript.empty())
+    OS << "<script>" << JSForCoverage << "</script>";
+  else
+    OS << "<script src='" << escape(PathToJavaScript, Opts) << "'></script>";
+
   OS << EndHeader << "<body>";
 }
 
@@ -390,7 +541,8 @@ CoveragePrinterHTML::createViewFile(StringRef Path, bool InToplevel) {
     emitPrelude(*OS.get(), Opts);
   } else {
     std::string ViewPath = getOutputPath(Path, "html", InToplevel);
-    emitPrelude(*OS.get(), Opts, getPathToStyle(ViewPath));
+    emitPrelude(*OS.get(), Opts, getPathToStyle(ViewPath),
+                getPathToJavaScript(ViewPath));
   }
 
   return std::move(OS);
@@ -442,6 +594,17 @@ Error CoveragePrinterHTML::emitStyleSheet() {
   return Error::success();
 }
 
+Error CoveragePrinterHTML::emitJavaScript() {
+  auto JSOrErr = createOutputStream("control", "js", /*InToplevel=*/true);
+  if (Error E = JSOrErr.takeError())
+    return E;
+
+  OwnedStream JS = std::move(JSOrErr.get());
+  JS->operator<<(JSForCoverage);
+
+  return Error::success();
+}
+
 void CoveragePrinterHTML::emitReportHeader(raw_ostream &OSRef,
                                            const std::string &Title) {
   // Emit some basic information about the coverage report.
@@ -487,6 +650,10 @@ Error CoveragePrinterHTML::createIndexFile(
   if (Error E = emitStyleSheet())
     return E;
 
+  // Emit the JavaScript UI implementation
+  if (Error E = emitJavaScript())
+    return E;
+
   // Emit a file index along with some coverage statistics.
   auto OSOrErr = createOutputStream("index", "html", /*InToplevel=*/true);
   if (Error E = OSOrErr.takeError())
@@ -495,7 +662,7 @@ Error CoveragePrinterHTML::createIndexFile(
   raw_ostream &OSRef = *OS.get();
 
   assert(Opts.hasOutputDirectory() && "No output directory for index file");
-  emitPrelude(OSRef, Opts, getPathToStyle(""));
+  emitPrelude(OSRef, Opts, getPathToStyle(""), getPathToJavaScript(""));
 
   emitReportHeader(OSRef, "Coverage Report");
 
@@ -561,7 +728,8 @@ struct CoveragePrinterHTMLDirectory::Reporter : public DirectoryCoverageReport {
 
     auto IndexHtmlPath = Printer.getOutputPath((LCPath + "index").str(), "html",
                                                /*InToplevel=*/false);
-    emitPrelude(OSRef, Options, getPathToStyle(IndexHtmlPath));
+    emitPrelude(OSRef, Options, getPathToStyle(IndexHtmlPath),
+                getPathToJavaScript(IndexHtmlPath));
 
     auto NavLink = buildTitleLinks(LCPath);
     Printer.emitReportHeader(OSRef, "Coverage Report (" + NavLink + ")");
@@ -699,6 +867,10 @@ Error CoveragePrinterHTMLDirectory::createIndexFile(
   if (Error E = emitStyleSheet())
     return E;
 
+  // Emit the JavaScript UI implementation
+  if (Error E = emitJavaScript())
+    return E;
+
   // Emit index files in every subdirectory.
   Reporter Report(*this, Coverage, Filters);
   auto TotalsOrErr = Report.prepareDirectoryReports(SourceFiles);
@@ -800,7 +972,10 @@ void SourceCoverageViewHTML::renderLine(raw_ostream &OS, LineRef L,
   auto Highlight = [&](const std::string &Snippet, unsigned LC, unsigned RC) {
     if (getOptions().Debug)
       HighlightedRanges.emplace_back(LC, RC);
-    return tag("span", Snippet, std::string(*Color));
+    if (Snippet.empty())
+      return tag("span", Snippet, std::string(*Color));
+    else
+      return tag("span", Snippet, "region " + std::string(*Color));
   };
 
   auto CheckIfUncovered = [&](const CoverageSegment *S) {
@@ -883,7 +1058,9 @@ void SourceCoverageViewHTML::renderLineCoverageColumn(
   if (Line.isMapped())
     Count = tag("pre", formatCount(Line.getExecutionCount()));
   std::string CoverageClass =
-      (Line.getExecutionCount() > 0) ? "covered-line" : "uncovered-line";
+      (Line.getExecutionCount() > 0)
+          ? "covered-line"
+          : (Line.isMapped() ? "uncovered-line" : "skipped-line");
   OS << tag("td", Count, CoverageClass);
 }
 
@@ -957,7 +1134,7 @@ void SourceCoverageViewHTML::renderBranchView(raw_ostream &OS, BranchView &BRV,
     }
 
     // Display TrueCount or TruePercent.
-    std::string TrueColor = R.ExecutionCount ? "None" : "red";
+    std::string TrueColor = R.ExecutionCount ? "None" : "red branch";
     std::string TrueCovClass =
         (R.ExecutionCount > 0) ? "covered-line" : "uncovered-line";
 
@@ -969,7 +1146,7 @@ void SourceCoverageViewHTML::renderBranchView(raw_ostream &OS, BranchView &BRV,
       OS << format("%0.2f", TruePercent) << "%, ";
 
     // Display FalseCount or FalsePercent.
-    std::string FalseColor = R.FalseExecutionCount ? "None" : "red";
+    std::string FalseColor = R.FalseExecutionCount ? "None" : "red branch";
     std::string FalseCovClass =
         (R.FalseExecutionCount > 0) ? "covered-line" : "uncovered-line";
 
@@ -1053,24 +1230,21 @@ void SourceCoverageViewHTML::renderTitle(raw_ostream &OS, StringRef Title) {
   if (getOptions().hasCreatedTime())
     OS << tag(CreatedTimeTag,
               escape(getOptions().CreatedTimeStr, getOptions()));
+
+  OS << tag("span",
+            a("javascript:next_line()", "next uncovered line (L)") + ", " +
+                a("javascript:next_region()", "next uncovered region (R)") +
+                ", " +
+                a("javascript:next_branch()", "next uncovered branch (B)"),
+            "control");
 }
 
 void SourceCoverageViewHTML::renderTableHeader(raw_ostream &OS,
-                                               unsigned FirstUncoveredLineNo,
                                                unsigned ViewDepth) {
-  std::string SourceLabel;
-  if (FirstUncoveredLineNo == 0) {
-    SourceLabel = tag("td", tag("pre", "Source"));
-  } else {
-    std::string LinkTarget = "#L" + utostr(uint64_t(FirstUncoveredLineNo));
-    SourceLabel =
-        tag("td", tag("pre", "Source (" +
-                                 a(LinkTarget, "jump to first uncovered line") +
-                                 ")"));
-  }
+  std::string Links;
 
   renderLinePrefix(OS, ViewDepth);
-  OS << tag("td", tag("pre", "Line")) << tag("td", tag("pre", "Count"))
-     << SourceLabel;
+  OS << tag("td", tag("pre", "Line")) << tag("td", tag("pre", "Count"));
+  OS << tag("td", tag("pre", "Source" + Links));
   renderLineSuffix(OS, ViewDepth);
 }
diff --git a/llvm/tools/llvm-cov/SourceCoverageViewHTML.h b/llvm/tools/llvm-cov/SourceCoverageViewHTML.h
index 32313a3963c43..9b7391d0043ec 100644
--- a/llvm/tools/llvm-cov/SourceCoverageViewHTML.h
+++ b/llvm/tools/llvm-cov/SourceCoverageViewHTML.h
@@ -38,6 +38,7 @@ class CoveragePrinterHTML : public CoveragePrinter {
 
 protected:
   Error emitStyleSheet();
+  Error emitJavaScript();
   void emitReportHeader(raw_ostream &OSRef, const std::string &Title);
 
 private:
@@ -105,8 +106,7 @@ class SourceCoverageViewHTML : public SourceCoverageView {
 
   void renderTitle(raw_ostream &OS, StringRef Title) override;
 
-  void renderTableHeader(raw_ostream &OS, unsigned FirstUncoveredLineNo,
-                         unsigned IndentLevel) override;
+  void renderTableHeader(raw_ostream &OS, unsigned IndentLevel) override;
 
 public:
   SourceCoverageViewHTML(StringRef SourceName, const MemoryBuffer &File,
diff --git a/llvm/tools/llvm-cov/SourceCoverageViewText.cpp b/llvm/tools/llvm-cov/SourceCoverageViewText.cpp
index 580da45ecfc0d..cab60c2d9034e 100644
--- a/llvm/tools/llvm-cov/SourceCoverageViewText.cpp
+++ b/llvm/tools/llvm-cov/SourceCoverageViewText.cpp
@@ -414,5 +414,4 @@ void SourceCoverageViewText::renderTitle(raw_ostream &OS, StringRef Title) {
         << getOptions().CreatedTimeStr << "\n";
 }
 
-void SourceCoverageViewText::renderTableHeader(raw_ostream &, unsigned,
-                                               unsigned) {}
+void SourceCoverageViewText::renderTableHeader(raw_ostream &, unsigned) {}
diff --git a/llvm/tools/llvm-cov/SourceCoverageViewText.h b/llvm/tools/llvm-cov/SourceCoverageViewText.h
index 7cb47fcbf42bd..25a161b096200 100644
--- a/llvm/tools/llvm-cov/SourceCoverageViewText.h
+++ b/llvm/tools/llvm-cov/SourceCoverageViewText.h
@@ -93,8 +93,7 @@ class SourceCoverageViewText : public SourceCoverageView {
 
   void renderTitle(raw_ostream &OS, StringRef Title) override;
 
-  void renderTableHeader(raw_ostream &OS, unsigned FirstUncoveredLineNo,
-                         unsigned IndentLevel) override;
+  void renderTableHeader(raw_ostream &OS, unsigned IndentLevel) override;
 
 public:
   SourceCoverageViewText(StringRef SourceName, const MemoryBuffer &File,

>From bba5951b6f9cd77047cafc554b20144b33602298 Mon Sep 17 00:00:00 2001
From: Shivam Gupta <shivam98.tkg at gmail.com>
Date: Tue, 18 Jun 2024 01:09:30 +0530
Subject: [PATCH 11/26] [MLIR] Fix an assert that contains a mistake in
 conditional operator (#95668)

This is described as (N2) in https://pvs-studio.com/en/blog/posts/cpp/1126/
and was caught by the PVS-Studio analyzer.

Warning messages:
V502 Perhaps the '?:' operator works in a different way than it was
expected. The '?:' operator has a lower priority than the '+' operator.
LoopEmitter.cpp 983
V502 Perhaps the '?:' operator works in a different way than it was
expected. The '?:' operator has a lower priority than the '+' operator.
LoopEmitter.cpp 1039

The assert should be
assert(bArgs.size() == reduc.size() + (needsUniv ? 1 : 0));
since + binds tighter than ==, and ?: binds loosest of the three.

This can be further reduced to
assert(aArgs.size() == reduc.size() + needsUniv);
because needsUniv is a bool value that is implicitly converted to 0 or 1.
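
For illustration, here is a minimal standalone sketch (hypothetical sizes,
not the LoopEmitter code) showing how `?:` binds below both `+` and `==`:

```cpp
#include <cassert>
#include <cstddef>

int main() {
  std::size_t nArgs = 4, nReduc = 3;
  bool needsUniv = true;

  // Parses as: (nArgs == nReduc + needsUniv) ? 1 : 0 -- the comparison is
  // evaluated first, so the trailing "? 1 : 0" only maps true/false to 1/0.
  assert(nArgs == nReduc + needsUniv ? 1 : 0);

  // What was meant, written explicitly:
  assert(nArgs == nReduc + (needsUniv ? 1 : 0));

  // Equivalent, since bool promotes to 0 or 1 in arithmetic:
  assert(nArgs == nReduc + needsUniv);
  return 0;
}
```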
---
 .../lib/Dialect/SparseTensor/Transforms/Utils/LoopEmitter.cpp | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mlir/lib/Dialect/SparseTensor/Transforms/Utils/LoopEmitter.cpp b/mlir/lib/Dialect/SparseTensor/Transforms/Utils/LoopEmitter.cpp
index 05883f1cefdf3..fe0e515a2d180 100644
--- a/mlir/lib/Dialect/SparseTensor/Transforms/Utils/LoopEmitter.cpp
+++ b/mlir/lib/Dialect/SparseTensor/Transforms/Utils/LoopEmitter.cpp
@@ -542,7 +542,7 @@ std::pair<Operation *, Value> LoopEmitter::emitWhileLoopOverTensorsAtLvls(
   }
   // The remaining block arguments are user-provided reduction values and an
   // optional universal index. Make sure their sizes match.
-  assert(bArgs.size() == reduc.size() + needsUniv ? 1 : 0);
+  assert(bArgs.size() == reduc.size() + needsUniv);
   builder.create<scf::ConditionOp>(loc, whileCond, before->getArguments());
 
   // Generates loop body.
@@ -560,7 +560,7 @@ std::pair<Operation *, Value> LoopEmitter::emitWhileLoopOverTensorsAtLvls(
   }
 
   // In-place update on reduction variable.
-  assert(aArgs.size() == reduc.size() + needsUniv ? 1 : 0);
+  assert(aArgs.size() == reduc.size() + needsUniv);
   for (unsigned i = 0, e = reduc.size(); i < e; i++)
     reduc[i] = aArgs[i];
 

>From 3b997294d6117241477ab36be0595040f8278707 Mon Sep 17 00:00:00 2001
From: Matt Arsenault <Matthew.Arsenault at amd.com>
Date: Mon, 17 Jun 2024 21:44:52 +0200
Subject: [PATCH 12/26] AMDGPU: Remove .v2bf16 buffer atomic fadd intrinsics
 (#95783)

These are redundant with the unsuffixed versions, and have a name
collision with surprising behavior when the base intrinsic is used with
v2bf16.

The global and flat variants should be removed too, but those are complicated
due to using v2i16 in place of the natural v2bf16. Those cases can soon be
completely deleted in favor of atomicrmw.

The GlobalISel codegen change is broken and substitutes handling as bf16
for handling as f16, but it's a bug that this passed the IRTranslator in the first
place.
---
 llvm/include/llvm/IR/IntrinsicsAMDGPU.td      | 44 +------------------
 llvm/lib/Target/AMDGPU/AMDGPUGISel.td         |  1 -
 llvm/lib/Target/AMDGPU/AMDGPUISelLowering.cpp |  1 -
 llvm/lib/Target/AMDGPU/AMDGPUISelLowering.h   |  1 -
 .../lib/Target/AMDGPU/AMDGPULegalizerInfo.cpp |  9 ----
 .../Target/AMDGPU/AMDGPURegisterBankInfo.cpp  |  2 -
 .../Target/AMDGPU/AMDGPUSearchableTables.td   |  4 --
 llvm/lib/Target/AMDGPU/BUFInstructions.td     |  2 +-
 llvm/lib/Target/AMDGPU/SIISelLowering.cpp     |  9 ----
 llvm/lib/Target/AMDGPU/SIInstrInfo.td         |  1 -
 llvm/lib/Target/AMDGPU/SIInstructions.td      |  1 -
 .../test/CodeGen/AMDGPU/fp-atomics-gfx1200.ll | 12 ++---
 ...mdgcn.ptr.buffer.atomic.fadd_rtn_errors.ll | 10 +++--
 13 files changed, 15 insertions(+), 82 deletions(-)

diff --git a/llvm/include/llvm/IR/IntrinsicsAMDGPU.td b/llvm/include/llvm/IR/IntrinsicsAMDGPU.td
index e6b69b39911a9..45f1092094572 100644
--- a/llvm/include/llvm/IR/IntrinsicsAMDGPU.td
+++ b/llvm/include/llvm/IR/IntrinsicsAMDGPU.td
@@ -1337,27 +1337,9 @@ def int_amdgcn_raw_ptr_buffer_atomic_cmpswap : Intrinsic<
 
 // gfx908 intrinsic
 def int_amdgcn_raw_buffer_atomic_fadd : AMDGPURawBufferAtomic<llvm_anyfloat_ty>;
+
+// Supports float and <2 x half> on gfx908. Supports v2bf16 on gfx90a, gfx940, gfx12+.
 def int_amdgcn_raw_ptr_buffer_atomic_fadd : AMDGPURawPtrBufferAtomic<llvm_anyfloat_ty>;
-// gfx12+ intrinsic
-def int_amdgcn_raw_buffer_atomic_fadd_v2bf16 : Intrinsic <
-  [llvm_v2bf16_ty],
-  [llvm_v2bf16_ty,
-   llvm_v4i32_ty,
-   llvm_i32_ty,
-   llvm_i32_ty,
-   llvm_i32_ty],
- [ImmArg<ArgIndex<4>>, IntrWillReturn, IntrNoCallback, IntrNoFree], "", [SDNPMemOperand]>,
- AMDGPURsrcIntrinsic<1, 0>;
-def int_amdgcn_raw_ptr_buffer_atomic_fadd_v2bf16 : Intrinsic <
-  [llvm_v2bf16_ty],
-  [llvm_v2bf16_ty,
-   AMDGPUBufferRsrcTy,
-   llvm_i32_ty,
-   llvm_i32_ty,
-   llvm_i32_ty],
- [IntrArgMemOnly, NoCapture<ArgIndex<1>>,
-  ImmArg<ArgIndex<4>>, IntrWillReturn, IntrNoCallback, IntrNoFree], "", [SDNPMemOperand]>,
- AMDGPURsrcIntrinsic<1, 0>;
 
 class AMDGPUStructBufferAtomic<LLVMType data_ty = llvm_any_ty> : Intrinsic <
   [data_ty],
@@ -1434,28 +1416,6 @@ def int_amdgcn_struct_ptr_buffer_atomic_cmpswap : Intrinsic<
 // gfx908 intrinsic
 def int_amdgcn_struct_buffer_atomic_fadd : AMDGPUStructBufferAtomic<llvm_anyfloat_ty>;
 def int_amdgcn_struct_ptr_buffer_atomic_fadd : AMDGPUStructPtrBufferAtomic<llvm_anyfloat_ty>;
-// gfx12 intrinsic
-def int_amdgcn_struct_buffer_atomic_fadd_v2bf16 : Intrinsic <
-  [llvm_v2bf16_ty],
-  [llvm_v2bf16_ty,
-   llvm_v4i32_ty,
-   llvm_i32_ty,
-   llvm_i32_ty,
-   llvm_i32_ty,
-   llvm_i32_ty],
-  [ImmArg<ArgIndex<5>>, IntrWillReturn, IntrNoCallback, IntrNoFree], "", [SDNPMemOperand]>,
-  AMDGPURsrcIntrinsic<1, 0>;
-def int_amdgcn_struct_ptr_buffer_atomic_fadd_v2bf16 : Intrinsic <
-  [llvm_v2bf16_ty],
-  [llvm_v2bf16_ty,
-   AMDGPUBufferRsrcTy,
-   llvm_i32_ty,
-   llvm_i32_ty,
-   llvm_i32_ty,
-   llvm_i32_ty],
-  [IntrArgMemOnly, NoCapture<ArgIndex<1>>,
-   ImmArg<ArgIndex<5>>, IntrWillReturn, IntrNoCallback, IntrNoFree], "", [SDNPMemOperand]>,
-  AMDGPURsrcIntrinsic<1, 0>;
 
 // gfx90a intrinsics
 def int_amdgcn_struct_buffer_atomic_fmin : AMDGPUStructBufferAtomic<llvm_anyfloat_ty>;
diff --git a/llvm/lib/Target/AMDGPU/AMDGPUGISel.td b/llvm/lib/Target/AMDGPU/AMDGPUGISel.td
index 231db188e65dc..d81c18875eebd 100644
--- a/llvm/lib/Target/AMDGPU/AMDGPUGISel.td
+++ b/llvm/lib/Target/AMDGPU/AMDGPUGISel.td
@@ -290,7 +290,6 @@ def : GINodeEquiv<G_AMDGPU_BUFFER_ATOMIC_XOR, SIbuffer_atomic_xor>;
 def : GINodeEquiv<G_AMDGPU_BUFFER_ATOMIC_INC, SIbuffer_atomic_inc>;
 def : GINodeEquiv<G_AMDGPU_BUFFER_ATOMIC_DEC, SIbuffer_atomic_dec>;
 def : GINodeEquiv<G_AMDGPU_BUFFER_ATOMIC_FADD, SIbuffer_atomic_fadd>;
-def : GINodeEquiv<G_AMDGPU_BUFFER_ATOMIC_FADD_BF16, SIbuffer_atomic_fadd_bf16>;
 def : GINodeEquiv<G_AMDGPU_BUFFER_ATOMIC_FMIN, SIbuffer_atomic_fmin>;
 def : GINodeEquiv<G_AMDGPU_BUFFER_ATOMIC_FMAX, SIbuffer_atomic_fmax>;
 def : GINodeEquiv<G_AMDGPU_BUFFER_ATOMIC_CMPSWAP, SIbuffer_atomic_cmpswap>;
diff --git a/llvm/lib/Target/AMDGPU/AMDGPUISelLowering.cpp b/llvm/lib/Target/AMDGPU/AMDGPUISelLowering.cpp
index 18193d8807597..519e623306eb1 100644
--- a/llvm/lib/Target/AMDGPU/AMDGPUISelLowering.cpp
+++ b/llvm/lib/Target/AMDGPU/AMDGPUISelLowering.cpp
@@ -5564,7 +5564,6 @@ const char* AMDGPUTargetLowering::getTargetNodeName(unsigned Opcode) const {
   NODE_NAME_CASE(BUFFER_ATOMIC_CMPSWAP)
   NODE_NAME_CASE(BUFFER_ATOMIC_CSUB)
   NODE_NAME_CASE(BUFFER_ATOMIC_FADD)
-  NODE_NAME_CASE(BUFFER_ATOMIC_FADD_BF16)
   NODE_NAME_CASE(BUFFER_ATOMIC_FMIN)
   NODE_NAME_CASE(BUFFER_ATOMIC_FMAX)
   NODE_NAME_CASE(BUFFER_ATOMIC_COND_SUB_U32)
diff --git a/llvm/lib/Target/AMDGPU/AMDGPUISelLowering.h b/llvm/lib/Target/AMDGPU/AMDGPUISelLowering.h
index 71c4334029b43..206bb46b6c863 100644
--- a/llvm/lib/Target/AMDGPU/AMDGPUISelLowering.h
+++ b/llvm/lib/Target/AMDGPU/AMDGPUISelLowering.h
@@ -615,7 +615,6 @@ enum NodeType : unsigned {
   BUFFER_ATOMIC_CMPSWAP,
   BUFFER_ATOMIC_CSUB,
   BUFFER_ATOMIC_FADD,
-  BUFFER_ATOMIC_FADD_BF16,
   BUFFER_ATOMIC_FMIN,
   BUFFER_ATOMIC_FMAX,
   BUFFER_ATOMIC_COND_SUB_U32,
diff --git a/llvm/lib/Target/AMDGPU/AMDGPULegalizerInfo.cpp b/llvm/lib/Target/AMDGPU/AMDGPULegalizerInfo.cpp
index 973b6b8cce177..0c7b1968e551c 100644
--- a/llvm/lib/Target/AMDGPU/AMDGPULegalizerInfo.cpp
+++ b/llvm/lib/Target/AMDGPU/AMDGPULegalizerInfo.cpp
@@ -6018,11 +6018,6 @@ static unsigned getBufferAtomicPseudo(Intrinsic::ID IntrID) {
   case Intrinsic::amdgcn_struct_buffer_atomic_fadd:
   case Intrinsic::amdgcn_struct_ptr_buffer_atomic_fadd:
     return AMDGPU::G_AMDGPU_BUFFER_ATOMIC_FADD;
-  case Intrinsic::amdgcn_raw_buffer_atomic_fadd_v2bf16:
-  case Intrinsic::amdgcn_struct_buffer_atomic_fadd_v2bf16:
-  case Intrinsic::amdgcn_raw_ptr_buffer_atomic_fadd_v2bf16:
-  case Intrinsic::amdgcn_struct_ptr_buffer_atomic_fadd_v2bf16:
-    return AMDGPU::G_AMDGPU_BUFFER_ATOMIC_FADD_BF16;
   case Intrinsic::amdgcn_raw_buffer_atomic_fmin:
   case Intrinsic::amdgcn_raw_ptr_buffer_atomic_fmin:
   case Intrinsic::amdgcn_struct_buffer_atomic_fmin:
@@ -7330,10 +7325,6 @@ bool AMDGPULegalizerInfo::legalizeIntrinsic(LegalizerHelper &Helper,
   case Intrinsic::amdgcn_raw_ptr_buffer_atomic_fadd:
   case Intrinsic::amdgcn_struct_buffer_atomic_fadd:
   case Intrinsic::amdgcn_struct_ptr_buffer_atomic_fadd:
-  case Intrinsic::amdgcn_raw_buffer_atomic_fadd_v2bf16:
-  case Intrinsic::amdgcn_raw_ptr_buffer_atomic_fadd_v2bf16:
-  case Intrinsic::amdgcn_struct_buffer_atomic_fadd_v2bf16:
-  case Intrinsic::amdgcn_struct_ptr_buffer_atomic_fadd_v2bf16:
     return legalizeBufferAtomic(MI, B, IntrID);
   case Intrinsic::amdgcn_rsq_clamp:
     return legalizeRsqClampIntrinsic(MI, MRI, B);
diff --git a/llvm/lib/Target/AMDGPU/AMDGPURegisterBankInfo.cpp b/llvm/lib/Target/AMDGPU/AMDGPURegisterBankInfo.cpp
index 7ebd674757fbc..313d53a1524d2 100644
--- a/llvm/lib/Target/AMDGPU/AMDGPURegisterBankInfo.cpp
+++ b/llvm/lib/Target/AMDGPU/AMDGPURegisterBankInfo.cpp
@@ -3079,7 +3079,6 @@ void AMDGPURegisterBankInfo::applyMappingImpl(
     return;
   }
   case AMDGPU::G_AMDGPU_BUFFER_ATOMIC_FADD:
-  case AMDGPU::G_AMDGPU_BUFFER_ATOMIC_FADD_BF16:
   case AMDGPU::G_AMDGPU_BUFFER_ATOMIC_FMIN:
   case AMDGPU::G_AMDGPU_BUFFER_ATOMIC_FMAX: {
     applyDefaultMapping(OpdMapper);
@@ -4376,7 +4375,6 @@ AMDGPURegisterBankInfo::getInstrMapping(const MachineInstr &MI) const {
   case AMDGPU::G_AMDGPU_BUFFER_ATOMIC_INC:
   case AMDGPU::G_AMDGPU_BUFFER_ATOMIC_DEC:
   case AMDGPU::G_AMDGPU_BUFFER_ATOMIC_FADD:
-  case AMDGPU::G_AMDGPU_BUFFER_ATOMIC_FADD_BF16:
   case AMDGPU::G_AMDGPU_BUFFER_ATOMIC_FMIN:
   case AMDGPU::G_AMDGPU_BUFFER_ATOMIC_FMAX: {
     // vdata_out
diff --git a/llvm/lib/Target/AMDGPU/AMDGPUSearchableTables.td b/llvm/lib/Target/AMDGPU/AMDGPUSearchableTables.td
index e84d39a2895c8..7b29d573b6101 100644
--- a/llvm/lib/Target/AMDGPU/AMDGPUSearchableTables.td
+++ b/llvm/lib/Target/AMDGPU/AMDGPUSearchableTables.td
@@ -269,7 +269,6 @@ def : SourceOfDivergence<int_amdgcn_raw_buffer_atomic_xor>;
 def : SourceOfDivergence<int_amdgcn_raw_buffer_atomic_inc>;
 def : SourceOfDivergence<int_amdgcn_raw_buffer_atomic_dec>;
 def : SourceOfDivergence<int_amdgcn_raw_buffer_atomic_fadd>;
-def : SourceOfDivergence<int_amdgcn_raw_buffer_atomic_fadd_v2bf16>;
 def : SourceOfDivergence<int_amdgcn_raw_buffer_atomic_fmin>;
 def : SourceOfDivergence<int_amdgcn_raw_buffer_atomic_fmax>;
 def : SourceOfDivergence<int_amdgcn_raw_buffer_atomic_cmpswap>;
@@ -287,7 +286,6 @@ def : SourceOfDivergence<int_amdgcn_raw_ptr_buffer_atomic_xor>;
 def : SourceOfDivergence<int_amdgcn_raw_ptr_buffer_atomic_inc>;
 def : SourceOfDivergence<int_amdgcn_raw_ptr_buffer_atomic_dec>;
 def : SourceOfDivergence<int_amdgcn_raw_ptr_buffer_atomic_fadd>;
-def : SourceOfDivergence<int_amdgcn_raw_ptr_buffer_atomic_fadd_v2bf16>;
 def : SourceOfDivergence<int_amdgcn_raw_ptr_buffer_atomic_fmin>;
 def : SourceOfDivergence<int_amdgcn_raw_ptr_buffer_atomic_fmax>;
 def : SourceOfDivergence<int_amdgcn_raw_ptr_buffer_atomic_cmpswap>;
@@ -305,7 +303,6 @@ def : SourceOfDivergence<int_amdgcn_struct_buffer_atomic_xor>;
 def : SourceOfDivergence<int_amdgcn_struct_buffer_atomic_inc>;
 def : SourceOfDivergence<int_amdgcn_struct_buffer_atomic_dec>;
 def : SourceOfDivergence<int_amdgcn_struct_buffer_atomic_fadd>;
-def : SourceOfDivergence<int_amdgcn_struct_buffer_atomic_fadd_v2bf16>;
 def : SourceOfDivergence<int_amdgcn_struct_buffer_atomic_fmin>;
 def : SourceOfDivergence<int_amdgcn_struct_buffer_atomic_fmax>;
 def : SourceOfDivergence<int_amdgcn_struct_buffer_atomic_cmpswap>;
@@ -323,7 +320,6 @@ def : SourceOfDivergence<int_amdgcn_struct_ptr_buffer_atomic_xor>;
 def : SourceOfDivergence<int_amdgcn_struct_ptr_buffer_atomic_inc>;
 def : SourceOfDivergence<int_amdgcn_struct_ptr_buffer_atomic_dec>;
 def : SourceOfDivergence<int_amdgcn_struct_ptr_buffer_atomic_fadd>;
-def : SourceOfDivergence<int_amdgcn_struct_ptr_buffer_atomic_fadd_v2bf16>;
 def : SourceOfDivergence<int_amdgcn_struct_ptr_buffer_atomic_fmin>;
 def : SourceOfDivergence<int_amdgcn_struct_ptr_buffer_atomic_fmax>;
 def : SourceOfDivergence<int_amdgcn_struct_ptr_buffer_atomic_cmpswap>;
diff --git a/llvm/lib/Target/AMDGPU/BUFInstructions.td b/llvm/lib/Target/AMDGPU/BUFInstructions.td
index 43e5434ea2700..f5b6de15e19e7 100644
--- a/llvm/lib/Target/AMDGPU/BUFInstructions.td
+++ b/llvm/lib/Target/AMDGPU/BUFInstructions.td
@@ -1751,7 +1751,7 @@ let OtherPredicates = [HasAtomicCSubNoRtnInsts] in
 defm : SIBufferAtomicPat<"SIbuffer_atomic_csub", i32, "BUFFER_ATOMIC_CSUB", ["noret"]>;
 
 let SubtargetPredicate = isGFX12Plus in {
-  defm : SIBufferAtomicPat_Common<"SIbuffer_atomic_fadd_bf16", v2bf16, "BUFFER_ATOMIC_PK_ADD_BF16_VBUFFER">;
+  defm : SIBufferAtomicPat_Common<"SIbuffer_atomic_fadd", v2bf16, "BUFFER_ATOMIC_PK_ADD_BF16_VBUFFER">;
   defm : SIBufferAtomicPat_Common<"SIbuffer_atomic_cond_sub_u32", i32, "BUFFER_ATOMIC_COND_SUB_U32_VBUFFER", ["ret"]>;
 
   let OtherPredicates = [HasAtomicCSubNoRtnInsts] in
diff --git a/llvm/lib/Target/AMDGPU/SIISelLowering.cpp b/llvm/lib/Target/AMDGPU/SIISelLowering.cpp
index d9a163ded6bab..c436e03806dc8 100644
--- a/llvm/lib/Target/AMDGPU/SIISelLowering.cpp
+++ b/llvm/lib/Target/AMDGPU/SIISelLowering.cpp
@@ -8833,17 +8833,9 @@ SDValue SITargetLowering::LowerINTRINSIC_W_CHAIN(SDValue Op,
   case Intrinsic::amdgcn_raw_buffer_atomic_fadd:
   case Intrinsic::amdgcn_raw_ptr_buffer_atomic_fadd:
     return lowerRawBufferAtomicIntrin(Op, DAG, AMDGPUISD::BUFFER_ATOMIC_FADD);
-  case Intrinsic::amdgcn_raw_ptr_buffer_atomic_fadd_v2bf16:
-  case Intrinsic::amdgcn_raw_buffer_atomic_fadd_v2bf16:
-    return lowerRawBufferAtomicIntrin(Op, DAG,
-                                      AMDGPUISD::BUFFER_ATOMIC_FADD_BF16);
   case Intrinsic::amdgcn_struct_buffer_atomic_fadd:
   case Intrinsic::amdgcn_struct_ptr_buffer_atomic_fadd:
     return lowerStructBufferAtomicIntrin(Op, DAG, AMDGPUISD::BUFFER_ATOMIC_FADD);
-  case Intrinsic::amdgcn_struct_buffer_atomic_fadd_v2bf16:
-  case Intrinsic::amdgcn_struct_ptr_buffer_atomic_fadd_v2bf16:
-    return lowerStructBufferAtomicIntrin(Op, DAG,
-                                         AMDGPUISD::BUFFER_ATOMIC_FADD_BF16);
   case Intrinsic::amdgcn_raw_buffer_atomic_fmin:
   case Intrinsic::amdgcn_raw_ptr_buffer_atomic_fmin:
     return lowerRawBufferAtomicIntrin(Op, DAG, AMDGPUISD::BUFFER_ATOMIC_FMIN);
@@ -15841,7 +15833,6 @@ bool SITargetLowering::isSDNodeSourceOfDivergence(const SDNode *N,
   case AMDGPUISD::BUFFER_ATOMIC_CMPSWAP:
   case AMDGPUISD::BUFFER_ATOMIC_CSUB:
   case AMDGPUISD::BUFFER_ATOMIC_FADD:
-  case AMDGPUISD::BUFFER_ATOMIC_FADD_BF16:
   case AMDGPUISD::BUFFER_ATOMIC_FMIN:
   case AMDGPUISD::BUFFER_ATOMIC_FMAX:
     // Target-specific read-modify-write atomics are sources of divergence.
diff --git a/llvm/lib/Target/AMDGPU/SIInstrInfo.td b/llvm/lib/Target/AMDGPU/SIInstrInfo.td
index 6682763210411..9b9ff4a5d6996 100644
--- a/llvm/lib/Target/AMDGPU/SIInstrInfo.td
+++ b/llvm/lib/Target/AMDGPU/SIInstrInfo.td
@@ -222,7 +222,6 @@ defm SIbuffer_atomic_inc : SDBufferAtomic <"AMDGPUISD::BUFFER_ATOMIC_INC">;
 defm SIbuffer_atomic_dec : SDBufferAtomic <"AMDGPUISD::BUFFER_ATOMIC_DEC">;
 defm SIbuffer_atomic_csub : SDBufferAtomic <"AMDGPUISD::BUFFER_ATOMIC_CSUB">;
 defm SIbuffer_atomic_fadd : SDBufferAtomic <"AMDGPUISD::BUFFER_ATOMIC_FADD">;
-defm SIbuffer_atomic_fadd_bf16 : SDBufferAtomic <"AMDGPUISD::BUFFER_ATOMIC_FADD_BF16">;
 defm SIbuffer_atomic_fmin : SDBufferAtomic <"AMDGPUISD::BUFFER_ATOMIC_FMIN">;
 defm SIbuffer_atomic_fmax : SDBufferAtomic <"AMDGPUISD::BUFFER_ATOMIC_FMAX">;
 defm SIbuffer_atomic_cond_sub_u32 : SDBufferAtomic <"AMDGPUISD::BUFFER_ATOMIC_COND_SUB_U32">;
diff --git a/llvm/lib/Target/AMDGPU/SIInstructions.td b/llvm/lib/Target/AMDGPU/SIInstructions.td
index ba31027da92e8..e32bb8fec1f54 100644
--- a/llvm/lib/Target/AMDGPU/SIInstructions.td
+++ b/llvm/lib/Target/AMDGPU/SIInstructions.td
@@ -3892,7 +3892,6 @@ def G_AMDGPU_BUFFER_ATOMIC_XOR : BufferAtomicGenericInstruction;
 def G_AMDGPU_BUFFER_ATOMIC_INC : BufferAtomicGenericInstruction;
 def G_AMDGPU_BUFFER_ATOMIC_DEC : BufferAtomicGenericInstruction;
 def G_AMDGPU_BUFFER_ATOMIC_FADD : BufferAtomicGenericInstruction;
-def G_AMDGPU_BUFFER_ATOMIC_FADD_BF16 : BufferAtomicGenericInstruction;
 def G_AMDGPU_BUFFER_ATOMIC_FMIN : BufferAtomicGenericInstruction;
 def G_AMDGPU_BUFFER_ATOMIC_FMAX : BufferAtomicGenericInstruction;
 
diff --git a/llvm/test/CodeGen/AMDGPU/fp-atomics-gfx1200.ll b/llvm/test/CodeGen/AMDGPU/fp-atomics-gfx1200.ll
index 2f29a1a9aa768..9f339af0f5580 100644
--- a/llvm/test/CodeGen/AMDGPU/fp-atomics-gfx1200.ll
+++ b/llvm/test/CodeGen/AMDGPU/fp-atomics-gfx1200.ll
@@ -321,7 +321,7 @@ define amdgpu_ps void @raw_buffer_atomic_add_v2f16_noret_offset(<2 x half> %val,
 ;
 ; GFX12-GISEL-LABEL: raw_buffer_atomic_add_v2f16_noret_offset:
 ; GFX12-GISEL:       ; %bb.0:
-; GFX12-GISEL-NEXT:    buffer_atomic_pk_add_f16 v0, off, s[0:3], s4 offset:92
+; GFX12-GISEL-NEXT:    buffer_atomic_pk_add_bf16 v0, off, s[0:3], s4 offset:92
 ; GFX12-GISEL-NEXT:    s_nop 0
 ; GFX12-GISEL-NEXT:    s_sendmsg sendmsg(MSG_DEALLOC_VGPRS)
 ; GFX12-GISEL-NEXT:    s_endpgm
@@ -339,7 +339,7 @@ define amdgpu_ps void @raw_buffer_atomic_add_v2f16_noret(<2 x half> %val, <4 x i
 ;
 ; GFX12-GISEL-LABEL: raw_buffer_atomic_add_v2f16_noret:
 ; GFX12-GISEL:       ; %bb.0:
-; GFX12-GISEL-NEXT:    buffer_atomic_pk_add_f16 v0, v1, s[0:3], s4 offen
+; GFX12-GISEL-NEXT:    buffer_atomic_pk_add_bf16 v0, v1, s[0:3], s4 offen
 ; GFX12-GISEL-NEXT:    s_nop 0
 ; GFX12-GISEL-NEXT:    s_sendmsg sendmsg(MSG_DEALLOC_VGPRS)
 ; GFX12-GISEL-NEXT:    s_endpgm
@@ -356,7 +356,7 @@ define amdgpu_ps <2 x half> @raw_buffer_atomic_add_v2f16_ret_offset(<2 x half> %
 ;
 ; GFX12-GISEL-LABEL: raw_buffer_atomic_add_v2f16_ret_offset:
 ; GFX12-GISEL:       ; %bb.0:
-; GFX12-GISEL-NEXT:    buffer_atomic_pk_add_f16 v0, off, s[0:3], s4 offset:92 th:TH_ATOMIC_RETURN
+; GFX12-GISEL-NEXT:    buffer_atomic_pk_add_bf16 v0, off, s[0:3], s4 offset:92 th:TH_ATOMIC_RETURN
 ; GFX12-GISEL-NEXT:    s_wait_loadcnt 0x0
 ; GFX12-GISEL-NEXT:    ; return to shader part epilog
   %ret = call <2 x half> @llvm.amdgcn.raw.buffer.atomic.fadd.v2f16(<2 x half> %val, <4 x i32> %rsrc, i32 92, i32 %soffset, i32 0)
@@ -372,7 +372,7 @@ define amdgpu_ps <2 x half> @raw_buffer_atomic_add_v2f16_ret(<2 x half> %val, <4
 ;
 ; GFX12-GISEL-LABEL: raw_buffer_atomic_add_v2f16_ret:
 ; GFX12-GISEL:       ; %bb.0:
-; GFX12-GISEL-NEXT:    buffer_atomic_pk_add_f16 v0, v1, s[0:3], s4 offen th:TH_ATOMIC_RETURN
+; GFX12-GISEL-NEXT:    buffer_atomic_pk_add_bf16 v0, v1, s[0:3], s4 offen th:TH_ATOMIC_RETURN
 ; GFX12-GISEL-NEXT:    s_wait_loadcnt 0x0
 ; GFX12-GISEL-NEXT:    ; return to shader part epilog
   %ret = call <2 x half> @llvm.amdgcn.raw.buffer.atomic.fadd.v2f16(<2 x half> %val, <4 x i32> %rsrc, i32 %voffset, i32 %soffset, i32 0)
@@ -388,7 +388,7 @@ define amdgpu_ps float @struct_buffer_atomic_add_v2f16_ret(<2 x half> %val, <4 x
 ;
 ; GFX12-GISEL-LABEL: struct_buffer_atomic_add_v2f16_ret:
 ; GFX12-GISEL:       ; %bb.0:
-; GFX12-GISEL-NEXT:    buffer_atomic_pk_add_f16 v0, v[1:2], s[0:3], s4 idxen offen th:TH_ATOMIC_RETURN
+; GFX12-GISEL-NEXT:    buffer_atomic_pk_add_bf16 v0, v[1:2], s[0:3], s4 idxen offen th:TH_ATOMIC_RETURN
 ; GFX12-GISEL-NEXT:    s_wait_loadcnt 0x0
 ; GFX12-GISEL-NEXT:    ; return to shader part epilog
   %orig = call <2 x half> @llvm.amdgcn.struct.buffer.atomic.fadd.v2f16(<2 x half> %val, <4 x i32> %rsrc, i32 %vindex, i32 %voffset, i32 %soffset, i32 0)
@@ -406,7 +406,7 @@ define amdgpu_ps void @struct_buffer_atomic_add_v2f16_noret(<2 x half> %val, <4
 ;
 ; GFX12-GISEL-LABEL: struct_buffer_atomic_add_v2f16_noret:
 ; GFX12-GISEL:       ; %bb.0:
-; GFX12-GISEL-NEXT:    buffer_atomic_pk_add_f16 v0, v[1:2], s[0:3], s4 idxen offen
+; GFX12-GISEL-NEXT:    buffer_atomic_pk_add_bf16 v0, v[1:2], s[0:3], s4 idxen offen
 ; GFX12-GISEL-NEXT:    s_nop 0
 ; GFX12-GISEL-NEXT:    s_sendmsg sendmsg(MSG_DEALLOC_VGPRS)
 ; GFX12-GISEL-NEXT:    s_endpgm
diff --git a/llvm/test/CodeGen/AMDGPU/llvm.amdgcn.ptr.buffer.atomic.fadd_rtn_errors.ll b/llvm/test/CodeGen/AMDGPU/llvm.amdgcn.ptr.buffer.atomic.fadd_rtn_errors.ll
index a3b83c346c1bb..f8caf84d5c51a 100644
--- a/llvm/test/CodeGen/AMDGPU/llvm.amdgcn.ptr.buffer.atomic.fadd_rtn_errors.ll
+++ b/llvm/test/CodeGen/AMDGPU/llvm.amdgcn.ptr.buffer.atomic.fadd_rtn_errors.ll
@@ -17,10 +17,12 @@
 ; RUN: not --crash llc -global-isel=1 -mtriple=amdgcn-amd-amdhsa -mcpu=gfx908 -filetype=null %t/struct-ret-v2f16-error.ll 2>&1 | FileCheck -check-prefix=ERR-STRUCT-V2F16-GISEL %s
 ; RUN: not --crash llc -global-isel=1 -mtriple=amdgcn-amd-amdhsa -mcpu=gfx908 -filetype=null %t/raw-ret-v2bf16-error.ll 2>&1 | FileCheck -check-prefix=ERR-RAW-V2BF16-GISEL %s
 ; RUN: not --crash llc -global-isel=1 -mtriple=amdgcn-amd-amdhsa -mcpu=gfx908 -filetype=null %t/struct-ret-v2bf16-error.ll 2>&1 | FileCheck -check-prefix=ERR-STRUCT-V2BF16-GISEL %s
-; RUN: not --crash llc -global-isel=1 -mtriple=amdgcn-amd-amdhsa -mcpu=gfx90a -filetype=null %t/raw-ret-v2bf16-error.ll 2>&1 | FileCheck -check-prefix=ERR-RAW-V2BF16-GISEL %s
-; RUN: not --crash llc -global-isel=1 -mtriple=amdgcn-amd-amdhsa -mcpu=gfx90a -filetype=null %t/struct-ret-v2bf16-error.ll 2>&1 | FileCheck -check-prefix=ERR-STRUCT-V2BF16-GISEL %s
-; RUN: not --crash llc -global-isel=1 -mtriple=amdgcn-amd-amdhsa -mcpu=gfx940 -filetype=null %t/raw-ret-v2bf16-error.ll 2>&1 | FileCheck -check-prefix=ERR-RAW-V2BF16-GISEL %s
-; RUN: not --crash llc -global-isel=1 -mtriple=amdgcn-amd-amdhsa -mcpu=gfx940 -filetype=null %t/struct-ret-v2bf16-error.ll 2>&1 | FileCheck -check-prefix=ERR-STRUCT-V2BF16-GISEL %s
+
+; FIXME: These should fail when bfloat support is handled correctly
+; xUN: not --crash llc -global-isel=1 -mtriple=amdgcn-amd-amdhsa -mcpu=gfx90a -filetype=null %t/raw-ret-v2bf16-error.ll 2>&1 | FileCheck -check-prefix=ERR-RAW-V2BF16-GISEL %s
+; xUN: not --crash llc -global-isel=1 -mtriple=amdgcn-amd-amdhsa -mcpu=gfx90a -filetype=null %t/struct-ret-v2bf16-error.ll 2>&1 | FileCheck -check-prefix=ERR-STRUCT-V2BF16-GISEL %s
+; xUN: not --crash llc -global-isel=1 -mtriple=amdgcn-amd-amdhsa -mcpu=gfx940 -filetype=null %t/raw-ret-v2bf16-error.ll 2>&1 | FileCheck -check-prefix=ERR-RAW-V2BF16-GISEL %s
+; xUN: not --crash llc -global-isel=1 -mtriple=amdgcn-amd-amdhsa -mcpu=gfx940 -filetype=null %t/struct-ret-v2bf16-error.ll 2>&1 | FileCheck -check-prefix=ERR-STRUCT-V2BF16-GISEL %s
 
 ; Make sure buffer fadd atomics with return values are not selected
 ; for gfx908 where they do not work.

>From 8930ac1bbe0bc50402da53b22501e17045a537ca Mon Sep 17 00:00:00 2001
From: Matt Arsenault <Matthew.Arsenault at amd.com>
Date: Mon, 17 Jun 2024 21:51:25 +0200
Subject: [PATCH 13/26] AMDGPU: Cleanup selection patterns for buffer loads
 (#95378)

We should just support these for all register types.
---
 llvm/lib/Target/AMDGPU/BUFInstructions.td | 72 ++++++++++-------------
 llvm/lib/Target/AMDGPU/SIRegisterInfo.td  | 16 ++---
 2 files changed, 39 insertions(+), 49 deletions(-)

diff --git a/llvm/lib/Target/AMDGPU/BUFInstructions.td b/llvm/lib/Target/AMDGPU/BUFInstructions.td
index f5b6de15e19e7..dff19b6a93286 100644
--- a/llvm/lib/Target/AMDGPU/BUFInstructions.td
+++ b/llvm/lib/Target/AMDGPU/BUFInstructions.td
@@ -1419,27 +1419,21 @@ let OtherPredicates = [HasPackedD16VMem] in {
   defm : MUBUF_LoadIntrinsicPat<SIbuffer_load_format_d16, v4i16, "BUFFER_LOAD_FORMAT_D16_XYZW">;
 } // End HasPackedD16VMem.
 
-defm : MUBUF_LoadIntrinsicPat<SIbuffer_load, f32, "BUFFER_LOAD_DWORD">;
-defm : MUBUF_LoadIntrinsicPat<SIbuffer_load, i32, "BUFFER_LOAD_DWORD">;
-defm : MUBUF_LoadIntrinsicPat<SIbuffer_load, v2i16, "BUFFER_LOAD_DWORD">;
-defm : MUBUF_LoadIntrinsicPat<SIbuffer_load, v2f16, "BUFFER_LOAD_DWORD">;
-defm : MUBUF_LoadIntrinsicPat<SIbuffer_load, v2bf16, "BUFFER_LOAD_DWORD">;
-defm : MUBUF_LoadIntrinsicPat<SIbuffer_load, v2f32, "BUFFER_LOAD_DWORDX2">;
-defm : MUBUF_LoadIntrinsicPat<SIbuffer_load, v2i32, "BUFFER_LOAD_DWORDX2">;
-defm : MUBUF_LoadIntrinsicPat<SIbuffer_load, v4i16, "BUFFER_LOAD_DWORDX2">;
-defm : MUBUF_LoadIntrinsicPat<SIbuffer_load, v4f16, "BUFFER_LOAD_DWORDX2">;
-defm : MUBUF_LoadIntrinsicPat<SIbuffer_load, i64, "BUFFER_LOAD_DWORDX2">;
-defm : MUBUF_LoadIntrinsicPat<SIbuffer_load, f64, "BUFFER_LOAD_DWORDX2">;
-defm : MUBUF_LoadIntrinsicPat<SIbuffer_load, v4bf16, "BUFFER_LOAD_DWORDX2">;
-defm : MUBUF_LoadIntrinsicPat<SIbuffer_load, v3f32, "BUFFER_LOAD_DWORDX3">;
-defm : MUBUF_LoadIntrinsicPat<SIbuffer_load, v3i32, "BUFFER_LOAD_DWORDX3">;
-defm : MUBUF_LoadIntrinsicPat<SIbuffer_load, v4f32, "BUFFER_LOAD_DWORDX4">;
-defm : MUBUF_LoadIntrinsicPat<SIbuffer_load, v4i32, "BUFFER_LOAD_DWORDX4">;
-defm : MUBUF_LoadIntrinsicPat<SIbuffer_load, v2i64, "BUFFER_LOAD_DWORDX4">;
-defm : MUBUF_LoadIntrinsicPat<SIbuffer_load, v2f64, "BUFFER_LOAD_DWORDX4">;
-defm : MUBUF_LoadIntrinsicPat<SIbuffer_load, v8i16, "BUFFER_LOAD_DWORDX4">;
-defm : MUBUF_LoadIntrinsicPat<SIbuffer_load, v8f16, "BUFFER_LOAD_DWORDX4">;
-defm : MUBUF_LoadIntrinsicPat<SIbuffer_load, v8bf16, "BUFFER_LOAD_DWORDX4">;
+foreach vt = Reg32Types.types in {
+defm : MUBUF_LoadIntrinsicPat<SIbuffer_load, vt, "BUFFER_LOAD_DWORD">;
+}
+
+foreach vt = Reg64Types.types in {
+defm : MUBUF_LoadIntrinsicPat<SIbuffer_load, vt, "BUFFER_LOAD_DWORDX2">;
+}
+
+foreach vt = Reg96Types.types in {
+defm : MUBUF_LoadIntrinsicPat<SIbuffer_load, vt, "BUFFER_LOAD_DWORDX3">;
+}
+
+foreach vt = Reg128Types.types in {
+defm : MUBUF_LoadIntrinsicPat<SIbuffer_load, vt, "BUFFER_LOAD_DWORDX4">;
+}
 
 defm : MUBUF_LoadIntrinsicPat<SIbuffer_load_byte, i32, "BUFFER_LOAD_SBYTE">;
 defm : MUBUF_LoadIntrinsicPat<SIbuffer_load_short, i32, "BUFFER_LOAD_SSHORT">;
@@ -1530,27 +1524,21 @@ let OtherPredicates = [HasPackedD16VMem] in {
   defm : MUBUF_StoreIntrinsicPat<SIbuffer_store_format_d16, v4i16, "BUFFER_STORE_FORMAT_D16_XYZW">;
 } // End HasPackedD16VMem.
 
-defm : MUBUF_StoreIntrinsicPat<SIbuffer_store, f32, "BUFFER_STORE_DWORD">;
-defm : MUBUF_StoreIntrinsicPat<SIbuffer_store, i32, "BUFFER_STORE_DWORD">;
-defm : MUBUF_StoreIntrinsicPat<SIbuffer_store, v2i16, "BUFFER_STORE_DWORD">;
-defm : MUBUF_StoreIntrinsicPat<SIbuffer_store, v2f16, "BUFFER_STORE_DWORD">;
-defm : MUBUF_StoreIntrinsicPat<SIbuffer_store, v2bf16, "BUFFER_STORE_DWORD">;
-defm : MUBUF_StoreIntrinsicPat<SIbuffer_store, v2f32, "BUFFER_STORE_DWORDX2">;
-defm : MUBUF_StoreIntrinsicPat<SIbuffer_store, v2i32, "BUFFER_STORE_DWORDX2">;
-defm : MUBUF_StoreIntrinsicPat<SIbuffer_store, i64, "BUFFER_STORE_DWORDX2">;
-defm : MUBUF_StoreIntrinsicPat<SIbuffer_store, f64, "BUFFER_STORE_DWORDX2">;
-defm : MUBUF_StoreIntrinsicPat<SIbuffer_store, v4i16, "BUFFER_STORE_DWORDX2">;
-defm : MUBUF_StoreIntrinsicPat<SIbuffer_store, v4f16, "BUFFER_STORE_DWORDX2">;
-defm : MUBUF_StoreIntrinsicPat<SIbuffer_store, v4bf16, "BUFFER_STORE_DWORDX2">;
-defm : MUBUF_StoreIntrinsicPat<SIbuffer_store, v3f32, "BUFFER_STORE_DWORDX3">;
-defm : MUBUF_StoreIntrinsicPat<SIbuffer_store, v3i32, "BUFFER_STORE_DWORDX3">;
-defm : MUBUF_StoreIntrinsicPat<SIbuffer_store, v4f32, "BUFFER_STORE_DWORDX4">;
-defm : MUBUF_StoreIntrinsicPat<SIbuffer_store, v4i32, "BUFFER_STORE_DWORDX4">;
-defm : MUBUF_StoreIntrinsicPat<SIbuffer_store, v2i64, "BUFFER_STORE_DWORDX4">;
-defm : MUBUF_StoreIntrinsicPat<SIbuffer_store, v2f64, "BUFFER_STORE_DWORDX4">;
-defm : MUBUF_StoreIntrinsicPat<SIbuffer_store, v8f16, "BUFFER_STORE_DWORDX4">;
-defm : MUBUF_StoreIntrinsicPat<SIbuffer_store, v8i16, "BUFFER_STORE_DWORDX4">;
-defm : MUBUF_StoreIntrinsicPat<SIbuffer_store, v8bf16, "BUFFER_STORE_DWORDX4">;
+foreach vt = Reg32Types.types in {
+defm : MUBUF_StoreIntrinsicPat<SIbuffer_store, vt, "BUFFER_STORE_DWORD">;
+}
+
+foreach vt = Reg64Types.types in {
+defm : MUBUF_StoreIntrinsicPat<SIbuffer_store, vt, "BUFFER_STORE_DWORDX2">;
+}
+
+foreach vt = Reg96Types.types in {
+defm : MUBUF_StoreIntrinsicPat<SIbuffer_store, vt, "BUFFER_STORE_DWORDX3">;
+}
+
+foreach vt = Reg128Types.types in {
+defm : MUBUF_StoreIntrinsicPat<SIbuffer_store, vt, "BUFFER_STORE_DWORDX4">;
+}
 
 defm : MUBUF_StoreIntrinsicPat<SIbuffer_store_byte, i32, "BUFFER_STORE_BYTE">;
 defm : MUBUF_StoreIntrinsicPat<SIbuffer_store_short, i32, "BUFFER_STORE_SHORT">;
diff --git a/llvm/lib/Target/AMDGPU/SIRegisterInfo.td b/llvm/lib/Target/AMDGPU/SIRegisterInfo.td
index 3666976cf82f8..a8efe2b2ba35e 100644
--- a/llvm/lib/Target/AMDGPU/SIRegisterInfo.td
+++ b/llvm/lib/Target/AMDGPU/SIRegisterInfo.td
@@ -586,7 +586,9 @@ class RegisterTypes<list<ValueType> reg_types> {
 
 def Reg16Types : RegisterTypes<[i16, f16, bf16]>;
 def Reg32Types : RegisterTypes<[i32, f32, v2i16, v2f16, v2bf16, p2, p3, p5, p6]>;
-def Reg64Types : RegisterTypes<[i64, f64, v2i32, v2f32, v4i16, v4f16, v4bf16, p0]>;
+def Reg64Types : RegisterTypes<[i64, f64, v2i32, v2f32, p0, v4i16, v4f16, v4bf16]>;
+def Reg96Types : RegisterTypes<[v3i32, v3f32]>;
+def Reg128Types : RegisterTypes<[v4i32, v4f32, v2i64, v2f64, v8i16, v8f16, v8bf16]>;
 
 let HasVGPR = 1 in {
 // VOP3 and VINTERP can access 256 lo and 256 hi registers.
@@ -744,7 +746,7 @@ def Pseudo_SReg_32 : SIRegisterClass<"AMDGPU", [i32, f32, i16, f16, bf16, v2i16,
   let BaseClassOrder = 10000;
 }
 
-def Pseudo_SReg_128 : SIRegisterClass<"AMDGPU", [v4i32, v2i64, v2f64, v8i16, v8f16, v8bf16], 32,
+def Pseudo_SReg_128 : SIRegisterClass<"AMDGPU", Reg128Types.types, 32,
   (add PRIVATE_RSRC_REG)> {
   let isAllocatable = 0;
   let CopyCost = -1;
@@ -815,7 +817,7 @@ def SRegOrLds_32 : SIRegisterClass<"AMDGPU", [i32, f32, i16, f16, bf16, v2i16, v
   let HasSGPR = 1;
 }
 
-def SGPR_64 : SIRegisterClass<"AMDGPU", [v2i32, i64, v2f32, f64, v4i16, v4f16, v4bf16], 32,
+def SGPR_64 : SIRegisterClass<"AMDGPU", Reg64Types.types, 32,
                             (add SGPR_64Regs)> {
   let CopyCost = 1;
   let AllocationPriority = 1;
@@ -905,8 +907,8 @@ multiclass SRegClass<int numRegs,
   }
 }
 
-defm "" : SRegClass<3, [v3i32, v3f32], SGPR_96Regs, TTMP_96Regs>;
-defm "" : SRegClass<4, [v4i32, v4f32, v2i64, v2f64, v8i16, v8f16, v8bf16], SGPR_128Regs, TTMP_128Regs>;
+defm "" : SRegClass<3, Reg96Types.types, SGPR_96Regs, TTMP_96Regs>;
+defm "" : SRegClass<4, Reg128Types.types, SGPR_128Regs, TTMP_128Regs>;
 defm "" : SRegClass<5, [v5i32, v5f32], SGPR_160Regs, TTMP_160Regs>;
 defm "" : SRegClass<6, [v6i32, v6f32, v3i64, v3f64], SGPR_192Regs, TTMP_192Regs>;
 defm "" : SRegClass<7, [v7i32, v7f32], SGPR_224Regs, TTMP_224Regs>;
@@ -958,8 +960,8 @@ multiclass VRegClass<int numRegs, list<ValueType> regTypes, dag regList> {
 
 defm VReg_64 : VRegClass<2, [i64, f64, v2i32, v2f32, v4f16, v4bf16, v4i16, p0, p1, p4],
                                 (add VGPR_64)>;
-defm VReg_96 : VRegClass<3, [v3i32, v3f32], (add VGPR_96)>;
-defm VReg_128 : VRegClass<4, [v4i32, v4f32, v2i64, v2f64, v8i16, v8f16, v8bf16], (add VGPR_128)>;
+defm VReg_96 : VRegClass<3, Reg96Types.types, (add VGPR_96)>;
+defm VReg_128 : VRegClass<4, Reg128Types.types, (add VGPR_128)>;
 defm VReg_160 : VRegClass<5, [v5i32, v5f32], (add VGPR_160)>;
 
 defm VReg_192 : VRegClass<6, [v6i32, v6f32, v3i64, v3f64], (add VGPR_192)>;

>From 804335638078f5a7c2bd31847b6080763cb22159 Mon Sep 17 00:00:00 2001
From: Joseph Huber <huberjn at outlook.com>
Date: Mon, 17 Jun 2024 14:52:50 -0500
Subject: [PATCH 14/26] [Offload] Change HSA header search order (#95769)

Summary:
The HSA headers previously lived in `include/hsa.h` and were moved to
`include/hsa/hsa.h` in a later ROCm version. The includes here were
originally ordered to favor the newer location. However, this
unintentionally prevented the dynamic HSA's `hsa.h` from being used if
both were present. This patch changes the search order so that the dynamic
HSA's header is found first.

Related to https://github.com/llvm/llvm-project/pull/95484.
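
As a condensed sketch of the probing logic after this patch (the real
rtl.cpp also includes `hsa_ext_amd.h` alongside each `hsa.h`):

```cpp
// The first header that __has_include finds wins, so probing "hsa.h" before
// "hsa/hsa.h" lets the dynamic HSA header take precedence when both exist.
#if defined(__has_include)
#if __has_include("hsa.h")        // dynamic HSA / older ROCm layout
#include "hsa.h"
#elif __has_include("hsa/hsa.h")  // newer ROCm layout
#include "hsa/hsa.h"
#endif
#else
#include "hsa/hsa.h"              // no way to probe; assume the newer layout
#endif
```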
---
 offload/plugins-nextgen/amdgpu/src/rtl.cpp | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/offload/plugins-nextgen/amdgpu/src/rtl.cpp b/offload/plugins-nextgen/amdgpu/src/rtl.cpp
index e678213df18ce..e6643d3260eb4 100644
--- a/offload/plugins-nextgen/amdgpu/src/rtl.cpp
+++ b/offload/plugins-nextgen/amdgpu/src/rtl.cpp
@@ -58,12 +58,12 @@
 #endif
 
 #if defined(__has_include)
-#if __has_include("hsa/hsa.h")
-#include "hsa/hsa.h"
-#include "hsa/hsa_ext_amd.h"
-#elif __has_include("hsa.h")
+#if __has_include("hsa.h")
 #include "hsa.h"
 #include "hsa_ext_amd.h"
+#elif __has_include("hsa/hsa.h")
+#include "hsa/hsa.h"
+#include "hsa/hsa_ext_amd.h"
 #endif
 #else
 #include "hsa/hsa.h"

>From 77d8cfb3c50e3341d65af1f9e442004bbd77af9b Mon Sep 17 00:00:00 2001
From: Alexander Shaposhnikov
 <6532716+alexander-shaposhnikov at users.noreply.github.com>
Date: Mon, 17 Jun 2024 12:59:04 -0700
Subject: [PATCH 15/26] [Flang] Switch to common::visit more call sites
 (#90018)

Switch more call sites to common::visit.

Test plan: ninja check-all
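
The change is mechanical at each call site. A rough sketch follows; the
`Fortran::common::visit` shown here is a stand-in that forwards to
`std::visit` so the example is self-contained, and `Node` is a hypothetical
variant, not one of Flang's actual types:

```cpp
#include <string>
#include <type_traits>
#include <utility>
#include <variant>

// Stand-in so this sketch compiles on its own; the real common::visit is a
// drop-in replacement for std::visit over the same variants.
namespace Fortran::common {
template <typename Visitor, typename... Variants>
decltype(auto) visit(Visitor &&visitor, Variants &&...variants) {
  return std::visit(std::forward<Visitor>(visitor),
                    std::forward<Variants>(variants)...);
}
} // namespace Fortran::common

using Node = std::variant<int, std::string>; // hypothetical example type

std::string describe(const Node &n) {
  // Before this patch: std::visit(...); after: common::visit(...).
  return Fortran::common::visit(
      [](const auto &v) -> std::string {
        if constexpr (std::is_same_v<std::decay_t<decltype(v)>, int>)
          return "int";
        else
          return "string";
      },
      n);
}
```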
---
 flang/include/flang/Evaluate/tools.h          |   4 +-
 flang/include/flang/Lower/DumpEvaluateExpr.h  |   2 +-
 flang/include/flang/Lower/PFTBuilder.h        |   7 +-
 flang/include/flang/Lower/Support/Utils.h     |  40 +++--
 .../include/flang/Optimizer/Support/Matcher.h |   5 +-
 flang/lib/Evaluate/intrinsics.cpp             |   2 +-
 flang/lib/Lower/Allocatable.cpp               |  23 +--
 flang/lib/Lower/Bridge.cpp                    |  98 +++++------
 flang/lib/Lower/CallInterface.cpp             |   6 +-
 flang/lib/Lower/ComponentPath.cpp             |   2 +-
 flang/lib/Lower/ConvertArrayConstructor.cpp   |  37 +++--
 flang/lib/Lower/ConvertCall.cpp               |   3 +-
 flang/lib/Lower/ConvertConstant.cpp           |   8 +-
 flang/lib/Lower/ConvertExpr.cpp               | 155 ++++++++++--------
 flang/lib/Lower/ConvertExprToHLFIR.cpp        |  23 +--
 flang/lib/Lower/ConvertType.cpp               |   2 +-
 flang/lib/Lower/DirectivesCommon.h            |   8 +-
 flang/lib/Lower/IO.cpp                        |  16 +-
 flang/lib/Lower/IterationSpace.cpp            |  39 ++---
 flang/lib/Lower/Mangler.cpp                   |   2 +-
 flang/lib/Lower/OpenACC.cpp                   |  41 ++---
 flang/lib/Lower/OpenMP/Clauses.cpp            |  24 +--
 flang/lib/Lower/OpenMP/OpenMP.cpp             |  27 +--
 flang/lib/Lower/OpenMP/Utils.cpp              |   2 +-
 flang/lib/Lower/PFTBuilder.cpp                |  51 +++---
 flang/lib/Lower/VectorSubscripts.cpp          |  95 +++++------
 flang/lib/Optimizer/Builder/IntrinsicCall.cpp |   6 +-
 flang/lib/Semantics/check-acc-structure.cpp   |  14 +-
 flang/lib/Semantics/check-coarray.cpp         |   2 +-
 29 files changed, 391 insertions(+), 353 deletions(-)

diff --git a/flang/include/flang/Evaluate/tools.h b/flang/include/flang/Evaluate/tools.h
index 378a5fca03264..ea56a20633f0c 100644
--- a/flang/include/flang/Evaluate/tools.h
+++ b/flang/include/flang/Evaluate/tools.h
@@ -450,12 +450,12 @@ struct ExtractSubstringHelper {
 
   template <typename T>
   static std::optional<Substring> visit(const Designator<T> &e) {
-    return std::visit([](auto &&s) { return visit(s); }, e.u);
+    return common::visit([](auto &&s) { return visit(s); }, e.u);
   }
 
   template <typename T>
   static std::optional<Substring> visit(const Expr<T> &e) {
-    return std::visit([](auto &&s) { return visit(s); }, e.u);
+    return common::visit([](auto &&s) { return visit(s); }, e.u);
   }
 };
 
diff --git a/flang/include/flang/Lower/DumpEvaluateExpr.h b/flang/include/flang/Lower/DumpEvaluateExpr.h
index c67df245359e3..88f53e96a81c2 100644
--- a/flang/include/flang/Lower/DumpEvaluateExpr.h
+++ b/flang/include/flang/Lower/DumpEvaluateExpr.h
@@ -68,7 +68,7 @@ class DumpEvaluateExpr {
   }
   template <typename... A>
   void show(const std::variant<A...> &u) {
-    std::visit([&](const auto &v) { show(v); }, u);
+    Fortran::common::visit([&](const auto &v) { show(v); }, u);
   }
   template <typename A>
   void show(const std::vector<A> &x) {
diff --git a/flang/include/flang/Lower/PFTBuilder.h b/flang/include/flang/Lower/PFTBuilder.h
index c2b600c6b5d9b..7f1b93c564b4c 100644
--- a/flang/include/flang/Lower/PFTBuilder.h
+++ b/flang/include/flang/Lower/PFTBuilder.h
@@ -76,7 +76,7 @@ class ReferenceVariantBase {
   }
   template <typename VISITOR>
   constexpr auto visit(VISITOR &&visitor) const {
-    return std::visit(
+    return Fortran::common::visit(
         common::visitors{[&visitor](auto ref) { return visitor(ref.get()); }},
         u);
   }
@@ -494,7 +494,8 @@ struct Variable {
 
   /// Is this variable a global?
   bool isGlobal() const {
-    return std::visit([](const auto &x) { return x.isGlobal(); }, var);
+    return Fortran::common::visit([](const auto &x) { return x.isGlobal(); },
+                                  var);
   }
 
   /// Is this a module or submodule variable?
@@ -504,7 +505,7 @@ struct Variable {
   }
 
   const Fortran::semantics::Scope *getOwningScope() const {
-    return std::visit(
+    return Fortran::common::visit(
         common::visitors{
             [](const Nominal &x) { return &x.symbol->GetUltimate().owner(); },
             [](const AggregateStore &agg) { return &agg.getOwningScope(); }},
diff --git a/flang/include/flang/Lower/Support/Utils.h b/flang/include/flang/Lower/Support/Utils.h
index e791f3dbb221a..1cc74521e22d8 100644
--- a/flang/include/flang/Lower/Support/Utils.h
+++ b/flang/include/flang/Lower/Support/Utils.h
@@ -69,7 +69,8 @@ static Fortran::lower::SomeExpr ignoreEvConvert(const A &x) {
 inline Fortran::lower::SomeExpr
 ignoreEvConvert(const Fortran::evaluate::Expr<Fortran::evaluate::Type<
                     Fortran::common::TypeCategory::Integer, 8>> &x) {
-  return std::visit([](const auto &v) { return ignoreEvConvert(v); }, x.u);
+  return Fortran::common::visit(
+      [](const auto &v) { return ignoreEvConvert(v); }, x.u);
 }
 
 /// Zip two containers of the same size together and flatten the pairs. `flatZip
@@ -119,7 +120,8 @@ class HashEvaluateExpr {
     return 0u;
   }
   static unsigned getHashValue(const Fortran::evaluate::Subscript &x) {
-    return std::visit([&](const auto &v) { return getHashValue(v); }, x.u);
+    return Fortran::common::visit(
+        [&](const auto &v) { return getHashValue(v); }, x.u);
   }
   static unsigned getHashValue(const Fortran::evaluate::Triplet &x) {
     return getHashValue(x.lower()) - getHashValue(x.upper()) * 5u -
@@ -154,7 +156,8 @@ class HashEvaluateExpr {
     return getHashValue(x.GetComponent()) * 13u;
   }
   static unsigned getHashValue(const Fortran::evaluate::DataRef &x) {
-    return std::visit([&](const auto &v) { return getHashValue(v); }, x.u);
+    return Fortran::common::visit(
+        [&](const auto &v) { return getHashValue(v); }, x.u);
   }
   static unsigned getHashValue(const Fortran::evaluate::ComplexPart &x) {
     return getHashValue(x.complex()) - static_cast<unsigned>(x.part());
@@ -247,8 +250,9 @@ class HashEvaluateExpr {
     return getHashValue(sym.get());
   }
   static unsigned getHashValue(const Fortran::evaluate::Substring &x) {
-    return 61u * std::visit([&](const auto &p) { return getHashValue(p); },
-                            x.parent()) -
+    return 61u *
+               Fortran::common::visit(
+                   [&](const auto &p) { return getHashValue(p); }, x.parent()) -
            getHashValue(x.lower()) - (getHashValue(x.lower()) + 1u);
   }
   static unsigned
@@ -270,7 +274,8 @@ class HashEvaluateExpr {
   }
   static unsigned
   getHashValue(const Fortran::evaluate::ProcedureDesignator &x) {
-    return std::visit([&](const auto &v) { return getHashValue(v); }, x.u);
+    return Fortran::common::visit(
+        [&](const auto &v) { return getHashValue(v); }, x.u);
   }
   static unsigned getHashValue(const Fortran::evaluate::ProcedureRef &x) {
     unsigned args = 13u;
@@ -321,15 +326,18 @@ class HashEvaluateExpr {
   }
   template <typename A>
   static unsigned getHashValue(const Fortran::evaluate::Expr<A> &x) {
-    return std::visit([&](const auto &v) { return getHashValue(v); }, x.u);
+    return Fortran::common::visit(
+        [&](const auto &v) { return getHashValue(v); }, x.u);
   }
   static unsigned getHashValue(
       const Fortran::evaluate::Relational<Fortran::evaluate::SomeType> &x) {
-    return std::visit([&](const auto &v) { return getHashValue(v); }, x.u);
+    return Fortran::common::visit(
+        [&](const auto &v) { return getHashValue(v); }, x.u);
   }
   template <typename A>
   static unsigned getHashValue(const Fortran::evaluate::Designator<A> &x) {
-    return std::visit([&](const auto &v) { return getHashValue(v); }, x.u);
+    return Fortran::common::visit(
+        [&](const auto &v) { return getHashValue(v); }, x.u);
   }
   template <int BITS>
   static unsigned
@@ -378,7 +386,7 @@ class IsEqualEvaluateExpr {
   }
   static bool isEqual(const Fortran::evaluate::Subscript &x,
                       const Fortran::evaluate::Subscript &y) {
-    return std::visit(
+    return Fortran::common::visit(
         [&](const auto &v, const auto &w) { return isEqual(v, w); }, x.u, y.u);
   }
   static bool isEqual(const Fortran::evaluate::Triplet &x,
@@ -411,7 +419,7 @@ class IsEqualEvaluateExpr {
   }
   static bool isEqual(const Fortran::evaluate::DataRef &x,
                       const Fortran::evaluate::DataRef &y) {
-    return std::visit(
+    return Fortran::common::visit(
         [&](const auto &v, const auto &w) { return isEqual(v, w); }, x.u, y.u);
   }
   static bool isEqual(const Fortran::evaluate::ComplexPart &x,
@@ -499,7 +507,7 @@ class IsEqualEvaluateExpr {
   }
   static bool isEqual(const Fortran::evaluate::Substring &x,
                       const Fortran::evaluate::Substring &y) {
-    return std::visit(
+    return Fortran::common::visit(
                [&](const auto &p, const auto &q) { return isEqual(p, q); },
                x.parent(), y.parent()) &&
            isEqual(x.lower(), y.lower()) && isEqual(x.upper(), y.upper());
@@ -529,7 +537,7 @@ class IsEqualEvaluateExpr {
   }
   static bool isEqual(const Fortran::evaluate::ProcedureDesignator &x,
                       const Fortran::evaluate::ProcedureDesignator &y) {
-    return std::visit(
+    return Fortran::common::visit(
         [&](const auto &v, const auto &w) { return isEqual(v, w); }, x.u, y.u);
   }
   static bool isEqual(const Fortran::evaluate::ProcedureRef &x,
@@ -591,19 +599,19 @@ class IsEqualEvaluateExpr {
   template <typename A>
   static bool isEqual(const Fortran::evaluate::Expr<A> &x,
                       const Fortran::evaluate::Expr<A> &y) {
-    return std::visit(
+    return Fortran::common::visit(
         [&](const auto &v, const auto &w) { return isEqual(v, w); }, x.u, y.u);
   }
   static bool
   isEqual(const Fortran::evaluate::Relational<Fortran::evaluate::SomeType> &x,
           const Fortran::evaluate::Relational<Fortran::evaluate::SomeType> &y) {
-    return std::visit(
+    return Fortran::common::visit(
         [&](const auto &v, const auto &w) { return isEqual(v, w); }, x.u, y.u);
   }
   template <typename A>
   static bool isEqual(const Fortran::evaluate::Designator<A> &x,
                       const Fortran::evaluate::Designator<A> &y) {
-    return std::visit(
+    return Fortran::common::visit(
         [&](const auto &v, const auto &w) { return isEqual(v, w); }, x.u, y.u);
   }
   template <int BITS>
diff --git a/flang/include/flang/Optimizer/Support/Matcher.h b/flang/include/flang/Optimizer/Support/Matcher.h
index da1d7c21f42c4..44672d3c0a072 100644
--- a/flang/include/flang/Optimizer/Support/Matcher.h
+++ b/flang/include/flang/Optimizer/Support/Matcher.h
@@ -13,6 +13,7 @@
 #ifndef FORTRAN_OPTIMIZER_SUPPORT_MATCHER_H
 #define FORTRAN_OPTIMIZER_SUPPORT_MATCHER_H
 
+#include "flang/Common/idioms.h"
 #include <variant>
 
 // Boilerplate CRTP class for a simplified type-casing syntactic sugar. This
@@ -23,10 +24,10 @@ template<class... Ts> struct matches : Ts... { using Ts::operator()...; };
 template<class... Ts> matches(Ts...) -> matches<Ts...>;
 template<typename N> struct matcher {
   template<typename... Ts> auto match(Ts... ts) {
-    return std::visit(matches{ts...}, static_cast<N*>(this)->matchee());
+    return Fortran::common::visit(matches{ts...}, static_cast<N*>(this)->matchee());
   }
   template<typename... Ts> auto match(Ts... ts) const {
-    return std::visit(matches{ts...}, static_cast<N const*>(this)->matchee());
+    return Fortran::common::visit(matches{ts...}, static_cast<N const*>(this)->matchee());
   }
 };
 // clang-format on
diff --git a/flang/lib/Evaluate/intrinsics.cpp b/flang/lib/Evaluate/intrinsics.cpp
index ace316174a892..1bba541e8e14a 100644
--- a/flang/lib/Evaluate/intrinsics.cpp
+++ b/flang/lib/Evaluate/intrinsics.cpp
@@ -2936,7 +2936,7 @@ static bool CheckForNonPositiveValues(FoldingContext &context,
   if (arg.Rank() > 0) {
     if (const Expr<SomeType> *expr{arg.UnwrapExpr()}) {
       if (const auto *intExpr{std::get_if<Expr<SomeInteger>>(&expr->u)}) {
-        std::visit(
+        Fortran::common::visit(
             [&](const auto &kindExpr) {
               using IntType = typename std::decay_t<decltype(kindExpr)>::Result;
               if (const auto *constArray{
diff --git a/flang/lib/Lower/Allocatable.cpp b/flang/lib/Lower/Allocatable.cpp
index 068f5d25967c9..77e02898ac9fb 100644
--- a/flang/lib/Lower/Allocatable.cpp
+++ b/flang/lib/Lower/Allocatable.cpp
@@ -350,10 +350,10 @@ class AllocateStmtHelper {
   void visitAllocateOptions() {
     for (const auto &allocOption :
          std::get<std::list<Fortran::parser::AllocOpt>>(stmt.t))
-      std::visit(
+      Fortran::common::visit(
           Fortran::common::visitors{
               [&](const Fortran::parser::StatOrErrmsg &statOrErr) {
-                std::visit(
+                Fortran::common::visit(
                     Fortran::common::visitors{
                         [&](const Fortran::parser::StatVariable &statVar) {
                           statExpr = Fortran::semantics::GetExpr(statVar);
@@ -898,15 +898,16 @@ void Fortran::lower::genDeallocateStmt(
   const Fortran::lower::SomeExpr *errMsgExpr = nullptr;
   for (const Fortran::parser::StatOrErrmsg &statOrErr :
        std::get<std::list<Fortran::parser::StatOrErrmsg>>(stmt.t))
-    std::visit(Fortran::common::visitors{
-                   [&](const Fortran::parser::StatVariable &statVar) {
-                     statExpr = Fortran::semantics::GetExpr(statVar);
-                   },
-                   [&](const Fortran::parser::MsgVariable &errMsgVar) {
-                     errMsgExpr = Fortran::semantics::GetExpr(errMsgVar);
-                   },
-               },
-               statOrErr.u);
+    Fortran::common::visit(
+        Fortran::common::visitors{
+            [&](const Fortran::parser::StatVariable &statVar) {
+              statExpr = Fortran::semantics::GetExpr(statVar);
+            },
+            [&](const Fortran::parser::MsgVariable &errMsgVar) {
+              errMsgExpr = Fortran::semantics::GetExpr(errMsgVar);
+            },
+        },
+        statOrErr.u);
   ErrorManager errorManager;
   errorManager.init(converter, loc, statExpr, errMsgExpr);
   fir::FirOpBuilder &builder = converter.getFirOpBuilder();
diff --git a/flang/lib/Lower/Bridge.cpp b/flang/lib/Lower/Bridge.cpp
index 24a57812ba104..423c418889a00 100644
--- a/flang/lib/Lower/Bridge.cpp
+++ b/flang/lib/Lower/Bridge.cpp
@@ -302,7 +302,7 @@ class FirConverter : public Fortran::lower::AbstractConverter {
     bool hasMainProgram = false;
     const Fortran::semantics::Symbol *globalOmpRequiresSymbol = nullptr;
     for (Fortran::lower::pft::Program::Units &u : pft.getUnits()) {
-      std::visit(
+      Fortran::common::visit(
           Fortran::common::visitors{
               [&](Fortran::lower::pft::FunctionLikeUnit &f) {
                 if (f.isMainProgram())
@@ -336,7 +336,7 @@ class FirConverter : public Fortran::lower::AbstractConverter {
 
     // Primary translation pass.
     for (Fortran::lower::pft::Program::Units &u : pft.getUnits()) {
-      std::visit(
+      Fortran::common::visit(
           Fortran::common::visitors{
               [&](Fortran::lower::pft::FunctionLikeUnit &f) { lowerFunc(f); },
               [&](Fortran::lower::pft::ModuleLikeUnit &m) { lowerMod(m); },
@@ -2062,7 +2062,7 @@ class FirConverter : public Fortran::lower::AbstractConverter {
           handleLocalitySpecs(info);
 
         for (const auto *dir : dirs) {
-          std::visit(
+          Fortran::common::visit(
               Fortran::common::visitors{
                   [&](const Fortran::parser::CompilerDirective::VectorAlways
                           &d) { addLoopAnnotationAttr(info); },
@@ -2433,7 +2433,7 @@ class FirConverter : public Fortran::lower::AbstractConverter {
   }
 
   void genFIR(const Fortran::parser::ForallAssignmentStmt &stmt) {
-    std::visit([&](const auto &x) { genFIR(x); }, stmt.u);
+    Fortran::common::visit([&](const auto &x) { genFIR(x); }, stmt.u);
   }
 
   void genFIR(const Fortran::parser::EndForallStmt &) {
@@ -2494,7 +2494,7 @@ class FirConverter : public Fortran::lower::AbstractConverter {
             forall.t));
     for (const Fortran::parser::ForallBodyConstruct &s :
          std::get<std::list<Fortran::parser::ForallBodyConstruct>>(forall.t)) {
-      std::visit(
+      Fortran::common::visit(
           Fortran::common::visitors{
               [&](const Fortran::parser::WhereConstruct &b) { genFIR(b); },
               [&](const Fortran::common::Indirection<
@@ -2617,7 +2617,7 @@ class FirConverter : public Fortran::lower::AbstractConverter {
   void genFIR(const Fortran::parser::CompilerDirective &dir) {
     Fortran::lower::pft::Evaluation &eval = getEval();
 
-    std::visit(
+    Fortran::common::visit(
         Fortran::common::visitors{
             [&](const Fortran::parser::CompilerDirective::VectorAlways &) {
               attachDirectiveToLoop(dir, &eval);
@@ -3198,7 +3198,7 @@ class FirConverter : public Fortran::lower::AbstractConverter {
         const auto &rank = std::get<Fortran::parser::SelectRankCaseStmt::Rank>(
             rankCaseStmt->t);
         assert(e->block && "missing SelectRankCaseStmt block");
-        std::visit(
+        Fortran::common::visit(
             Fortran::common::visitors{
                 [&](const Fortran::parser::ScalarIntConstantExpr &rankExpr) {
                   blockList.emplace_back(e->block);
@@ -3229,9 +3229,9 @@ class FirConverter : public Fortran::lower::AbstractConverter {
            "selector should not yet be set");
     Fortran::lower::StatementContext &stmtCtx =
         activeConstructStack.back().stmtCtx;
-    const Fortran::lower::SomeExpr *selectorExpr =
-        std::visit([](const auto &x) { return Fortran::semantics::GetExpr(x); },
-                   std::get<Fortran::parser::Selector>(selectRankStmt.t).u);
+    const Fortran::lower::SomeExpr *selectorExpr = Fortran::common::visit(
+        [](const auto &x) { return Fortran::semantics::GetExpr(x); },
+        std::get<Fortran::parser::Selector>(selectRankStmt.t).u);
     assert(selectorExpr && "failed to retrieve selector expr");
     hlfir::Entity selector = Fortran::lower::convertExprToHLFIR(
         loc, *this, *selectorExpr, localSymbols, stmtCtx);
@@ -3663,7 +3663,7 @@ class FirConverter : public Fortran::lower::AbstractConverter {
     Fortran::parser::Label errLabel{};
     bool hasIostat{};
     for (const auto &spec : specList) {
-      std::visit(
+      Fortran::common::visit(
           Fortran::common::visitors{
               [&](const Fortran::parser::EndLabel &label) {
                 endLabel = label.v;
@@ -4373,7 +4373,7 @@ class FirConverter : public Fortran::lower::AbstractConverter {
   void genAssignment(const Fortran::evaluate::Assignment &assign) {
     mlir::Location loc = toLocation();
     if (lowerToHighLevelFIR()) {
-      std::visit(
+      Fortran::common::visit(
           Fortran::common::visitors{
               [&](const Fortran::evaluate::Assignment::Intrinsic &) {
                 genDataAssignment(assign, /*userDefinedAssignment=*/nullptr);
@@ -4401,7 +4401,7 @@ class FirConverter : public Fortran::lower::AbstractConverter {
       explicitIterSpace.genLoopNest();
     }
     Fortran::lower::StatementContext stmtCtx;
-    std::visit(
+    Fortran::common::visit(
         Fortran::common::visitors{
             // [1] Plain old assignment.
             [&](const Fortran::evaluate::Assignment::Intrinsic &) {
@@ -4670,7 +4670,7 @@ class FirConverter : public Fortran::lower::AbstractConverter {
     }
   }
   void genFIR(const Fortran::parser::WhereBodyConstruct &body) {
-    std::visit(
+    Fortran::common::visit(
         Fortran::common::visitors{
             [&](const Fortran::parser::Statement<
                 Fortran::parser::AssignmentStmt> &stmt) {
@@ -5386,18 +5386,19 @@ class FirConverter : public Fortran::lower::AbstractConverter {
     // The intrinsic module scope, if present, is the first scope.
     const Fortran::semantics::Scope *intrinsicModuleScope = nullptr;
     for (Fortran::lower::pft::Program::Units &u : pft.getUnits()) {
-      std::visit(Fortran::common::visitors{
-                     [&](Fortran::lower::pft::FunctionLikeUnit &f) {
-                       intrinsicModuleScope = &f.getScope().parent();
-                     },
-                     [&](Fortran::lower::pft::ModuleLikeUnit &m) {
-                       intrinsicModuleScope = &m.getScope().parent();
-                     },
-                     [&](Fortran::lower::pft::BlockDataUnit &b) {},
-                     [&](Fortran::lower::pft::CompilerDirectiveUnit &d) {},
-                     [&](Fortran::lower::pft::OpenACCDirectiveUnit &d) {},
-                 },
-                 u);
+      Fortran::common::visit(
+          Fortran::common::visitors{
+              [&](Fortran::lower::pft::FunctionLikeUnit &f) {
+                intrinsicModuleScope = &f.getScope().parent();
+              },
+              [&](Fortran::lower::pft::ModuleLikeUnit &m) {
+                intrinsicModuleScope = &m.getScope().parent();
+              },
+              [&](Fortran::lower::pft::BlockDataUnit &b) {},
+              [&](Fortran::lower::pft::CompilerDirectiveUnit &d) {},
+              [&](Fortran::lower::pft::OpenACCDirectiveUnit &d) {},
+          },
+          u);
       if (intrinsicModuleScope) {
         while (!intrinsicModuleScope->IsGlobal())
           intrinsicModuleScope = &intrinsicModuleScope->parent();
@@ -5531,7 +5532,7 @@ class FirConverter : public Fortran::lower::AbstractConverter {
       analyzeExplicitSpace</*LHS=*/true>(lhs);
       analyzeExplicitSpace(rhs);
     };
-    std::visit(
+    Fortran::common::visit(
         Fortran::common::visitors{
             [&](const Fortran::evaluate::ProcedureRef &procRef) {
               // Ensure the procRef expressions are the one being visited.
@@ -5549,7 +5550,8 @@ class FirConverter : public Fortran::lower::AbstractConverter {
     explicitIterSpace.endAssign();
   }
   void analyzeExplicitSpace(const Fortran::parser::ForallAssignmentStmt &stmt) {
-    std::visit([&](const auto &s) { analyzeExplicitSpace(s); }, stmt.u);
+    Fortran::common::visit([&](const auto &s) { analyzeExplicitSpace(s); },
+                           stmt.u);
   }
   void analyzeExplicitSpace(const Fortran::parser::AssignmentStmt &s) {
     analyzeExplicitSpace(s.typedAssignment->v.operator->());
@@ -5594,13 +5596,14 @@ class FirConverter : public Fortran::lower::AbstractConverter {
       analyzeExplicitSpace(e);
   }
   void analyzeExplicitSpace(const Fortran::parser::WhereBodyConstruct &body) {
-    std::visit(Fortran::common::visitors{
-                   [&](const Fortran::common::Indirection<
-                       Fortran::parser::WhereConstruct> &wc) {
-                     analyzeExplicitSpace(wc.value());
-                   },
-                   [&](const auto &s) { analyzeExplicitSpace(s.statement); }},
-               body.u);
+    Fortran::common::visit(
+        Fortran::common::visitors{
+            [&](const Fortran::common::Indirection<
+                Fortran::parser::WhereConstruct> &wc) {
+              analyzeExplicitSpace(wc.value());
+            },
+            [&](const auto &s) { analyzeExplicitSpace(s.statement); }},
+        body.u);
   }
   void analyzeExplicitSpace(const Fortran::parser::MaskedElsewhereStmt &stmt) {
     const Fortran::lower::SomeExpr *exp = Fortran::semantics::GetExpr(
@@ -5651,16 +5654,17 @@ class FirConverter : public Fortran::lower::AbstractConverter {
             .statement);
     for (const Fortran::parser::ForallBodyConstruct &s :
          std::get<std::list<Fortran::parser::ForallBodyConstruct>>(forall.t)) {
-      std::visit(Fortran::common::visitors{
-                     [&](const Fortran::common::Indirection<
-                         Fortran::parser::ForallConstruct> &b) {
-                       analyzeExplicitSpace(b.value());
-                     },
-                     [&](const Fortran::parser::WhereConstruct &w) {
-                       analyzeExplicitSpace(w);
-                     },
-                     [&](const auto &b) { analyzeExplicitSpace(b.statement); }},
-                 s.u);
+      Fortran::common::visit(
+          Fortran::common::visitors{
+              [&](const Fortran::common::Indirection<
+                  Fortran::parser::ForallConstruct> &b) {
+                analyzeExplicitSpace(b.value());
+              },
+              [&](const Fortran::parser::WhereConstruct &w) {
+                analyzeExplicitSpace(w);
+              },
+              [&](const auto &b) { analyzeExplicitSpace(b.statement); }},
+          s.u);
     }
     analyzeExplicitSpacePop();
   }
@@ -5715,7 +5719,7 @@ class FirConverter : public Fortran::lower::AbstractConverter {
   std::string getConstantExprManglePrefix(mlir::Location loc,
                                           const Fortran::lower::SomeExpr &expr,
                                           mlir::Type eleTy) {
-    return std::visit(
+    return Fortran::common::visit(
         [&](const auto &x) -> std::string {
           using T = std::decay_t<decltype(x)>;
           if constexpr (Fortran::common::HasMember<
@@ -5730,7 +5734,7 @@ class FirConverter : public Fortran::lower::AbstractConverter {
               fir::emitFatalError(loc,
                                   "non a constant derived type expression");
             } else {
-              return std::visit(
+              return Fortran::common::visit(
                   [&](const auto &someKind) -> std::string {
                     using T = std::decay_t<decltype(someKind)>;
                     using TK = Fortran::evaluate::Type<T::Result::category,
diff --git a/flang/lib/Lower/CallInterface.cpp b/flang/lib/Lower/CallInterface.cpp
index 5f1d69c1de7ac..75a86273250d1 100644
--- a/flang/lib/Lower/CallInterface.cpp
+++ b/flang/lib/Lower/CallInterface.cpp
@@ -187,7 +187,7 @@ asImplicitArg(Fortran::evaluate::characteristics::DummyDataObject &&dummy) {
 
 static Fortran::evaluate::characteristics::DummyArgument
 asImplicitArg(Fortran::evaluate::characteristics::DummyArgument &&dummy) {
-  return std::visit(
+  return Fortran::common::visit(
       Fortran::common::visitors{
           [&](Fortran::evaluate::characteristics::DummyDataObject &obj) {
             return Fortran::evaluate::characteristics::DummyArgument(
@@ -843,7 +843,7 @@ class Fortran::lower::CallInterfaceImpl {
     for (auto pair : llvm::zip(procedure.dummyArguments, argumentEntities)) {
       const Fortran::evaluate::characteristics::DummyArgument
           &argCharacteristics = std::get<0>(pair);
-      std::visit(
+      Fortran::common::visit(
           Fortran::common::visitors{
               [&](const auto &dummy) {
                 const auto &entity = getDataObjectEntity(std::get<1>(pair));
@@ -877,7 +877,7 @@ class Fortran::lower::CallInterfaceImpl {
     for (auto pair : llvm::zip(procedure.dummyArguments, argumentEntities)) {
       const Fortran::evaluate::characteristics::DummyArgument
           &argCharacteristics = std::get<0>(pair);
-      std::visit(
+      Fortran::common::visit(
           Fortran::common::visitors{
               [&](const Fortran::evaluate::characteristics::DummyDataObject
                       &dummy) {
diff --git a/flang/lib/Lower/ComponentPath.cpp b/flang/lib/Lower/ComponentPath.cpp
index d20ea23153102..5bdbca6062e6d 100644
--- a/flang/lib/Lower/ComponentPath.cpp
+++ b/flang/lib/Lower/ComponentPath.cpp
@@ -36,7 +36,7 @@ void Fortran::lower::ComponentPath::clear() {
 
 bool Fortran::lower::isRankedArrayAccess(const Fortran::evaluate::ArrayRef &x) {
   for (const Fortran::evaluate::Subscript &sub : x.subscript()) {
-    if (std::visit(
+    if (Fortran::common::visit(
             Fortran::common::visitors{
                 [&](const Fortran::evaluate::Triplet &) { return true; },
                 [&](const Fortran::evaluate::IndirectSubscriptIntegerExpr &e) {
diff --git a/flang/lib/Lower/ConvertArrayConstructor.cpp b/flang/lib/Lower/ConvertArrayConstructor.cpp
index 341fad9a5e43c..3c43cd20eb080 100644
--- a/flang/lib/Lower/ConvertArrayConstructor.cpp
+++ b/flang/lib/Lower/ConvertArrayConstructor.cpp
@@ -438,7 +438,7 @@ class ArrayCtorLoweringStrategy {
 
   void pushValue(mlir::Location loc, fir::FirOpBuilder &builder,
                  hlfir::Entity value) {
-    return std::visit(
+    return Fortran::common::visit(
         [&](auto &impl) { return impl.pushValue(loc, builder, value); },
         implVariant);
   }
@@ -446,7 +446,7 @@ class ArrayCtorLoweringStrategy {
   mlir::Value startImpliedDo(mlir::Location loc, fir::FirOpBuilder &builder,
                              mlir::Value lower, mlir::Value upper,
                              mlir::Value stride) {
-    return std::visit(
+    return Fortran::common::visit(
         [&](auto &impl) {
           return impl.startImpliedDo(loc, builder, lower, upper, stride);
         },
@@ -455,13 +455,13 @@ class ArrayCtorLoweringStrategy {
 
   hlfir::Entity finishArrayCtorLowering(mlir::Location loc,
                                         fir::FirOpBuilder &builder) {
-    return std::visit(
+    return Fortran::common::visit(
         [&](auto &impl) { return impl.finishArrayCtorLowering(loc, builder); },
         implVariant);
   }
 
   void startImpliedDoScope(llvm::StringRef doName, mlir::Value indexValue) {
-    std::visit(
+    Fortran::common::visit(
         [&](auto &impl) {
           return impl.startImpliedDoScope(doName, indexValue);
         },
@@ -469,8 +469,8 @@ class ArrayCtorLoweringStrategy {
   }
 
   void endImpliedDoScope() {
-    std::visit([&](auto &impl) { return impl.endImpliedDoScope(); },
-               implVariant);
+    Fortran::common::visit([&](auto &impl) { return impl.endImpliedDoScope(); },
+                           implVariant);
   }
 
 private:
@@ -612,16 +612,17 @@ ArrayCtorAnalysis::ArrayCtorAnalysis(
         arrayValueListStack.pop_back_val();
     for (const Fortran::evaluate::ArrayConstructorValue<T> &acValue :
          *currentArrayValueList)
-      std::visit(Fortran::common::visitors{
-                     [&](const Fortran::evaluate::ImpliedDo<T> &impledDo) {
-                       arrayValueListStack.push_back(&impledDo.values());
-                       localNumberOfImpliedDo++;
-                     },
-                     [&](const Fortran::evaluate::Expr<T> &expr) {
-                       localNumberOfExpr++;
-                       anyArrayExpr = anyArrayExpr || expr.Rank() > 0;
-                     }},
-                 acValue.u);
+      Fortran::common::visit(
+          Fortran::common::visitors{
+              [&](const Fortran::evaluate::ImpliedDo<T> &impledDo) {
+                arrayValueListStack.push_back(&impledDo.values());
+                localNumberOfImpliedDo++;
+              },
+              [&](const Fortran::evaluate::Expr<T> &expr) {
+                localNumberOfExpr++;
+                anyArrayExpr = anyArrayExpr || expr.Rank() > 0;
+              }},
+          acValue.u);
     anyImpliedDo = anyImpliedDo || localNumberOfImpliedDo > 0;
 
     if (localNumberOfImpliedDo == 0) {
@@ -765,7 +766,7 @@ static void genAcValue(mlir::Location loc,
                                    impliedDoIndexValue);
 
   for (const auto &acValue : impledDo.values())
-    std::visit(
+    Fortran::common::visit(
         [&](const auto &x) {
           genAcValue(loc, converter, x, symMap, stmtCtx, arrayBuilder);
         },
@@ -787,7 +788,7 @@ hlfir::EntityWithAttributes Fortran::lower::ArrayConstructorBuilder<T>::gen(
       loc, converter, arrayCtorExpr, symMap, stmtCtx);
   // Run the array lowering strategy through the ac-values.
   for (const auto &acValue : arrayCtorExpr)
-    std::visit(
+    Fortran::common::visit(
         [&](const auto &x) {
           genAcValue(loc, converter, x, symMap, stmtCtx, arrayBuilder);
         },
diff --git a/flang/lib/Lower/ConvertCall.cpp b/flang/lib/Lower/ConvertCall.cpp
index b1dc41f3ca838..65a2ffbea5dd1 100644
--- a/flang/lib/Lower/ConvertCall.cpp
+++ b/flang/lib/Lower/ConvertCall.cpp
@@ -935,7 +935,8 @@ struct CallCleanUp {
     mlir::Value mustFree;
   };
   void genCleanUp(mlir::Location loc, fir::FirOpBuilder &builder) {
-    std::visit([&](auto &c) { c.genCleanUp(loc, builder); }, cleanUp);
+    Fortran::common::visit([&](auto &c) { c.genCleanUp(loc, builder); },
+                           cleanUp);
   }
   std::variant<CopyIn, ExprAssociate> cleanUp;
 };
diff --git a/flang/lib/Lower/ConvertConstant.cpp b/flang/lib/Lower/ConvertConstant.cpp
index a4ace40a3a1c4..3361817ee27ee 100644
--- a/flang/lib/Lower/ConvertConstant.cpp
+++ b/flang/lib/Lower/ConvertConstant.cpp
@@ -105,7 +105,7 @@ class DenseGlobalBuilder {
                                    const Fortran::lower::SomeExpr &initExpr,
                                    cuf::DataAttributeAttr dataAttr) {
     DenseGlobalBuilder globalBuilder;
-    std::visit(
+    Fortran::common::visit(
         Fortran::common::visitors{
             [&](const Fortran::evaluate::Expr<Fortran::evaluate::SomeLogical> &
                     x) { globalBuilder.tryConvertingToAttributes(builder, x); },
@@ -164,7 +164,7 @@ class DenseGlobalBuilder {
   template <typename SomeCat>
   void tryConvertingToAttributes(fir::FirOpBuilder &builder,
                                  const Fortran::evaluate::Expr<SomeCat> &expr) {
-    std::visit(
+    Fortran::common::visit(
         [&](const auto &x) {
           using TR = Fortran::evaluate::ResultType<decltype(x)>;
           if (const auto *constant =
@@ -796,7 +796,7 @@ static fir::ExtendedValue
 genConstantValue(Fortran::lower::AbstractConverter &converter,
                  mlir::Location loc,
                  const Fortran::lower::SomeExpr &constantExpr) {
-  return std::visit(
+  return Fortran::common::visit(
       [&](const auto &x) -> fir::ExtendedValue {
         using T = std::decay_t<decltype(x)>;
         if constexpr (Fortran::common::HasMember<
@@ -805,7 +805,7 @@ genConstantValue(Fortran::lower::AbstractConverter &converter,
                         Fortran::common::TypeCategory::Derived) {
             return genConstantValue(converter, loc, x);
           } else {
-            return std::visit(
+            return Fortran::common::visit(
                 [&](const auto &preciseKind) {
                   return genConstantValue(converter, loc, preciseKind);
                 },
diff --git a/flang/lib/Lower/ConvertExpr.cpp b/flang/lib/Lower/ConvertExpr.cpp
index 9567685aa3d2e..9937e9d159886 100644
--- a/flang/lib/Lower/ConvertExpr.cpp
+++ b/flang/lib/Lower/ConvertExpr.cpp
@@ -398,8 +398,8 @@ static bool isParenthesizedVariable(const Fortran::evaluate::Expr<T> &expr) {
       return Fortran::evaluate::IsVariable(parentheses->left());
     return false;
   } else {
-    return std::visit([&](const auto &x) { return isParenthesizedVariable(x); },
-                      expr.u);
+    return Fortran::common::visit(
+        [&](const auto &x) { return isParenthesizedVariable(x); }, expr.u);
   }
 }
 
@@ -646,7 +646,7 @@ isOptimizableTranspose(Fortran::evaluate::Expr<T> expr,
   if (!isTransposeOptEnabled(converter))
     return false;
 
-  return std::visit(
+  return Fortran::common::visit(
       [&](const auto &e) { return isOptimizableTranspose(e, converter); },
       expr.u);
 }
@@ -696,7 +696,7 @@ class ScalarExprLowering {
     //    - result of NULL() or NULL(MOLD) intrinsic.
     //    NULL() requires some context to be lowered, so it is not handled
     //    here and must be lowered according to the context where it appears.
-    ExtValue exv = std::visit(
+    ExtValue exv = Fortran::common::visit(
         [&](const auto &x) { return genMutableBoxValueImpl(x); }, expr.u);
     const fir::MutableBoxValue *mutableBox =
         exv.getBoxOf<fir::MutableBoxValue>();
@@ -737,7 +737,7 @@ class ScalarExprLowering {
   template <typename T>
   ExtValue
   genMutableBoxValueImpl(const Fortran::evaluate::Designator<T> &designator) {
-    return std::visit(
+    return Fortran::common::visit(
         Fortran::common::visitors{
             [&](const Fortran::evaluate::SymbolRef &sym) -> ExtValue {
               return converter.getSymbolExtendedValue(*sym, &symMap);
@@ -754,8 +754,8 @@ class ScalarExprLowering {
 
   template <typename T>
   ExtValue genMutableBoxValueImpl(const Fortran::evaluate::Expr<T> &expr) {
-    return std::visit([&](const auto &x) { return genMutableBoxValueImpl(x); },
-                      expr.u);
+    return Fortran::common::visit(
+        [&](const auto &x) { return genMutableBoxValueImpl(x); }, expr.u);
   }
 
   mlir::Location getLoc() { return location; }
@@ -1222,7 +1222,8 @@ class ScalarExprLowering {
 
   ExtValue
   genval(const Fortran::evaluate::Relational<Fortran::evaluate::SomeType> &op) {
-    return std::visit([&](const auto &x) { return genval(x); }, op.u);
+    return Fortran::common::visit([&](const auto &x) { return genval(x); },
+                                  op.u);
   }
 
   template <Fortran::common::TypeCategory TC1, int KIND,
@@ -1341,7 +1342,7 @@ class ScalarExprLowering {
   /// Reference to a substring.
   ExtValue gen(const Fortran::evaluate::Substring &s) {
     // Get base string
-    auto baseString = std::visit(
+    auto baseString = Fortran::common::visit(
         Fortran::common::visitors{
             [&](const Fortran::evaluate::DataRef &x) { return gen(x); },
             [&](const Fortran::evaluate::StaticDataObject::Pointer &p)
@@ -1400,10 +1401,12 @@ class ScalarExprLowering {
   }
 
   ExtValue gen(const Fortran::evaluate::DataRef &dref) {
-    return std::visit([&](const auto &x) { return gen(x); }, dref.u);
+    return Fortran::common::visit([&](const auto &x) { return gen(x); },
+                                  dref.u);
   }
   ExtValue genval(const Fortran::evaluate::DataRef &dref) {
-    return std::visit([&](const auto &x) { return genval(x); }, dref.u);
+    return Fortran::common::visit([&](const auto &x) { return genval(x); },
+                                  dref.u);
   }
 
   // Helper function to turn the Component structure into a list of nested
@@ -1418,7 +1421,7 @@ class ScalarExprLowering {
                     std::list<const Fortran::evaluate::Component *> &list) {
     if (!getLastSym(cmpt).test(Fortran::semantics::Symbol::Flag::ParentComp))
       list.push_front(&cmpt);
-    return std::visit(
+    return Fortran::common::visit(
         Fortran::common::visitors{
             [&](const Fortran::evaluate::Component &x) {
               if (Fortran::semantics::IsAllocatableOrPointer(getLastSym(x)))
@@ -1713,11 +1716,12 @@ class ScalarExprLowering {
 
   template <typename A>
   ExtValue gen(const Fortran::evaluate::Designator<A> &des) {
-    return std::visit([&](const auto &x) { return gen(x); }, des.u);
+    return Fortran::common::visit([&](const auto &x) { return gen(x); }, des.u);
   }
   template <typename A>
   ExtValue genval(const Fortran::evaluate::Designator<A> &des) {
-    return std::visit([&](const auto &x) { return genval(x); }, des.u);
+    return Fortran::common::visit([&](const auto &x) { return genval(x); },
+                                  des.u);
   }
 
   mlir::Type genType(const Fortran::evaluate::DynamicType &dt) {
@@ -2900,8 +2904,8 @@ class ScalarExprLowering {
   }
   template <typename T>
   bool isTransformationalRef(Fortran::evaluate::Expr<T> expr) {
-    return std::visit([&](const auto &e) { return isTransformationalRef(e); },
-                      expr.u);
+    return Fortran::common::visit(
+        [&](const auto &e) { return isTransformationalRef(e); }, expr.u);
   }
 
   template <typename A>
@@ -2914,11 +2918,13 @@ class ScalarExprLowering {
   /// value, so it may be possible to avoid making a temporary.
   template <typename A>
   ExtValue asArrayArg(const Fortran::evaluate::Expr<A> &x) {
-    return std::visit([&](const auto &e) { return asArrayArg(e, x); }, x.u);
+    return Fortran::common::visit(
+        [&](const auto &e) { return asArrayArg(e, x); }, x.u);
   }
   template <typename A, typename B>
   ExtValue asArrayArg(const Fortran::evaluate::Expr<A> &x, const B &y) {
-    return std::visit([&](const auto &e) { return asArrayArg(e, y); }, x.u);
+    return Fortran::common::visit(
+        [&](const auto &e) { return asArrayArg(e, y); }, x.u);
   }
   template <typename A, typename B>
   ExtValue asArrayArg(const Fortran::evaluate::Designator<A> &, const B &x) {
@@ -2956,7 +2962,8 @@ class ScalarExprLowering {
     if (isScalar(x) ||
         Fortran::evaluate::UnwrapWholeSymbolOrComponentDataRef(x) ||
         (isTransformationalRef(x) && !isOptimizableTranspose(x, converter)))
-      return std::visit([&](const auto &e) { return genref(e); }, x.u);
+      return Fortran::common::visit([&](const auto &e) { return genref(e); },
+                                    x.u);
     if (useBoxArg)
       return asArrayArg(x);
     return asArray(x);
@@ -2967,7 +2974,8 @@ class ScalarExprLowering {
       return val;
     if (isScalar(x) || Fortran::evaluate::UnwrapWholeSymbolDataRef(x) ||
         inInitializer)
-      return std::visit([&](const auto &e) { return genval(e); }, x.u);
+      return Fortran::common::visit([&](const auto &e) { return genval(e); },
+                                    x.u);
     return asArray(x);
   }
 
@@ -2976,7 +2984,8 @@ class ScalarExprLowering {
                       Fortran::common::TypeCategory::Logical, KIND>> &exp) {
     if (mlir::Value val = getIfOverridenExpr(exp))
       return val;
-    return std::visit([&](const auto &e) { return genval(e); }, exp.u);
+    return Fortran::common::visit([&](const auto &e) { return genval(e); },
+                                  exp.u);
   }
 
   using RefSet =
@@ -3462,7 +3471,7 @@ class ArrayExprLowering {
 
   ExtValue lowerBoxedArrayExpr(const Fortran::lower::SomeExpr &exp) {
     PushSemantics(ConstituentSemantics::BoxValue);
-    return std::visit(
+    return Fortran::common::visit(
         [&](const auto &e) {
           auto f = genarr(e);
           ExtValue exv = f(IterationSpace{});
@@ -3824,28 +3833,29 @@ class ArrayExprLowering {
         fir::factory::getExtents(loc, builder, exv);
     mlir::Value one = builder.createIntegerConstant(loc, idxTy, 1);
     for (auto ss : llvm::enumerate(x.subscript())) {
-      std::visit(Fortran::common::visitors{
-                     [&](const Fortran::evaluate::Triplet &trip) {
-                       // For a subscript of triple notation, we compute the
-                       // range of this dimension of the iteration space.
-                       auto lo = [&]() {
-                         if (auto optLo = trip.lower())
-                           return fir::getBase(asScalar(*optLo));
-                         return getLBound(exv, ss.index(), one);
-                       }();
-                       auto hi = [&]() {
-                         if (auto optHi = trip.upper())
-                           return fir::getBase(asScalar(*optHi));
-                         return getUBound(exv, ss.index(), one);
-                       }();
-                       auto step = builder.createConvert(
-                           loc, idxTy, fir::getBase(asScalar(trip.stride())));
-                       auto extent = builder.genExtentFromTriplet(loc, lo, hi,
-                                                                  step, idxTy);
-                       destShape.push_back(extent);
-                     },
-                     [&](auto) {}},
-                 ss.value().u);
+      Fortran::common::visit(
+          Fortran::common::visitors{
+              [&](const Fortran::evaluate::Triplet &trip) {
+                // For a subscript of triple notation, we compute the
+                // range of this dimension of the iteration space.
+                auto lo = [&]() {
+                  if (auto optLo = trip.lower())
+                    return fir::getBase(asScalar(*optLo));
+                  return getLBound(exv, ss.index(), one);
+                }();
+                auto hi = [&]() {
+                  if (auto optHi = trip.upper())
+                    return fir::getBase(asScalar(*optHi));
+                  return getUBound(exv, ss.index(), one);
+                }();
+                auto step = builder.createConvert(
+                    loc, idxTy, fir::getBase(asScalar(trip.stride())));
+                auto extent =
+                    builder.genExtentFromTriplet(loc, lo, hi, step, idxTy);
+                destShape.push_back(extent);
+              },
+              [&](auto) {}},
+          ss.value().u);
     }
     return true;
   }
@@ -3855,8 +3865,8 @@ class ArrayExprLowering {
     return genShapeFromDataRef(x.GetComponent());
   }
   bool genShapeFromDataRef(const Fortran::evaluate::DataRef &x) {
-    return std::visit([&](const auto &v) { return genShapeFromDataRef(v); },
-                      x.u);
+    return Fortran::common::visit(
+        [&](const auto &v) { return genShapeFromDataRef(v); }, x.u);
   }
 
   /// When in an explicit space, the ranked component must be evaluated to
@@ -3890,7 +3900,7 @@ class ArrayExprLowering {
       TODO(getLoc(),
            "polymorphic array expression lowering with vector subscript");
 
-    return std::visit(
+    return Fortran::common::visit(
         [&](const auto &e) { return lowerArrayExpression(genarr(e), resTy); },
         exp.u);
   }
@@ -5012,10 +5022,12 @@ class ArrayExprLowering {
     LLVM_DEBUG(Fortran::lower::DumpEvaluateExpr::dump(llvm::dbgs(), x));
     if (isArray(x) || (explicitSpaceIsActive() && isLeftHandSide()) ||
         isElementalProcWithArrayArgs(x))
-      return std::visit([&](const auto &e) { return genarr(e); }, x.u);
+      return Fortran::common::visit([&](const auto &e) { return genarr(e); },
+                                    x.u);
     if (explicitSpaceIsActive()) {
       assert(!isArray(x) && !isLeftHandSide());
-      auto cc = std::visit([&](const auto &e) { return genarr(e); }, x.u);
+      auto cc =
+          Fortran::common::visit([&](const auto &e) { return genarr(e); }, x.u);
       auto result = cc(IterationSpace{});
       return [=](IterSpace) { return result; };
     }
@@ -5289,7 +5301,8 @@ class ArrayExprLowering {
   static Fortran::lower::SomeExpr
   ignoreEvConvert(const Fortran::evaluate::Expr<Fortran::evaluate::Type<
                       Fortran::common::TypeCategory::Integer, 8>> &x) {
-    return std::visit([&](const auto &v) { return ignoreEvConvert(v); }, x.u);
+    return Fortran::common::visit(
+        [&](const auto &v) { return ignoreEvConvert(v); }, x.u);
   }
   template <Fortran::common::TypeCategory FROM>
   static Fortran::lower::SomeExpr ignoreEvConvert(
@@ -5310,8 +5323,8 @@ class ArrayExprLowering {
   template <typename A>
   static const Fortran::semantics::Symbol *
   extractSubscriptSymbol(const Fortran::evaluate::Expr<A> &x) {
-    return std::visit([&](const auto &v) { return extractSubscriptSymbol(v); },
-                      x.u);
+    return Fortran::common::visit(
+        [&](const auto &v) { return extractSubscriptSymbol(v); }, x.u);
   }
   template <typename A>
   static const Fortran::semantics::Symbol *
@@ -5420,7 +5433,7 @@ class ArrayExprLowering {
     std::size_t shapeIndex = 0;
     for (auto sub : llvm::enumerate(x.subscript())) {
       const std::size_t subsIndex = sub.index();
-      std::visit(
+      Fortran::common::visit(
           Fortran::common::visitors{
               [&](const Fortran::evaluate::Triplet &t) {
                 mlir::Value lowerBound;
@@ -6034,8 +6047,8 @@ class ArrayExprLowering {
   /// Substrings (see 9.4.1)
   CC genarr(const Fortran::evaluate::Substring &x, ComponentPath &components) {
     components.substring = &x;
-    return std::visit([&](const auto &v) { return genarr(v, components); },
-                      x.parent());
+    return Fortran::common::visit(
+        [&](const auto &v) { return genarr(v, components); }, x.parent());
   }
 
   template <typename T>
@@ -6333,7 +6346,7 @@ class ArrayExprLowering {
     stmtCtx.pushScope();
     std::optional<mlir::Value> charLen;
     for (const Fortran::evaluate::ArrayConstructorValue<A> &acv : x.values()) {
-      auto [exv, copyNeeded] = std::visit(
+      auto [exv, copyNeeded] = Fortran::common::visit(
           [&](const auto &v) {
             return genArrayCtorInitializer(v, resTy, mem, buffPos, buffSize,
                                            stmtCtx);
@@ -6417,7 +6430,7 @@ class ArrayExprLowering {
     // Populate the buffer with the elements, growing as necessary.
     std::optional<mlir::Value> charLen;
     for (const auto &expr : x) {
-      auto [exv, copyNeeded] = std::visit(
+      auto [exv, copyNeeded] = Fortran::common::visit(
           [&](const auto &e) {
             return genArrayCtorInitializer(e, resTy, mem, buffPos, buffSize,
                                            stmtCtx);
@@ -6582,22 +6595,24 @@ class ArrayExprLowering {
   }
   CC genarr(
       const Fortran::evaluate::Relational<Fortran::evaluate::SomeType> &r) {
-    return std::visit([&](const auto &x) { return genarr(x); }, r.u);
+    return Fortran::common::visit([&](const auto &x) { return genarr(x); },
+                                  r.u);
   }
 
   template <typename A>
   CC genarr(const Fortran::evaluate::Designator<A> &des) {
     ComponentPath components(des.Rank() > 0);
-    return std::visit([&](const auto &x) { return genarr(x, components); },
-                      des.u);
+    return Fortran::common::visit(
+        [&](const auto &x) { return genarr(x, components); }, des.u);
   }
 
   /// Is the path component rank > 0?
   static bool ranked(const PathComponent &x) {
-    return std::visit(Fortran::common::visitors{
-                          [](const ImplicitSubscripts &) { return false; },
-                          [](const auto *v) { return v->Rank() > 0; }},
-                      x);
+    return Fortran::common::visit(
+        Fortran::common::visitors{
+            [](const ImplicitSubscripts &) { return false; },
+            [](const auto *v) { return v->Rank() > 0; }},
+        x);
   }
 
   void extendComponent(Fortran::lower::ComponentPath &component,
@@ -6653,7 +6668,7 @@ class ArrayExprLowering {
                       : nextPathSemantics());
     unsigned index = 0;
     for (const auto &v : llvm::reverse(revPath)) {
-      std::visit(
+      Fortran::common::visit(
           Fortran::common::visitors{
               [&](const ImplicitSubscripts &) {
                 prefix = false;
@@ -6678,7 +6693,7 @@ class ArrayExprLowering {
                   unsigned ssIndex = 0u;
                   llvm::SmallVector<mlir::Value> componentsToAdd;
                   for (const auto &ss : x->subscript()) {
-                    std::visit(
+                    Fortran::common::visit(
                         Fortran::common::visitors{
                             [&](const Fortran::evaluate::
                                     IndirectSubscriptIntegerExpr &ie) {
@@ -7099,8 +7114,8 @@ class ArrayExprLowering {
   }
 
   CC genarr(const Fortran::evaluate::DataRef &x, ComponentPath &components) {
-    return std::visit([&](const auto &v) { return genarr(v, components); },
-                      x.u);
+    return Fortran::common::visit(
+        [&](const auto &v) { return genarr(v, components); }, x.u);
   }
 
   bool pathIsEmpty(const ComponentPath &components) {
@@ -7575,13 +7590,13 @@ void Fortran::lower::createArrayLoads(
   };
   if (esp.lhsBases[counter]) {
     auto &base = *esp.lhsBases[counter];
-    auto load = std::visit(genLoad, base);
+    auto load = Fortran::common::visit(genLoad, base);
     esp.initialArgs.push_back(load);
     esp.resetInnerArgs();
     esp.bindLoad(base, load);
   }
   for (const auto &base : esp.rhsBases[counter])
-    esp.bindLoad(base, std::visit(genLoad, base));
+    esp.bindLoad(base, Fortran::common::visit(genLoad, base));
 }
 
 void Fortran::lower::createArrayMergeStores(
diff --git a/flang/lib/Lower/ConvertExprToHLFIR.cpp b/flang/lib/Lower/ConvertExprToHLFIR.cpp
index 9035856eabfe7..1933f38f735b5 100644
--- a/flang/lib/Lower/ConvertExprToHLFIR.cpp
+++ b/flang/lib/Lower/ConvertExprToHLFIR.cpp
@@ -75,7 +75,7 @@ class HlfirDesignatorBuilder {
   hlfir::EntityWithAttributes
   gen(const CharacterDesignators &designatorVariant,
       bool vectorSubscriptDesignatorToValue = true) {
-    return std::visit(
+    return Fortran::common::visit(
         [&](const auto &x) -> hlfir::EntityWithAttributes {
           return genLeafPartRef(x, vectorSubscriptDesignatorToValue);
         },
@@ -88,7 +88,7 @@ class HlfirDesignatorBuilder {
   hlfir::EntityWithAttributes
   gen(const RealDesignators &designatorVariant,
       bool vectorSubscriptDesignatorToValue = true) {
-    return std::visit(
+    return Fortran::common::visit(
         [&](const auto &x) -> hlfir::EntityWithAttributes {
           return genLeafPartRef(x, vectorSubscriptDesignatorToValue);
         },
@@ -101,7 +101,7 @@ class HlfirDesignatorBuilder {
   hlfir::EntityWithAttributes
   gen(const OtherDesignators &designatorVariant,
       bool vectorSubscriptDesignatorToValue = true) {
-    return std::visit(
+    return Fortran::common::visit(
         [&](const auto &x) -> hlfir::EntityWithAttributes {
           return genLeafPartRef(x, vectorSubscriptDesignatorToValue);
         },
@@ -169,7 +169,7 @@ class HlfirDesignatorBuilder {
 
   fir::FortranVariableOpInterface
   gen(const Fortran::evaluate::DataRef &dataRef) {
-    return std::visit(
+    return Fortran::common::visit(
         Fortran::common::visitors{[&](const auto &x) { return gen(x); }},
         dataRef.u);
   }
@@ -364,7 +364,7 @@ class HlfirDesignatorBuilder {
   fir::FortranVariableOpInterface
   gen(const Fortran::evaluate::Substring &substring) {
     PartInfo partInfo;
-    mlir::Type baseStringType = std::visit(
+    mlir::Type baseStringType = Fortran::common::visit(
         [&](const auto &x) { return visit(x, partInfo); }, substring.parent());
     assert(partInfo.typeParams.size() == 1 && "expect base string length");
     // Compute the substring lower and upper bound.
@@ -436,8 +436,8 @@ class HlfirDesignatorBuilder {
 
   mlir::Type visit(const Fortran::evaluate::DataRef &dataRef,
                    PartInfo &partInfo) {
-    return std::visit([&](const auto &x) { return visit(x, partInfo); },
-                      dataRef.u);
+    return Fortran::common::visit(
+        [&](const auto &x) { return visit(x, partInfo); }, dataRef.u);
   }
 
   mlir::Type
@@ -892,7 +892,7 @@ hlfir::EntityWithAttributes HlfirDesignatorBuilder::genDesignatorExpr(
     bool vectorSubscriptDesignatorToValue) {
   // Expr<SomeType> plumbing to unwrap Designator<T> and call
   // gen(Designator<T>.u).
-  return std::visit(
+  return Fortran::common::visit(
       [&](const auto &x) -> hlfir::EntityWithAttributes {
         using T = std::decay_t<decltype(x)>;
         if constexpr (Fortran::common::HasMember<
@@ -904,7 +904,7 @@ hlfir::EntityWithAttributes HlfirDesignatorBuilder::genDesignatorExpr(
                            .u,
                        vectorSubscriptDesignatorToValue);
           } else {
-            return std::visit(
+            return Fortran::common::visit(
                 [&](const auto &preciseKind) {
                   using TK =
                       typename std::decay_t<decltype(preciseKind)>::Result;
@@ -1426,7 +1426,8 @@ class HlfirBuilder {
           return hlfir::EntityWithAttributes{match->second};
       }
     }
-    return std::visit([&](const auto &x) { return gen(x); }, expr.u);
+    return Fortran::common::visit([&](const auto &x) { return gen(x); },
+                                  expr.u);
   }
 
 private:
@@ -1594,7 +1595,7 @@ class HlfirBuilder {
 
   hlfir::EntityWithAttributes
   gen(const Fortran::evaluate::Relational<Fortran::evaluate::SomeType> &op) {
-    return std::visit([&](const auto &x) { return gen(x); }, op.u);
+    return Fortran::common::visit([&](const auto &x) { return gen(x); }, op.u);
   }
 
   hlfir::EntityWithAttributes gen(const Fortran::evaluate::TypeParamInquiry &) {
diff --git a/flang/lib/Lower/ConvertType.cpp b/flang/lib/Lower/ConvertType.cpp
index e6557d7f0b767..f64f6c93541a3 100644
--- a/flang/lib/Lower/ConvertType.cpp
+++ b/flang/lib/Lower/ConvertType.cpp
@@ -212,7 +212,7 @@ struct TypeBuilderImpl {
   }
 
   mlir::Type genTypelessExprType(const Fortran::lower::SomeExpr &expr) {
-    return std::visit(
+    return Fortran::common::visit(
         Fortran::common::visitors{
             [&](const Fortran::evaluate::BOZLiteralConstant &) -> mlir::Type {
               return mlir::NoneType::get(context);
diff --git a/flang/lib/Lower/DirectivesCommon.h b/flang/lib/Lower/DirectivesCommon.h
index 48b090f6d2dbe..f0af5f982c14f 100644
--- a/flang/lib/Lower/DirectivesCommon.h
+++ b/flang/lib/Lower/DirectivesCommon.h
@@ -836,7 +836,7 @@ struct PeelConvert {
   static Fortran::semantics::MaybeExpr visit_with_category(
       const Fortran::evaluate::Expr<Fortran::evaluate::Type<Category, Kind>>
           &expr) {
-    return std::visit(
+    return Fortran::common::visit(
         [](auto &&s) { return visit_with_category<Category, Kind>(s); },
         expr.u);
   }
@@ -859,12 +859,12 @@ struct PeelConvert {
   static Fortran::semantics::MaybeExpr
   visit(const Fortran::evaluate::Expr<Fortran::evaluate::SomeKind<Category>>
             &expr) {
-    return std::visit([](auto &&s) { return visit_with_category<Category>(s); },
-                      expr.u);
+    return Fortran::common::visit(
+        [](auto &&s) { return visit_with_category<Category>(s); }, expr.u);
   }
   static Fortran::semantics::MaybeExpr
   visit(const Fortran::evaluate::Expr<Fortran::evaluate::SomeType> &expr) {
-    return std::visit([](auto &&s) { return visit(s); }, expr.u);
+    return Fortran::common::visit([](auto &&s) { return visit(s); }, expr.u);
   }
   template <typename T> //
   static Fortran::semantics::MaybeExpr visit(const T &) {
diff --git a/flang/lib/Lower/IO.cpp b/flang/lib/Lower/IO.cpp
index 97ef991cb3990..9e98b230b676f 100644
--- a/flang/lib/Lower/IO.cpp
+++ b/flang/lib/Lower/IO.cpp
@@ -1388,7 +1388,7 @@ static void threadSpecs(Fortran::lower::AbstractConverter &converter,
   fir::FirOpBuilder &builder = converter.getFirOpBuilder();
   for (const auto &spec : specList) {
     makeNextConditionalOn(builder, loc, checkResult, ok);
-    ok = std::visit(
+    ok = Fortran::common::visit(
         Fortran::common::visitors{
             [&](const Fortran::parser::IoControlSpec::Size &x) -> mlir::Value {
               // Size must be queried after the related READ runtime calls, not
@@ -1425,7 +1425,7 @@ ConditionSpecInfo lowerErrorSpec(Fortran::lower::AbstractConverter &converter,
   ConditionSpecInfo csi;
   const Fortran::lower::SomeExpr *ioMsgExpr = nullptr;
   for (const auto &spec : specList) {
-    std::visit(
+    Fortran::common::visit(
         Fortran::common::visitors{
             [&](const Fortran::parser::StatVariable &var) {
               csi.ioStatExpr = Fortran::semantics::GetExpr(var);
@@ -2397,7 +2397,7 @@ lowerIdExpr(Fortran::lower::AbstractConverter &converter, mlir::Location loc,
             const std::list<Fortran::parser::InquireSpec> &ispecs,
             Fortran::lower::StatementContext &stmtCtx) {
   for (const Fortran::parser::InquireSpec &spec : ispecs)
-    if (mlir::Value v = std::visit(
+    if (mlir::Value v = Fortran::common::visit(
             Fortran::common::visitors{
                 [&](const Fortran::parser::IdExpr &idExpr) {
                   return fir::getBase(converter.genExprValue(
@@ -2419,11 +2419,11 @@ static void threadInquire(Fortran::lower::AbstractConverter &converter,
   mlir::Value idExpr = lowerIdExpr(converter, loc, ispecs, stmtCtx);
   for (const Fortran::parser::InquireSpec &spec : ispecs) {
     makeNextConditionalOn(builder, loc, checkResult, ok);
-    ok = std::visit(Fortran::common::visitors{[&](const auto &x) {
-                      return genInquireSpec(converter, loc, cookie, idExpr, x,
-                                            stmtCtx);
-                    }},
-                    spec.u);
+    ok = Fortran::common::visit(Fortran::common::visitors{[&](const auto &x) {
+                                  return genInquireSpec(converter, loc, cookie,
+                                                        idExpr, x, stmtCtx);
+                                }},
+                                spec.u);
   }
 }
 
diff --git a/flang/lib/Lower/IterationSpace.cpp b/flang/lib/Lower/IterationSpace.cpp
index 6bf310b5cfb76..9303536403837 100644
--- a/flang/lib/Lower/IterationSpace.cpp
+++ b/flang/lib/Lower/IterationSpace.cpp
@@ -21,14 +21,14 @@
 
 unsigned Fortran::lower::getHashValue(
     const Fortran::lower::ExplicitIterSpace::ArrayBases &x) {
-  return std::visit(
+  return Fortran::common::visit(
       [&](const auto *p) { return HashEvaluateExpr::getHashValue(*p); }, x);
 }
 
 bool Fortran::lower::isEqual(
     const Fortran::lower::ExplicitIterSpace::ArrayBases &x,
     const Fortran::lower::ExplicitIterSpace::ArrayBases &y) {
-  return std::visit(
+  return Fortran::common::visit(
       Fortran::common::visitors{
           // Fortran::semantics::Symbol * are the exception here. These pointers
           // have identity; if two Symbol * values are the same (different) then
@@ -169,7 +169,7 @@ class ArrayBaseFinder {
   }
   template <typename... A>
   RT find(const std::variant<A...> &u) {
-    return std::visit([&](const auto &v) { return find(v); }, u);
+    return Fortran::common::visit([&](const auto &v) { return find(v); }, u);
   }
   template <typename A>
   RT find(const std::vector<A> &x) {
@@ -361,22 +361,23 @@ llvm::raw_ostream &
 Fortran::lower::operator<<(llvm::raw_ostream &s,
                            const Fortran::lower::ExplicitIterSpace &e) {
   auto dump = [&](const auto &u) {
-    std::visit(Fortran::common::visitors{
-                   [&](const Fortran::semantics::Symbol *y) {
-                     s << "  " << *y << '\n';
-                   },
-                   [&](const Fortran::evaluate::ArrayRef *y) {
-                     s << "  ";
-                     if (y->base().IsSymbol())
-                       s << y->base().GetFirstSymbol();
-                     else
-                       s << y->base().GetComponent().GetLastSymbol();
-                     s << '\n';
-                   },
-                   [&](const Fortran::evaluate::Component *y) {
-                     s << "  " << y->GetLastSymbol() << '\n';
-                   }},
-               u);
+    Fortran::common::visit(
+        Fortran::common::visitors{
+            [&](const Fortran::semantics::Symbol *y) {
+              s << "  " << *y << '\n';
+            },
+            [&](const Fortran::evaluate::ArrayRef *y) {
+              s << "  ";
+              if (y->base().IsSymbol())
+                s << y->base().GetFirstSymbol();
+              else
+                s << y->base().GetComponent().GetLastSymbol();
+              s << '\n';
+            },
+            [&](const Fortran::evaluate::Component *y) {
+              s << "  " << y->GetLastSymbol() << '\n';
+            }},
+        u);
   };
   s << "LHS bases:\n";
   for (const std::optional<Fortran::lower::ExplicitIterSpace::ArrayBases> &u :
diff --git a/flang/lib/Lower/Mangler.cpp b/flang/lib/Lower/Mangler.cpp
index 9a33be318a27d..878ba6dea49b6 100644
--- a/flang/lib/Lower/Mangler.cpp
+++ b/flang/lib/Lower/Mangler.cpp
@@ -110,7 +110,7 @@ std::string Fortran::lower::mangle::mangleName(
     return fir::NameUniquer::doVariable(modules, procs, blockId, symbolName);
   };
 
-  return std::visit(
+  return Fortran::common::visit(
       Fortran::common::visitors{
           [&](const Fortran::semantics::MainProgramDetails &) {
             return fir::NameUniquer::doProgramEntry().str();
diff --git a/flang/lib/Lower/OpenACC.cpp b/flang/lib/Lower/OpenACC.cpp
index 4f5da8fb70eba..166fa686cd883 100644
--- a/flang/lib/Lower/OpenACC.cpp
+++ b/flang/lib/Lower/OpenACC.cpp
@@ -46,14 +46,15 @@ static mlir::Location
 genOperandLocation(Fortran::lower::AbstractConverter &converter,
                    const Fortran::parser::AccObject &accObject) {
   mlir::Location loc = converter.genUnknownLocation();
-  std::visit(Fortran::common::visitors{
-                 [&](const Fortran::parser::Designator &designator) {
-                   loc = converter.genLocation(designator.source);
-                 },
-                 [&](const Fortran::parser::Name &name) {
-                   loc = converter.genLocation(name.source);
-                 }},
-             accObject.u);
+  Fortran::common::visit(
+      Fortran::common::visitors{
+          [&](const Fortran::parser::Designator &designator) {
+            loc = converter.genLocation(designator.source);
+          },
+          [&](const Fortran::parser::Name &name) {
+            loc = converter.genLocation(name.source);
+          }},
+      accObject.u);
   return loc;
 }
 
@@ -297,8 +298,8 @@ genDataOperandOperations(const Fortran::parser::AccObjectList &objectList,
     std::stringstream asFortran;
     mlir::Location operandLocation = genOperandLocation(converter, accObject);
     Fortran::semantics::Symbol &symbol = getSymbolFromAccObject(accObject);
-    Fortran::semantics::MaybeExpr designator =
-        std::visit([&](auto &&s) { return ea.Analyze(s); }, accObject.u);
+    Fortran::semantics::MaybeExpr designator = Fortran::common::visit(
+        [&](auto &&s) { return ea.Analyze(s); }, accObject.u);
     Fortran::lower::AddrAndBoundsInfo info =
         Fortran::lower::gatherDataOperandAddrAndBounds<
             mlir::acc::DataBoundsOp, mlir::acc::DataBoundsType>(
@@ -335,8 +336,8 @@ static void genDeclareDataOperandOperations(
     std::stringstream asFortran;
     mlir::Location operandLocation = genOperandLocation(converter, accObject);
     Fortran::semantics::Symbol &symbol = getSymbolFromAccObject(accObject);
-    Fortran::semantics::MaybeExpr designator =
-        std::visit([&](auto &&s) { return ea.Analyze(s); }, accObject.u);
+    Fortran::semantics::MaybeExpr designator = Fortran::common::visit(
+        [&](auto &&s) { return ea.Analyze(s); }, accObject.u);
     Fortran::lower::AddrAndBoundsInfo info =
         Fortran::lower::gatherDataOperandAddrAndBounds<
             mlir::acc::DataBoundsOp, mlir::acc::DataBoundsType>(
@@ -790,8 +791,8 @@ genPrivatizations(const Fortran::parser::AccObjectList &objectList,
     std::stringstream asFortran;
     mlir::Location operandLocation = genOperandLocation(converter, accObject);
     Fortran::semantics::Symbol &symbol = getSymbolFromAccObject(accObject);
-    Fortran::semantics::MaybeExpr designator =
-        std::visit([&](auto &&s) { return ea.Analyze(s); }, accObject.u);
+    Fortran::semantics::MaybeExpr designator = Fortran::common::visit(
+        [&](auto &&s) { return ea.Analyze(s); }, accObject.u);
     Fortran::lower::AddrAndBoundsInfo info =
         Fortran::lower::gatherDataOperandAddrAndBounds<
             mlir::acc::DataBoundsOp, mlir::acc::DataBoundsType>(
@@ -1364,8 +1365,8 @@ genReductions(const Fortran::parser::AccObjectListWithReduction &objectList,
     std::stringstream asFortran;
     mlir::Location operandLocation = genOperandLocation(converter, accObject);
     Fortran::semantics::Symbol &symbol = getSymbolFromAccObject(accObject);
-    Fortran::semantics::MaybeExpr designator =
-        std::visit([&](auto &&s) { return ea.Analyze(s); }, accObject.u);
+    Fortran::semantics::MaybeExpr designator = Fortran::common::visit(
+        [&](auto &&s) { return ea.Analyze(s); }, accObject.u);
     Fortran::lower::AddrAndBoundsInfo info =
         Fortran::lower::gatherDataOperandAddrAndBounds<
             mlir::acc::DataBoundsOp, mlir::acc::DataBoundsType>(
@@ -3414,7 +3415,7 @@ static void genGlobalCtors(Fortran::lower::AbstractConverter &converter,
   fir::FirOpBuilder &builder = converter.getFirOpBuilder();
   for (const auto &accObject : accObjectList.v) {
     mlir::Location operandLocation = genOperandLocation(converter, accObject);
-    std::visit(
+    Fortran::common::visit(
         Fortran::common::visitors{
             [&](const Fortran::parser::Designator &designator) {
               if (const auto *name =
@@ -3993,7 +3994,7 @@ genACC(Fortran::lower::AbstractConverter &converter,
        const Fortran::parser::OpenACCAtomicConstruct &atomicConstruct) {
 
   mlir::Location loc = converter.genLocation(atomicConstruct.source);
-  std::visit(
+  Fortran::common::visit(
       Fortran::common::visitors{
           [&](const Fortran::parser::AccAtomicRead &atomicRead) {
             Fortran::lower::genOmpAccAtomicRead<Fortran::parser::AccAtomicRead,
@@ -4061,7 +4062,7 @@ mlir::Value Fortran::lower::genOpenACCConstruct(
     const Fortran::parser::OpenACCConstruct &accConstruct) {
 
   mlir::Value exitCond;
-  std::visit(
+  Fortran::common::visit(
       common::visitors{
           [&](const Fortran::parser::OpenACCBlockConstruct &blockConstruct) {
             genACC(converter, semanticsContext, eval, blockConstruct);
@@ -4101,7 +4102,7 @@ void Fortran::lower::genOpenACCDeclarativeConstruct(
     const Fortran::parser::OpenACCDeclarativeConstruct &accDeclConstruct,
     Fortran::lower::AccRoutineInfoMappingList &accRoutineInfos) {
 
-  std::visit(
+  Fortran::common::visit(
       common::visitors{
           [&](const Fortran::parser::OpenACCStandaloneDeclarativeConstruct
                   &standaloneDeclarativeConstruct) {
diff --git a/flang/lib/Lower/OpenMP/Clauses.cpp b/flang/lib/Lower/OpenMP/Clauses.cpp
index 6bae62cd85860..c355b63deff8a 100644
--- a/flang/lib/Lower/OpenMP/Clauses.cpp
+++ b/flang/lib/Lower/OpenMP/Clauses.cpp
@@ -38,8 +38,8 @@ llvm::omp::Clause getClauseIdForClass(C &&) {
 } // namespace detail
 
 static llvm::omp::Clause getClauseId(const Fortran::parser::OmpClause &clause) {
-  return std::visit([](auto &&s) { return detail::getClauseIdForClass(s); },
-                    clause.u);
+  return Fortran::common::visit(
+      [](auto &&s) { return detail::getClauseIdForClass(s); }, clause.u);
 }
 
 namespace Fortran::lower::omp {
@@ -83,7 +83,7 @@ struct SymbolAndDesignatorExtractor {
 
   template <typename T>
   static SymbolWithDesignator visit(const evaluate::Expr<T> &e) {
-    return std::visit([](auto &&s) { return visit(s); }, e.u);
+    return Fortran::common::visit([](auto &&s) { return visit(s); }, e.u);
   }
 
   static void verify(const SymbolWithDesignator &sd) {
@@ -112,7 +112,7 @@ struct SymbolAndDesignatorExtractor {
 SymbolWithDesignator getSymbolAndDesignator(const MaybeExpr &expr) {
   if (!expr)
     return SymbolWithDesignator{};
-  return std::visit(
+  return Fortran::common::visit(
       [](auto &&s) { return SymbolAndDesignatorExtractor::visit(s); }, expr->u);
 }
 
@@ -278,7 +278,7 @@ DefinedOperator makeDefinedOperator(const parser::DefinedOperator &inp,
       // clang-format on
   );
 
-  return std::visit(
+  return Fortran::common::visit(
       common::visitors{
           [&](const parser::DefinedOpName &s) {
             return DefinedOperator{
@@ -294,7 +294,7 @@ DefinedOperator makeDefinedOperator(const parser::DefinedOperator &inp,
 ProcedureDesignator
 makeProcedureDesignator(const parser::ProcedureDesignator &inp,
                         semantics::SemanticsContext &semaCtx) {
-  return ProcedureDesignator{std::visit(
+  return ProcedureDesignator{Fortran::common::visit(
       common::visitors{
           [&](const parser::Name &t) { return makeObject(t, semaCtx); },
           [&](const parser::ProcComponentRef &t) {
@@ -306,7 +306,7 @@ makeProcedureDesignator(const parser::ProcedureDesignator &inp,
 
 ReductionOperator makeReductionOperator(const parser::OmpReductionOperator &inp,
                                         semantics::SemanticsContext &semaCtx) {
-  return std::visit(
+  return Fortran::common::visit(
       common::visitors{
           [&](const parser::DefinedOperator &s) {
             return ReductionOperator{makeDefinedOperator(s, semaCtx)};
@@ -366,7 +366,7 @@ Allocate make(const parser::OmpClause::Allocate &inp,
 
   using Tuple = decltype(Allocate::t);
 
-  return Allocate{std::visit(
+  return Allocate{Fortran::common::visit(
       common::visitors{
           // simple-modifier
           [&](const wrapped::AllocateModifier::Allocator &v) -> Tuple {
@@ -531,7 +531,7 @@ Depend make(const parser::OmpClause::Depend &inp,
       // clang-format on
   );
 
-  return Depend{std::visit( //
+  return Depend{Fortran::common::visit( //
       common::visitors{
           // Doacross
           [&](const wrapped::Source &s) -> Variant {
@@ -793,7 +793,7 @@ Linear make(const parser::OmpClause::Linear &inp,
 
   using Tuple = decltype(Linear::t);
 
-  return Linear{std::visit(
+  return Linear{Fortran::common::visit(
       common::visitors{
           [&](const wrapped::WithModifier &s) -> Tuple {
             return {
@@ -949,7 +949,7 @@ Order make(const parser::OmpClause::Order &inp,
   auto &t1 = std::get<wrapped::Type>(inp.v.t);
 
   auto convert3 = [&](const parser::OmpOrderModifier &s) {
-    return std::visit(
+    return Fortran::common::visit(
         [&](parser::OmpOrderModifier::Kind k) { return convert1(k); }, s.u);
   };
   return Order{
@@ -1212,7 +1212,7 @@ UsesAllocators make(const parser::OmpClause::UsesAllocators &inp,
 
 Clause makeClause(const parser::OmpClause &cls,
                   semantics::SemanticsContext &semaCtx) {
-  return std::visit(
+  return Fortran::common::visit(
       [&](auto &&s) {
         return makeClause(getClauseId(cls), clause::make(s, semaCtx),
                           cls.source);
diff --git a/flang/lib/Lower/OpenMP/OpenMP.cpp b/flang/lib/Lower/OpenMP/OpenMP.cpp
index aac22f0faad37..7d75e6f67dc1b 100644
--- a/flang/lib/Lower/OpenMP/OpenMP.cpp
+++ b/flang/lib/Lower/OpenMP/OpenMP.cpp
@@ -2199,7 +2199,7 @@ static void genOMP(lower::AbstractConverter &converter, lower::SymMap &symTable,
                    semantics::SemanticsContext &semaCtx,
                    lower::pft::Evaluation &eval,
                    const parser::OpenMPDeclarativeConstruct &ompDeclConstruct) {
-  std::visit(
+  Fortran::common::visit(
       [&](auto &&s) { return genOMP(converter, symTable, semaCtx, eval, s); },
       ompDeclConstruct.u);
 }
@@ -2276,7 +2276,7 @@ static void
 genOMP(lower::AbstractConverter &converter, lower::SymMap &symTable,
        semantics::SemanticsContext &semaCtx, lower::pft::Evaluation &eval,
        const parser::OpenMPStandaloneConstruct &standaloneConstruct) {
-  std::visit(
+  Fortran::common::visit(
       [&](auto &&s) { return genOMP(converter, symTable, semaCtx, eval, s); },
       standaloneConstruct.u);
 }
@@ -2296,7 +2296,7 @@ static void genOMP(lower::AbstractConverter &converter, lower::SymMap &symTable,
                    semantics::SemanticsContext &semaCtx,
                    lower::pft::Evaluation &eval,
                    const parser::OpenMPAtomicConstruct &atomicConstruct) {
-  std::visit(
+  Fortran::common::visit(
       common::visitors{
           [&](const parser::OmpAtomicRead &atomicRead) {
             mlir::Location loc = converter.genLocation(atomicRead.source);
@@ -2487,7 +2487,7 @@ static void genOMP(lower::AbstractConverter &converter, lower::SymMap &symTable,
                    semantics::SemanticsContext &semaCtx,
                    lower::pft::Evaluation &eval,
                    const parser::OpenMPConstruct &ompConstruct) {
-  std::visit(
+  Fortran::common::visit(
       [&](auto &&s) { return genOMP(converter, symTable, semaCtx, eval, s); },
       ompConstruct.u);
 }
@@ -2649,21 +2649,22 @@ void Fortran::lower::gatherOpenMPDeferredDeclareTargets(
     const parser::OpenMPDeclarativeConstruct &ompDecl,
     llvm::SmallVectorImpl<OMPDeferredDeclareTargetInfo>
         &deferredDeclareTarget) {
-  std::visit(common::visitors{
-                 [&](const parser::OpenMPDeclareTargetConstruct &ompReq) {
-                   collectDeferredDeclareTargets(converter, semaCtx, eval,
-                                                 ompReq, deferredDeclareTarget);
-                 },
-                 [&](const auto &) {},
-             },
-             ompDecl.u);
+  Fortran::common::visit(
+      common::visitors{
+          [&](const parser::OpenMPDeclareTargetConstruct &ompReq) {
+            collectDeferredDeclareTargets(converter, semaCtx, eval, ompReq,
+                                          deferredDeclareTarget);
+          },
+          [&](const auto &) {},
+      },
+      ompDecl.u);
 }
 
 bool Fortran::lower::isOpenMPDeviceDeclareTarget(
     lower::AbstractConverter &converter, semantics::SemanticsContext &semaCtx,
     lower::pft::Evaluation &eval,
     const parser::OpenMPDeclarativeConstruct &ompDecl) {
-  return std::visit(
+  return Fortran::common::visit(
       common::visitors{
           [&](const parser::OpenMPDeclareTargetConstruct &ompReq) {
             mlir::omp::DeclareTargetDeviceType targetType =
diff --git a/flang/lib/Lower/OpenMP/Utils.cpp b/flang/lib/Lower/OpenMP/Utils.cpp
index 36d96f37ff36a..8aeef175ad2d2 100644
--- a/flang/lib/Lower/OpenMP/Utils.cpp
+++ b/flang/lib/Lower/OpenMP/Utils.cpp
@@ -325,7 +325,7 @@ void insertChildMapInfoIntoParent(
 
 semantics::Symbol *getOmpObjectSymbol(const parser::OmpObject &ompObject) {
   semantics::Symbol *sym = nullptr;
-  std::visit(
+  Fortran::common::visit(
       common::visitors{
           [&](const parser::Designator &designator) {
             if (auto *arrayEle =
diff --git a/flang/lib/Lower/PFTBuilder.cpp b/flang/lib/Lower/PFTBuilder.cpp
index fc34895d1ec75..5b3d5471925bf 100644
--- a/flang/lib/Lower/PFTBuilder.cpp
+++ b/flang/lib/Lower/PFTBuilder.cpp
@@ -103,7 +103,7 @@ class PFTBuilder {
             stmt.unwrapped, pftParentStack.back(), stmt.position, stmt.label});
         return false;
       } else if constexpr (std::is_same_v<T, parser::ActionStmt>) {
-        return std::visit(
+        return Fortran::common::visit(
             common::visitors{
                 [&](const common::Indirection<parser::CallStmt> &x) {
                   addEvaluation(lower::pft::Evaluation{
@@ -239,7 +239,7 @@ class PFTBuilder {
 
   // Get rid of production wrapper
   bool Pre(const parser::Statement<parser::ForallAssignmentStmt> &statement) {
-    addEvaluation(std::visit(
+    addEvaluation(Fortran::common::visit(
         [&](const auto &x) {
           return lower::pft::Evaluation{x, pftParentStack.back(),
                                         statement.source, statement.label};
@@ -248,7 +248,7 @@ class PFTBuilder {
     return false;
   }
   bool Pre(const parser::WhereBodyConstruct &whereBody) {
-    return std::visit(
+    return Fortran::common::visit(
         common::visitors{
             [&](const parser::Statement<parser::AssignmentStmt> &stmt) {
               // Not caught as other AssignmentStmt because it is not
@@ -469,7 +469,7 @@ class PFTBuilder {
   makeEvaluationAction(const parser::ActionStmt &statement,
                        parser::CharBlock position,
                        std::optional<parser::Label> label) {
-    return std::visit(
+    return Fortran::common::visit(
         common::visitors{
             [&](const auto &x) {
               return lower::pft::Evaluation{
@@ -664,7 +664,7 @@ class PFTBuilder {
     };
     auto analyzeSpecs{[&](const auto &specList) {
       for (const auto &spec : specList) {
-        std::visit(
+        Fortran::common::visit(
             Fortran::common::visitors{
                 [&](const Fortran::parser::Format &format) {
                   analyzeFormatSpec(format);
@@ -1172,26 +1172,27 @@ class PFTDumper {
   void dumpPFT(llvm::raw_ostream &outputStream,
                const lower::pft::Program &pft) {
     for (auto &unit : pft.getUnits()) {
-      std::visit(common::visitors{
-                     [&](const lower::pft::BlockDataUnit &unit) {
-                       outputStream << getNodeIndex(unit) << " ";
-                       outputStream << "BlockData: ";
-                       outputStream << "\nEnd BlockData\n\n";
-                     },
-                     [&](const lower::pft::FunctionLikeUnit &func) {
-                       dumpFunctionLikeUnit(outputStream, func);
-                     },
-                     [&](const lower::pft::ModuleLikeUnit &unit) {
-                       dumpModuleLikeUnit(outputStream, unit);
-                     },
-                     [&](const lower::pft::CompilerDirectiveUnit &unit) {
-                       dumpCompilerDirectiveUnit(outputStream, unit);
-                     },
-                     [&](const lower::pft::OpenACCDirectiveUnit &unit) {
-                       dumpOpenACCDirectiveUnit(outputStream, unit);
-                     },
-                 },
-                 unit);
+      Fortran::common::visit(
+          common::visitors{
+              [&](const lower::pft::BlockDataUnit &unit) {
+                outputStream << getNodeIndex(unit) << " ";
+                outputStream << "BlockData: ";
+                outputStream << "\nEnd BlockData\n\n";
+              },
+              [&](const lower::pft::FunctionLikeUnit &func) {
+                dumpFunctionLikeUnit(outputStream, func);
+              },
+              [&](const lower::pft::ModuleLikeUnit &unit) {
+                dumpModuleLikeUnit(outputStream, unit);
+              },
+              [&](const lower::pft::CompilerDirectiveUnit &unit) {
+                dumpCompilerDirectiveUnit(outputStream, unit);
+              },
+              [&](const lower::pft::OpenACCDirectiveUnit &unit) {
+                dumpOpenACCDirectiveUnit(outputStream, unit);
+              },
+          },
+          unit);
     }
   }
 
diff --git a/flang/lib/Lower/VectorSubscripts.cpp b/flang/lib/Lower/VectorSubscripts.cpp
index d7a311d32d59d..389a89ddcf102 100644
--- a/flang/lib/Lower/VectorSubscripts.cpp
+++ b/flang/lib/Lower/VectorSubscripts.cpp
@@ -55,10 +55,11 @@ class VectorSubscriptBoxBuilder {
     using Designator = Fortran::evaluate::Designator<T>;
     if constexpr (Fortran::common::HasMember<Designator, ExprVariant>) {
       const auto &designator = std::get<Designator>(expr.u);
-      return std::visit([&](const auto &x) { return gen(x); }, designator.u);
+      return Fortran::common::visit([&](const auto &x) { return gen(x); },
+                                    designator.u);
     } else {
-      return std::visit([&](const auto &x) { return genDesignator(x); },
-                        expr.u);
+      return Fortran::common::visit(
+          [&](const auto &x) { return genDesignator(x); }, expr.u);
     }
   }
 
@@ -66,8 +67,8 @@ class VectorSubscriptBoxBuilder {
   // type of X elements.
 
   mlir::Type gen(const Fortran::evaluate::DataRef &dataRef) {
-    return std::visit([&](const auto &ref) -> mlir::Type { return gen(ref); },
-                      dataRef.u);
+    return Fortran::common::visit(
+        [&](const auto &ref) -> mlir::Type { return gen(ref); }, dataRef.u);
   }
 
   mlir::Type gen(const Fortran::evaluate::SymbolRef &symRef) {
@@ -128,7 +129,7 @@ class VectorSubscriptBoxBuilder {
   mlir::Type gen(const Fortran::evaluate::ArrayRef &arrayRef) {
     auto isTripletOrVector =
         [](const Fortran::evaluate::Subscript &subscript) -> bool {
-      return std::visit(
+      return Fortran::common::visit(
           Fortran::common::visitors{
               [](const Fortran::evaluate::IndirectSubscriptIntegerExpr &expr) {
                 return expr.value().Rank() != 0;
@@ -165,7 +166,7 @@ class VectorSubscriptBoxBuilder {
     mlir::Type idxTy = builder.getIndexType();
     mlir::Value one = builder.createIntegerConstant(loc, idxTy, 1);
     for (const auto &subscript : llvm::enumerate(arrayRef.subscript())) {
-      std::visit(
+      Fortran::common::visit(
           Fortran::common::visitors{
               [&](const Fortran::evaluate::IndirectSubscriptIntegerExpr &expr) {
                 if (expr.value().Rank() == 0) {
@@ -327,24 +328,24 @@ Fortran::lower::VectorSubscriptBox::createSlice(fir::FirOpBuilder &builder,
   mlir::Value one = builder.createIntegerConstant(loc, idxTy, 1);
   auto undef = builder.create<fir::UndefOp>(loc, idxTy);
   for (const LoweredSubscript &subscript : loweredSubscripts)
-    std::visit(Fortran::common::visitors{
-                   [&](const LoweredTriplet &triplet) {
-                     triples.emplace_back(triplet.lb);
-                     triples.emplace_back(triplet.ub);
-                     triples.emplace_back(triplet.stride);
-                   },
-                   [&](const LoweredVectorSubscript &vector) {
-                     triples.emplace_back(one);
-                     triples.emplace_back(vector.size);
-                     triples.emplace_back(one);
-                   },
-                   [&](const mlir::Value &i) {
-                     triples.emplace_back(i);
-                     triples.emplace_back(undef);
-                     triples.emplace_back(undef);
-                   },
-               },
-               subscript);
+    Fortran::common::visit(Fortran::common::visitors{
+                               [&](const LoweredTriplet &triplet) {
+                                 triples.emplace_back(triplet.lb);
+                                 triples.emplace_back(triplet.ub);
+                                 triples.emplace_back(triplet.stride);
+                               },
+                               [&](const LoweredVectorSubscript &vector) {
+                                 triples.emplace_back(one);
+                                 triples.emplace_back(vector.size);
+                                 triples.emplace_back(one);
+                               },
+                               [&](const mlir::Value &i) {
+                                 triples.emplace_back(i);
+                                 triples.emplace_back(undef);
+                                 triples.emplace_back(undef);
+                               },
+                           },
+                           subscript);
   return builder.create<fir::SliceOp>(loc, triples, componentPath);
 }
 
@@ -390,28 +391,28 @@ fir::ExtendedValue Fortran::lower::VectorSubscriptBox::getElementAt(
   llvm::SmallVector<mlir::Value> indexes;
   size_t inductionIdx = inductionVariables.size() - 1;
   for (const LoweredSubscript &subscript : loweredSubscripts)
-    std::visit(Fortran::common::visitors{
-                   [&](const LoweredTriplet &triplet) {
-                     indexes.emplace_back(inductionVariables[inductionIdx--]);
-                   },
-                   [&](const LoweredVectorSubscript &vector) {
-                     mlir::Value vecIndex = inductionVariables[inductionIdx--];
-                     mlir::Value vecBase = fir::getBase(vector.vector);
-                     mlir::Type vecEleTy = fir::unwrapSequenceType(
-                         fir::unwrapPassByRefType(vecBase.getType()));
-                     mlir::Type refTy = builder.getRefType(vecEleTy);
-                     auto vecEltRef = builder.create<fir::CoordinateOp>(
-                         loc, refTy, vecBase, vecIndex);
-                     auto vecElt =
-                         builder.create<fir::LoadOp>(loc, vecEleTy, vecEltRef);
-                     indexes.emplace_back(
-                         builder.createConvert(loc, idxTy, vecElt));
-                   },
-                   [&](const mlir::Value &i) {
-                     indexes.emplace_back(builder.createConvert(loc, idxTy, i));
-                   },
-               },
-               subscript);
+    Fortran::common::visit(
+        Fortran::common::visitors{
+            [&](const LoweredTriplet &triplet) {
+              indexes.emplace_back(inductionVariables[inductionIdx--]);
+            },
+            [&](const LoweredVectorSubscript &vector) {
+              mlir::Value vecIndex = inductionVariables[inductionIdx--];
+              mlir::Value vecBase = fir::getBase(vector.vector);
+              mlir::Type vecEleTy = fir::unwrapSequenceType(
+                  fir::unwrapPassByRefType(vecBase.getType()));
+              mlir::Type refTy = builder.getRefType(vecEleTy);
+              auto vecEltRef = builder.create<fir::CoordinateOp>(
+                  loc, refTy, vecBase, vecIndex);
+              auto vecElt =
+                  builder.create<fir::LoadOp>(loc, vecEleTy, vecEltRef);
+              indexes.emplace_back(builder.createConvert(loc, idxTy, vecElt));
+            },
+            [&](const mlir::Value &i) {
+              indexes.emplace_back(builder.createConvert(loc, idxTy, i));
+            },
+        },
+        subscript);
   mlir::Type refTy = builder.getRefType(getElementType());
   auto elementAddr = builder.create<fir::ArrayCoorOp>(
       loc, refTy, fir::getBase(loweredBase), shape, slice, indexes,
diff --git a/flang/lib/Optimizer/Builder/IntrinsicCall.cpp b/flang/lib/Optimizer/Builder/IntrinsicCall.cpp
index ab106f62aecfb..c929d05038462 100644
--- a/flang/lib/Optimizer/Builder/IntrinsicCall.cpp
+++ b/flang/lib/Optimizer/Builder/IntrinsicCall.cpp
@@ -1788,7 +1788,7 @@ IntrinsicLibrary::genIntrinsicCall(llvm::StringRef specificName,
   llvm::StringRef name = genericName(specificName);
   if (const IntrinsicHandler *handler = findIntrinsicHandler(name)) {
     bool outline = handler->outline || outlineAllIntrinsics;
-    return {std::visit(
+    return {Fortran::common::visit(
                 [&](auto &generator) -> fir::ExtendedValue {
                   return invokeHandler(generator, *handler, resultType, args,
                                        outline, *this);
@@ -1802,7 +1802,7 @@ IntrinsicLibrary::genIntrinsicCall(llvm::StringRef specificName,
   if (fir::getTargetTriple(mod).isPPC()) {
     if (const IntrinsicHandler *ppcHandler = findPPCIntrinsicHandler(name)) {
       bool outline = ppcHandler->outline || outlineAllIntrinsics;
-      return {std::visit(
+      return {Fortran::common::visit(
                   [&](auto &generator) -> fir::ExtendedValue {
                     return invokeHandler(generator, *ppcHandler, resultType,
                                          args, outline, *this);
@@ -2136,7 +2136,7 @@ mlir::SymbolRefAttr IntrinsicLibrary::getUnrestrictedIntrinsicSymbolRefAttr(
   bool loadRefArguments = true;
   mlir::func::FuncOp funcOp;
   if (const IntrinsicHandler *handler = findIntrinsicHandler(name))
-    funcOp = std::visit(
+    funcOp = Fortran::common::visit(
         [&](auto generator) {
           return getWrapper(generator, name, signature, loadRefArguments);
         },
diff --git a/flang/lib/Semantics/check-acc-structure.cpp b/flang/lib/Semantics/check-acc-structure.cpp
index 69b9fe17e6a88..25140a0473749 100644
--- a/flang/lib/Semantics/check-acc-structure.cpp
+++ b/flang/lib/Semantics/check-acc-structure.cpp
@@ -403,9 +403,9 @@ void AccStructureChecker::CheckMultipleOccurrenceInDeclare(
   if (GetContext().directive != llvm::acc::Directive::ACCD_declare)
     return;
   for (const auto &object : list.v) {
-    std::visit(
-        Fortran::common::visitors{
-            [&](const Fortran::parser::Designator &designator) {
+    common::visit(
+        common::visitors{
+            [&](const parser::Designator &designator) {
               if (const auto *name = getDesignatorNameIfDataRef(designator)) {
                 if (declareSymbols.contains(&name->symbol->GetUltimate())) {
                   if (declareSymbols[&name->symbol->GetUltimate()] == clause) {
@@ -435,7 +435,7 @@ void AccStructureChecker::CheckMultipleOccurrenceInDeclare(
                 declareSymbols.insert({&name->symbol->GetUltimate(), clause});
               }
             },
-            [&](const Fortran::parser::Name &name) {
+            [&](const parser::Name &name) {
               // TODO: check common block
             }},
         object.u);
@@ -674,9 +674,9 @@ void AccStructureChecker::Enter(const parser::AccClause::Reduction &reduction) {
   const auto &objects{std::get<parser::AccObjectList>(list.t)};
 
   for (const auto &object : objects.v) {
-    std::visit(
-        Fortran::common::visitors{
-            [&](const Fortran::parser::Designator &designator) {
+    common::visit(
+        common::visitors{
+            [&](const parser::Designator &designator) {
               if (const auto *name = getDesignatorNameIfDataRef(designator)) {
                 const auto *type{name->symbol->GetType()};
                 if (type->IsNumeric(TypeCategory::Integer) &&
diff --git a/flang/lib/Semantics/check-coarray.cpp b/flang/lib/Semantics/check-coarray.cpp
index 106af7960fa94..6cf61a6b923db 100644
--- a/flang/lib/Semantics/check-coarray.cpp
+++ b/flang/lib/Semantics/check-coarray.cpp
@@ -93,7 +93,7 @@ static void CheckCoindexedStatOrErrmsg(SemanticsContext &context,
       }
     }
   }};
-  std::visit(CoindexedCheck, statOrErrmsg.u);
+  Fortran::common::visit(CoindexedCheck, statOrErrmsg.u);
 }
 
 static void CheckSyncStatList(

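Between patches, a brief aside on the repeated change above: `Fortran::common::visit` is used as a drop-in replacement for `std::visit` throughout. A minimal sketch of one way such a helper can be written, assuming a linear `if constexpr` dispatch (illustrative only, not the actual flang implementation):

```
#include <cstddef>
#include <utility>
#include <variant>

// Linear dispatch over the variant's alternatives. All alternatives are
// assumed to yield the same result type, matching how the lowering code
// uses visit. Hypothetical helper, not the flang source.
template <std::size_t I = 0, typename F, typename... Ts>
auto Visit(F &&f, const std::variant<Ts...> &u)
    -> decltype(f(std::get<0>(u))) {
  if constexpr (I + 1 < sizeof...(Ts)) {
    if (u.index() != I)
      return Visit<I + 1>(std::forward<F>(f), u);
  }
  return f(std::get<I>(u));
}
```

Called as `Visit([&](const auto &x) { return gen(x); }, dataRef.u)`, it behaves like the `std::visit` calls it replaces.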
>From 7ddff3a586baaf6f4403183ba51121951ce0602e Mon Sep 17 00:00:00 2001
From: Abid Qadeer <haqadeer at amd.com>
Date: Mon, 17 Jun 2024 21:00:09 +0100
Subject: [PATCH 16/26] [flang] Fix comments and formatting. (NFC) (#95786)

As mentioned
[here](https://github.com/llvm/llvm-project/pull/95462#discussion_r1640126721),
the formatting of the comments has been fixed. Comments naming literal
arguments have also been added.
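
For context, a minimal sketch of the argument-comment convention the patch
applies (illustrative code; `draw` is a made-up function, not taken from
the patch):

```
void draw(int width, int height, bool filled) {}

void example() {
  // Without comments, the literals are opaque at the call site:
  draw(800, 600, true);
  // With argument-name comments, as in the patched calls; clang-tidy's
  // bugprone-argument-comment can check the names against the declaration:
  draw(/*width=*/800, /*height=*/600, /*filled=*/true);
}
```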
---
 .../Transforms/DebugTypeGenerator.cpp         | 24 +++++++++----------
 1 file changed, 11 insertions(+), 13 deletions(-)

diff --git a/flang/lib/Optimizer/Transforms/DebugTypeGenerator.cpp b/flang/lib/Optimizer/Transforms/DebugTypeGenerator.cpp
index 70ff8b386ac3d..407ecc8e327b4 100644
--- a/flang/lib/Optimizer/Transforms/DebugTypeGenerator.cpp
+++ b/flang/lib/Optimizer/Transforms/DebugTypeGenerator.cpp
@@ -75,8 +75,8 @@ static mlir::LLVM::DITypeAttr genBasicType(mlir::MLIRContext *context,
 }
 
 static mlir::LLVM::DITypeAttr genPlaceholderType(mlir::MLIRContext *context) {
-  return genBasicType(context, mlir::StringAttr::get(context, "integer"), 32,
-                      llvm::dwarf::DW_ATE_signed);
+  return genBasicType(context, mlir::StringAttr::get(context, "integer"),
+                      /*bitSize=*/32, llvm::dwarf::DW_ATE_signed);
 }
 
 mlir::LLVM::DITypeAttr DebugTypeGenerator::convertBoxedSequenceType(
@@ -145,11 +145,10 @@ mlir::LLVM::DITypeAttr DebugTypeGenerator::convertBoxedSequenceType(
     elements.push_back(subrangeTy);
   }
   return mlir::LLVM::DICompositeTypeAttr::get(
-      context, llvm::dwarf::DW_TAG_array_type, /*recursive id*/ {},
-      /* name */ nullptr, /* file */ nullptr, /* line */ 0,
-      /* scope */ nullptr, elemTy, mlir::LLVM::DIFlags::Zero,
-      /* sizeInBits */ 0, /*alignInBits*/ 0, elements, dataLocation,
-      /* rank */ nullptr, allocated, associated);
+      context, llvm::dwarf::DW_TAG_array_type, /*recursive_id=*/{},
+      /*name=*/nullptr, /*file=*/nullptr, /*line=*/0, /*scope=*/nullptr, elemTy,
+      mlir::LLVM::DIFlags::Zero, /*sizeInBits=*/0, /*alignInBits=*/0, elements,
+      dataLocation, /*rank=*/nullptr, allocated, associated);
 }
 
 mlir::LLVM::DITypeAttr DebugTypeGenerator::convertSequenceType(
@@ -184,12 +183,11 @@ mlir::LLVM::DITypeAttr DebugTypeGenerator::convertSequenceType(
   // have been set to some valid default values.
 
   return mlir::LLVM::DICompositeTypeAttr::get(
-      context, llvm::dwarf::DW_TAG_array_type, /*recursive id*/ {},
-      /* name */ nullptr, /* file */ nullptr, /* line */ 0, /* scope */ nullptr,
-      elemTy, mlir::LLVM::DIFlags::Zero, /* sizeInBits */ 0,
-      /*alignInBits*/ 0, elements, /* dataLocation */ nullptr,
-      /* rank */ nullptr, /* allocated */ nullptr,
-      /* associated */ nullptr);
+      context, llvm::dwarf::DW_TAG_array_type, /*recursive_id=*/{},
+      /*name=*/nullptr, /*file=*/nullptr, /*line=*/0, /*scope=*/nullptr, elemTy,
+      mlir::LLVM::DIFlags::Zero, /*sizeInBits=*/0, /*alignInBits=*/0, elements,
+      /*dataLocation=*/nullptr, /*rank=*/nullptr, /*allocated=*/nullptr,
+      /*associated=*/nullptr);
 }
 
 mlir::LLVM::DITypeAttr DebugTypeGenerator::convertCharacterType(

>From 7620fe0d2d1e0257611c0ab0d96f3bf1bf7a1079 Mon Sep 17 00:00:00 2001
From: Louis Dionne <ldionne.2 at gmail.com>
Date: Mon, 17 Jun 2024 16:20:58 -0400
Subject: [PATCH 17/26] [CI][format] Explicitly pass extensions to
 git-clang-format (#95794)

This ensures that the CI script controls which file extensions are
considered, instead of letting git-clang-format apply its own filtering
rules. In particular, this properly handles libc++'s extension-less
headers, which were previously passed to git-clang-format but then
dropped by that tool as having an unrecognized extension.
---
 llvm/utils/git/code-format-helper.py | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/llvm/utils/git/code-format-helper.py b/llvm/utils/git/code-format-helper.py
index f1207026704e8..d60d4131bc94b 100755
--- a/llvm/utils/git/code-format-helper.py
+++ b/llvm/utils/git/code-format-helper.py
@@ -216,6 +216,17 @@ def format_run(self, changed_files: List[str], args: FormatArgs) -> Optional[str
             cf_cmd.append(args.start_rev)
             cf_cmd.append(args.end_rev)
 
+        # Gather the extensions of all modified files and pass them explicitly to git-clang-format.
+        # This prevents git-clang-format from applying its own filtering rules on top of ours.
+        extensions = set()
+        for file in cpp_files:
+            _, ext = os.path.splitext(file)
+            extensions.add(
+                ext.strip(".")
+            )  # Exclude periods since git-clang-format takes extensions without them
+        cf_cmd.append("--extensions")
+        cf_cmd.append("'{}'".format(",".join(extensions)))
+
         cf_cmd.append("--")
         cf_cmd += cpp_files
 

>From 5b04b6fe3fabba8f76d730da3c0d528e1dd0c184 Mon Sep 17 00:00:00 2001
From: Florian Mayer <fmayer at google.com>
Date: Mon, 17 Jun 2024 13:21:34 -0700
Subject: [PATCH 18/26] [HWASan] [compiler-rt] support non-4k pages on Android
 (#95069)

---
 compiler-rt/lib/asan/asan_linux.cpp             |  3 ++-
 compiler-rt/lib/asan/asan_mac.cpp               |  3 ++-
 compiler-rt/lib/asan/asan_premap_shadow.cpp     |  3 ++-
 .../lib/hwasan/hwasan_dynamic_shadow.cpp        | 17 +++++++++++------
 compiler-rt/lib/memprof/memprof_linux.cpp       |  3 ++-
 .../lib/sanitizer_common/sanitizer_common.h     | 17 ++++++++---------
 .../lib/sanitizer_common/sanitizer_linux.cpp    |  4 ++--
 .../sanitizer_linux_libcdep.cpp                 |  5 ++---
 .../lib/sanitizer_common/sanitizer_mac.cpp      |  4 ++--
 .../lib/sanitizer_common/sanitizer_win.cpp      |  5 ++---
 10 files changed, 35 insertions(+), 29 deletions(-)
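
For orientation: the change threads the mapping granularity through
`MapDynamicShadow` instead of having the callee query `GetMmapGranularity()`
itself, so HWASan's early Android path can pass a conservative 16k constant
where libc cannot yet be called. A minimal sketch of the alignment
computation, simplified from the patched function (not the sanitizer
source):

```
#include <algorithm>
#include <cstdint>

using uptr = std::uintptr_t;

// Assumption for illustration: Android's maximum allowed page size (16k),
// used where the real page size cannot be queried this early.
constexpr uptr kMaxGranularity = 16384;

// Mirrors the alignment computation inside the patched MapDynamicShadow:
// the shadow base must satisfy both the granularity scaled by the shadow
// scale and the platform's minimum base alignment.
uptr shadowBaseAlignment(uptr shadowScale, uptr minShadowBaseAlignment,
                         uptr granularity) {
  return std::max<uptr>(granularity << shadowScale,
                        uptr{1} << minShadowBaseAlignment);
}
```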

diff --git a/compiler-rt/lib/asan/asan_linux.cpp b/compiler-rt/lib/asan/asan_linux.cpp
index a517de5af00dc..0b470db86748f 100644
--- a/compiler-rt/lib/asan/asan_linux.cpp
+++ b/compiler-rt/lib/asan/asan_linux.cpp
@@ -93,7 +93,8 @@ uptr FindDynamicShadowStart() {
 #  endif
 
   return MapDynamicShadow(shadow_size_bytes, ASAN_SHADOW_SCALE,
-                          /*min_shadow_base_alignment*/ 0, kHighMemEnd);
+                          /*min_shadow_base_alignment*/ 0, kHighMemEnd,
+                          GetMmapGranularity());
 }
 
 void AsanApplyToGlobals(globals_op_fptr op, const void *needle) {
diff --git a/compiler-rt/lib/asan/asan_mac.cpp b/compiler-rt/lib/asan/asan_mac.cpp
index b250f796e165f..bfc349223258b 100644
--- a/compiler-rt/lib/asan/asan_mac.cpp
+++ b/compiler-rt/lib/asan/asan_mac.cpp
@@ -51,7 +51,8 @@ bool IsSystemHeapAddress (uptr addr) { return false; }
 
 uptr FindDynamicShadowStart() {
   return MapDynamicShadow(MemToShadowSize(kHighMemEnd), ASAN_SHADOW_SCALE,
-                          /*min_shadow_base_alignment*/ 0, kHighMemEnd);
+                          /*min_shadow_base_alignment*/ 0, kHighMemEnd,
+                          GetMmapGranularity());
 }
 
 // No-op. Mac does not support static linkage anyway.
diff --git a/compiler-rt/lib/asan/asan_premap_shadow.cpp b/compiler-rt/lib/asan/asan_premap_shadow.cpp
index bed2f62a22511..6e08b8f966507 100644
--- a/compiler-rt/lib/asan/asan_premap_shadow.cpp
+++ b/compiler-rt/lib/asan/asan_premap_shadow.cpp
@@ -33,7 +33,8 @@ uptr PremapShadowSize() {
 // PremapShadowSize() bytes on the right of it are mapped r/o.
 uptr PremapShadow() {
   return MapDynamicShadow(PremapShadowSize(), /*mmap_alignment_scale*/ 3,
-                          /*min_shadow_base_alignment*/ 0, kHighMemEnd);
+                          /*min_shadow_base_alignment*/ 0, kHighMemEnd,
+                          GetMmapGranularity());
 }
 
 bool PremapShadowFailed() {
diff --git a/compiler-rt/lib/hwasan/hwasan_dynamic_shadow.cpp b/compiler-rt/lib/hwasan/hwasan_dynamic_shadow.cpp
index 7642ba6c0bf08..48bc3b631ac07 100644
--- a/compiler-rt/lib/hwasan/hwasan_dynamic_shadow.cpp
+++ b/compiler-rt/lib/hwasan/hwasan_dynamic_shadow.cpp
@@ -36,15 +36,20 @@ decltype(__hwasan_shadow)* __hwasan_premap_shadow();
 
 namespace __hwasan {
 
+// We cannot call anything in libc here (see comment above), so we need to
+// assume the biggest allowed page size.
+// Android max page size is defined as 16k here:
+// https://android.googlesource.com/platform/bionic/+/main/libc/platform/bionic/page.h#41
+static constexpr uptr kMaxGranularity = 16384;
+
 // Conservative upper limit.
 static uptr PremapShadowSize() {
-  return RoundUpTo(GetMaxVirtualAddress() >> kShadowScale,
-                   GetMmapGranularity());
+  return RoundUpTo(GetMaxVirtualAddress() >> kShadowScale, kMaxGranularity);
 }
 
 static uptr PremapShadow() {
   return MapDynamicShadow(PremapShadowSize(), kShadowScale,
-                          kShadowBaseAlignment, kHighMemEnd);
+                          kShadowBaseAlignment, kHighMemEnd, kMaxGranularity);
 }
 
 static bool IsPremapShadowAvailable() {
@@ -56,7 +61,7 @@ static bool IsPremapShadowAvailable() {
 }
 
 static uptr FindPremappedShadowStart(uptr shadow_size_bytes) {
-  const uptr granularity = GetMmapGranularity();
+  const uptr granularity = kMaxGranularity;
   const uptr shadow_start = reinterpret_cast<uptr>(&__hwasan_shadow);
   const uptr premap_shadow_size = PremapShadowSize();
   const uptr shadow_size = RoundUpTo(shadow_size_bytes, granularity);
@@ -109,7 +114,7 @@ uptr FindDynamicShadowStart(uptr shadow_size_bytes) {
   if (IsPremapShadowAvailable())
     return FindPremappedShadowStart(shadow_size_bytes);
   return MapDynamicShadow(shadow_size_bytes, kShadowScale, kShadowBaseAlignment,
-                          kHighMemEnd);
+                          kHighMemEnd, kMaxGranularity);
 }
 
 }  // namespace __hwasan
@@ -135,7 +140,7 @@ uptr FindDynamicShadowStart(uptr shadow_size_bytes) {
                                     RingBufferSize());
 #  endif
   return MapDynamicShadow(shadow_size_bytes, kShadowScale, kShadowBaseAlignment,
-                          kHighMemEnd);
+                          kHighMemEnd, GetMmapGranularity());
 }
 
 }  // namespace __hwasan
diff --git a/compiler-rt/lib/memprof/memprof_linux.cpp b/compiler-rt/lib/memprof/memprof_linux.cpp
index 26a2b456b874e..fbe5d250f840b 100644
--- a/compiler-rt/lib/memprof/memprof_linux.cpp
+++ b/compiler-rt/lib/memprof/memprof_linux.cpp
@@ -58,7 +58,8 @@ void InitializePlatformExceptionHandlers() {}
 uptr FindDynamicShadowStart() {
   uptr shadow_size_bytes = MemToShadowSize(kHighMemEnd);
   return MapDynamicShadow(shadow_size_bytes, SHADOW_SCALE,
-                          /*min_shadow_base_alignment*/ 0, kHighMemEnd);
+                          /*min_shadow_base_alignment*/ 0, kHighMemEnd,
+                          GetMmapGranularity());
 }
 
 void *MemprofDlSymNext(const char *sym) { return dlsym(RTLD_NEXT, sym); }
diff --git a/compiler-rt/lib/sanitizer_common/sanitizer_common.h b/compiler-rt/lib/sanitizer_common/sanitizer_common.h
index c451fc962c529..2d1059140c303 100644
--- a/compiler-rt/lib/sanitizer_common/sanitizer_common.h
+++ b/compiler-rt/lib/sanitizer_common/sanitizer_common.h
@@ -60,14 +60,10 @@ inline int Verbosity() {
   return atomic_load(&current_verbosity, memory_order_relaxed);
 }
 
-#if SANITIZER_ANDROID
-inline uptr GetPageSize() {
-// Android post-M sysconf(_SC_PAGESIZE) crashes if called from .preinit_array.
-  return 4096;
-}
-inline uptr GetPageSizeCached() {
-  return 4096;
-}
+#if SANITIZER_ANDROID && !defined(__aarch64__)
+// 32-bit Android only has 4k pages.
+inline uptr GetPageSize() { return 4096; }
+inline uptr GetPageSizeCached() { return 4096; }
 #else
 uptr GetPageSize();
 extern uptr PageSizeCached;
@@ -77,6 +73,7 @@ inline uptr GetPageSizeCached() {
   return PageSizeCached;
 }
 #endif
+
 uptr GetMmapGranularity();
 uptr GetMaxVirtualAddress();
 uptr GetMaxUserVirtualAddress();
@@ -91,6 +88,7 @@ void GetThreadStackAndTls(bool main, uptr *stk_addr, uptr *stk_size,
 
 // Memory management
 void *MmapOrDie(uptr size, const char *mem_type, bool raw_report = false);
+
 inline void *MmapOrDieQuietly(uptr size, const char *mem_type) {
   return MmapOrDie(size, mem_type, /*raw_report*/ true);
 }
@@ -139,7 +137,8 @@ void UnmapFromTo(uptr from, uptr to);
 // shadow_size_bytes bytes on the right, which on linux is mapped no access.
 // The high_mem_end may be updated if the original shadow size doesn't fit.
 uptr MapDynamicShadow(uptr shadow_size_bytes, uptr shadow_scale,
-                      uptr min_shadow_base_alignment, uptr &high_mem_end);
+                      uptr min_shadow_base_alignment, uptr &high_mem_end,
+                      uptr granularity);
 
 // Let S = max(shadow_size, num_aliases * alias_size, ring_buffer_size).
 // Reserves 2*S bytes of address space to the right of the returned address and
diff --git a/compiler-rt/lib/sanitizer_common/sanitizer_linux.cpp b/compiler-rt/lib/sanitizer_common/sanitizer_linux.cpp
index 5d2dd3a7a658f..d15caa76efb06 100644
--- a/compiler-rt/lib/sanitizer_common/sanitizer_linux.cpp
+++ b/compiler-rt/lib/sanitizer_common/sanitizer_linux.cpp
@@ -1136,7 +1136,7 @@ uptr GetMaxUserVirtualAddress() {
   return addr;
 }
 
-#  if !SANITIZER_ANDROID
+#  if !SANITIZER_ANDROID || defined(__aarch64__)
 uptr GetPageSize() {
 #    if SANITIZER_LINUX && (defined(__x86_64__) || defined(__i386__)) && \
         defined(EXEC_PAGESIZE)
@@ -1155,7 +1155,7 @@ uptr GetPageSize() {
   return sysconf(_SC_PAGESIZE);  // EXEC_PAGESIZE may not be trustworthy.
 #    endif
 }
-#  endif  // !SANITIZER_ANDROID
+#  endif
 
 uptr ReadBinaryName(/*out*/ char *buf, uptr buf_len) {
 #  if SANITIZER_SOLARIS
diff --git a/compiler-rt/lib/sanitizer_common/sanitizer_linux_libcdep.cpp b/compiler-rt/lib/sanitizer_common/sanitizer_linux_libcdep.cpp
index 6d05411222d9e..175362183fd78 100644
--- a/compiler-rt/lib/sanitizer_common/sanitizer_linux_libcdep.cpp
+++ b/compiler-rt/lib/sanitizer_common/sanitizer_linux_libcdep.cpp
@@ -995,9 +995,8 @@ void UnmapFromTo(uptr from, uptr to) {
 }
 
 uptr MapDynamicShadow(uptr shadow_size_bytes, uptr shadow_scale,
-                      uptr min_shadow_base_alignment,
-                      UNUSED uptr &high_mem_end) {
-  const uptr granularity = GetMmapGranularity();
+                      uptr min_shadow_base_alignment, UNUSED uptr &high_mem_end,
+                      uptr granularity) {
   const uptr alignment =
       Max<uptr>(granularity << shadow_scale, 1ULL << min_shadow_base_alignment);
   const uptr left_padding =
diff --git a/compiler-rt/lib/sanitizer_common/sanitizer_mac.cpp b/compiler-rt/lib/sanitizer_common/sanitizer_mac.cpp
index 24e3d1112520e..cbdf3e95925bf 100644
--- a/compiler-rt/lib/sanitizer_common/sanitizer_mac.cpp
+++ b/compiler-rt/lib/sanitizer_common/sanitizer_mac.cpp
@@ -1188,8 +1188,8 @@ uptr GetMaxVirtualAddress() {
 }
 
 uptr MapDynamicShadow(uptr shadow_size_bytes, uptr shadow_scale,
-                      uptr min_shadow_base_alignment, uptr &high_mem_end) {
-  const uptr granularity = GetMmapGranularity();
+                      uptr min_shadow_base_alignment, uptr &high_mem_end,
+                      uptr granularity) {
   const uptr alignment =
       Max<uptr>(granularity << shadow_scale, 1ULL << min_shadow_base_alignment);
   const uptr left_padding =
diff --git a/compiler-rt/lib/sanitizer_common/sanitizer_win.cpp b/compiler-rt/lib/sanitizer_common/sanitizer_win.cpp
index 4e5ad8e4693b4..0b198890fc798 100644
--- a/compiler-rt/lib/sanitizer_common/sanitizer_win.cpp
+++ b/compiler-rt/lib/sanitizer_common/sanitizer_win.cpp
@@ -384,9 +384,8 @@ bool DontDumpShadowMemory(uptr addr, uptr length) {
 }
 
 uptr MapDynamicShadow(uptr shadow_size_bytes, uptr shadow_scale,
-                      uptr min_shadow_base_alignment,
-                      UNUSED uptr &high_mem_end) {
-  const uptr granularity = GetMmapGranularity();
+                      uptr min_shadow_base_alignment, UNUSED uptr &high_mem_end,
+                      uptr granularity) {
   const uptr alignment =
       Max<uptr>(granularity << shadow_scale, 1ULL << min_shadow_base_alignment);
   const uptr left_padding =

>From d6cc35f7f67575f2d3534ea385c2f36f48f49aea Mon Sep 17 00:00:00 2001
From: Peiming Liu <peiming at google.com>
Date: Mon, 17 Jun 2024 13:29:53 -0700
Subject: [PATCH 19/26] Reapply "[mlir][sparse] implement lowering rules for
 IterateOp." (#95836)

---
 .../Transforms/SparseIterationToScf.cpp       | 121 +++++++++++++++++-
 .../Transforms/Utils/SparseTensorIterator.cpp |  40 ++++++
 .../Transforms/Utils/SparseTensorIterator.h   |  26 +++-
 .../SparseTensor/sparse_iteration_to_scf.mlir |  54 ++++++--
 4 files changed, 224 insertions(+), 17 deletions(-)
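
One piece worth calling out before the diff: the new 1:N type conversion
expands each sparse iterator into plain index values. A minimal sketch of
the resulting field layout (a simplified assumption for illustration, not
the actual MLIR types):

```
#include <string>
#include <vector>

// A unique iterator lowers to a single index (its current position); a
// non-unique one carries an extra leading "segment high" index bounding
// the run of duplicate coordinates.
std::vector<std::string> iteratorFields(bool isUnique) {
  std::vector<std::string> fields;
  if (!isUnique)
    fields.push_back("index"); // segment high for non-unique iterators
  fields.push_back("index");   // current position
  return fields;
}
```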

diff --git a/mlir/lib/Dialect/SparseTensor/Transforms/SparseIterationToScf.cpp b/mlir/lib/Dialect/SparseTensor/Transforms/SparseIterationToScf.cpp
index 62887c75c872b..4224925147c84 100644
--- a/mlir/lib/Dialect/SparseTensor/Transforms/SparseIterationToScf.cpp
+++ b/mlir/lib/Dialect/SparseTensor/Transforms/SparseIterationToScf.cpp
@@ -34,6 +34,20 @@ convertIterSpaceType(IterSpaceType itSp, SmallVectorImpl<Type> &fields) {
   return success();
 }
 
+static std::optional<LogicalResult>
+convertIteratorType(IteratorType itTp, SmallVectorImpl<Type> &fields) {
+  // The actual iterator values (updated every iteration).
+  auto idxTp = IndexType::get(itTp.getContext());
+  // TODO: handle batch dimension.
+  assert(itTp.getEncoding().getBatchLvlRank() == 0);
+  if (!itTp.isUnique()) {
+    // Segment high for non-unique iterator.
+    fields.push_back(idxTp);
+  }
+  fields.push_back(idxTp);
+  return success();
+}
+
 namespace {
 
 /// Sparse codegen rule for number of entries operator.
@@ -57,10 +71,114 @@ class ExtractIterSpaceConverter
   }
 };
 
+class SparseIterateOpConverter : public OneToNOpConversionPattern<IterateOp> {
+public:
+  using OneToNOpConversionPattern::OneToNOpConversionPattern;
+  LogicalResult
+  matchAndRewrite(IterateOp op, OpAdaptor adaptor,
+                  OneToNPatternRewriter &rewriter) const override {
+    if (!op.getCrdUsedLvls().empty())
+      return rewriter.notifyMatchFailure(
+          op, "non-empty coordinates list not implemented.");
+
+    Location loc = op.getLoc();
+
+    auto iterSpace = SparseIterationSpace::fromValues(
+        op.getIterSpace().getType(), adaptor.getIterSpace(), 0);
+
+    std::unique_ptr<SparseIterator> it =
+        iterSpace.extractIterator(rewriter, loc);
+
+    if (it->iteratableByFor()) {
+      auto [lo, hi] = it->genForCond(rewriter, loc);
+      Value step = constantIndex(rewriter, loc, 1);
+      SmallVector<Value> ivs;
+      for (ValueRange inits : adaptor.getInitArgs())
+        llvm::append_range(ivs, inits);
+      scf::ForOp forOp = rewriter.create<scf::ForOp>(loc, lo, hi, step, ivs);
+
+      Block *loopBody = op.getBody();
+      OneToNTypeMapping bodyTypeMapping(loopBody->getArgumentTypes());
+      if (failed(typeConverter->convertSignatureArgs(
+              loopBody->getArgumentTypes(), bodyTypeMapping)))
+        return failure();
+      rewriter.applySignatureConversion(loopBody, bodyTypeMapping);
+
+      rewriter.eraseBlock(forOp.getBody());
+      Region &dstRegion = forOp.getRegion();
+      rewriter.inlineRegionBefore(op.getRegion(), dstRegion, dstRegion.end());
+
+      auto yieldOp =
+          llvm::cast<sparse_tensor::YieldOp>(forOp.getBody()->getTerminator());
+
+      rewriter.setInsertionPointToEnd(forOp.getBody());
+      // replace sparse_tensor.yield with scf.yield.
+      rewriter.create<scf::YieldOp>(loc, yieldOp.getResults());
+      rewriter.eraseOp(yieldOp);
+
+      const OneToNTypeMapping &resultMapping = adaptor.getResultMapping();
+      rewriter.replaceOp(op, forOp.getResults(), resultMapping);
+    } else {
+      SmallVector<Value> ivs;
+      llvm::append_range(ivs, it->getCursor());
+      for (ValueRange inits : adaptor.getInitArgs())
+        llvm::append_range(ivs, inits);
+
+      assert(llvm::all_of(ivs, [](Value v) { return v != nullptr; }));
+
+      TypeRange types = ValueRange(ivs).getTypes();
+      auto whileOp = rewriter.create<scf::WhileOp>(loc, types, ivs);
+      SmallVector<Location> l(types.size(), op.getIterator().getLoc());
+
+      // Generates loop conditions.
+      Block *before = rewriter.createBlock(&whileOp.getBefore(), {}, types, l);
+      rewriter.setInsertionPointToStart(before);
+      ValueRange bArgs = before->getArguments();
+      auto [whileCond, remArgs] = it->genWhileCond(rewriter, loc, bArgs);
+      assert(remArgs.size() == adaptor.getInitArgs().size());
+      rewriter.create<scf::ConditionOp>(loc, whileCond, before->getArguments());
+
+      // Generates loop body.
+      Block *loopBody = op.getBody();
+      OneToNTypeMapping bodyTypeMapping(loopBody->getArgumentTypes());
+      if (failed(typeConverter->convertSignatureArgs(
+              loopBody->getArgumentTypes(), bodyTypeMapping)))
+        return failure();
+      rewriter.applySignatureConversion(loopBody, bodyTypeMapping);
+
+      Region &dstRegion = whileOp.getAfter();
+      // TODO: handle uses of coordinate!
+      rewriter.inlineRegionBefore(op.getRegion(), dstRegion, dstRegion.end());
+      ValueRange aArgs = whileOp.getAfterArguments();
+      auto yieldOp = llvm::cast<sparse_tensor::YieldOp>(
+          whileOp.getAfterBody()->getTerminator());
+
+      rewriter.setInsertionPointToEnd(whileOp.getAfterBody());
+
+      aArgs = it->linkNewScope(aArgs);
+      ValueRange nx = it->forward(rewriter, loc);
+      SmallVector<Value> yields;
+      llvm::append_range(yields, nx);
+      llvm::append_range(yields, yieldOp.getResults());
+
+      // Replace sparse_tensor.yield with scf.yield.
+      rewriter.eraseOp(yieldOp);
+      rewriter.create<scf::YieldOp>(loc, yields);
+
+      const OneToNTypeMapping &resultMapping = adaptor.getResultMapping();
+      rewriter.replaceOp(
+          op, whileOp.getResults().drop_front(it->getCursor().size()),
+          resultMapping);
+    }
+    return success();
+  }
+};
+
 } // namespace
 
 mlir::SparseIterationTypeConverter::SparseIterationTypeConverter() {
   addConversion([](Type type) { return type; });
+  addConversion(convertIteratorType);
   addConversion(convertIterSpaceType);
 
   addSourceMaterialization([](OpBuilder &builder, IterSpaceType spTp,
@@ -74,5 +192,6 @@ mlir::SparseIterationTypeConverter::SparseIterationTypeConverter() {
 
 void mlir::populateLowerSparseIterationToSCFPatterns(
     TypeConverter &converter, RewritePatternSet &patterns) {
-  patterns.add<ExtractIterSpaceConverter>(converter, patterns.getContext());
+  patterns.add<ExtractIterSpaceConverter, SparseIterateOpConverter>(
+      converter, patterns.getContext());
 }
diff --git a/mlir/lib/Dialect/SparseTensor/Transforms/Utils/SparseTensorIterator.cpp b/mlir/lib/Dialect/SparseTensor/Transforms/Utils/SparseTensorIterator.cpp
index be8e15d6ae6f4..ef95fcc84bd90 100644
--- a/mlir/lib/Dialect/SparseTensor/Transforms/Utils/SparseTensorIterator.cpp
+++ b/mlir/lib/Dialect/SparseTensor/Transforms/Utils/SparseTensorIterator.cpp
@@ -331,6 +331,13 @@ class TrivialIterator : public ConcreteIterator {
   TrivialIterator(const SparseTensorLevel &stl)
       : ConcreteIterator(stl, IterKind::kTrivial, /*itValCnt=*/1) {}
 
+  TrivialIterator(OpBuilder &b, Location l, const SparseTensorLevel &stl,
+                  Value posLo, Value posHi)
+      : ConcreteIterator(stl, IterKind::kTrivial, /*itValCnt=*/1), posLo(posLo),
+        posHi(posHi) {
+    seek(posLo);
+  }
+
   std::string getDebugInterfacePrefix() const override {
     return std::string("trivial<") + stl.toString() + ">";
   }
@@ -420,6 +427,14 @@ class DedupIterator : public ConcreteIterator {
       : ConcreteIterator(stl, IterKind::kDedup, /*itValCnt=*/2) {
     assert(!stl.isUnique());
   }
+
+  DedupIterator(OpBuilder &b, Location l, const SparseTensorLevel &stl,
+                Value posLo, Value posHi)
+      : ConcreteIterator(stl, IterKind::kDedup, /*itValCnt=*/2), posHi(posHi) {
+    assert(!stl.isUnique());
+    seek({posLo, genSegmentHigh(b, l, posLo)});
+  }
+
   // For LLVM-style RTTI.
   static bool classof(const SparseIterator *from) {
     return from->kind == IterKind::kDedup;
@@ -1532,6 +1547,11 @@ SparseIterationSpace mlir::sparse_tensor::SparseIterationSpace::fromValues(
   return space;
 }
 
+std::unique_ptr<SparseIterator>
+SparseIterationSpace::extractIterator(OpBuilder &b, Location l) const {
+  return makeSimpleIterator(b, l, *this);
+}
+
 //===----------------------------------------------------------------------===//
 // SparseIterator factory functions.
 //===----------------------------------------------------------------------===//
@@ -1590,6 +1610,26 @@ sparse_tensor::makeSynLevelAndIterator(Value sz, unsigned tid, unsigned lvl,
   return std::make_pair(std::move(stl), std::move(it));
 }
 
+std::unique_ptr<SparseIterator>
+sparse_tensor::makeSimpleIterator(OpBuilder &b, Location l,
+                                  const SparseIterationSpace &iterSpace) {
+  // assert(iterSpace.getSpaceDim() == 1);
+  std::unique_ptr<SparseIterator> ret;
+  if (!iterSpace.isUnique()) {
+    // We always deduplicate the non-unique level, but we should optimize it away
+    // if possible.
+    ret = std::make_unique<DedupIterator>(b, l, iterSpace.getLastLvl(),
+                                          iterSpace.getBoundLo(),
+                                          iterSpace.getBoundHi());
+  } else {
+    ret = std::make_unique<TrivialIterator>(b, l, iterSpace.getLastLvl(),
+                                            iterSpace.getBoundLo(),
+                                            iterSpace.getBoundHi());
+  }
+  ret->setSparseEmitStrategy(SparseEmitStrategy::kFunctional);
+  return ret;
+}
+
 std::unique_ptr<SparseIterator>
 sparse_tensor::makeSimpleIterator(const SparseTensorLevel &stl,
                                   SparseEmitStrategy strategy) {
diff --git a/mlir/lib/Dialect/SparseTensor/Transforms/Utils/SparseTensorIterator.h b/mlir/lib/Dialect/SparseTensor/Transforms/Utils/SparseTensorIterator.h
index 17636af2b2f9d..91f363db93f1d 100644
--- a/mlir/lib/Dialect/SparseTensor/Transforms/Utils/SparseTensorIterator.h
+++ b/mlir/lib/Dialect/SparseTensor/Transforms/Utils/SparseTensorIterator.h
@@ -132,6 +132,10 @@ class SparseIterationSpace {
   Value getBoundLo() const { return bound.first; }
   Value getBoundHi() const { return bound.second; }
 
+  // Extracts an iterator to iterate over the sparse iteration space.
+  std::unique_ptr<SparseIterator> extractIterator(OpBuilder &b,
+                                                  Location l) const;
+
 private:
   SmallVector<std::unique_ptr<SparseTensorLevel>> lvls;
   std::pair<Value, Value> bound;
@@ -192,6 +196,13 @@ class SparseIterator {
     crd = nullptr;
   }
 
+  // Reconstructs a SparseIterator directly from the provided ValueRange.
+  static std::unique_ptr<SparseIterator>
+  fromValues(IteratorType dstTp, ValueRange values, unsigned tid);
+
+  // The inverse operation of `fromValues`.
+  SmallVector<Value> toValues() const { llvm_unreachable("Not implemented"); }
+
   //
   // Iterator properties.
   //
@@ -345,12 +356,21 @@ std::unique_ptr<SparseTensorLevel> makeSparseTensorLevel(OpBuilder &b,
                                                          unsigned tid,
                                                          Level lvl);
 
-/// Helper function to create a TensorLevel object from given `tensor`.
+/// Helper function to create a TensorLevel object from given ValueRange.
 std::unique_ptr<SparseTensorLevel> makeSparseTensorLevel(LevelType lt, Value sz,
                                                          ValueRange buffers,
                                                          unsigned tid, Level l);
-/// Helper function to create a simple SparseIterator object that iterates
-/// over the SparseTensorLevel.
+
+/// Helper function to create a simple SparseIterator object that iterates
+/// over the entire iteration space.
+std::unique_ptr<SparseIterator>
+makeSimpleIterator(OpBuilder &b, Location l,
+                   const SparseIterationSpace &iterSpace);
+
+/// Helper function to create a simple SparseIterator object that iterates
+/// over the sparse tensor level.
+/// TODO: switch to `SparseIterationSpace` (which supports N-D iterators) when
+/// feature complete.
 std::unique_ptr<SparseIterator> makeSimpleIterator(
     const SparseTensorLevel &stl,
     SparseEmitStrategy strategy = SparseEmitStrategy::kFunctional);
diff --git a/mlir/test/Dialect/SparseTensor/sparse_iteration_to_scf.mlir b/mlir/test/Dialect/SparseTensor/sparse_iteration_to_scf.mlir
index 5fcd661bb69b2..77a0e89dc7c81 100644
--- a/mlir/test/Dialect/SparseTensor/sparse_iteration_to_scf.mlir
+++ b/mlir/test/Dialect/SparseTensor/sparse_iteration_to_scf.mlir
@@ -1,4 +1,5 @@
 // RUN: mlir-opt %s --lower-sparse-iteration-to-scf | FileCheck %s
+// RUN: mlir-opt %s --sparse-space-collapse --lower-sparse-iteration-to-scf | FileCheck %s --check-prefix COLLAPSED
 
 #COO = #sparse_tensor.encoding<{
   map = (i, j) -> (
@@ -7,17 +8,44 @@
   )
 }>
 
-// CHECK-LABEL:   func.func @sparse_1D_space(
-// CHECK-SAME:      %[[VAL_0:.*]]: tensor<?x?xf32, #sparse{{[0-9]*}}>) -> !sparse_tensor.iter_space<#sparse{{[0-9]*}}, lvls = 0> {
-// CHECK-DAG:       %[[C0:.*]] = arith.constant 0 : index
-// CHECK-DAG:       %[[C1:.*]] = arith.constant 1 : index
-// CHECK-DAG:       %[[LVL_SIZE:.*]] = sparse_tensor.lvl %[[VAL_0]], %[[C0]] : tensor<?x?xf32, #sparse{{[0-9]*}}>
-// CHECK:           %[[POS_MEM:.*]] = sparse_tensor.positions %[[VAL_0]] {level = 0 : index} : tensor<?x?xf32, #sparse{{[0-9]*}}> to memref<?xindex>
-// CHECK:           %[[CRD_MEM:.*]] = sparse_tensor.coordinates %[[VAL_0]] {level = 0 : index} : tensor<?x?xf32, #sparse{{[0-9]*}}> to memref<?xindex>
-// CHECK:           %[[POS_LO:.*]] = memref.load %[[POS_MEM]]{{\[}}%[[C0]]] : memref<?xindex>
-// CHECK:           %[[POS_HI:.*]] = memref.load %[[POS_MEM]]{{\[}}%[[C1]]] : memref<?xindex>
-// CHECK:           %[[ITER_SPACE:.*]] = builtin.unrealized_conversion_cast %[[POS_MEM]], %[[CRD_MEM]], %[[LVL_SIZE]], %[[POS_LO]], %[[POS_HI]]
-func.func @sparse_1D_space(%sp : tensor<?x?xf32, #COO>) -> !sparse_tensor.iter_space<#COO, lvls = 0> {
-  %l1 = sparse_tensor.extract_iteration_space %sp lvls = 0 : tensor<?x?xf32, #COO> -> !sparse_tensor.iter_space<#COO, lvls = 0>
-  return %l1 : !sparse_tensor.iter_space<#COO, lvls = 0>
+// CHECK-LABEL:   @sparse_iteration_to_scf
+//                  // deduplication
+// CHECK:           scf.while {{.*}} {
+// CHECK:           } do {
+// CHECK:           }
+// CHECK:           scf.while {{.*}} {
+// CHECK:           } do {
+//                    // actual computation
+// CHECK:             scf.for {{.*}} {
+// CHECK:               arith.addi
+// CHECK:             }
+//                    // deduplication
+// CHECK:             scf.while {{.*}} {
+// CHECK:             } do {
+// CHECK:             }
+// CHECK:             scf.yield
+// CHECK:           }
+// CHECK:           return
+
+// COLLAPSED-LABEL:   @sparse_iteration_to_scf
+// COLLAPSED:           %[[RET:.*]] = scf.for {{.*}} {
+// COLLAPSED:             %[[VAL:.*]] = arith.addi
+// COLLAPSED:             scf.yield %[[VAL]] : index
+// COLLAPSED:           }
+// COLLAPSED:           return %[[RET]] : index
+func.func @sparse_iteration_to_scf(%sp : tensor<4x8xf32, #COO>) -> index {
+  %i = arith.constant 0 : index
+  %c1 = arith.constant 1 : index
+  %l1 = sparse_tensor.extract_iteration_space %sp lvls = 0
+      : tensor<4x8xf32, #COO> -> !sparse_tensor.iter_space<#COO, lvls = 0>
+  %r1 = sparse_tensor.iterate %it1 in %l1 iter_args(%outer = %i): !sparse_tensor.iter_space<#COO, lvls = 0 to 1> -> index {
+    %l2 = sparse_tensor.extract_iteration_space %sp at %it1 lvls = 1
+        : tensor<4x8xf32, #COO>, !sparse_tensor.iterator<#COO, lvls = 0 to 1> -> !sparse_tensor.iter_space<#COO, lvls = 1>
+    %r2 = sparse_tensor.iterate %it2 in %l2 iter_args(%inner = %outer): !sparse_tensor.iter_space<#COO, lvls = 1 to 2> -> index {
+      %k = arith.addi %inner, %c1 : index
+      sparse_tensor.yield %k : index
+    }
+    sparse_tensor.yield %r2 : index
+  }
+  return %r1 : index
 }

>From dcb6c0d71c8dbb6bb17391c968c3716cfafd3765 Mon Sep 17 00:00:00 2001
From: Fabian Mora <fmora.dev at gmail.com>
Date: Mon, 17 Jun 2024 15:44:35 -0500
Subject: [PATCH 20/26] Reland [mlir][Target] Improve ROCDL gpu serialization
 API (#95813)

Reland: https://github.com/llvm/llvm-project/pull/95456

This patch improves the ROCDL gpu serialization API by:
- Introducing the enum `AMDGCNLibraries` for specifying the AMD GCN
device code libraries to use during linking.
- Removing `getCommonBitcodeLibs` in favor of `AMDGCNLibraries`.
Previously `getCommonBitcodeLibs` would try to load all AMD GCN bitcode
libraries; now it will only load the requested libraries.
- Exposing the `compileToBinary` method and making it virtual, allowing
downstream users to re-use this method.
- Exposing `moduleToObjectImpl`, this method provides a prototype flow
for compiling to binary, allowing downstream users to re-use this
method.
- It also avoids constructing the control variables if no device
libraries are being used.
- Changes the style of the error messages to be composable, i.e., no full
stops.
- Adds an error message for when the ROCm toolkit can't be found but is
required.
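
To make the enum's bitmask semantics concrete, here is a minimal standalone
sketch of the pattern `AMDGCNLibraries` relies on. It is illustrative only:
the real enum obtains its operators from `llvm/ADT/BitmaskEnum.h` via
`LLVM_MARK_AS_BITMASK_ENUM`, while the mini enum, operators, and driver below
are hand-written stand-ins so the snippet compiles on its own.

```cpp
#include <cstdint>
#include <cstdio>

enum class Libs : uint32_t { None = 0, Ockl = 1, Ocml = 2, OpenCL = 4, Hip = 8 };

constexpr Libs operator|(Libs a, Libs b) {
  return static_cast<Libs>(static_cast<uint32_t>(a) | static_cast<uint32_t>(b));
}
constexpr Libs operator&(Libs a, Libs b) {
  return static_cast<Libs>(static_cast<uint32_t>(a) & static_cast<uint32_t>(b));
}
// Mirrors the `any` helper: true when at least one bit is set.
constexpr bool any(Libs l) { return l != Libs::None; }

int main() {
  Libs requested = Libs::Ocml | Libs::Ockl; // only these libraries are linked
  if (any(requested & Libs::Ocml))
    std::puts("would link ocml.bc");
  if (!any(requested & Libs::Hip))
    std::puts("hip.bc is skipped"); // unrequested libraries are never loaded
}
```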
---
 mlir/include/mlir/Target/LLVM/ROCDL/Utils.h |  41 ++-
 mlir/lib/Dialect/GPU/CMakeLists.txt         |   2 +-
 mlir/lib/Target/LLVM/CMakeLists.txt         |  13 +-
 mlir/lib/Target/LLVM/ROCDL/Target.cpp       | 306 +++++++++++---------
 4 files changed, 213 insertions(+), 149 deletions(-)

diff --git a/mlir/include/mlir/Target/LLVM/ROCDL/Utils.h b/mlir/include/mlir/Target/LLVM/ROCDL/Utils.h
index 374fa65bd02e3..44c9ded317fa5 100644
--- a/mlir/include/mlir/Target/LLVM/ROCDL/Utils.h
+++ b/mlir/include/mlir/Target/LLVM/ROCDL/Utils.h
@@ -27,6 +27,19 @@ namespace ROCDL {
 /// 5. Returns an empty string.
 StringRef getROCMPath();
 
+/// Helper enum for specifying the AMD GCN device libraries required for
+/// compilation.
+enum class AMDGCNLibraries : uint32_t {
+  None = 0,
+  Ockl = 1,
+  Ocml = 2,
+  OpenCL = 4,
+  Hip = 8,
+  LastLib = Hip,
+  LLVM_MARK_AS_BITMASK_ENUM(LastLib),
+  All = (LastLib << 1) - 1
+};
+
 /// Base class for all ROCDL serializations from GPU modules into binary
 /// strings. By default this class serializes into LLVM bitcode.
 class SerializeGPUModuleBase : public LLVM::ModuleToObject {
@@ -49,8 +62,8 @@ class SerializeGPUModuleBase : public LLVM::ModuleToObject {
   /// Returns the bitcode files to be loaded.
   ArrayRef<std::string> getFileList() const;
 
-  /// Appends standard ROCm device libraries like `ocml.bc`, `ockl.bc`, etc.
-  LogicalResult appendStandardLibs();
+  /// Appends standard ROCm device libraries to `fileList`.
+  LogicalResult appendStandardLibs(AMDGCNLibraries libs);
 
   /// Loads the bitcode files in `fileList`.
   virtual std::optional<SmallVector<std::unique_ptr<llvm::Module>>>
@@ -63,15 +76,20 @@ class SerializeGPUModuleBase : public LLVM::ModuleToObject {
   LogicalResult handleBitcodeFile(llvm::Module &module) override;
 
 protected:
-  /// Appends the paths of common ROCm device libraries to `libs`.
-  LogicalResult getCommonBitcodeLibs(llvm::SmallVector<std::string> &libs,
-                                     SmallVector<char, 256> &libPath,
-                                     StringRef isaVersion);
-
   /// Adds `oclc` control variables to the LLVM module.
-  void addControlVariables(llvm::Module &module, bool wave64, bool daz,
-                           bool finiteOnly, bool unsafeMath, bool fastMath,
-                           bool correctSqrt, StringRef abiVer);
+  void addControlVariables(llvm::Module &module, AMDGCNLibraries libs,
+                           bool wave64, bool daz, bool finiteOnly,
+                           bool unsafeMath, bool fastMath, bool correctSqrt,
+                           StringRef abiVer);
+
+  /// Compiles assembly to a binary.
+  virtual std::optional<SmallVector<char, 0>>
+  compileToBinary(const std::string &serializedISA);
+
+  /// Default implementation of `ModuleToObject::moduleToObject`.
+  std::optional<SmallVector<char, 0>>
+  moduleToObjectImpl(const gpu::TargetOptions &targetOptions,
+                     llvm::Module &llvmModule);
 
   /// Returns the assembled ISA.
   std::optional<SmallVector<char, 0>> assembleIsa(StringRef isa);
@@ -84,6 +102,9 @@ class SerializeGPUModuleBase : public LLVM::ModuleToObject {
 
   /// List of LLVM bitcode files to link to.
   SmallVector<std::string> fileList;
+
+  /// AMD GCN libraries to use when linking; the default is none.
+  AMDGCNLibraries deviceLibs = AMDGCNLibraries::None;
 };
 } // namespace ROCDL
 } // namespace mlir
diff --git a/mlir/lib/Dialect/GPU/CMakeLists.txt b/mlir/lib/Dialect/GPU/CMakeLists.txt
index 61ab298ebfb98..08c8aea36fac9 100644
--- a/mlir/lib/Dialect/GPU/CMakeLists.txt
+++ b/mlir/lib/Dialect/GPU/CMakeLists.txt
@@ -106,7 +106,7 @@ if(MLIR_ENABLE_ROCM_CONVERSIONS)
       "Building mlir with ROCm support requires the AMDGPU backend")
   endif()
 
-  set(DEFAULT_ROCM_PATH "/opt/rocm" CACHE PATH "Fallback path to search for ROCm installs")
+  set(DEFAULT_ROCM_PATH "" CACHE PATH "Fallback path to search for ROCm installs")
   target_compile_definitions(obj.MLIRGPUTransforms
     PRIVATE
     __DEFAULT_ROCM_PATH__="${DEFAULT_ROCM_PATH}"
diff --git a/mlir/lib/Target/LLVM/CMakeLists.txt b/mlir/lib/Target/LLVM/CMakeLists.txt
index 5a3fa160850b4..6e146710d67af 100644
--- a/mlir/lib/Target/LLVM/CMakeLists.txt
+++ b/mlir/lib/Target/LLVM/CMakeLists.txt
@@ -123,17 +123,18 @@ add_mlir_dialect_library(MLIRROCDLTarget
   )
 
 if(MLIR_ENABLE_ROCM_CONVERSIONS)
-  if (NOT ("AMDGPU" IN_LIST LLVM_TARGETS_TO_BUILD))
-    message(SEND_ERROR
-      "Building mlir with ROCm support requires the AMDGPU backend")
-  endif()
-
   if (DEFINED ROCM_PATH)
     set(DEFAULT_ROCM_PATH "${ROCM_PATH}" CACHE PATH "Fallback path to search for ROCm installs")
   elseif(DEFINED ENV{ROCM_PATH})
     set(DEFAULT_ROCM_PATH "$ENV{ROCM_PATH}" CACHE PATH "Fallback path to search for ROCm installs")
   else()
-    set(DEFAULT_ROCM_PATH "/opt/rocm" CACHE PATH "Fallback path to search for ROCm installs")
+    IF (WIN32)
+      # Avoid setting an UNIX path for Windows.
+      # TODO: Eventually migrate to FindHIP once it becomes a part of CMake.
+      set(DEFAULT_ROCM_PATH "" CACHE PATH "Fallback path to search for ROCm installs")
+    else()
+      set(DEFAULT_ROCM_PATH "/opt/rocm" CACHE PATH "Fallback path to search for ROCm installs")
+    endif()
   endif()
   message(VERBOSE "MLIR Default ROCM toolkit path: ${DEFAULT_ROCM_PATH}")
 
diff --git a/mlir/lib/Target/LLVM/ROCDL/Target.cpp b/mlir/lib/Target/LLVM/ROCDL/Target.cpp
index cc13e5b7436ea..6784f3668bde3 100644
--- a/mlir/lib/Target/LLVM/ROCDL/Target.cpp
+++ b/mlir/lib/Target/LLVM/ROCDL/Target.cpp
@@ -17,9 +17,6 @@
 #include "mlir/Dialect/LLVMIR/ROCDLDialect.h"
 #include "mlir/Support/FileUtilities.h"
 #include "mlir/Target/LLVM/ROCDL/Utils.h"
-#include "mlir/Target/LLVMIR/Dialect/GPU/GPUToLLVMIRTranslation.h"
-#include "mlir/Target/LLVMIR/Dialect/LLVMIR/LLVMToLLVMIRTranslation.h"
-#include "mlir/Target/LLVMIR/Dialect/ROCDL/ROCDLToLLVMIRTranslation.h"
 #include "mlir/Target/LLVMIR/Export.h"
 
 #include "llvm/IR/Constants.h"
@@ -112,8 +109,9 @@ SerializeGPUModuleBase::SerializeGPUModuleBase(
       if (auto file = dyn_cast<StringAttr>(attr))
         fileList.push_back(file.str());
 
-  // Append standard ROCm device bitcode libraries to the files to be loaded.
-  (void)appendStandardLibs();
+  // By default add all libraries if the toolkit path is not empty.
+  if (!getToolkitPath().empty())
+    deviceLibs = AMDGCNLibraries::All;
 }
 
 void SerializeGPUModuleBase::init() {
@@ -138,29 +136,67 @@ ArrayRef<std::string> SerializeGPUModuleBase::getFileList() const {
   return fileList;
 }
 
-LogicalResult SerializeGPUModuleBase::appendStandardLibs() {
+LogicalResult SerializeGPUModuleBase::appendStandardLibs(AMDGCNLibraries libs) {
+  if (libs == AMDGCNLibraries::None)
+    return success();
   StringRef pathRef = getToolkitPath();
-  if (!pathRef.empty()) {
-    SmallVector<char, 256> path;
-    path.insert(path.begin(), pathRef.begin(), pathRef.end());
-    llvm::sys::path::append(path, "amdgcn", "bitcode");
-    pathRef = StringRef(path.data(), path.size());
-    if (!llvm::sys::fs::is_directory(pathRef)) {
-      getOperation().emitRemark() << "ROCm amdgcn bitcode path: " << pathRef
-                                  << " does not exist or is not a directory.";
-      return failure();
-    }
-    StringRef isaVersion =
-        llvm::AMDGPU::getArchNameAMDGCN(llvm::AMDGPU::parseArchAMDGCN(chip));
-    isaVersion.consume_front("gfx");
-    return getCommonBitcodeLibs(fileList, path, isaVersion);
+  // Fail if the toolkit path is empty.
+  if (pathRef.empty())
+    return failure();
+
+  // Get the path for the device libraries.
+  SmallString<256> path;
+  path.insert(path.begin(), pathRef.begin(), pathRef.end());
+  llvm::sys::path::append(path, "amdgcn", "bitcode");
+  pathRef = StringRef(path.data(), path.size());
+
+  // Fail if the path is invalid.
+  if (!llvm::sys::fs::is_directory(pathRef)) {
+    getOperation().emitRemark() << "ROCm amdgcn bitcode path: " << pathRef
+                                << " does not exist or is not a directory";
+    return failure();
   }
+
+  // Get the ISA version.
+  StringRef isaVersion =
+      llvm::AMDGPU::getArchNameAMDGCN(llvm::AMDGPU::parseArchAMDGCN(chip));
+  isaVersion.consume_front("gfx");
+
+  // Helper function for adding a library.
+  auto addLib = [&](const Twine &lib) -> bool {
+    auto baseSize = path.size();
+    llvm::sys::path::append(path, lib);
+    StringRef pathRef(path.data(), path.size());
+    if (!llvm::sys::fs::is_regular_file(pathRef)) {
+      getOperation().emitRemark() << "bitcode library path: " << pathRef
+                                  << " does not exist or is not a file";
+      return true;
+    }
+    fileList.push_back(pathRef.str());
+    path.truncate(baseSize);
+    return false;
+  };
+
+  // Add ROCm device libraries. Fail if any of the libraries is not found,
+  // i.e., if any of the `addLib` calls failed.
+  if ((any(libs & AMDGCNLibraries::Ocml) && addLib("ocml.bc")) ||
+      (any(libs & AMDGCNLibraries::Ockl) && addLib("ockl.bc")) ||
+      (any(libs & AMDGCNLibraries::Hip) && addLib("hip.bc")) ||
+      (any(libs & AMDGCNLibraries::OpenCL) && addLib("opencl.bc")) ||
+      (any(libs & (AMDGCNLibraries::Ocml | AMDGCNLibraries::Ockl)) &&
+       addLib("oclc_isa_version_" + isaVersion + ".bc")))
+    return failure();
   return success();
 }
 
 std::optional<SmallVector<std::unique_ptr<llvm::Module>>>
 SerializeGPUModuleBase::loadBitcodeFiles(llvm::Module &module) {
   SmallVector<std::unique_ptr<llvm::Module>> bcFiles;
+  // Return if there are no libs to load.
+  if (deviceLibs == AMDGCNLibraries::None && fileList.empty())
+    return bcFiles;
+  if (failed(appendStandardLibs(deviceLibs)))
+    return std::nullopt;
   if (failed(loadBitcodeFilesFromList(module.getContext(), fileList, bcFiles,
                                       true)))
     return std::nullopt;
@@ -174,80 +210,76 @@ LogicalResult SerializeGPUModuleBase::handleBitcodeFile(llvm::Module &module) {
   // Stop spamming us with clang version numbers
   if (auto *ident = module.getNamedMetadata("llvm.ident"))
     module.eraseNamedMetadata(ident);
+  // Override the loaded module's data layout and target triple with the
+  // compiler's should there be a discrepancy.
+  setDataLayoutAndTriple(module);
   return success();
 }
 
 void SerializeGPUModuleBase::handleModulePreLink(llvm::Module &module) {
-  [[maybe_unused]] std::optional<llvm::TargetMachine *> targetMachine =
-      getOrCreateTargetMachine();
-  assert(targetMachine && "expect a TargetMachine");
-  addControlVariables(module, target.hasWave64(), target.hasDaz(),
+  // If not all libraries are set, traverse the module to determine which
+  // libraries are required.
+  if (deviceLibs != AMDGCNLibraries::All) {
+    for (llvm::Function &f : module.functions()) {
+      if (f.hasExternalLinkage() && f.hasName() && !f.hasExactDefinition()) {
+        StringRef funcName = f.getName();
+        if ("printf" == funcName)
+          deviceLibs |= AMDGCNLibraries::OpenCL | AMDGCNLibraries::Ockl |
+                        AMDGCNLibraries::Ocml;
+        if (funcName.starts_with("__ockl_"))
+          deviceLibs |= AMDGCNLibraries::Ockl;
+        if (funcName.starts_with("__ocml_"))
+          deviceLibs |= AMDGCNLibraries::Ocml;
+      }
+    }
+  }
+  addControlVariables(module, deviceLibs, target.hasWave64(), target.hasDaz(),
                       target.hasFiniteOnly(), target.hasUnsafeMath(),
                       target.hasFastMath(), target.hasCorrectSqrt(),
                       target.getAbi());
 }
 
-// Get the paths of ROCm device libraries.
-LogicalResult SerializeGPUModuleBase::getCommonBitcodeLibs(
-    llvm::SmallVector<std::string> &libs, SmallVector<char, 256> &libPath,
-    StringRef isaVersion) {
-  auto addLib = [&](StringRef path) -> bool {
-    if (!llvm::sys::fs::is_regular_file(path)) {
-      getOperation().emitRemark() << "Bitcode library path: " << path
-                                  << " does not exist or is not a file.\n";
-      return true;
-    }
-    libs.push_back(path.str());
-    return false;
-  };
-  auto getLibPath = [&libPath](Twine lib) {
-    auto baseSize = libPath.size();
-    llvm::sys::path::append(libPath, lib + ".bc");
-    std::string path(StringRef(libPath.data(), libPath.size()).str());
-    libPath.truncate(baseSize);
-    return path;
-  };
-
-  // Add ROCm device libraries. Fail if any of the libraries is not found.
-  if (addLib(getLibPath("ocml")) || addLib(getLibPath("ockl")) ||
-      addLib(getLibPath("hip")) || addLib(getLibPath("opencl")) ||
-      addLib(getLibPath("oclc_isa_version_" + isaVersion)))
-    return failure();
-  return success();
-}
-
 void SerializeGPUModuleBase::addControlVariables(
-    llvm::Module &module, bool wave64, bool daz, bool finiteOnly,
-    bool unsafeMath, bool fastMath, bool correctSqrt, StringRef abiVer) {
-  llvm::Type *i8Ty = llvm::Type::getInt8Ty(module.getContext());
-  auto addControlVariable = [i8Ty, &module](StringRef name, bool enable) {
+    llvm::Module &module, AMDGCNLibraries libs, bool wave64, bool daz,
+    bool finiteOnly, bool unsafeMath, bool fastMath, bool correctSqrt,
+    StringRef abiVer) {
+  // Return if no device libraries are required.
+  if (libs == AMDGCNLibraries::None)
+    return;
+  // Helper function for adding control variables.
+  auto addControlVariable = [&module](StringRef name, uint32_t value,
+                                      uint32_t bitwidth) {
+    if (module.getNamedGlobal(name)) {
+      return;
+    }
+    llvm::IntegerType *type =
+        llvm::IntegerType::getIntNTy(module.getContext(), bitwidth);
     llvm::GlobalVariable *controlVariable = new llvm::GlobalVariable(
-        module, i8Ty, true, llvm::GlobalValue::LinkageTypes::LinkOnceODRLinkage,
-        llvm::ConstantInt::get(i8Ty, enable), name, nullptr,
-        llvm::GlobalValue::ThreadLocalMode::NotThreadLocal, 4);
+        module, /*isConstant=*/type, true,
+        llvm::GlobalValue::LinkageTypes::LinkOnceODRLinkage,
+        llvm::ConstantInt::get(type, value), name, /*before=*/nullptr,
+        /*threadLocalMode=*/llvm::GlobalValue::ThreadLocalMode::NotThreadLocal,
+        /*addressSpace=*/4);
     controlVariable->setVisibility(
         llvm::GlobalValue::VisibilityTypes::ProtectedVisibility);
-    controlVariable->setAlignment(llvm::MaybeAlign(1));
+    controlVariable->setAlignment(llvm::MaybeAlign(bitwidth / 8));
     controlVariable->setUnnamedAddr(llvm::GlobalValue::UnnamedAddr::Local);
   };
-  addControlVariable("__oclc_finite_only_opt", finiteOnly || fastMath);
-  addControlVariable("__oclc_unsafe_math_opt", unsafeMath || fastMath);
-  addControlVariable("__oclc_daz_opt", daz || fastMath);
-  addControlVariable("__oclc_correctly_rounded_sqrt32",
-                     correctSqrt && !fastMath);
-  addControlVariable("__oclc_wavefrontsize64", wave64);
-
-  llvm::Type *i32Ty = llvm::Type::getInt32Ty(module.getContext());
-  int abi = 500;
-  abiVer.getAsInteger(0, abi);
-  llvm::GlobalVariable *abiVersion = new llvm::GlobalVariable(
-      module, i32Ty, true, llvm::GlobalValue::LinkageTypes::LinkOnceODRLinkage,
-      llvm::ConstantInt::get(i32Ty, abi), "__oclc_ABI_version", nullptr,
-      llvm::GlobalValue::ThreadLocalMode::NotThreadLocal, 4);
-  abiVersion->setVisibility(
-      llvm::GlobalValue::VisibilityTypes::ProtectedVisibility);
-  abiVersion->setAlignment(llvm::MaybeAlign(4));
-  abiVersion->setUnnamedAddr(llvm::GlobalValue::UnnamedAddr::Local);
+  // Add ocml-related control variables.
+  if (any(libs & AMDGCNLibraries::Ocml)) {
+    addControlVariable("__oclc_finite_only_opt", finiteOnly || fastMath, 8);
+    addControlVariable("__oclc_daz_opt", daz || fastMath, 8);
+    addControlVariable("__oclc_correctly_rounded_sqrt32",
+                       correctSqrt && !fastMath, 8);
+    addControlVariable("__oclc_unsafe_math_opt", unsafeMath || fastMath, 8);
+  }
+  // Add ocml- or ockl-related control variables.
+  if (any(libs & (AMDGCNLibraries::Ocml | AMDGCNLibraries::Ockl))) {
+    addControlVariable("__oclc_wavefrontsize64", wave64, 8);
+    int abi = 500;
+    abiVer.getAsInteger(0, abi);
+    addControlVariable("__oclc_ABI_version", abi, 32);
+  }
 }
 
 std::optional<SmallVector<char, 0>>
@@ -312,48 +344,16 @@ SerializeGPUModuleBase::assembleIsa(StringRef isa) {
 
   parser->setTargetParser(*tap);
   parser->Run(false);
-
   return result;
 }
 
-#if MLIR_ENABLE_ROCM_CONVERSIONS
-namespace {
-class AMDGPUSerializer : public SerializeGPUModuleBase {
-public:
-  AMDGPUSerializer(Operation &module, ROCDLTargetAttr target,
-                   const gpu::TargetOptions &targetOptions);
-
-  gpu::GPUModuleOp getOperation();
-
-  // Compile to HSA.
-  std::optional<SmallVector<char, 0>>
-  compileToBinary(const std::string &serializedISA);
-
-  std::optional<SmallVector<char, 0>>
-  moduleToObject(llvm::Module &llvmModule) override;
-
-private:
-  // Target options.
-  gpu::TargetOptions targetOptions;
-};
-} // namespace
-
-AMDGPUSerializer::AMDGPUSerializer(Operation &module, ROCDLTargetAttr target,
-                                   const gpu::TargetOptions &targetOptions)
-    : SerializeGPUModuleBase(module, target, targetOptions),
-      targetOptions(targetOptions) {}
-
-gpu::GPUModuleOp AMDGPUSerializer::getOperation() {
-  return dyn_cast<gpu::GPUModuleOp>(&SerializeGPUModuleBase::getOperation());
-}
-
 std::optional<SmallVector<char, 0>>
-AMDGPUSerializer::compileToBinary(const std::string &serializedISA) {
+SerializeGPUModuleBase::compileToBinary(const std::string &serializedISA) {
   // Assemble the ISA.
   std::optional<SmallVector<char, 0>> isaBinary = assembleIsa(serializedISA);
 
   if (!isaBinary) {
-    getOperation().emitError() << "Failed during ISA assembling.";
+    getOperation().emitError() << "failed during ISA assembling";
     return std::nullopt;
   }
 
@@ -363,7 +363,7 @@ AMDGPUSerializer::compileToBinary(const std::string &serializedISA) {
   if (llvm::sys::fs::createTemporaryFile("kernel%%", "o", tempIsaBinaryFd,
                                          tempIsaBinaryFilename)) {
     getOperation().emitError()
-        << "Failed to create a temporary file for dumping the ISA binary.";
+        << "failed to create a temporary file for dumping the ISA binary";
     return std::nullopt;
   }
   llvm::FileRemover cleanupIsaBinary(tempIsaBinaryFilename);
@@ -378,7 +378,7 @@ AMDGPUSerializer::compileToBinary(const std::string &serializedISA) {
   if (llvm::sys::fs::createTemporaryFile("kernel", "hsaco",
                                          tempHsacoFilename)) {
     getOperation().emitError()
-        << "Failed to create a temporary file for the HSA code object.";
+        << "failed to create a temporary file for the HSA code object";
     return std::nullopt;
   }
   llvm::FileRemover cleanupHsaco(tempHsacoFilename);
@@ -389,7 +389,7 @@ AMDGPUSerializer::compileToBinary(const std::string &serializedISA) {
       lldPath,
       {"ld.lld", "-shared", tempIsaBinaryFilename, "-o", tempHsacoFilename});
   if (lldResult != 0) {
-    getOperation().emitError() << "lld invocation failed.";
+    getOperation().emitError() << "lld invocation failed";
     return std::nullopt;
   }
 
@@ -398,7 +398,7 @@ AMDGPUSerializer::compileToBinary(const std::string &serializedISA) {
       llvm::MemoryBuffer::getFile(tempHsacoFilename, /*IsText=*/false);
   if (!hsacoFile) {
     getOperation().emitError()
-        << "Failed to read the HSA code object from the temp file.";
+        << "failed to read the HSA code object from the temp file";
     return std::nullopt;
   }
 
@@ -407,13 +407,13 @@ AMDGPUSerializer::compileToBinary(const std::string &serializedISA) {
   return SmallVector<char, 0>(buffer.begin(), buffer.end());
 }
 
-std::optional<SmallVector<char, 0>>
-AMDGPUSerializer::moduleToObject(llvm::Module &llvmModule) {
+std::optional<SmallVector<char, 0>> SerializeGPUModuleBase::moduleToObjectImpl(
+    const gpu::TargetOptions &targetOptions, llvm::Module &llvmModule) {
   // Return LLVM IR if the compilation target is offload.
 #define DEBUG_TYPE "serialize-to-llvm"
   LLVM_DEBUG({
-    llvm::dbgs() << "LLVM IR for module: " << getOperation().getNameAttr()
-                 << "\n"
+    llvm::dbgs() << "LLVM IR for module: "
+                 << cast<gpu::GPUModuleOp>(getOperation()).getNameAttr() << "\n"
                  << llvmModule << "\n";
   });
 #undef DEBUG_TYPE
@@ -423,8 +423,8 @@ AMDGPUSerializer::moduleToObject(llvm::Module &llvmModule) {
   std::optional<llvm::TargetMachine *> targetMachine =
       getOrCreateTargetMachine();
   if (!targetMachine) {
-    getOperation().emitError() << "Target Machine unavailable for triple "
-                               << triple << ", can't compile with LLVM\n";
+    getOperation().emitError() << "target Machine unavailable for triple "
+                               << triple << ", can't compile with LLVM";
     return std::nullopt;
   }
 
@@ -432,12 +432,13 @@ AMDGPUSerializer::moduleToObject(llvm::Module &llvmModule) {
   std::optional<std::string> serializedISA =
       translateToISA(llvmModule, **targetMachine);
   if (!serializedISA) {
-    getOperation().emitError() << "Failed translating the module to ISA.";
+    getOperation().emitError() << "failed translating the module to ISA";
     return std::nullopt;
   }
 #define DEBUG_TYPE "serialize-to-isa"
   LLVM_DEBUG({
-    llvm::dbgs() << "ISA for module: " << getOperation().getNameAttr() << "\n"
+    llvm::dbgs() << "ISA for module: "
+                 << cast<gpu::GPUModuleOp>(getOperation()).getNameAttr() << "\n"
                  << *serializedISA << "\n";
   });
 #undef DEBUG_TYPE
@@ -445,9 +446,45 @@ AMDGPUSerializer::moduleToObject(llvm::Module &llvmModule) {
   if (targetOptions.getCompilationTarget() == gpu::CompilationTarget::Assembly)
     return SmallVector<char, 0>(serializedISA->begin(), serializedISA->end());
 
+  // Compiling to binary requires a valid ROCm path; fail if it's not found.
+  if (getToolkitPath().empty()) {
+    getOperation().emitError() << "invalid ROCm path, please set a valid path";
+    return std::nullopt;
+  }
+
   // Compile to binary.
   return compileToBinary(*serializedISA);
 }
+
+#if MLIR_ENABLE_ROCM_CONVERSIONS
+namespace {
+class AMDGPUSerializer : public SerializeGPUModuleBase {
+public:
+  AMDGPUSerializer(Operation &module, ROCDLTargetAttr target,
+                   const gpu::TargetOptions &targetOptions);
+
+  gpu::GPUModuleOp getOperation();
+
+  std::optional<SmallVector<char, 0>>
+  moduleToObject(llvm::Module &llvmModule) override;
+
+private:
+  // Target options.
+  gpu::TargetOptions targetOptions;
+};
+} // namespace
+
+AMDGPUSerializer::AMDGPUSerializer(Operation &module, ROCDLTargetAttr target,
+                                   const gpu::TargetOptions &targetOptions)
+    : SerializeGPUModuleBase(module, target, targetOptions),
+      targetOptions(targetOptions) {}
+
+gpu::GPUModuleOp AMDGPUSerializer::getOperation() {
+  return dyn_cast<gpu::GPUModuleOp>(&SerializeGPUModuleBase::getOperation());
+}
+
+std::optional<SmallVector<char, 0>>
+AMDGPUSerializer::moduleToObject(llvm::Module &llvmModule) {
+  return moduleToObjectImpl(targetOptions, llvmModule);
+}
 #endif // MLIR_ENABLE_ROCM_CONVERSIONS
 
 std::optional<SmallVector<char, 0>> ROCDLTargetAttrImpl::serializeToObject(
@@ -457,7 +494,7 @@ std::optional<SmallVector<char, 0>> ROCDLTargetAttrImpl::serializeToObject(
   if (!module)
     return std::nullopt;
   if (!mlir::isa<gpu::GPUModuleOp>(module)) {
-    module->emitError("Module must be a GPU module.");
+    module->emitError("module must be a GPU module");
     return std::nullopt;
   }
 #if MLIR_ENABLE_ROCM_CONVERSIONS
@@ -466,8 +503,8 @@ std::optional<SmallVector<char, 0>> ROCDLTargetAttrImpl::serializeToObject(
   serializer.init();
   return serializer.run();
 #else
-  module->emitError("The `AMDGPU` target was not built. Please enable it when "
-                    "building LLVM.");
+  module->emitError("the `AMDGPU` target was not built. Please enable it when "
+                    "building LLVM");
   return std::nullopt;
 #endif // MLIR_ENABLE_ROCM_CONVERSIONS
 }
@@ -477,10 +514,15 @@ ROCDLTargetAttrImpl::createObject(Attribute attribute,
                                   const SmallVector<char, 0> &object,
                                   const gpu::TargetOptions &options) const {
   gpu::CompilationTarget format = options.getCompilationTarget();
+  // If the format is `fatbin`, transform it to binary, as `fatbin` is not
+  // yet supported.
+  if (format > gpu::CompilationTarget::Binary)
+    format = gpu::CompilationTarget::Binary;
+
+  DictionaryAttr properties{};
   Builder builder(attribute.getContext());
   return builder.getAttr<gpu::ObjectAttr>(
-      attribute,
-      format > gpu::CompilationTarget::Binary ? gpu::CompilationTarget::Binary
-                                              : format,
-      builder.getStringAttr(StringRef(object.data(), object.size())), nullptr);
+      attribute, format,
+      builder.getStringAttr(StringRef(object.data(), object.size())),
+      properties);
 }

>From 44ca65661e2cdd5636f592f573a2837e6ae948ba Mon Sep 17 00:00:00 2001
From: Joseph Huber <huberjn at outlook.com>
Date: Mon, 17 Jun 2024 15:51:26 -0500
Subject: [PATCH 21/26] [libc] Only include getauxval on AARCH64 targets
 (#95844)

Summary:
Not all platforms support this function or header, but it was being
included by every test. Move it inside of the `ifdef` for the only user,
which is aarch64.
---
 libc/test/IntegrationTest/test.cpp       | 3 ++-
 libc/test/UnitTest/HermeticTestUtils.cpp | 3 ++-
 2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/libc/test/IntegrationTest/test.cpp b/libc/test/IntegrationTest/test.cpp
index a8b2f2911fd8e..0c961dbafb840 100644
--- a/libc/test/IntegrationTest/test.cpp
+++ b/libc/test/IntegrationTest/test.cpp
@@ -7,7 +7,6 @@
 //===----------------------------------------------------------------------===//
 
 #include "src/__support/common.h"
-#include "src/sys/auxv/getauxval.h"
 #include <stddef.h>
 #include <stdint.h>
 
@@ -83,6 +82,8 @@ void *realloc(void *ptr, size_t s) {
 void *__dso_handle = nullptr;
 
 #ifdef LIBC_TARGET_ARCH_IS_AARCH64
+#include "src/sys/auxv/getauxval.h"
+
 // Due to historical reasons, libgcc on aarch64 may expect __getauxval to be
 // defined. See also https://gcc.gnu.org/pipermail/gcc-cvs/2020-June/300635.html
 unsigned long __getauxval(unsigned long id) {
diff --git a/libc/test/UnitTest/HermeticTestUtils.cpp b/libc/test/UnitTest/HermeticTestUtils.cpp
index 6e815e6c8aab0..191e54b7344a6 100644
--- a/libc/test/UnitTest/HermeticTestUtils.cpp
+++ b/libc/test/UnitTest/HermeticTestUtils.cpp
@@ -7,7 +7,6 @@
 //===----------------------------------------------------------------------===//
 
 #include "src/__support/common.h"
-#include "src/sys/auxv/getauxval.h"
 #include <stddef.h>
 #include <stdint.h>
 
@@ -111,6 +110,8 @@ void __cxa_pure_virtual() {
 void *__dso_handle = nullptr;
 
 #ifdef LIBC_TARGET_ARCH_IS_AARCH64
+#include "src/sys/auxv/getauxval.h"
+
 // Due to historical reasons, libgcc on aarch64 may expect __getauxval to be
 // defined. See also https://gcc.gnu.org/pipermail/gcc-cvs/2020-June/300635.html
 unsigned long __getauxval(unsigned long id) {

>From a50bcc03cbaecf6473c6bf41f4497758a7876f3d Mon Sep 17 00:00:00 2001
From: lntue <35648136+lntue at users.noreply.github.com>
Date: Mon, 17 Jun 2024 16:56:19 -0400
Subject: [PATCH 22/26] [libc][stdlib] Only add internal malloc in full build
 mode.  Use the system malloc in overlay mode. (#95845)

This causes an issue in overlay mode:
https://github.com/llvm/llvm-project/pull/95736#issuecomment-2172739705
---
 libc/src/stdlib/CMakeLists.txt | 26 +++++++++++++++++---------
 1 file changed, 17 insertions(+), 9 deletions(-)

diff --git a/libc/src/stdlib/CMakeLists.txt b/libc/src/stdlib/CMakeLists.txt
index e26c19f03f5ab..fdbf7b75e72f4 100644
--- a/libc/src/stdlib/CMakeLists.txt
+++ b/libc/src/stdlib/CMakeLists.txt
@@ -418,15 +418,23 @@ else()
       libc.src.string.memory_utils.inline_memcpy
       libc.src.string.memory_utils.inline_memset
   )
-  add_entrypoint_object(
-    malloc
-    SRCS
-      freelist_malloc.cpp
-    HDRS
-      malloc.h
-    DEPENDS
-      .freelist_heap
-  )
+  # Only add malloc in full build mode.  Use the system malloc in overlay mode.
+  if(LLVM_LIBC_FULL_BUILD)
+    add_entrypoint_object(
+      malloc
+      SRCS
+        freelist_malloc.cpp
+      HDRS
+        malloc.h
+      DEPENDS
+        .freelist_heap
+    )
+  else()
+    add_entrypoint_external(
+      malloc
+    )
+  endif()
+
   add_entrypoint_external(
     free
   )

>From 16aa39ad94350670f4d72dace0a4866fbe10d716 Mon Sep 17 00:00:00 2001
From: Joshua Baehring <98630690+JoshuaMBa at users.noreply.github.com>
Date: Mon, 17 Jun 2024 14:06:31 -0700
Subject: [PATCH 23/26] [scudo] Update error handling for secondary cache entry
 count (#95595)

Initially, the scudo allocator would return an error if the user attempted
to set the cache capacity (i.e. the number of possible entries in the
cache) above the maximum cache capacity. Now the allocator will resort to
using the maximum cache capacity in this event. An error will still be
returned if the user attempts to set the number of entries to a negative
value.
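
The behavior change is small enough to capture in a self-contained sketch.
Here `kEntriesArraySize` stands in for `Config::getEntriesArraySize()` and
the allocator's atomic store is reduced to a plain reference; only the
clamping logic mirrors the patch.

```cpp
#include <algorithm>
#include <cstdint>
#include <cstdio>

constexpr uint32_t kEntriesArraySize = 32; // stand-in for the config value

// Returns false only for negative requests; oversized requests are now
// clamped to the array size instead of being rejected.
bool setMaxCacheEntriesCount(int64_t Value, uint32_t &MaxEntriesCount) {
  if (Value < 0)
    return false;
  MaxEntriesCount = std::min(static_cast<uint32_t>(Value), kEntriesArraySize);
  return true;
}

int main() {
  uint32_t Count = 0;
  bool Ok = setMaxCacheEntriesCount(4096, Count); // above the array size
  std::printf("ok=%d count=%u\n", Ok, Count);     // ok=1 count=32 (clamped)
  Ok = setMaxCacheEntriesCount(-1, Count);        // negative still rejected
  std::printf("ok=%d\n", Ok);                     // ok=0
}
```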
---
 compiler-rt/lib/scudo/standalone/secondary.h              | 7 ++++---
 compiler-rt/lib/scudo/standalone/tests/secondary_test.cpp | 6 +++---
 2 files changed, 7 insertions(+), 6 deletions(-)

diff --git a/compiler-rt/lib/scudo/standalone/secondary.h b/compiler-rt/lib/scudo/standalone/secondary.h
index d8c9f5bcfcaf6..9a8e53be388b7 100644
--- a/compiler-rt/lib/scudo/standalone/secondary.h
+++ b/compiler-rt/lib/scudo/standalone/secondary.h
@@ -391,10 +391,11 @@ template <typename Config> class MapAllocatorCache {
       return true;
     }
     if (O == Option::MaxCacheEntriesCount) {
-      const u32 MaxCount = static_cast<u32>(Value);
-      if (MaxCount > Config::getEntriesArraySize())
+      if (Value < 0)
         return false;
-      atomic_store_relaxed(&MaxEntriesCount, MaxCount);
+      atomic_store_relaxed(
+          &MaxEntriesCount,
+          Min<u32>(static_cast<u32>(Value), Config::getEntriesArraySize()));
       return true;
     }
     if (O == Option::MaxCacheEntrySize) {
diff --git a/compiler-rt/lib/scudo/standalone/tests/secondary_test.cpp b/compiler-rt/lib/scudo/standalone/tests/secondary_test.cpp
index 8f0250e88ebf3..af69313214ea6 100644
--- a/compiler-rt/lib/scudo/standalone/tests/secondary_test.cpp
+++ b/compiler-rt/lib/scudo/standalone/tests/secondary_test.cpp
@@ -192,9 +192,9 @@ TEST_F(MapAllocatorTest, SecondaryIterate) {
 
 TEST_F(MapAllocatorTest, SecondaryOptions) {
   // Attempt to set a maximum number of entries higher than the array size.
-  EXPECT_FALSE(
-      Allocator->setOption(scudo::Option::MaxCacheEntriesCount, 4096U));
-  // A negative number will be cast to a scudo::u32, and fail.
+  EXPECT_TRUE(Allocator->setOption(scudo::Option::MaxCacheEntriesCount, 4096U));
+
+  // Attempt to set an invalid (negative) number of entries.
   EXPECT_FALSE(Allocator->setOption(scudo::Option::MaxCacheEntriesCount, -1));
   if (Allocator->canCache(0U)) {
     // Various valid combinations.

>From 6037a698b919e0c8dbf39673d68835c49dc5130a Mon Sep 17 00:00:00 2001
From: Mircea Trofin <mtrofin at google.com>
Date: Mon, 17 Jun 2024 14:18:55 -0700
Subject: [PATCH 24/26] [mlgo] inline for size: add bypass mechanism for
 preserving performance (#95616)

This allows shrinking the cold part of the code for size without sacrificing performance.
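
A standalone sketch of the bypass decision follows (with hypothetical,
simplified types standing in for the advisor, `ProfileSummaryInfo`, and the
call site): under `-ml-inliner-skip-policy=if-caller-not-cold`, only cold
callers are routed to the size-oriented ML policy, and everything else keeps
the default performance-oriented advice.

```cpp
#include <cstdio>

enum class SkipMLPolicyCriteria { Never, IfCallerIsNotCold };

struct CallerInfo {
  bool entryIsCold; // stand-in for PSI.isFunctionEntryCold(&Caller)
};

const char *adviceSource(SkipMLPolicyCriteria Policy, const CallerInfo &C) {
  if (Policy == SkipMLPolicyCriteria::IfCallerIsNotCold && !C.entryIsCold)
    return "default advisor"; // bypass: hot code keeps its performance
  return "ML size model";     // only the cold part is shrunk for size
}

int main() {
  auto Policy = SkipMLPolicyCriteria::IfCallerIsNotCold;
  std::printf("hot caller  -> %s\n", adviceSource(Policy, {false}));
  std::printf("cold caller -> %s\n", adviceSource(Policy, {true}));
}
```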
---
 llvm/include/llvm/Analysis/MLInlineAdvisor.h  |  2 +
 llvm/lib/Analysis/MLInlineAdvisor.cpp         | 18 ++++-
 .../models/gen-inline-oz-test-model.py        | 14 +++-
 llvm/test/Transforms/Inline/ML/bypass.ll      | 78 +++++++++++++++++++
 4 files changed, 107 insertions(+), 5 deletions(-)
 create mode 100644 llvm/test/Transforms/Inline/ML/bypass.ll

diff --git a/llvm/include/llvm/Analysis/MLInlineAdvisor.h b/llvm/include/llvm/Analysis/MLInlineAdvisor.h
index f58862e533529..2aa077fe0e035 100644
--- a/llvm/include/llvm/Analysis/MLInlineAdvisor.h
+++ b/llvm/include/llvm/Analysis/MLInlineAdvisor.h
@@ -13,6 +13,7 @@
 #include "llvm/Analysis/InlineAdvisor.h"
 #include "llvm/Analysis/LazyCallGraph.h"
 #include "llvm/Analysis/MLModelRunner.h"
+#include "llvm/Analysis/ProfileSummaryInfo.h"
 #include "llvm/IR/PassManager.h"
 
 #include <deque>
@@ -89,6 +90,7 @@ class MLInlineAdvisor : public InlineAdvisor {
   llvm::SmallPtrSet<const LazyCallGraph::Node *, 1> NodesInLastSCC;
   DenseSet<const LazyCallGraph::Node *> AllNodes;
   bool ForceStop = false;
+  ProfileSummaryInfo &PSI;
 };
 
 /// InlineAdvice that tracks changes post inlining. For that reason, it only
diff --git a/llvm/lib/Analysis/MLInlineAdvisor.cpp b/llvm/lib/Analysis/MLInlineAdvisor.cpp
index 75eb8ece2e447..21946572339b9 100644
--- a/llvm/lib/Analysis/MLInlineAdvisor.cpp
+++ b/llvm/lib/Analysis/MLInlineAdvisor.cpp
@@ -14,6 +14,7 @@
 #include "llvm/Analysis/MLInlineAdvisor.h"
 #include "llvm/ADT/SCCIterator.h"
 #include "llvm/Analysis/AssumptionCache.h"
+#include "llvm/Analysis/BlockFrequencyInfo.h"
 #include "llvm/Analysis/CallGraph.h"
 #include "llvm/Analysis/FunctionPropertiesAnalysis.h"
 #include "llvm/Analysis/InlineCost.h"
@@ -23,6 +24,7 @@
 #include "llvm/Analysis/LoopInfo.h"
 #include "llvm/Analysis/MLModelRunner.h"
 #include "llvm/Analysis/OptimizationRemarkEmitter.h"
+#include "llvm/Analysis/ProfileSummaryInfo.h"
 #include "llvm/Analysis/ReleaseModeModelRunner.h"
 #include "llvm/Analysis/TargetTransformInfo.h"
 #include "llvm/IR/Dominators.h"
@@ -46,6 +48,14 @@ static cl::opt<bool>
     InteractiveIncludeDefault("inliner-interactive-include-default", cl::Hidden,
                               cl::desc(InclDefaultMsg));
 
+enum class SkipMLPolicyCriteria { Never, IfCallerIsNotCold };
+
+static cl::opt<SkipMLPolicyCriteria> SkipPolicy(
+    "ml-inliner-skip-policy", cl::Hidden, cl::init(SkipMLPolicyCriteria::Never),
+    cl::values(clEnumValN(SkipMLPolicyCriteria::Never, "never", "never"),
+               clEnumValN(SkipMLPolicyCriteria::IfCallerIsNotCold,
+                          "if-caller-not-cold", "if the caller is not cold")));
+
 #if defined(LLVM_HAVE_TF_AOT_INLINERSIZEMODEL)
 // codegen-ed file
 #include "InlinerSizeModel.h" // NOLINT
@@ -129,7 +139,8 @@ MLInlineAdvisor::MLInlineAdvisor(
           M, MAM.getResult<FunctionAnalysisManagerModuleProxy>(M).getManager()),
       ModelRunner(std::move(Runner)), GetDefaultAdvice(GetDefaultAdvice),
       CG(MAM.getResult<LazyCallGraphAnalysis>(M)),
-      InitialIRSize(getModuleIRSize()), CurrentIRSize(InitialIRSize) {
+      InitialIRSize(getModuleIRSize()), CurrentIRSize(InitialIRSize),
+      PSI(MAM.getResult<ProfileSummaryAnalysis>(M)) {
   assert(ModelRunner);
   ModelRunner->switchContext("");
   // Extract the 'call site height' feature - the position of a call site
@@ -334,6 +345,11 @@ std::unique_ptr<InlineAdvice> MLInlineAdvisor::getAdviceImpl(CallBase &CB) {
   auto &TIR = FAM.getResult<TargetIRAnalysis>(Callee);
   auto &ORE = FAM.getResult<OptimizationRemarkEmitterAnalysis>(Caller);
 
+  if (SkipPolicy == SkipMLPolicyCriteria::IfCallerIsNotCold) {
+    if (!PSI.isFunctionEntryCold(&Caller))
+      return std::make_unique<InlineAdvice>(this, CB, ORE,
+                                            GetDefaultAdvice(CB));
+  }
   auto MandatoryKind = InlineAdvisor::getMandatoryKind(CB, FAM, ORE);
   // If this is a "never inline" case, there won't be any changes to internal
   // state we need to track, so we can just return the base InlineAdvice, which
diff --git a/llvm/lib/Analysis/models/gen-inline-oz-test-model.py b/llvm/lib/Analysis/models/gen-inline-oz-test-model.py
index 4898509ea544f..83055890283e8 100644
--- a/llvm/lib/Analysis/models/gen-inline-oz-test-model.py
+++ b/llvm/lib/Analysis/models/gen-inline-oz-test-model.py
@@ -102,12 +102,12 @@ def get_output_spec_path(path):
     return os.path.join(path, "output_spec.json")
 
 
-def build_mock_model(path, signature):
+def build_mock_model(path, signature, advice):
     """Build and save the mock model with the given signature"""
     module = tf.Module()
 
     def action(*inputs):
-        return {signature["output"]: tf.constant(value=1, dtype=tf.int64)}
+        return {signature["output"]: tf.constant(value=advice, dtype=tf.int64)}
 
     module.action = tf.function()(action)
     action = {"action": module.action.get_concrete_function(signature["inputs"])}
@@ -128,12 +128,18 @@ def get_signature():
 
 
 def main(argv):
-    assert len(argv) == 2
+    assert len(argv) == 2 or (len(argv) == 3 and argv[2] == "never")
     model_path = argv[1]
 
     print(f"Output model to: [{argv[1]}]")
+
+    constant_advice = 1
+    if len(argv) == 3:
+        constant_advice = 0
+    print(f"The model will always return: {constant_advice}")
+
     signature = get_signature()
-    build_mock_model(model_path, signature)
+    build_mock_model(model_path, signature, constant_advice)
 
 
 if __name__ == "__main__":
diff --git a/llvm/test/Transforms/Inline/ML/bypass.ll b/llvm/test/Transforms/Inline/ML/bypass.ll
new file mode 100644
index 0000000000000..ccdefdcc93bfe
--- /dev/null
+++ b/llvm/test/Transforms/Inline/ML/bypass.ll
@@ -0,0 +1,78 @@
+; REQUIRES: have_tflite
+; RUN: rm -rf %t.runfiles %t.tflite %t.model_out
+; RUN: mkdir %t.runfiles
+; RUN: cp %S/../../../../lib/Analysis/models/gen-inline-oz-test-model.py %t.runfiles
+; RUN: cp %S/../../../../lib/Analysis/models/saved-model-to-tflite.py %t.runfiles
+; RUN: %python %t.runfiles/gen-inline-oz-test-model.py %t.model_out never
+; RUN: %python %t.runfiles/saved-model-to-tflite.py %t.model_out %t.tflite
+
+; When running O2, we expect both callers to inline callee.
+; RUN: opt < %s -passes='default<O2>' -inline-threshold=0 -hot-callsite-threshold=100 -S | FileCheck %s --check-prefixes=O2-HOT,O2-COLD
+
+; The ML model we use always blocks inlining (by construction)
+; RUN: opt < %s -passes='default<O2>' -inline-threshold=0 -hot-callsite-threshold=100 \
+; RUN:  -enable-ml-inliner=development -ml-inliner-model-under-training=%t.tflite \
+; RUN:  -S | FileCheck %s --check-prefixes=ML-HOT,ML-COLD
+
+; When bypassing ML for non-cold callers, the hot caller will have its callee inlined, but the cold one won't
+; RUN: opt < %s -passes='default<O2>' -inline-threshold=0 -hot-callsite-threshold=100 \
+; RUN:  -enable-ml-inliner=development -ml-inliner-model-under-training=%t.tflite \
+; RUN: -ml-inliner-skip-policy=if-caller-not-cold -S | FileCheck %s --check-prefixes=O2-HOT,ML-COLD
+
+declare void @extern()
+
+define i32 @callee(i32 %x) {
+  %x1 = add i32 %x, 1
+  %x2 = add i32 %x1, 1
+  %x3 = add i32 %x2, 1
+  call void @extern()
+  call void @extern()
+  ret i32 %x3
+}
+
+define i32 @hot_caller(i32 %y1) !prof !15 {
+  %y = call i32 @callee(i32 %y1), !prof !16
+  ret i32 %y
+}
+
+define i32 @cold_caller(i32 %y1) !prof !17 {
+  %y = call i32 @callee(i32 %y1), !prof !16
+  ret i32 %y
+}
+
+
+!llvm.module.flags = !{!1}
+!15 = !{!"function_entry_count", i64 300}
+!16 = !{!"branch_weights", i64 300}
+!17 = !{!"function_entry_count", i64 1}
+
+!1 = !{i32 1, !"ProfileSummary", !2}
+!2 = !{!3, !4, !5, !6, !7, !8, !9, !10}
+!3 = !{!"ProfileFormat", !"SampleProfile"}
+!4 = !{!"TotalCount", i64 10000}
+!5 = !{!"MaxCount", i64 1000}
+!6 = !{!"MaxInternalCount", i64 1}
+!7 = !{!"MaxFunctionCount", i64 1000}
+!8 = !{!"NumCounts", i64 3}
+!9 = !{!"NumFunctions", i64 3}
+!10 = !{!"DetailedSummary", !11}
+!11 = !{!12, !13, !14}
+!12 = !{i32 10000, i64 100, i32 1}
+!13 = !{i32 999000, i64 100, i32 1}
+!14 = !{i32 999999, i64 1, i32 2}
+
+; O2-HOT-LABEL: @hot_caller
+; O2-HOT-NOT: call i32 @callee
+; O2-HOT: call void @extern
+; O2-HOT-NEXT: call void @extern
+; O2-HOT-NEXT: ret
+; O2-COLD-LABEL: @cold_caller
+; O2-COLD-NOT: call i32 @callee
+; O2-COLD: call void @extern
+; O2-COLD-NEXT: call void @extern
+; O2-COLD-NEXT: ret
+
+; ML-HOT-LABEL: @hot_caller
+; ML-HOT-NEXT: call i32 @callee
+; ML-COLD-LABEL: @cold_caller
+; ML-COLD-NEXT: call i32 @callee
\ No newline at end of file

>From 3a2f7d8a9f84db380af5122418098cb28a57443f Mon Sep 17 00:00:00 2001
From: Fabian Mora <fmora.dev at gmail.com>
Date: Mon, 17 Jun 2024 16:19:21 -0500
Subject: [PATCH 25/26] Revert "Reland [mlir][Target] Improve ROCDL gpu
 serialization API" (#95847)

Reverts llvm/llvm-project#95813
---
 mlir/include/mlir/Target/LLVM/ROCDL/Utils.h |  41 +--
 mlir/lib/Dialect/GPU/CMakeLists.txt         |   2 +-
 mlir/lib/Target/LLVM/CMakeLists.txt         |  13 +-
 mlir/lib/Target/LLVM/ROCDL/Target.cpp       | 306 +++++++++-----------
 4 files changed, 149 insertions(+), 213 deletions(-)

diff --git a/mlir/include/mlir/Target/LLVM/ROCDL/Utils.h b/mlir/include/mlir/Target/LLVM/ROCDL/Utils.h
index 44c9ded317fa5..374fa65bd02e3 100644
--- a/mlir/include/mlir/Target/LLVM/ROCDL/Utils.h
+++ b/mlir/include/mlir/Target/LLVM/ROCDL/Utils.h
@@ -27,19 +27,6 @@ namespace ROCDL {
 /// 5. Returns an empty string.
 StringRef getROCMPath();
 
-/// Helper enum for specifying the AMD GCN device libraries required for
-/// compilation.
-enum class AMDGCNLibraries : uint32_t {
-  None = 0,
-  Ockl = 1,
-  Ocml = 2,
-  OpenCL = 4,
-  Hip = 8,
-  LastLib = Hip,
-  LLVM_MARK_AS_BITMASK_ENUM(LastLib),
-  All = (LastLib << 1) - 1
-};
-
 /// Base class for all ROCDL serializations from GPU modules into binary
 /// strings. By default this class serializes into LLVM bitcode.
 class SerializeGPUModuleBase : public LLVM::ModuleToObject {
@@ -62,8 +49,8 @@ class SerializeGPUModuleBase : public LLVM::ModuleToObject {
   /// Returns the bitcode files to be loaded.
   ArrayRef<std::string> getFileList() const;
 
-  /// Appends standard ROCm device libraries to `fileList`.
-  LogicalResult appendStandardLibs(AMDGCNLibraries libs);
+  /// Appends standard ROCm device libraries like `ocml.bc`, `ockl.bc`, etc.
+  LogicalResult appendStandardLibs();
 
   /// Loads the bitcode files in `fileList`.
   virtual std::optional<SmallVector<std::unique_ptr<llvm::Module>>>
@@ -76,20 +63,15 @@ class SerializeGPUModuleBase : public LLVM::ModuleToObject {
   LogicalResult handleBitcodeFile(llvm::Module &module) override;
 
 protected:
-  /// Adds `oclc` control variables to the LLVM module.
-  void addControlVariables(llvm::Module &module, AMDGCNLibraries libs,
-                           bool wave64, bool daz, bool finiteOnly,
-                           bool unsafeMath, bool fastMath, bool correctSqrt,
-                           StringRef abiVer);
+  /// Appends the paths of common ROCm device libraries to `libs`.
+  LogicalResult getCommonBitcodeLibs(llvm::SmallVector<std::string> &libs,
+                                     SmallVector<char, 256> &libPath,
+                                     StringRef isaVersion);
 
-  /// Compiles assembly to a binary.
-  virtual std::optional<SmallVector<char, 0>>
-  compileToBinary(const std::string &serializedISA);
-
-  /// Default implementation of `ModuleToObject::moduleToObject`.
-  std::optional<SmallVector<char, 0>>
-  moduleToObjectImpl(const gpu::TargetOptions &targetOptions,
-                     llvm::Module &llvmModule);
+  /// Adds `oclc` control variables to the LLVM module.
+  void addControlVariables(llvm::Module &module, bool wave64, bool daz,
+                           bool finiteOnly, bool unsafeMath, bool fastMath,
+                           bool correctSqrt, StringRef abiVer);
 
   /// Returns the assembled ISA.
   std::optional<SmallVector<char, 0>> assembleIsa(StringRef isa);
@@ -102,9 +84,6 @@ class SerializeGPUModuleBase : public LLVM::ModuleToObject {
 
   /// List of LLVM bitcode files to link to.
   SmallVector<std::string> fileList;
-
-  /// AMD GCN libraries to use when linking; the default is none.
-  AMDGCNLibraries deviceLibs = AMDGCNLibraries::None;
 };
 } // namespace ROCDL
 } // namespace mlir
diff --git a/mlir/lib/Dialect/GPU/CMakeLists.txt b/mlir/lib/Dialect/GPU/CMakeLists.txt
index 08c8aea36fac9..61ab298ebfb98 100644
--- a/mlir/lib/Dialect/GPU/CMakeLists.txt
+++ b/mlir/lib/Dialect/GPU/CMakeLists.txt
@@ -106,7 +106,7 @@ if(MLIR_ENABLE_ROCM_CONVERSIONS)
       "Building mlir with ROCm support requires the AMDGPU backend")
   endif()
 
-  set(DEFAULT_ROCM_PATH "" CACHE PATH "Fallback path to search for ROCm installs")
+  set(DEFAULT_ROCM_PATH "/opt/rocm" CACHE PATH "Fallback path to search for ROCm installs")
   target_compile_definitions(obj.MLIRGPUTransforms
     PRIVATE
     __DEFAULT_ROCM_PATH__="${DEFAULT_ROCM_PATH}"
diff --git a/mlir/lib/Target/LLVM/CMakeLists.txt b/mlir/lib/Target/LLVM/CMakeLists.txt
index 6e146710d67af..5a3fa160850b4 100644
--- a/mlir/lib/Target/LLVM/CMakeLists.txt
+++ b/mlir/lib/Target/LLVM/CMakeLists.txt
@@ -123,18 +123,17 @@ add_mlir_dialect_library(MLIRROCDLTarget
   )
 
 if(MLIR_ENABLE_ROCM_CONVERSIONS)
+  if (NOT ("AMDGPU" IN_LIST LLVM_TARGETS_TO_BUILD))
+    message(SEND_ERROR
+      "Building mlir with ROCm support requires the AMDGPU backend")
+  endif()
+
   if (DEFINED ROCM_PATH)
     set(DEFAULT_ROCM_PATH "${ROCM_PATH}" CACHE PATH "Fallback path to search for ROCm installs")
   elseif(DEFINED ENV{ROCM_PATH})
     set(DEFAULT_ROCM_PATH "$ENV{ROCM_PATH}" CACHE PATH "Fallback path to search for ROCm installs")
   else()
-    IF (WIN32)
-      # Avoid setting an UNIX path for Windows.
-      # TODO: Eventually migrate to FindHIP once it becomes a part of CMake.
-      set(DEFAULT_ROCM_PATH "" CACHE PATH "Fallback path to search for ROCm installs")
-    else()
-      set(DEFAULT_ROCM_PATH "/opt/rocm" CACHE PATH "Fallback path to search for ROCm installs")
-    endif()
+    set(DEFAULT_ROCM_PATH "/opt/rocm" CACHE PATH "Fallback path to search for ROCm installs")
   endif()
   message(VERBOSE "MLIR Default ROCM toolkit path: ${DEFAULT_ROCM_PATH}")
 
diff --git a/mlir/lib/Target/LLVM/ROCDL/Target.cpp b/mlir/lib/Target/LLVM/ROCDL/Target.cpp
index 6784f3668bde3..cc13e5b7436ea 100644
--- a/mlir/lib/Target/LLVM/ROCDL/Target.cpp
+++ b/mlir/lib/Target/LLVM/ROCDL/Target.cpp
@@ -17,6 +17,9 @@
 #include "mlir/Dialect/LLVMIR/ROCDLDialect.h"
 #include "mlir/Support/FileUtilities.h"
 #include "mlir/Target/LLVM/ROCDL/Utils.h"
+#include "mlir/Target/LLVMIR/Dialect/GPU/GPUToLLVMIRTranslation.h"
+#include "mlir/Target/LLVMIR/Dialect/LLVMIR/LLVMToLLVMIRTranslation.h"
+#include "mlir/Target/LLVMIR/Dialect/ROCDL/ROCDLToLLVMIRTranslation.h"
 #include "mlir/Target/LLVMIR/Export.h"
 
 #include "llvm/IR/Constants.h"
@@ -109,9 +112,8 @@ SerializeGPUModuleBase::SerializeGPUModuleBase(
       if (auto file = dyn_cast<StringAttr>(attr))
         fileList.push_back(file.str());
 
-  // By default add all libraries if the toolkit path is not empty.
-  if (!getToolkitPath().empty())
-    deviceLibs = AMDGCNLibraries::All;
+  // Append standard ROCm device bitcode libraries to the files to be loaded.
+  (void)appendStandardLibs();
 }
 
 void SerializeGPUModuleBase::init() {
@@ -136,67 +138,29 @@ ArrayRef<std::string> SerializeGPUModuleBase::getFileList() const {
   return fileList;
 }
 
-LogicalResult SerializeGPUModuleBase::appendStandardLibs(AMDGCNLibraries libs) {
-  if (libs == AMDGCNLibraries::None)
-    return success();
+LogicalResult SerializeGPUModuleBase::appendStandardLibs() {
   StringRef pathRef = getToolkitPath();
-  // Fail if the toolkit is empty.
-  if (pathRef.empty())
-    return failure();
-
-  // Get the path for the device libraries
-  SmallString<256> path;
-  path.insert(path.begin(), pathRef.begin(), pathRef.end());
-  llvm::sys::path::append(path, "amdgcn", "bitcode");
-  pathRef = StringRef(path.data(), path.size());
-
-  // Fail if the path is invalid.
-  if (!llvm::sys::fs::is_directory(pathRef)) {
-    getOperation().emitRemark() << "ROCm amdgcn bitcode path: " << pathRef
-                                << " does not exist or is not a directory";
-    return failure();
-  }
-
-  // Get the ISA version.
-  StringRef isaVersion =
-      llvm::AMDGPU::getArchNameAMDGCN(llvm::AMDGPU::parseArchAMDGCN(chip));
-  isaVersion.consume_front("gfx");
-
-  // Helper function for adding a library.
-  auto addLib = [&](const Twine &lib) -> bool {
-    auto baseSize = path.size();
-    llvm::sys::path::append(path, lib);
-    StringRef pathRef(path.data(), path.size());
-    if (!llvm::sys::fs::is_regular_file(pathRef)) {
-      getOperation().emitRemark() << "bitcode library path: " << pathRef
-                                  << " does not exist or is not a file";
-      return true;
+  if (!pathRef.empty()) {
+    SmallVector<char, 256> path;
+    path.insert(path.begin(), pathRef.begin(), pathRef.end());
+    llvm::sys::path::append(path, "amdgcn", "bitcode");
+    pathRef = StringRef(path.data(), path.size());
+    if (!llvm::sys::fs::is_directory(pathRef)) {
+      getOperation().emitRemark() << "ROCm amdgcn bitcode path: " << pathRef
+                                  << " does not exist or is not a directory.";
+      return failure();
     }
-    fileList.push_back(pathRef.str());
-    path.truncate(baseSize);
-    return false;
-  };
-
-  // Add ROCm device libraries. Fail if any of the libraries is not found, ie.
-  // if any of the `addLib` failed.
-  if ((any(libs & AMDGCNLibraries::Ocml) && addLib("ocml.bc")) ||
-      (any(libs & AMDGCNLibraries::Ockl) && addLib("ockl.bc")) ||
-      (any(libs & AMDGCNLibraries::Hip) && addLib("hip.bc")) ||
-      (any(libs & AMDGCNLibraries::OpenCL) && addLib("opencl.bc")) ||
-      (any(libs & (AMDGCNLibraries::Ocml | AMDGCNLibraries::Ockl)) &&
-       addLib("oclc_isa_version_" + isaVersion + ".bc")))
-    return failure();
+    StringRef isaVersion =
+        llvm::AMDGPU::getArchNameAMDGCN(llvm::AMDGPU::parseArchAMDGCN(chip));
+    isaVersion.consume_front("gfx");
+    return getCommonBitcodeLibs(fileList, path, isaVersion);
+  }
   return success();
 }
 
 std::optional<SmallVector<std::unique_ptr<llvm::Module>>>
 SerializeGPUModuleBase::loadBitcodeFiles(llvm::Module &module) {
   SmallVector<std::unique_ptr<llvm::Module>> bcFiles;
-  // Return if there are no libs to load.
-  if (deviceLibs == AMDGCNLibraries::None && fileList.empty())
-    return bcFiles;
-  if (failed(appendStandardLibs(deviceLibs)))
-    return std::nullopt;
   if (failed(loadBitcodeFilesFromList(module.getContext(), fileList, bcFiles,
                                       true)))
     return std::nullopt;
@@ -210,76 +174,80 @@ LogicalResult SerializeGPUModuleBase::handleBitcodeFile(llvm::Module &module) {
   // Stop spamming us with clang version numbers
   if (auto *ident = module.getNamedMetadata("llvm.ident"))
     module.eraseNamedMetadata(ident);
-  // Override the libModules datalayout and target triple with the compiler's
-  // data layout should there be a discrepency.
-  setDataLayoutAndTriple(module);
   return success();
 }
 
 void SerializeGPUModuleBase::handleModulePreLink(llvm::Module &module) {
-  // If all libraries are not set, traverse the module to determine which
-  // libraries are required.
-  if (deviceLibs != AMDGCNLibraries::All) {
-    for (llvm::Function &f : module.functions()) {
-      if (f.hasExternalLinkage() && f.hasName() && !f.hasExactDefinition()) {
-        StringRef funcName = f.getName();
-        if ("printf" == funcName)
-          deviceLibs |= AMDGCNLibraries::OpenCL | AMDGCNLibraries::Ockl |
-                        AMDGCNLibraries::Ocml;
-        if (funcName.starts_with("__ockl_"))
-          deviceLibs |= AMDGCNLibraries::Ockl;
-        if (funcName.starts_with("__ocml_"))
-          deviceLibs |= AMDGCNLibraries::Ocml;
-      }
-    }
-  }
-  addControlVariables(module, deviceLibs, target.hasWave64(), target.hasDaz(),
+  [[maybe_unused]] std::optional<llvm::TargetMachine *> targetMachine =
+      getOrCreateTargetMachine();
+  assert(targetMachine && "expect a TargetMachine");
+  addControlVariables(module, target.hasWave64(), target.hasDaz(),
                       target.hasFiniteOnly(), target.hasUnsafeMath(),
                       target.hasFastMath(), target.hasCorrectSqrt(),
                       target.getAbi());
 }
 
-void SerializeGPUModuleBase::addControlVariables(
-    llvm::Module &module, AMDGCNLibraries libs, bool wave64, bool daz,
-    bool finiteOnly, bool unsafeMath, bool fastMath, bool correctSqrt,
-    StringRef abiVer) {
-  // Return if no device libraries are required.
-  if (libs == AMDGCNLibraries::None)
-    return;
-  // Helper function for adding control variables.
-  auto addControlVariable = [&module](StringRef name, uint32_t value,
-                                      uint32_t bitwidth) {
-    if (module.getNamedGlobal(name)) {
-      return;
+// Get the paths of ROCm device libraries.
+LogicalResult SerializeGPUModuleBase::getCommonBitcodeLibs(
+    llvm::SmallVector<std::string> &libs, SmallVector<char, 256> &libPath,
+    StringRef isaVersion) {
+  auto addLib = [&](StringRef path) -> bool {
+    if (!llvm::sys::fs::is_regular_file(path)) {
+      getOperation().emitRemark() << "Bitcode library path: " << path
+                                  << " does not exist or is not a file.\n";
+      return true;
     }
-    llvm::IntegerType *type =
-        llvm::IntegerType::getIntNTy(module.getContext(), bitwidth);
+    libs.push_back(path.str());
+    return false;
+  };
+  auto getLibPath = [&libPath](Twine lib) {
+    auto baseSize = libPath.size();
+    llvm::sys::path::append(libPath, lib + ".bc");
+    std::string path(StringRef(libPath.data(), libPath.size()).str());
+    libPath.truncate(baseSize);
+    return path;
+  };
+
+  // Add ROCm device libraries. Fail if any of the libraries is not found.
+  if (addLib(getLibPath("ocml")) || addLib(getLibPath("ockl")) ||
+      addLib(getLibPath("hip")) || addLib(getLibPath("opencl")) ||
+      addLib(getLibPath("oclc_isa_version_" + isaVersion)))
+    return failure();
+  return success();
+}
+
+void SerializeGPUModuleBase::addControlVariables(
+    llvm::Module &module, bool wave64, bool daz, bool finiteOnly,
+    bool unsafeMath, bool fastMath, bool correctSqrt, StringRef abiVer) {
+  llvm::Type *i8Ty = llvm::Type::getInt8Ty(module.getContext());
+  auto addControlVariable = [i8Ty, &module](StringRef name, bool enable) {
     llvm::GlobalVariable *controlVariable = new llvm::GlobalVariable(
-        module, /*isConstant=*/type, true,
-        llvm::GlobalValue::LinkageTypes::LinkOnceODRLinkage,
-        llvm::ConstantInt::get(type, value), name, /*before=*/nullptr,
-        /*threadLocalMode=*/llvm::GlobalValue::ThreadLocalMode::NotThreadLocal,
-        /*addressSpace=*/4);
+        module, i8Ty, true, llvm::GlobalValue::LinkageTypes::LinkOnceODRLinkage,
+        llvm::ConstantInt::get(i8Ty, enable), name, nullptr,
+        llvm::GlobalValue::ThreadLocalMode::NotThreadLocal, 4);
     controlVariable->setVisibility(
         llvm::GlobalValue::VisibilityTypes::ProtectedVisibility);
-    controlVariable->setAlignment(llvm::MaybeAlign(bitwidth / 8));
+    controlVariable->setAlignment(llvm::MaybeAlign(1));
     controlVariable->setUnnamedAddr(llvm::GlobalValue::UnnamedAddr::Local);
   };
-  // Add ocml related control variables.
-  if (any(libs & AMDGCNLibraries::Ocml)) {
-    addControlVariable("__oclc_finite_only_opt", finiteOnly || fastMath, 8);
-    addControlVariable("__oclc_daz_opt", daz || fastMath, 8);
-    addControlVariable("__oclc_correctly_rounded_sqrt32",
-                       correctSqrt && !fastMath, 8);
-    addControlVariable("__oclc_unsafe_math_opt", unsafeMath || fastMath, 8);
-  }
-  // Add ocml or ockl related control variables.
-  if (any(libs & (AMDGCNLibraries::Ocml | AMDGCNLibraries::Ockl))) {
-    addControlVariable("__oclc_wavefrontsize64", wave64, 8);
-    int abi = 500;
-    abiVer.getAsInteger(0, abi);
-    addControlVariable("__oclc_ABI_version", abi, 32);
-  }
+  addControlVariable("__oclc_finite_only_opt", finiteOnly || fastMath);
+  addControlVariable("__oclc_unsafe_math_opt", unsafeMath || fastMath);
+  addControlVariable("__oclc_daz_opt", daz || fastMath);
+  addControlVariable("__oclc_correctly_rounded_sqrt32",
+                     correctSqrt && !fastMath);
+  addControlVariable("__oclc_wavefrontsize64", wave64);
+
+  llvm::Type *i32Ty = llvm::Type::getInt32Ty(module.getContext());
+  int abi = 500;
+  abiVer.getAsInteger(0, abi);
+  llvm::GlobalVariable *abiVersion = new llvm::GlobalVariable(
+      module, i32Ty, true, llvm::GlobalValue::LinkageTypes::LinkOnceODRLinkage,
+      llvm::ConstantInt::get(i32Ty, abi), "__oclc_ABI_version", nullptr,
+      llvm::GlobalValue::ThreadLocalMode::NotThreadLocal, 4);
+  abiVersion->setVisibility(
+      llvm::GlobalValue::VisibilityTypes::ProtectedVisibility);
+  abiVersion->setAlignment(llvm::MaybeAlign(4));
+  abiVersion->setUnnamedAddr(llvm::GlobalValue::UnnamedAddr::Local);
 }
 
 std::optional<SmallVector<char, 0>>
@@ -344,16 +312,48 @@ SerializeGPUModuleBase::assembleIsa(StringRef isa) {
 
   parser->setTargetParser(*tap);
   parser->Run(false);
+
   return result;
 }
 
+#if MLIR_ENABLE_ROCM_CONVERSIONS
+namespace {
+class AMDGPUSerializer : public SerializeGPUModuleBase {
+public:
+  AMDGPUSerializer(Operation &module, ROCDLTargetAttr target,
+                   const gpu::TargetOptions &targetOptions);
+
+  gpu::GPUModuleOp getOperation();
+
+  // Compile to HSA.
+  std::optional<SmallVector<char, 0>>
+  compileToBinary(const std::string &serializedISA);
+
+  std::optional<SmallVector<char, 0>>
+  moduleToObject(llvm::Module &llvmModule) override;
+
+private:
+  // Target options.
+  gpu::TargetOptions targetOptions;
+};
+} // namespace
+
+AMDGPUSerializer::AMDGPUSerializer(Operation &module, ROCDLTargetAttr target,
+                                   const gpu::TargetOptions &targetOptions)
+    : SerializeGPUModuleBase(module, target, targetOptions),
+      targetOptions(targetOptions) {}
+
+gpu::GPUModuleOp AMDGPUSerializer::getOperation() {
+  return dyn_cast<gpu::GPUModuleOp>(&SerializeGPUModuleBase::getOperation());
+}
+
 std::optional<SmallVector<char, 0>>
-SerializeGPUModuleBase::compileToBinary(const std::string &serializedISA) {
+AMDGPUSerializer::compileToBinary(const std::string &serializedISA) {
   // Assemble the ISA.
   std::optional<SmallVector<char, 0>> isaBinary = assembleIsa(serializedISA);
 
   if (!isaBinary) {
-    getOperation().emitError() << "failed during ISA assembling";
+    getOperation().emitError() << "Failed during ISA assembling.";
     return std::nullopt;
   }
 
@@ -363,7 +363,7 @@ SerializeGPUModuleBase::compileToBinary(const std::string &serializedISA) {
   if (llvm::sys::fs::createTemporaryFile("kernel%%", "o", tempIsaBinaryFd,
                                          tempIsaBinaryFilename)) {
     getOperation().emitError()
-        << "failed to create a temporary file for dumping the ISA binary";
+        << "Failed to create a temporary file for dumping the ISA binary.";
     return std::nullopt;
   }
   llvm::FileRemover cleanupIsaBinary(tempIsaBinaryFilename);
@@ -378,7 +378,7 @@ SerializeGPUModuleBase::compileToBinary(const std::string &serializedISA) {
   if (llvm::sys::fs::createTemporaryFile("kernel", "hsaco",
                                          tempHsacoFilename)) {
     getOperation().emitError()
-        << "failed to create a temporary file for the HSA code object";
+        << "Failed to create a temporary file for the HSA code object.";
     return std::nullopt;
   }
   llvm::FileRemover cleanupHsaco(tempHsacoFilename);
@@ -389,7 +389,7 @@ SerializeGPUModuleBase::compileToBinary(const std::string &serializedISA) {
       lldPath,
       {"ld.lld", "-shared", tempIsaBinaryFilename, "-o", tempHsacoFilename});
   if (lldResult != 0) {
-    getOperation().emitError() << "lld invocation failed";
+    getOperation().emitError() << "lld invocation failed.";
     return std::nullopt;
   }
 
@@ -398,7 +398,7 @@ SerializeGPUModuleBase::compileToBinary(const std::string &serializedISA) {
       llvm::MemoryBuffer::getFile(tempHsacoFilename, /*IsText=*/false);
   if (!hsacoFile) {
     getOperation().emitError()
-        << "failed to read the HSA code object from the temp file";
+        << "Failed to read the HSA code object from the temp file.";
     return std::nullopt;
   }
 
@@ -407,13 +407,13 @@ SerializeGPUModuleBase::compileToBinary(const std::string &serializedISA) {
   return SmallVector<char, 0>(buffer.begin(), buffer.end());
 }
 
-std::optional<SmallVector<char, 0>> SerializeGPUModuleBase::moduleToObjectImpl(
-    const gpu::TargetOptions &targetOptions, llvm::Module &llvmModule) {
+std::optional<SmallVector<char, 0>>
+AMDGPUSerializer::moduleToObject(llvm::Module &llvmModule) {
   // Return LLVM IR if the compilation target is offload.
 #define DEBUG_TYPE "serialize-to-llvm"
   LLVM_DEBUG({
-    llvm::dbgs() << "LLVM IR for module: "
-                 << cast<gpu::GPUModuleOp>(getOperation()).getNameAttr() << "\n"
+    llvm::dbgs() << "LLVM IR for module: " << getOperation().getNameAttr()
+                 << "\n"
                  << llvmModule << "\n";
   });
 #undef DEBUG_TYPE
@@ -423,8 +423,8 @@ std::optional<SmallVector<char, 0>> SerializeGPUModuleBase::moduleToObjectImpl(
   std::optional<llvm::TargetMachine *> targetMachine =
       getOrCreateTargetMachine();
   if (!targetMachine) {
-    getOperation().emitError() << "target Machine unavailable for triple "
-                               << triple << ", can't compile with LLVM";
+    getOperation().emitError() << "Target Machine unavailable for triple "
+                               << triple << ", can't compile with LLVM\n";
     return std::nullopt;
   }
 
@@ -432,13 +432,12 @@ std::optional<SmallVector<char, 0>> SerializeGPUModuleBase::moduleToObjectImpl(
   std::optional<std::string> serializedISA =
       translateToISA(llvmModule, **targetMachine);
   if (!serializedISA) {
-    getOperation().emitError() << "failed translating the module to ISA";
+    getOperation().emitError() << "Failed translating the module to ISA.";
     return std::nullopt;
   }
 #define DEBUG_TYPE "serialize-to-isa"
   LLVM_DEBUG({
-    llvm::dbgs() << "ISA for module: "
-                 << cast<gpu::GPUModuleOp>(getOperation()).getNameAttr() << "\n"
+    llvm::dbgs() << "ISA for module: " << getOperation().getNameAttr() << "\n"
                  << *serializedISA << "\n";
   });
 #undef DEBUG_TYPE
@@ -446,45 +445,9 @@ std::optional<SmallVector<char, 0>> SerializeGPUModuleBase::moduleToObjectImpl(
   if (targetOptions.getCompilationTarget() == gpu::CompilationTarget::Assembly)
     return SmallVector<char, 0>(serializedISA->begin(), serializedISA->end());
 
-  // Compiling to binary requires a valid ROCm path, fail if it's not found.
-  if (getToolkitPath().empty())
-    getOperation().emitError() << "invalid ROCm path, please set a valid path";
-
   // Compile to binary.
   return compileToBinary(*serializedISA);
 }
-
-#if MLIR_ENABLE_ROCM_CONVERSIONS
-namespace {
-class AMDGPUSerializer : public SerializeGPUModuleBase {
-public:
-  AMDGPUSerializer(Operation &module, ROCDLTargetAttr target,
-                   const gpu::TargetOptions &targetOptions);
-
-  gpu::GPUModuleOp getOperation();
-
-  std::optional<SmallVector<char, 0>>
-  moduleToObject(llvm::Module &llvmModule) override;
-
-private:
-  // Target options.
-  gpu::TargetOptions targetOptions;
-};
-} // namespace
-
-AMDGPUSerializer::AMDGPUSerializer(Operation &module, ROCDLTargetAttr target,
-                                   const gpu::TargetOptions &targetOptions)
-    : SerializeGPUModuleBase(module, target, targetOptions),
-      targetOptions(targetOptions) {}
-
-gpu::GPUModuleOp AMDGPUSerializer::getOperation() {
-  return dyn_cast<gpu::GPUModuleOp>(&SerializeGPUModuleBase::getOperation());
-}
-
-std::optional<SmallVector<char, 0>>
-AMDGPUSerializer::moduleToObject(llvm::Module &llvmModule) {
-  return moduleToObjectImpl(targetOptions, llvmModule);
-}
 #endif // MLIR_ENABLE_ROCM_CONVERSIONS
 
 std::optional<SmallVector<char, 0>> ROCDLTargetAttrImpl::serializeToObject(
@@ -494,7 +457,7 @@ std::optional<SmallVector<char, 0>> ROCDLTargetAttrImpl::serializeToObject(
   if (!module)
     return std::nullopt;
   if (!mlir::isa<gpu::GPUModuleOp>(module)) {
-    module->emitError("module must be a GPU module");
+    module->emitError("Module must be a GPU module.");
     return std::nullopt;
   }
 #if MLIR_ENABLE_ROCM_CONVERSIONS
@@ -503,8 +466,8 @@ std::optional<SmallVector<char, 0>> ROCDLTargetAttrImpl::serializeToObject(
   serializer.init();
   return serializer.run();
 #else
-  module->emitError("the `AMDGPU` target was not built. Please enable it when "
-                    "building LLVM");
+  module->emitError("The `AMDGPU` target was not built. Please enable it when "
+                    "building LLVM.");
   return std::nullopt;
 #endif // MLIR_ENABLE_ROCM_CONVERSIONS
 }
@@ -514,15 +477,10 @@ ROCDLTargetAttrImpl::createObject(Attribute attribute,
                                   const SmallVector<char, 0> &object,
                                   const gpu::TargetOptions &options) const {
   gpu::CompilationTarget format = options.getCompilationTarget();
-  // If format is `fatbin` transform it to binary as `fatbin` is not yet
-  // supported.
-  if (format > gpu::CompilationTarget::Binary)
-    format = gpu::CompilationTarget::Binary;
-
-  DictionaryAttr properties{};
   Builder builder(attribute.getContext());
   return builder.getAttr<gpu::ObjectAttr>(
-      attribute, format,
-      builder.getStringAttr(StringRef(object.data(), object.size())),
-      properties);
+      attribute,
+      format > gpu::CompilationTarget::Binary ? gpu::CompilationTarget::Binary
+                                              : format,
+      builder.getStringAttr(StringRef(object.data(), object.size())), nullptr);
 }

>From 0187cd4da25716002775c9bf2066059a5fdc320b Mon Sep 17 00:00:00 2001
From: Matt Arsenault <Matthew.Arsenault at amd.com>
Date: Thu, 13 Jun 2024 18:33:11 +0200
Subject: [PATCH 26/26] AMDGPU: Create pseudo to real mapping for flat/buffer
 atomic fmin/fmax

The global/flat/buffer atomic fmin/fmax situation is a mess. These
instructions have been renamed three times, so we currently have
separate pseudos defined for the same opcodes under different names
(e.g. GLOBAL_ATOMIC_MIN_F64 from gfx90a and GLOBAL_ATOMIC_FMIN_X2 from gfx10).

Use the _FMIN versions as the canonical name for the f32 versions, and the
_MIN_F64 style as the canonical name for the f64 case. This is because
gfx90a has the most sensible names but does not have the f32 versions.

Wire through which pseudo to use for the instruction properties, as opposed
to the assembly name, like in other cases. This will simplify directly
selecting these instructions from atomicrmw.
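
As a rough TableGen sketch of the scheme (the ExamplePseudo/ExampleReal
classes and EX_* records below are illustrative stand-ins, not the real
MUBUF/FLAT definitions), the real encoding takes both the pseudo that
supplies the instruction properties and the assembly mnemonic to print, so
one canonical pseudo can back subtargets that spell the instruction
differently:
```
// Hypothetical, simplified classes: the in-tree MUBUF/FLAT definitions
// carry many more fields, but the name wiring works the same way.
class ExamplePseudo<string mnemonic> {
  string Mnemonic = mnemonic;
}

class ExampleReal<bits<7> op, ExamplePseudo ps,
                  string asmName = ps.Mnemonic> {
  bits<7> Op = op;
  // Properties come from the pseudo; the printed name may differ from it.
  string AsmName = asmName;
}

// One canonical pseudo, using the gfx90a-style spelling.
def EX_FLAT_ATOMIC_MIN_F64 : ExamplePseudo<"flat_atomic_min_f64">;

// The gfx10 real instruction reuses the canonical pseudo but keeps the
// old assembly spelling.
def EX_FLAT_ATOMIC_FMIN_X2_gfx10
    : ExampleReal<0x5f, EX_FLAT_ATOMIC_MIN_F64, "flat_atomic_fmin_x2">;
```
The actual change threads the same psName/asmName pair through the MUBUF
and FLAT Real multiclasses in the diffs below.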
---
 llvm/lib/Target/AMDGPU/AMDGPU.td              |   4 +-
 llvm/lib/Target/AMDGPU/BUFInstructions.td     | 103 +++++++++--------
 llvm/lib/Target/AMDGPU/FLATInstructions.td    | 107 +++++++++---------
 .../AMDGPU/fp-atomic-to-s_denormmode.mir      |  40 +++----
 4 files changed, 129 insertions(+), 125 deletions(-)

diff --git a/llvm/lib/Target/AMDGPU/AMDGPU.td b/llvm/lib/Target/AMDGPU/AMDGPU.td
index 65c4abef2bf8a..cb5ceb9959325 100644
--- a/llvm/lib/Target/AMDGPU/AMDGPU.td
+++ b/llvm/lib/Target/AMDGPU/AMDGPU.td
@@ -1864,7 +1864,9 @@ def HasFlatAddressSpace : Predicate<"Subtarget->hasFlatAddressSpace()">,
 
 def HasBufferFlatGlobalAtomicsF64 :
   Predicate<"Subtarget->hasBufferFlatGlobalAtomicsF64()">,
-  AssemblerPredicate<(any_of FeatureGFX90AInsts)>;
+  // FIXME: This is too coarse; it works around using the pseudo's predicates on the real instruction.
+  AssemblerPredicate<(any_of FeatureGFX90AInsts, FeatureGFX10Insts, FeatureSouthernIslands, FeatureSeaIslands)>;
+
 def HasLdsAtomicAddF64 :
   Predicate<"Subtarget->hasLdsAtomicAddF64()">,
   AssemblerPredicate<(any_of FeatureGFX90AInsts)>;
diff --git a/llvm/lib/Target/AMDGPU/BUFInstructions.td b/llvm/lib/Target/AMDGPU/BUFInstructions.td
index dff19b6a93286..21335e9b64647 100644
--- a/llvm/lib/Target/AMDGPU/BUFInstructions.td
+++ b/llvm/lib/Target/AMDGPU/BUFInstructions.td
@@ -1163,12 +1163,6 @@ let SubtargetPredicate = isGFX6GFX7GFX10 in {
 defm BUFFER_ATOMIC_FCMPSWAP_X2 : MUBUF_Pseudo_Atomics <
   "buffer_atomic_fcmpswap_x2", VReg_128, v2f64, null_frag
 >;
-defm BUFFER_ATOMIC_FMIN_X2 : MUBUF_Pseudo_Atomics <
-  "buffer_atomic_fmin_x2", VReg_64, f64, null_frag
->;
-defm BUFFER_ATOMIC_FMAX_X2 : MUBUF_Pseudo_Atomics <
-  "buffer_atomic_fmax_x2", VReg_64, f64, null_frag
->;
 
 }
 
@@ -1318,6 +1312,9 @@ let SubtargetPredicate = isGFX90APlus in {
 
 let SubtargetPredicate = HasBufferFlatGlobalAtomicsF64 in {
   defm BUFFER_ATOMIC_ADD_F64 : MUBUF_Pseudo_Atomics<"buffer_atomic_add_f64", VReg_64, f64>;
+
+  // Note: the names can be buffer_atomic_fmin_x2/buffer_atomic_fmax_x2
+  // on some subtargets.
   defm BUFFER_ATOMIC_MIN_F64 : MUBUF_Pseudo_Atomics<"buffer_atomic_min_f64", VReg_64, f64>;
   defm BUFFER_ATOMIC_MAX_F64 : MUBUF_Pseudo_Atomics<"buffer_atomic_max_f64", VReg_64, f64>;
 } // End SubtargetPredicate = HasBufferFlatGlobalAtomicsF64
@@ -1751,8 +1748,8 @@ let OtherPredicates = [isGFX6GFX7GFX10Plus] in {
   defm : SIBufferAtomicPat<"SIbuffer_atomic_fmax", f32, "BUFFER_ATOMIC_FMAX">;
 }
 let SubtargetPredicate = isGFX6GFX7GFX10 in {
-  defm : SIBufferAtomicPat<"SIbuffer_atomic_fmin", f64, "BUFFER_ATOMIC_FMIN_X2">;
-  defm : SIBufferAtomicPat<"SIbuffer_atomic_fmax", f64, "BUFFER_ATOMIC_FMAX_X2">;
+  defm : SIBufferAtomicPat<"SIbuffer_atomic_fmin", f64, "BUFFER_ATOMIC_MIN_F64">;
+  defm : SIBufferAtomicPat<"SIbuffer_atomic_fmax", f64, "BUFFER_ATOMIC_MAX_F64">;
 }
 
 class NoUseBufferAtomic<SDPatternOperator Op, ValueType vt> : PatFrag <
@@ -2303,6 +2300,12 @@ let OtherPredicates = [HasPackedD16VMem] in {
 // Target-specific instruction encodings.
 //===----------------------------------------------------------------------===//
 
+// Shortcut to default Mnemonic from BUF_Pseudo. Hides the cast to the
+// specific pseudo (offset in this case) since any of them will work.
+class get_BUF_ps<string name> {
+  string Mnemonic = !cast<BUF_Pseudo>(name # "_OFFSET").Mnemonic;
+}
+
 //===----------------------------------------------------------------------===//
 // Base ENC_MUBUF for GFX6, GFX7, GFX10, GFX11.
 //===----------------------------------------------------------------------===//
@@ -2334,8 +2337,8 @@ multiclass MUBUF_Real_gfx11<bits<8> op, string real_name = !cast<MUBUF_Pseudo>(N
   }
 }
 
-class Base_MUBUF_Real_gfx6_gfx7_gfx10<bits<7> op, MUBUF_Pseudo ps, int ef> :
-  Base_MUBUF_Real_gfx6_gfx7_gfx10_gfx11<ps, ef> {
+class Base_MUBUF_Real_gfx6_gfx7_gfx10<bits<7> op, MUBUF_Pseudo ps, int ef, string asmName> :
+  Base_MUBUF_Real_gfx6_gfx7_gfx10_gfx11<ps, ef, asmName> {
   let Inst{12}    = ps.offen;
   let Inst{13}    = ps.idxen;
   let Inst{14}    = !if(ps.has_glc, cpol{CPolBit.GLC}, ps.glc_value);
@@ -2345,9 +2348,10 @@ class Base_MUBUF_Real_gfx6_gfx7_gfx10<bits<7> op, MUBUF_Pseudo ps, int ef> :
   let Inst{55}    = ps.tfe;
 }
 
-multiclass MUBUF_Real_gfx10<bits<8> op> {
-  defvar ps = !cast<MUBUF_Pseudo>(NAME);
-  def _gfx10 : Base_MUBUF_Real_gfx6_gfx7_gfx10<op{6-0}, ps, SIEncodingFamily.GFX10> {
+multiclass MUBUF_Real_gfx10<bits<8> op, string psName = NAME,
+                            string asmName = !cast<MUBUF_Pseudo>(psName).Mnemonic> {
+  defvar ps = !cast<MUBUF_Pseudo>(psName);
+  def _gfx10 : Base_MUBUF_Real_gfx6_gfx7_gfx10<op{6-0}, ps, SIEncodingFamily.GFX10, asmName> {
     let Inst{15} = !if(ps.has_dlc, cpol{CPolBit.DLC}, ps.dlc_value);
     let Inst{25} = op{7};
     let AssemblerPredicate = isGFX10Only;
@@ -2355,9 +2359,10 @@ multiclass MUBUF_Real_gfx10<bits<8> op> {
   }
 }
 
-multiclass MUBUF_Real_gfx6_gfx7<bits<8> op> {
-  defvar ps = !cast<MUBUF_Pseudo>(NAME);
-  def _gfx6_gfx7 : Base_MUBUF_Real_gfx6_gfx7_gfx10<op{6-0}, ps, SIEncodingFamily.SI> {
+multiclass MUBUF_Real_gfx6_gfx7<bits<8> op, string psName = NAME,
+                                string asmName = !cast<MUBUF_Pseudo>(psName).Mnemonic> {
+  defvar ps = !cast<MUBUF_Pseudo>(psName);
+  def _gfx6_gfx7 : Base_MUBUF_Real_gfx6_gfx7_gfx10<op{6-0}, ps, SIEncodingFamily.SI, asmName> {
     let Inst{15} = ps.addr64;
     let AssemblerPredicate = isGFX6GFX7;
     let DecoderNamespace = "GFX6GFX7";
@@ -2366,7 +2371,7 @@ multiclass MUBUF_Real_gfx6_gfx7<bits<8> op> {
 
 multiclass MUBUF_Real_gfx6<bits<8> op> {
   defvar ps = !cast<MUBUF_Pseudo>(NAME);
-  def _gfx6 : Base_MUBUF_Real_gfx6_gfx7_gfx10<op{6-0}, ps, SIEncodingFamily.SI> {
+  def _gfx6 : Base_MUBUF_Real_gfx6_gfx7_gfx10<op{6-0}, ps, SIEncodingFamily.SI, ps.Mnemonic> {
     let Inst{15} = ps.addr64;
     let AssemblerPredicate = isGFX6;
     let DecoderNamespace = "GFX6";
@@ -2375,7 +2380,7 @@ multiclass MUBUF_Real_gfx6<bits<8> op> {
 
 multiclass MUBUF_Real_gfx7<bits<8> op> {
   defvar ps = !cast<MUBUF_Pseudo>(NAME);
-  def _gfx7 : Base_MUBUF_Real_gfx6_gfx7_gfx10<op{6-0}, ps, SIEncodingFamily.SI> {
+  def _gfx7 : Base_MUBUF_Real_gfx6_gfx7_gfx10<op{6-0}, ps, SIEncodingFamily.SI, ps.Mnemonic> {
     let Inst{15} = ps.addr64;
     let AssemblerPredicate = isGFX7Only;
     let DecoderNamespace = "GFX7";
@@ -2476,12 +2481,6 @@ multiclass VBUFFER_MTBUF_Real_gfx12<bits<4> op, string real_name> {
 // MUBUF - GFX11, GFX12.
 //===----------------------------------------------------------------------===//
 
-// Shortcut to default Mnemonic from BUF_Pseudo. Hides the cast to the
-// specific pseudo (bothen in this case) since any of them will work.
-class get_BUF_ps<string name> {
-  string Mnemonic = !cast<BUF_Pseudo>(name # "_BOTHEN").Mnemonic;
-}
-
 // gfx11 instruction that accept both old and new assembler name.
 class Mnem_gfx11_gfx12 <string mnemonic, string real_name> :
     AMDGPUMnemonicAlias<mnemonic, real_name> {
@@ -2703,18 +2702,20 @@ multiclass MUBUF_Real_AllAddr_Lds_gfx10<bits<8> op, bit isTFE = 0> {
     defm _LDS_BOTHEN : MUBUF_Real_gfx10<op>;
   }
 }
-multiclass MUBUF_Real_Atomics_RTN_gfx10<bits<8> op> {
-  defm _BOTHEN_RTN : MUBUF_Real_gfx10<op>;
-  defm _IDXEN_RTN  : MUBUF_Real_gfx10<op>;
-  defm _OFFEN_RTN  : MUBUF_Real_gfx10<op>;
-  defm _OFFSET_RTN : MUBUF_Real_gfx10<op>;
+multiclass MUBUF_Real_Atomics_RTN_gfx10<bits<8> op, string psName = NAME,
+                                        string asmName = !cast<MUBUF_Pseudo>(psName).Mnemonic> {
+  defm _BOTHEN_RTN : MUBUF_Real_gfx10<op, psName#"_BOTHEN_RTN", asmName>;
+  defm _IDXEN_RTN  : MUBUF_Real_gfx10<op, psName#"_IDXEN_RTN", asmName>;
+  defm _OFFEN_RTN  : MUBUF_Real_gfx10<op, psName#"_OFFEN_RTN", asmName>;
+  defm _OFFSET_RTN : MUBUF_Real_gfx10<op, psName#"_OFFSET_RTN", asmName>;
 }
-multiclass MUBUF_Real_Atomics_gfx10<bits<8> op> :
-    MUBUF_Real_Atomics_RTN_gfx10<op> {
-  defm _BOTHEN : MUBUF_Real_gfx10<op>;
-  defm _IDXEN  : MUBUF_Real_gfx10<op>;
-  defm _OFFEN  : MUBUF_Real_gfx10<op>;
-  defm _OFFSET : MUBUF_Real_gfx10<op>;
+multiclass MUBUF_Real_Atomics_gfx10<bits<8> op, string psName = NAME,
+                                    string asmName = get_BUF_ps<psName>.Mnemonic> :
+    MUBUF_Real_Atomics_RTN_gfx10<op, psName, asmName> {
+  defm _BOTHEN : MUBUF_Real_gfx10<op, psName#"_BOTHEN", asmName>;
+  defm _IDXEN  : MUBUF_Real_gfx10<op, psName#"_IDXEN", asmName>;
+  defm _OFFEN  : MUBUF_Real_gfx10<op, psName#"_OFFEN", asmName>;
+  defm _OFFSET : MUBUF_Real_gfx10<op, psName#"_OFFSET", asmName>;
 }
 
 defm BUFFER_STORE_BYTE_D16_HI     : MUBUF_Real_AllAddr_gfx10<0x019>;
@@ -2769,18 +2770,18 @@ multiclass MUBUF_Real_AllAddr_Lds_gfx6_gfx7<bits<8> op, bit isTFE = 0> {
     defm _LDS_BOTHEN : MUBUF_Real_gfx6_gfx7<op>;
   }
 }
-multiclass MUBUF_Real_Atomics_gfx6_gfx7<bits<8> op> {
-  defm _ADDR64 : MUBUF_Real_gfx6_gfx7<op>;
-  defm _BOTHEN : MUBUF_Real_gfx6_gfx7<op>;
-  defm _IDXEN  : MUBUF_Real_gfx6_gfx7<op>;
-  defm _OFFEN  : MUBUF_Real_gfx6_gfx7<op>;
-  defm _OFFSET : MUBUF_Real_gfx6_gfx7<op>;
+multiclass MUBUF_Real_Atomics_gfx6_gfx7<bits<8> op, string psName, string asmName> {
+  defm _ADDR64 : MUBUF_Real_gfx6_gfx7<op, psName#"_ADDR64", asmName>;
+  defm _BOTHEN : MUBUF_Real_gfx6_gfx7<op, psName#"_BOTHEN", asmName>;
+  defm _IDXEN  : MUBUF_Real_gfx6_gfx7<op, psName#"_IDXEN", asmName>;
+  defm _OFFEN  : MUBUF_Real_gfx6_gfx7<op, psName#"_OFFEN", asmName>;
+  defm _OFFSET : MUBUF_Real_gfx6_gfx7<op, psName#"_OFFSET", asmName>;
 
-  defm _ADDR64_RTN : MUBUF_Real_gfx6_gfx7<op>;
-  defm _BOTHEN_RTN : MUBUF_Real_gfx6_gfx7<op>;
-  defm _IDXEN_RTN  : MUBUF_Real_gfx6_gfx7<op>;
-  defm _OFFEN_RTN  : MUBUF_Real_gfx6_gfx7<op>;
-  defm _OFFSET_RTN : MUBUF_Real_gfx6_gfx7<op>;
+  defm _ADDR64_RTN : MUBUF_Real_gfx6_gfx7<op, psName#"_ADDR64_RTN", asmName>;
+  defm _BOTHEN_RTN : MUBUF_Real_gfx6_gfx7<op, psName#"_BOTHEN_RTN", asmName>;
+  defm _IDXEN_RTN  : MUBUF_Real_gfx6_gfx7<op, psName#"_IDXEN_RTN", asmName>;
+  defm _OFFEN_RTN  : MUBUF_Real_gfx6_gfx7<op, psName#"_OFFEN_RTN", asmName>;
+  defm _OFFSET_RTN : MUBUF_Real_gfx6_gfx7<op, psName#"_OFFSET_RTN", asmName>;
 }
 
 multiclass MUBUF_Real_AllAddr_gfx6_gfx7_gfx10<bits<8> op> :
@@ -2795,8 +2796,10 @@ multiclass MUBUF_Real_AllAddr_Lds_gfx6_gfx7_gfx10<bits<8> op> {
   defm _TFE : MUBUF_Real_AllAddr_Lds_Helper_gfx6_gfx7_gfx10<op, 1>;
 }
 
-multiclass MUBUF_Real_Atomics_gfx6_gfx7_gfx10<bits<8> op> :
-  MUBUF_Real_Atomics_gfx6_gfx7<op>, MUBUF_Real_Atomics_gfx10<op>;
+multiclass MUBUF_Real_Atomics_gfx6_gfx7_gfx10<bits<8> op, string psName = NAME,
+                                              string asmName = get_BUF_ps<psName>.Mnemonic> :
+  MUBUF_Real_Atomics_gfx6_gfx7<op, psName, asmName>,
+  MUBUF_Real_Atomics_gfx10<op, psName, asmName>;
 
 // FIXME-GFX6: Following instructions are available only on GFX6.
 //defm BUFFER_ATOMIC_RSUB         : MUBUF_Real_Atomics_gfx6 <0x034>;
@@ -2856,8 +2859,8 @@ defm BUFFER_ATOMIC_INC_X2      : MUBUF_Real_Atomics_gfx6_gfx7_gfx10<0x05c>;
 defm BUFFER_ATOMIC_DEC_X2      : MUBUF_Real_Atomics_gfx6_gfx7_gfx10<0x05d>;
 // FIXME-GFX7: Need to handle hazard for BUFFER_ATOMIC_FCMPSWAP_X2 on GFX7.
 defm BUFFER_ATOMIC_FCMPSWAP_X2 : MUBUF_Real_Atomics_gfx6_gfx7_gfx10<0x05e>;
-defm BUFFER_ATOMIC_FMIN_X2     : MUBUF_Real_Atomics_gfx6_gfx7_gfx10<0x05f>;
-defm BUFFER_ATOMIC_FMAX_X2     : MUBUF_Real_Atomics_gfx6_gfx7_gfx10<0x060>;
+defm BUFFER_ATOMIC_FMIN_X2     : MUBUF_Real_Atomics_gfx6_gfx7_gfx10<0x05f, "BUFFER_ATOMIC_MIN_F64", "buffer_atomic_fmin_x2">;
+defm BUFFER_ATOMIC_FMAX_X2     : MUBUF_Real_Atomics_gfx6_gfx7_gfx10<0x060, "BUFFER_ATOMIC_MAX_F64", "buffer_atomic_fmax_x2">;
 
 defm BUFFER_ATOMIC_CSUB       : MUBUF_Real_Atomics_gfx10<0x034>;
 
diff --git a/llvm/lib/Target/AMDGPU/FLATInstructions.td b/llvm/lib/Target/AMDGPU/FLATInstructions.td
index 99e3a8084f067..2b1a0c9eaad4e 100644
--- a/llvm/lib/Target/AMDGPU/FLATInstructions.td
+++ b/llvm/lib/Target/AMDGPU/FLATInstructions.td
@@ -756,12 +756,6 @@ let SubtargetPredicate = isGFX7GFX10 in {
 defm FLAT_ATOMIC_FCMPSWAP_X2 : FLAT_Atomic_Pseudo <"flat_atomic_fcmpswap_x2",
                                 VReg_64, f64, v2f64, VReg_128>;
 
-defm FLAT_ATOMIC_FMIN_X2     : FLAT_Atomic_Pseudo <"flat_atomic_fmin_x2",
-                                VReg_64, f64>;
-
-defm FLAT_ATOMIC_FMAX_X2     : FLAT_Atomic_Pseudo <"flat_atomic_fmax_x2",
-                                VReg_64, f64>;
-
 } // End SubtargetPredicate = isGFX7GFX10
 
 let SubtargetPredicate = HasBufferFlatGlobalAtomicsF64 in {
@@ -995,10 +989,6 @@ let SubtargetPredicate = isGFX10Plus in {
     FLAT_Global_Atomic_Pseudo<"global_atomic_fmax", VGPR_32, f32>;
   defm GLOBAL_ATOMIC_FCMPSWAP_X2 :
     FLAT_Global_Atomic_Pseudo<"global_atomic_fcmpswap_x2", VReg_64, f64, v2f64, VReg_128>;
-  defm GLOBAL_ATOMIC_FMIN_X2 :
-    FLAT_Global_Atomic_Pseudo<"global_atomic_fmin_x2", VReg_64, f64>;
-  defm GLOBAL_ATOMIC_FMAX_X2 :
-    FLAT_Global_Atomic_Pseudo<"global_atomic_fmax_x2", VReg_64, f64>;
 } // End SubtargetPredicate = isGFX10Plus
 
 let OtherPredicates = [HasAtomicFaddNoRtnInsts] in
@@ -1608,14 +1598,14 @@ defm : FlatSignedAtomicIntrPat <"FLAT_ATOMIC_FMAX", "int_amdgcn_flat_atomic_fmax
 }
 
 let OtherPredicates = [isGFX10Only] in {
-defm : GlobalFLATAtomicPats <"GLOBAL_ATOMIC_FMIN_X2", "atomic_load_fmin_global", f64>;
-defm : GlobalFLATAtomicPats <"GLOBAL_ATOMIC_FMAX_X2", "atomic_load_fmax_global", f64>;
-defm : GlobalFLATAtomicIntrPats <"GLOBAL_ATOMIC_FMIN_X2", "int_amdgcn_global_atomic_fmin", f64>;
-defm : GlobalFLATAtomicIntrPats <"GLOBAL_ATOMIC_FMAX_X2", "int_amdgcn_global_atomic_fmax", f64>;
-defm : FlatSignedAtomicPat <"FLAT_ATOMIC_FMIN_X2", "atomic_load_fmin_flat", f64>;
-defm : FlatSignedAtomicPat <"FLAT_ATOMIC_FMAX_X2", "atomic_load_fmax_flat", f64>;
-defm : FlatSignedAtomicIntrPat <"FLAT_ATOMIC_FMIN_X2", "int_amdgcn_flat_atomic_fmin", f64>;
-defm : FlatSignedAtomicIntrPat <"FLAT_ATOMIC_FMAX_X2", "int_amdgcn_flat_atomic_fmax", f64>;
+defm : GlobalFLATAtomicPats <"GLOBAL_ATOMIC_MIN_F64", "atomic_load_fmin_global", f64>;
+defm : GlobalFLATAtomicPats <"GLOBAL_ATOMIC_MAX_F64", "atomic_load_fmax_global", f64>;
+defm : GlobalFLATAtomicIntrPats <"GLOBAL_ATOMIC_MIN_F64", "int_amdgcn_global_atomic_fmin", f64>;
+defm : GlobalFLATAtomicIntrPats <"GLOBAL_ATOMIC_MAX_F64", "int_amdgcn_global_atomic_fmax", f64>;
+defm : FlatSignedAtomicPat <"FLAT_ATOMIC_MIN_F64", "atomic_load_fmin_flat", f64>;
+defm : FlatSignedAtomicPat <"FLAT_ATOMIC_MAX_F64", "atomic_load_fmax_flat", f64>;
+defm : FlatSignedAtomicIntrPat <"FLAT_ATOMIC_MIN_F64", "int_amdgcn_flat_atomic_fmin", f64>;
+defm : FlatSignedAtomicIntrPat <"FLAT_ATOMIC_MAX_F64", "int_amdgcn_flat_atomic_fmax", f64>;
 }
 
 let OtherPredicates = [isGFX12Only] in {
@@ -1749,8 +1739,8 @@ defm : ScratchFLATLoadPats_D16 <SCRATCH_LOAD_SHORT_D16, load_d16_lo_private, v2f
 // CI
 //===----------------------------------------------------------------------===//
 
-class FLAT_Real_ci <bits<7> op, FLAT_Pseudo ps> :
-  FLAT_Real <op, ps>,
+class FLAT_Real_ci <bits<7> op, FLAT_Pseudo ps, string asmName = ps.Mnemonic> :
+  FLAT_Real <op, ps, asmName>,
   SIMCInstr <ps.PseudoInstr, SIEncodingFamily.SI> {
   let AssemblerPredicate = isGFX7Only;
   let DecoderNamespace="GFX7";
@@ -1772,10 +1762,13 @@ def FLAT_STORE_DWORDX2_ci      : FLAT_Real_ci <0x1d, FLAT_STORE_DWORDX2>;
 def FLAT_STORE_DWORDX4_ci      : FLAT_Real_ci <0x1e, FLAT_STORE_DWORDX4>;
 def FLAT_STORE_DWORDX3_ci      : FLAT_Real_ci <0x1f, FLAT_STORE_DWORDX3>;
 
-multiclass FLAT_Real_Atomics_ci <bits<7> op> {
-  defvar ps = !cast<FLAT_Pseudo>(NAME);
-  def _ci     : FLAT_Real_ci<op, !cast<FLAT_Pseudo>(ps.PseudoInstr)>;
-  def _RTN_ci : FLAT_Real_ci<op, !cast<FLAT_Pseudo>(ps.PseudoInstr # "_RTN")>;
+multiclass FLAT_Real_Atomics_ci <bits<7> op, string opName = NAME,
+                                 string asmName = !cast<FLAT_Pseudo>(opName).Mnemonic> {
+  defvar ps = !cast<FLAT_Pseudo>(opName);
+  defvar ps_rtn = !cast<FLAT_Pseudo>(opName#"_RTN");
+
+  def _ci     : FLAT_Real_ci<op, ps, asmName>;
+  def _RTN_ci : FLAT_Real_ci<op, ps_rtn, asmName>;
 }
 
 defm FLAT_ATOMIC_SWAP          : FLAT_Real_Atomics_ci <0x30>;
@@ -1810,8 +1803,8 @@ defm FLAT_ATOMIC_FCMPSWAP      : FLAT_Real_Atomics_ci <0x3e>;
 defm FLAT_ATOMIC_FMIN          : FLAT_Real_Atomics_ci <0x3f>;
 defm FLAT_ATOMIC_FMAX          : FLAT_Real_Atomics_ci <0x40>;
 defm FLAT_ATOMIC_FCMPSWAP_X2   : FLAT_Real_Atomics_ci <0x5e>;
-defm FLAT_ATOMIC_FMIN_X2       : FLAT_Real_Atomics_ci <0x5f>;
-defm FLAT_ATOMIC_FMAX_X2       : FLAT_Real_Atomics_ci <0x60>;
+defm FLAT_ATOMIC_FMIN_X2       : FLAT_Real_Atomics_ci <0x5f, "FLAT_ATOMIC_MIN_F64", "flat_atomic_fmin_x2">;
+defm FLAT_ATOMIC_FMAX_X2       : FLAT_Real_Atomics_ci <0x60, "FLAT_ATOMIC_MAX_F64", "flat_atomic_fmax_x2">;
 
 
 //===----------------------------------------------------------------------===//
@@ -2093,8 +2086,8 @@ let SubtargetPredicate = isGFX940Plus in {
 // GFX10.
 //===----------------------------------------------------------------------===//
 
-class FLAT_Real_gfx10<bits<7> op, FLAT_Pseudo ps> :
-    FLAT_Real<op, ps>, SIMCInstr<ps.PseudoInstr, SIEncodingFamily.GFX10> {
+class FLAT_Real_gfx10<bits<7> op, FLAT_Pseudo ps, string opName = ps.Mnemonic> :
+    FLAT_Real<op, ps, opName>, SIMCInstr<ps.PseudoInstr, SIEncodingFamily.GFX10> {
   let AssemblerPredicate = isGFX10Only;
   let DecoderNamespace = "GFX10";
 
@@ -2106,25 +2099,28 @@ class FLAT_Real_gfx10<bits<7> op, FLAT_Pseudo ps> :
   let Inst{55}    = 0;
 }
 
-
-multiclass FLAT_Real_Base_gfx10<bits<7> op> {
+multiclass FLAT_Real_Base_gfx10<bits<7> op, string psName = NAME,
+                                string asmName = !cast<FLAT_Pseudo>(psName).Mnemonic> {
   def _gfx10 :
-    FLAT_Real_gfx10<op, !cast<FLAT_Pseudo>(NAME)>;
+    FLAT_Real_gfx10<op, !cast<FLAT_Pseudo>(psName), asmName>;
 }
 
-multiclass FLAT_Real_RTN_gfx10<bits<7> op> {
+multiclass FLAT_Real_RTN_gfx10<bits<7> op, string psName = NAME,
+                               string asmName = !cast<FLAT_Pseudo>(psName).Mnemonic> {
   def _RTN_gfx10 :
-    FLAT_Real_gfx10<op, !cast<FLAT_Pseudo>(NAME#"_RTN")>;
+    FLAT_Real_gfx10<op, !cast<FLAT_Pseudo>(psName#"_RTN"), asmName>;
 }
 
-multiclass FLAT_Real_SADDR_gfx10<bits<7> op> {
+multiclass FLAT_Real_SADDR_gfx10<bits<7> op, string psName = NAME,
+                                 string asmName = !cast<FLAT_Pseudo>(psName#"_SADDR").Mnemonic> {
   def _SADDR_gfx10 :
-    FLAT_Real_gfx10<op, !cast<FLAT_Pseudo>(NAME#"_SADDR")>;
+    FLAT_Real_gfx10<op, !cast<FLAT_Pseudo>(psName#"_SADDR"), asmName>;
 }
 
-multiclass FLAT_Real_SADDR_RTN_gfx10<bits<7> op> {
+multiclass FLAT_Real_SADDR_RTN_gfx10<bits<7> op, string psName = NAME,
+                                     string asmName = !cast<FLAT_Pseudo>(psName#"_SADDR_RTN").Mnemonic> {
   def _SADDR_RTN_gfx10 :
-    FLAT_Real_gfx10<op, !cast<FLAT_Pseudo>(NAME#"_SADDR_RTN")>;
+    FLAT_Real_gfx10<op, !cast<FLAT_Pseudo>(psName#"_SADDR_RTN"), asmName>;
 }
 
 multiclass FLAT_Real_ST_gfx10<bits<7> op> {
@@ -2132,22 +2128,25 @@ multiclass FLAT_Real_ST_gfx10<bits<7> op> {
     FLAT_Real_gfx10<op, !cast<FLAT_Pseudo>(NAME#"_ST")>;
 }
 
-multiclass FLAT_Real_AllAddr_gfx10<bits<7> op> :
-  FLAT_Real_Base_gfx10<op>,
-  FLAT_Real_SADDR_gfx10<op>;
+multiclass FLAT_Real_AllAddr_gfx10<bits<7> op, string OpName = NAME,
+                                   string asmName = !cast<FLAT_Pseudo>(OpName).Mnemonic> :
+  FLAT_Real_Base_gfx10<op, OpName, asmName>,
+  FLAT_Real_SADDR_gfx10<op, OpName, asmName>;
 
-multiclass FLAT_Real_Atomics_gfx10<bits<7> op> :
-  FLAT_Real_Base_gfx10<op>,
-  FLAT_Real_RTN_gfx10<op>;
+multiclass FLAT_Real_Atomics_gfx10<bits<7> op, string OpName = NAME,
+                                   string asmName = !cast<FLAT_Pseudo>(OpName).Mnemonic> :
+  FLAT_Real_Base_gfx10<op, OpName, asmName>,
+  FLAT_Real_RTN_gfx10<op, OpName, asmName>;
 
-multiclass FLAT_Real_GlblAtomics_gfx10<bits<7> op> :
-  FLAT_Real_AllAddr_gfx10<op>,
-  FLAT_Real_RTN_gfx10<op>,
-  FLAT_Real_SADDR_RTN_gfx10<op>;
+multiclass FLAT_Real_GlblAtomics_gfx10<bits<7> op, string OpName = NAME,
+                                       string asmName = !cast<FLAT_Pseudo>(OpName).Mnemonic> :
+  FLAT_Real_AllAddr_gfx10<op, OpName, asmName>,
+  FLAT_Real_RTN_gfx10<op, OpName, asmName>,
+  FLAT_Real_SADDR_RTN_gfx10<op, OpName, asmName>;
 
-multiclass FLAT_Real_GlblAtomics_RTN_gfx10<bits<7> op> :
-  FLAT_Real_RTN_gfx10<op>,
-  FLAT_Real_SADDR_RTN_gfx10<op>;
+multiclass FLAT_Real_GlblAtomics_RTN_gfx10<bits<7> op, string OpName = NAME> :
+  FLAT_Real_RTN_gfx10<op, OpName>,
+  FLAT_Real_SADDR_RTN_gfx10<op, OpName>;
 
 multiclass FLAT_Real_ScratchAllAddr_gfx10<bits<7> op> :
   FLAT_Real_Base_gfx10<op>,
@@ -2224,8 +2223,8 @@ defm FLAT_ATOMIC_XOR_X2         : FLAT_Real_Atomics_gfx10<0x05b>;
 defm FLAT_ATOMIC_INC_X2         : FLAT_Real_Atomics_gfx10<0x05c>;
 defm FLAT_ATOMIC_DEC_X2         : FLAT_Real_Atomics_gfx10<0x05d>;
 defm FLAT_ATOMIC_FCMPSWAP_X2    : FLAT_Real_Atomics_gfx10<0x05e>;
-defm FLAT_ATOMIC_FMIN_X2        : FLAT_Real_Atomics_gfx10<0x05f>;
-defm FLAT_ATOMIC_FMAX_X2        : FLAT_Real_Atomics_gfx10<0x060>;
+defm FLAT_ATOMIC_FMIN_X2        : FLAT_Real_Atomics_gfx10<0x05f, "FLAT_ATOMIC_MIN_F64", "flat_atomic_fmin_x2">;
+defm FLAT_ATOMIC_FMAX_X2        : FLAT_Real_Atomics_gfx10<0x060, "FLAT_ATOMIC_MAX_F64", "flat_atomic_fmax_x2">;
 
 
 // ENC_FLAT_GLBL.
@@ -2282,8 +2281,8 @@ defm GLOBAL_ATOMIC_XOR_X2       : FLAT_Real_GlblAtomics_gfx10<0x05b>;
 defm GLOBAL_ATOMIC_INC_X2       : FLAT_Real_GlblAtomics_gfx10<0x05c>;
 defm GLOBAL_ATOMIC_DEC_X2       : FLAT_Real_GlblAtomics_gfx10<0x05d>;
 defm GLOBAL_ATOMIC_FCMPSWAP_X2  : FLAT_Real_GlblAtomics_gfx10<0x05e>;
-defm GLOBAL_ATOMIC_FMIN_X2      : FLAT_Real_GlblAtomics_gfx10<0x05f>;
-defm GLOBAL_ATOMIC_FMAX_X2      : FLAT_Real_GlblAtomics_gfx10<0x060>;
+defm GLOBAL_ATOMIC_FMIN_X2      : FLAT_Real_GlblAtomics_gfx10<0x05f, "GLOBAL_ATOMIC_MIN_F64", "global_atomic_fmin_x2">;
+defm GLOBAL_ATOMIC_FMAX_X2      : FLAT_Real_GlblAtomics_gfx10<0x060, "GLOBAL_ATOMIC_MAX_F64", "global_atomic_fmax_x2">;
 defm GLOBAL_LOAD_DWORD_ADDTID   : FLAT_Real_AllAddr_gfx10<0x016>;
 defm GLOBAL_STORE_DWORD_ADDTID  : FLAT_Real_AllAddr_gfx10<0x017>;
 
diff --git a/llvm/test/CodeGen/AMDGPU/fp-atomic-to-s_denormmode.mir b/llvm/test/CodeGen/AMDGPU/fp-atomic-to-s_denormmode.mir
index b654c28653711..a2d1c0a87ab39 100644
--- a/llvm/test/CodeGen/AMDGPU/fp-atomic-to-s_denormmode.mir
+++ b/llvm/test/CodeGen/AMDGPU/fp-atomic-to-s_denormmode.mir
@@ -38,14 +38,14 @@ body:            |
 ...
 
 # GCN-LABEL: name: flat_atomic_fmax_x2_to_s_denorm_mode
-# GCN:      FLAT_ATOMIC_FMAX_X2
+# GCN:      FLAT_ATOMIC_MAX_F64
 # GFX10-NEXT: S_NOP 2
 # GCN-NEXT: S_DENORM_MODE
 ---
 name:            flat_atomic_fmax_x2_to_s_denorm_mode
 body:            |
   bb.0:
-    FLAT_ATOMIC_FMAX_X2 undef %0:vreg_64, undef %1:vreg_64, 0, 0, implicit $exec, implicit $flat_scr :: (volatile load store seq_cst seq_cst (s32) on `ptr addrspace(1) undef`)
+    FLAT_ATOMIC_MAX_F64 undef %0:vreg_64, undef %1:vreg_64, 0, 0, implicit $exec, implicit $flat_scr :: (volatile load store seq_cst seq_cst (s32) on `ptr addrspace(1) undef`)
     S_DENORM_MODE 0, implicit-def $mode, implicit $mode
 ...
 
@@ -62,14 +62,14 @@ body:            |
 ...
 
 # GCN-LABEL: name: flat_atomic_fmin_x2_to_s_denorm_mode
-# GCN:      FLAT_ATOMIC_FMIN_X2
+# GCN:      FLAT_ATOMIC_MIN_F64
 # GFX10-NEXT: S_NOP 2
 # GCN-NEXT: S_DENORM_MODE
 ---
 name:            flat_atomic_fmin_x2_to_s_denorm_mode
 body:            |
   bb.0:
-    FLAT_ATOMIC_FMIN_X2 undef %0:vreg_64, undef %1:vreg_64, 0, 0, implicit $exec, implicit $flat_scr :: (volatile load store seq_cst seq_cst (s32) on `ptr addrspace(1) undef`)
+    FLAT_ATOMIC_MIN_F64 undef %0:vreg_64, undef %1:vreg_64, 0, 0, implicit $exec, implicit $flat_scr :: (volatile load store seq_cst seq_cst (s32) on `ptr addrspace(1) undef`)
     S_DENORM_MODE 0, implicit-def $mode, implicit $mode
 ...
 
@@ -98,14 +98,14 @@ body:            |
 ...
 
 # GCN-LABEL: name: flat_atomic_fmax_x2_rtn_to_s_denorm_mode
-# GCN:      FLAT_ATOMIC_FMAX_X2_RTN
+# GCN:      FLAT_ATOMIC_MAX_F64_RTN
 # GFX10-NEXT: S_NOP 2
 # GCN-NEXT: S_DENORM_MODE
 ---
 name:            flat_atomic_fmax_x2_rtn_to_s_denorm_mode
 body:            |
   bb.0:
-    %2:vreg_64 = FLAT_ATOMIC_FMAX_X2_RTN undef %0:vreg_64, undef %1:vreg_64, 0, 1, implicit $exec, implicit $flat_scr :: (volatile load store seq_cst seq_cst (s32) on `ptr addrspace(1) undef`)
+    %2:vreg_64 = FLAT_ATOMIC_MAX_F64_RTN undef %0:vreg_64, undef %1:vreg_64, 0, 1, implicit $exec, implicit $flat_scr :: (volatile load store seq_cst seq_cst (s32) on `ptr addrspace(1) undef`)
     S_DENORM_MODE 0, implicit-def $mode, implicit $mode
 ...
 
@@ -122,14 +122,14 @@ body:            |
 ...
 
 # GCN-LABEL: name: flat_atomic_fmin_x2_rtn_to_s_denorm_mode
-# GCN:      FLAT_ATOMIC_FMIN_X2_RTN
+# GCN:      FLAT_ATOMIC_MIN_F64_RTN
 # GFX10-NEXT: S_NOP 2
 # GCN-NEXT: S_DENORM_MODE
 ---
 name:            flat_atomic_fmin_x2_rtn_to_s_denorm_mode
 body:            |
   bb.0:
-    %2:vreg_64 = FLAT_ATOMIC_FMIN_X2_RTN undef %0:vreg_64, undef %1:vreg_64, 0, 1, implicit $exec, implicit $flat_scr :: (volatile load store seq_cst seq_cst (s32) on `ptr addrspace(1) undef`)
+    %2:vreg_64 = FLAT_ATOMIC_MIN_F64_RTN undef %0:vreg_64, undef %1:vreg_64, 0, 1, implicit $exec, implicit $flat_scr :: (volatile load store seq_cst seq_cst (s32) on `ptr addrspace(1) undef`)
     S_DENORM_MODE 0, implicit-def $mode, implicit $mode
 ...
 
@@ -182,14 +182,14 @@ body:            |
 ...
 
 # GCN-LABEL: name: global_atomic_fmax_x2_to_s_denorm_mode
-# GCN:      GLOBAL_ATOMIC_FMAX_X2
+# GCN:      GLOBAL_ATOMIC_MAX_F64
 # GFX10-NEXT: S_NOP 2
 # GCN-NEXT: S_DENORM_MODE
 ---
 name:            global_atomic_fmax_x2_to_s_denorm_mode
 body:            |
   bb.0:
-    GLOBAL_ATOMIC_FMAX_X2 undef %0:vreg_64, undef %1:vreg_64, 0, 0, implicit $exec :: (volatile load store seq_cst seq_cst (s32) on `ptr addrspace(1) undef`)
+    GLOBAL_ATOMIC_MAX_F64 undef %0:vreg_64, undef %1:vreg_64, 0, 0, implicit $exec :: (volatile load store seq_cst seq_cst (s32) on `ptr addrspace(1) undef`)
     S_DENORM_MODE 0, implicit-def $mode, implicit $mode
 ...
 
@@ -206,14 +206,14 @@ body:            |
 ...
 
 # GCN-LABEL: name: global_atomic_fmin_x2_to_s_denorm_mode
-# GCN:      GLOBAL_ATOMIC_FMIN_X2
+# GCN:      GLOBAL_ATOMIC_MIN_F64
 # GFX10-NEXT: S_NOP 2
 # GCN-NEXT: S_DENORM_MODE
 ---
 name:            global_atomic_fmin_x2_to_s_denorm_mode
 body:            |
   bb.0:
-    GLOBAL_ATOMIC_FMIN_X2 undef %0:vreg_64, undef %1:vreg_64, 0, 0, implicit $exec :: (volatile load store seq_cst seq_cst (s32) on `ptr addrspace(1) undef`)
+    GLOBAL_ATOMIC_MIN_F64 undef %0:vreg_64, undef %1:vreg_64, 0, 0, implicit $exec :: (volatile load store seq_cst seq_cst (s32) on `ptr addrspace(1) undef`)
     S_DENORM_MODE 0, implicit-def $mode, implicit $mode
 ...
 
@@ -254,14 +254,14 @@ body:            |
 ...
 
 # GCN-LABEL: name: global_atomic_fmax_x2_rtn_to_s_denorm_mode
-# GCN:      GLOBAL_ATOMIC_FMAX_X2_RTN
+# GCN:      GLOBAL_ATOMIC_MAX_F64_RTN
 # GFX10-NEXT: S_NOP 2
 # GCN-NEXT: S_DENORM_MODE
 ---
 name:            global_atomic_fmax_x2_rtn_to_s_denorm_mode
 body:            |
   bb.0:
-    %2:vreg_64 = GLOBAL_ATOMIC_FMAX_X2_RTN undef %0:vreg_64, undef %1:vreg_64, 0, 1, implicit $exec :: (volatile load store seq_cst seq_cst (s32) on `ptr addrspace(1) undef`)
+    %2:vreg_64 = GLOBAL_ATOMIC_MAX_F64_RTN undef %0:vreg_64, undef %1:vreg_64, 0, 1, implicit $exec :: (volatile load store seq_cst seq_cst (s32) on `ptr addrspace(1) undef`)
     S_DENORM_MODE 0, implicit-def $mode, implicit $mode
 ...
 
@@ -278,14 +278,14 @@ body:            |
 ...
 
 # GCN-LABEL: name: global_atomic_fmin_x2_rtn_to_s_denorm_mode
-# GCN:      GLOBAL_ATOMIC_FMIN_X2_RTN
+# GCN:      GLOBAL_ATOMIC_MIN_F64_RTN
 # GFX10-NEXT: S_NOP 2
 # GCN-NEXT: S_DENORM_MODE
 ---
 name:            global_atomic_fmin_x2_rtn_to_s_denorm_mode
 body:            |
   bb.0:
-    %2:vreg_64 = GLOBAL_ATOMIC_FMIN_X2_RTN undef %0:vreg_64, undef %1:vreg_64, 0, 1, implicit $exec :: (volatile load store seq_cst seq_cst (s32) on `ptr addrspace(1) undef`)
+    %2:vreg_64 = GLOBAL_ATOMIC_MIN_F64_RTN undef %0:vreg_64, undef %1:vreg_64, 0, 1, implicit $exec :: (volatile load store seq_cst seq_cst (s32) on `ptr addrspace(1) undef`)
     S_DENORM_MODE 0, implicit-def $mode, implicit $mode
 ...
 
@@ -326,14 +326,14 @@ body:            |
 ...
 
 # GCN-LABEL: name: global_atomic_fmax_x2_saddr_rtn_to_s_denorm_mode
-# GCN:      GLOBAL_ATOMIC_FMAX_X2_SADDR_RTN
+# GCN:      GLOBAL_ATOMIC_MAX_F64_SADDR_RTN
 # GFX10-NEXT: S_NOP 2
 # GCN-NEXT: S_DENORM_MODE
 ---
 name:            global_atomic_fmax_x2_saddr_rtn_to_s_denorm_mode
 body:            |
   bb.0:
-    %2:vreg_64 = GLOBAL_ATOMIC_FMAX_X2_SADDR_RTN undef %0:vgpr_32, undef %1:vreg_64, undef %3:sgpr_64, 0, 1, implicit $exec :: (volatile load store seq_cst seq_cst (s32) on `ptr addrspace(1) undef`)
+    %2:vreg_64 = GLOBAL_ATOMIC_MAX_F64_SADDR_RTN undef %0:vgpr_32, undef %1:vreg_64, undef %3:sgpr_64, 0, 1, implicit $exec :: (volatile load store seq_cst seq_cst (s32) on `ptr addrspace(1) undef`)
     S_DENORM_MODE 0, implicit-def $mode, implicit $mode
 ...
 
@@ -350,14 +350,14 @@ body:            |
 ...
 
 # GCN-LABEL: name: global_atomic_fmin_x2_saddr_rtn_to_s_denorm_mode
-# GCN:      GLOBAL_ATOMIC_FMIN_X2_SADDR_RTN
+# GCN:      GLOBAL_ATOMIC_MIN_F64_SADDR_RTN
 # GFX10-NEXT: S_NOP 2
 # GCN-NEXT: S_DENORM_MODE
 ---
 name:            global_atomic_fmin_x2_saddr_rtn_to_s_denorm_mode
 body:            |
   bb.0:
-    %2:vreg_64 = GLOBAL_ATOMIC_FMIN_X2_SADDR_RTN undef %0:vgpr_32, undef %1:vreg_64, undef %3:sgpr_64, 0, 1, implicit $exec :: (volatile load store seq_cst seq_cst (s32) on `ptr addrspace(1) undef`)
+    %2:vreg_64 = GLOBAL_ATOMIC_MIN_F64_SADDR_RTN undef %0:vgpr_32, undef %1:vreg_64, undef %3:sgpr_64, 0, 1, implicit $exec :: (volatile load store seq_cst seq_cst (s32) on `ptr addrspace(1) undef`)
     S_DENORM_MODE 0, implicit-def $mode, implicit $mode
 ...
 


