[Mlir-commits] [mlir] [mlir][vector][gpu] Align minf/maxf reduction kind names with arith (PR #75901)

llvmlistbot at llvm.org llvmlistbot at llvm.org
Mon Dec 18 23:12:10 PST 2023


llvmbot wrote:


<!--LLVM PR SUMMARY COMMENT-->
@llvm/pr-subscribers-mlir-vector
@llvm/pr-subscribers-mlir-linalg

@llvm/pr-subscribers-mlir-gpu

Author: Jakub Kuderski (kuhar)

<details>
<summary>Changes</summary>

This is to avoid confusion when dealing with reduction/combining kinds. For example, see a recent PR comment:
https://github.com/llvm/llvm-project/pull/75846#discussion_r1430722175.

Previously, the names were picked mostly to mirror those of the llvm vector reduction intrinsics:
https://llvm.org/docs/LangRef.html#llvm-vector-reduce-fmin-intrinsic. In isolation, it was not clear whether `<maxf>` has `arith.maxnumf` or `arith.maximumf` semantics. The new reduction kind names map 1:1 to arith ops, which makes it easy to tell and look up their semantics.

Because both the vector and the gpu dialects depend on the arith dialect, it is more natural to align their names with those in arith than with the llvm intrinsics they lower to.

Issue: https://github.com/llvm/llvm-project/issues/72354

---

Patch is 42.85 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/75901.diff


26 Files Affected:

- (modified) mlir/include/mlir/Dialect/GPU/IR/GPUOps.td (+6-6) 
- (modified) mlir/include/mlir/Dialect/Vector/IR/VectorAttributes.td (+4-4) 
- (modified) mlir/include/mlir/Dialect/Vector/IR/VectorOps.td (+12-11) 
- (modified) mlir/lib/Conversion/GPUToNVVM/LowerGpuOpsToNVVMOps.cpp (+2-2) 
- (modified) mlir/lib/Conversion/GPUToSPIRV/GPUToSPIRV.cpp (+2-2) 
- (modified) mlir/lib/Conversion/VectorToLLVM/ConvertVectorToLLVM.cpp (+4-4) 
- (modified) mlir/lib/Conversion/VectorToSPIRV/VectorToSPIRV.cpp (+2-2) 
- (modified) mlir/lib/Dialect/GPU/IR/GPUDialect.cpp (+2-1) 
- (modified) mlir/lib/Dialect/GPU/Transforms/AllReduceLowering.cpp (+2-2) 
- (modified) mlir/lib/Dialect/Linalg/Transforms/Vectorization.cpp (+2-2) 
- (modified) mlir/lib/Dialect/Vector/IR/VectorOps.cpp (+4-4) 
- (modified) mlir/lib/Dialect/Vector/Transforms/LowerVectorContract.cpp (+1-1) 
- (modified) mlir/lib/Dialect/Vector/Transforms/LowerVectorScan.cpp (+4-4) 
- (modified) mlir/test/Conversion/GPUToSPIRV/reductions.mlir (+8-8) 
- (modified) mlir/test/Conversion/VectorToLLVM/vector-reduction-to-llvm.mlir (+2-2) 
- (modified) mlir/test/Conversion/VectorToLLVM/vector-to-llvm.mlir (+4-4) 
- (modified) mlir/test/Dialect/GPU/all-reduce-maxf.mlir (+1-1) 
- (modified) mlir/test/Dialect/GPU/invalid.mlir (+9-9) 
- (modified) mlir/test/Dialect/Vector/break-down-vector-reduction.mlir (+2-2) 
- (modified) mlir/test/Dialect/Vector/ops.mlir (+10-10) 
- (modified) mlir/test/Dialect/Vector/vector-contract-to-outerproduct-matvec-transforms.mlir (+9-9) 
- (modified) mlir/test/Dialect/Vector/vector-multi-reduction-outer-lowering.mlir (+3-3) 
- (modified) mlir/test/Integration/Dialect/Vector/CPU/test-reductions-f32-reassoc.mlir (+2-2) 
- (modified) mlir/test/Integration/Dialect/Vector/CPU/test-reductions-f32.mlir (+2-2) 
- (modified) mlir/test/Integration/Dialect/Vector/CPU/test-reductions-f64-reassoc.mlir (+2-2) 
- (modified) mlir/test/Integration/Dialect/Vector/CPU/test-reductions-f64.mlir (+2-2) 


``````````diff
diff --git a/mlir/include/mlir/Dialect/GPU/IR/GPUOps.td b/mlir/include/mlir/Dialect/GPU/IR/GPUOps.td
index 2e1a5f5cc78aed..2e21cd77d2d83b 100644
--- a/mlir/include/mlir/Dialect/GPU/IR/GPUOps.td
+++ b/mlir/include/mlir/Dialect/GPU/IR/GPUOps.td
@@ -937,11 +937,11 @@ def GPU_AllReduceOpMul : I32EnumAttrCase<"MUL", 1, "mul">;
 def GPU_AllReduceOpMinUI : I32EnumAttrCase<"MINUI", 2, "minui">;
 def GPU_AllReduceOpMinSI : I32EnumAttrCase<"MINSI", 3, "minsi">;
 // Follows the `arith.minnumf` semantics.
-def GPU_AllReduceOpMinF : I32EnumAttrCase<"MINF", 4, "minf">;
+def GPU_AllReduceOpMinnumF : I32EnumAttrCase<"MINNUMF", 4, "minnumf">;
 def GPU_AllReduceOpMaxUI : I32EnumAttrCase<"MAXUI", 5, "maxui">;
 def GPU_AllReduceOpMaxSI : I32EnumAttrCase<"MAXSI", 6, "maxsi">;
 // Follows the `arith.maxnumf` semantics.
-def GPU_AllReduceOpMaxF : I32EnumAttrCase<"MAXF", 7, "maxf">;
+def GPU_AllReduceOpMaxnumF : I32EnumAttrCase<"MAXNUMF", 7, "maxnumf">;
 def GPU_AllReduceOpAnd : I32EnumAttrCase<"AND", 8, "and">;
 def GPU_AllReduceOpOr  : I32EnumAttrCase<"OR",  9, "or">;
 def GPU_AllReduceOpXor : I32EnumAttrCase<"XOR", 10, "xor">;
@@ -957,10 +957,10 @@ def GPU_AllReduceOperation : I32EnumAttr<"AllReduceOperation",
       GPU_AllReduceOpMul,
       GPU_AllReduceOpMinUI,
       GPU_AllReduceOpMinSI,
-      GPU_AllReduceOpMinF,
+      GPU_AllReduceOpMinnumF,
       GPU_AllReduceOpMaxUI,
       GPU_AllReduceOpMaxSI,
-      GPU_AllReduceOpMaxF,
+      GPU_AllReduceOpMaxnumF,
       GPU_AllReduceOpAnd,
       GPU_AllReduceOpOr,
       GPU_AllReduceOpXor,
@@ -999,7 +999,7 @@ def GPU_AllReduceOp : GPU_Op<"all_reduce",
     accumulation as code region. The reduction operation must be one of:
     *  Integer types: `add`, `mul`, `minui`, `minsi`, `maxui`, `maxsi`, `and`,
        `or`, `xor`
-    *  Floating point types: `add`, `mul`, `minf`, `maxf`, `minimumf`,
+    *  Floating point types: `add`, `mul`, `minnumf`, `maxnumf`, `minimumf`,
        `maximumf`
 
     If `uniform` flag is set either none or all work items of a workgroup
@@ -1039,7 +1039,7 @@ def GPU_SubgroupReduceOp : GPU_Op<"subgroup_reduce", [SameOperandsAndResultType]
     of:
     *  Integer types: `add`, `mul`, `minui`, `minsi`, `maxui`, `maxsi`, `and`,
        `or`, `xor`
-    *  Floating point types: `add`, `mul`, `minf`, `maxf`, `minimumf`,
+    *  Floating point types: `add`, `mul`, `minnumf`, `maxnumf`, `minimumf`,
        `maximumf`
   }];
 
diff --git a/mlir/include/mlir/Dialect/Vector/IR/VectorAttributes.td b/mlir/include/mlir/Dialect/Vector/IR/VectorAttributes.td
index f8f85b0d09d90e..0f08f61d7b2575 100644
--- a/mlir/include/mlir/Dialect/Vector/IR/VectorAttributes.td
+++ b/mlir/include/mlir/Dialect/Vector/IR/VectorAttributes.td
@@ -21,10 +21,10 @@ def COMBINING_KIND_ADD : I32BitEnumAttrCaseBit<"ADD", 0, "add">;
 def COMBINING_KIND_MUL : I32BitEnumAttrCaseBit<"MUL", 1, "mul">;
 def COMBINING_KIND_MINUI : I32BitEnumAttrCaseBit<"MINUI", 2, "minui">;
 def COMBINING_KIND_MINSI : I32BitEnumAttrCaseBit<"MINSI", 3, "minsi">;
-def COMBINING_KIND_MINF : I32BitEnumAttrCaseBit<"MINF", 4, "minf">;
+def COMBINING_KIND_MINNUMF : I32BitEnumAttrCaseBit<"MINNUMF", 4, "minnumf">;
 def COMBINING_KIND_MAXUI : I32BitEnumAttrCaseBit<"MAXUI", 5, "maxui">;
 def COMBINING_KIND_MAXSI : I32BitEnumAttrCaseBit<"MAXSI", 6, "maxsi">;
-def COMBINING_KIND_MAXF : I32BitEnumAttrCaseBit<"MAXF", 7, "maxf">;
+def COMBINING_KIND_MAXNUMF : I32BitEnumAttrCaseBit<"MAXNUMF", 7, "maxnumf">;
 def COMBINING_KIND_AND : I32BitEnumAttrCaseBit<"AND", 8, "and">;
 def COMBINING_KIND_OR  : I32BitEnumAttrCaseBit<"OR", 9, "or">;
 def COMBINING_KIND_XOR : I32BitEnumAttrCaseBit<"XOR", 10, "xor">;
@@ -35,8 +35,8 @@ def CombiningKind : I32BitEnumAttr<
     "CombiningKind",
     "Kind of combining function for contractions and reductions",
     [COMBINING_KIND_ADD, COMBINING_KIND_MUL, COMBINING_KIND_MINUI,
-     COMBINING_KIND_MINSI, COMBINING_KIND_MINF, COMBINING_KIND_MAXUI,
-     COMBINING_KIND_MAXSI, COMBINING_KIND_MAXF, COMBINING_KIND_AND,
+     COMBINING_KIND_MINSI, COMBINING_KIND_MINNUMF, COMBINING_KIND_MAXUI,
+     COMBINING_KIND_MAXSI, COMBINING_KIND_MAXNUMF, COMBINING_KIND_AND,
      COMBINING_KIND_OR, COMBINING_KIND_XOR,
      COMBINING_KIND_MAXIMUMF, COMBINING_KIND_MINIMUMF]> {
   let cppNamespace = "::mlir::vector";
diff --git a/mlir/include/mlir/Dialect/Vector/IR/VectorOps.td b/mlir/include/mlir/Dialect/Vector/IR/VectorOps.td
index afc9d532f6e31b..423118f79e733d 100644
--- a/mlir/include/mlir/Dialect/Vector/IR/VectorOps.td
+++ b/mlir/include/mlir/Dialect/Vector/IR/VectorOps.td
@@ -87,8 +87,8 @@ def Vector_ContractionOp :
     An optional kind attribute may be used to specify the combining function
     between the intermediate result and accumulator argument of rank K. This
     attribute can take the values `add`/`mul`/`minsi`/`minui`/`maxsi`/`maxui`
-    /`and`/`or`/`xor` for integers, and `add`/`mul`/`minf`/`maxf`/`minimumf`
-    /`maximumf` for floats. The default is `add`.
+    /`and`/`or`/`xor` for integers, and `add`/`mul`/`minnumf`/`maxnumf`
+    /`minimumf`/`maximumf` for floats. The default is `add`.
 
     Example:
 
@@ -150,7 +150,7 @@ def Vector_ContractionOp :
     #contraction_trait = {
       indexing_maps = #contraction_accesses,
       iterator_types = ["reduction"],
-      kind = #vector.kind<maxf>
+      kind = #vector.kind<maxnumf>
     }
     %6 = vector.contract #contraction_trait %0, %1, %2
       : vector<10xf32>, vector<10xf32> into f32
@@ -234,8 +234,8 @@ def Vector_ReductionOp :
   let description = [{
     Reduces an 1-D vector "horizontally" into a scalar using the given
     operation: `add`/`mul`/`minsi`/`minui`/`maxsi`/`maxui`/`and`/`or`/`xor` for
-    integers, and `add`/`mul`/`minf`/`maxf`/`minimumf`/`maximumf` for floats.
-    Reductions also allow an optional fused accumulator.
+    integers, and `add`/`mul`/`minnumf`/`maxnumf`/`minimumf`/`maximumf` for
+    floats. Reductions also allow an optional fused accumulator.
 
     Note that these operations are restricted to 1-D vectors to remain
     close to the corresponding LLVM intrinsics:
@@ -292,7 +292,7 @@ def Vector_MultiDimReductionOp :
   let description = [{
     Reduces an n-D vector into an (n-k)-D vector (or a scalar when k == n)
     using the given operation: `add`/`mul`/`minsi`/`minui`/`maxsi`/`maxui`
-    /`and`/`or`/`xor` for integers, and `add`/`mul`/`minf`/`maxf`/`minimumf`
+    /`and`/`or`/`xor` for integers, and `add`/`mul`/`minnumf`/`maxnumf`/`minimumf`
     /`maximumf` for floats.
     Takes an initial accumulator operand.
 
@@ -942,7 +942,8 @@ def Vector_OuterProductOp :
 
     An optional kind attribute may be specified to be: `add`/`mul`/`minsi`
     /`minui`/`maxsi`/`maxui`/`and`/`or`/`xor` for integers, and `add`/`mul`
-    /`minf`/`maxf`/`minimumf`/`maximumf` for floats. The default is `add`.
+    /`minnumf`/`maxnumf`/`minimumf`/`maximumf` for floats. The default is
+    `add`.
 
     Example:
 
@@ -954,7 +955,7 @@ def Vector_OuterProductOp :
       vector<4xf32>, vector<8xf32>, vector<4x8xf32>
     return %3: vector<4x8xf32>
 
-    %4 = vector.outerproduct %0, %1, %2 {kind = #vector.kind<maxf>}:
+    %4 = vector.outerproduct %0, %1, %2 {kind = #vector.kind<maxnumf>}:
       vector<4xf32>, vector<8xf32>, vector<4x8xf32>
     return %3: vector<4x8xf32>
 
@@ -2769,9 +2770,9 @@ def Vector_ScanOp :
     Performs an inclusive/exclusive scan on an n-D vector along a single
     dimension returning an n-D result vector using the given
     operation (`add`/`mul`/`minsi`/`minui`/`maxsi`/`maxui`/`and`/`or`/`xor` for
-    integers, and `add`/`mul`/`minf`/`maxf`/`minimumf`/`maximumf` for floats),
-    and a specified value for the initial value. The operator returns the
-    result of scan as well as the result of the last reduction in the scan.
+    integers, and `add`/`mul`/`minnumf`/`maxnumf`/`minimumf`/`maximumf` for
+    floats), and a specified value for the initial value. The operator returns
+    the result of scan as well as the result of the last reduction in the scan.
 
     Example:
 
diff --git a/mlir/lib/Conversion/GPUToNVVM/LowerGpuOpsToNVVMOps.cpp b/mlir/lib/Conversion/GPUToNVVM/LowerGpuOpsToNVVMOps.cpp
index 0e978ca0a64248..e60fe5cbd7603f 100644
--- a/mlir/lib/Conversion/GPUToNVVM/LowerGpuOpsToNVVMOps.cpp
+++ b/mlir/lib/Conversion/GPUToNVVM/LowerGpuOpsToNVVMOps.cpp
@@ -72,13 +72,13 @@ convertReduxKind(gpu::AllReduceOperation mode) {
     return NVVM::ReduxKind::MIN;
   case gpu::AllReduceOperation::MINUI:
     return std::nullopt;
-  case gpu::AllReduceOperation::MINF:
+  case gpu::AllReduceOperation::MINNUMF:
     return NVVM::ReduxKind::MIN;
   case gpu::AllReduceOperation::MAXSI:
     return NVVM::ReduxKind::MAX;
   case gpu::AllReduceOperation::MAXUI:
     return std::nullopt;
-  case gpu::AllReduceOperation::MAXF:
+  case gpu::AllReduceOperation::MAXNUMF:
     return NVVM::ReduxKind::MAX;
   case gpu::AllReduceOperation::AND:
     return NVVM::ReduxKind::AND;
diff --git a/mlir/lib/Conversion/GPUToSPIRV/GPUToSPIRV.cpp b/mlir/lib/Conversion/GPUToSPIRV/GPUToSPIRV.cpp
index 5a88ab351866bd..d383c16949f0ef 100644
--- a/mlir/lib/Conversion/GPUToSPIRV/GPUToSPIRV.cpp
+++ b/mlir/lib/Conversion/GPUToSPIRV/GPUToSPIRV.cpp
@@ -529,7 +529,7 @@ static std::optional<Value> createGroupReduceOp(OpBuilder &builder,
       {ReduceType::MINSI, ElemType::Integer,
        &createGroupReduceOpImpl<spirv::GroupSMinOp,
                                 spirv::GroupNonUniformSMinOp>},
-      {ReduceType::MINF, ElemType::Float,
+      {ReduceType::MINNUMF, ElemType::Float,
        &createGroupReduceOpImpl<spirv::GroupFMinOp,
                                 spirv::GroupNonUniformFMinOp>},
       {ReduceType::MAXUI, ElemType::Integer,
@@ -538,7 +538,7 @@ static std::optional<Value> createGroupReduceOp(OpBuilder &builder,
       {ReduceType::MAXSI, ElemType::Integer,
        &createGroupReduceOpImpl<spirv::GroupSMaxOp,
                                 spirv::GroupNonUniformSMaxOp>},
-      {ReduceType::MAXF, ElemType::Float,
+      {ReduceType::MAXNUMF, ElemType::Float,
        &createGroupReduceOpImpl<spirv::GroupFMaxOp,
                                 spirv::GroupNonUniformFMaxOp>},
       {ReduceType::MINIMUMF, ElemType::Float,
diff --git a/mlir/lib/Conversion/VectorToLLVM/ConvertVectorToLLVM.cpp b/mlir/lib/Conversion/VectorToLLVM/ConvertVectorToLLVM.cpp
index cd5df0be740b9c..ebf7d9b65fa1de 100644
--- a/mlir/lib/Conversion/VectorToLLVM/ConvertVectorToLLVM.cpp
+++ b/mlir/lib/Conversion/VectorToLLVM/ConvertVectorToLLVM.cpp
@@ -818,10 +818,10 @@ class VectorReductionOpConversion
       result =
           createFPReductionComparisonOpLowering<LLVM::vector_reduce_fmaximum>(
               rewriter, loc, llvmType, operand, acc, fmf);
-    } else if (kind == vector::CombiningKind::MINF) {
+    } else if (kind == vector::CombiningKind::MINNUMF) {
       result = createFPReductionComparisonOpLowering<LLVM::vector_reduce_fmin>(
           rewriter, loc, llvmType, operand, acc, fmf);
-    } else if (kind == vector::CombiningKind::MAXF) {
+    } else if (kind == vector::CombiningKind::MAXNUMF) {
       result = createFPReductionComparisonOpLowering<LLVM::vector_reduce_fmax>(
           rewriter, loc, llvmType, operand, acc, fmf);
     } else
@@ -938,12 +938,12 @@ class MaskedReductionOpConversion
                                                       ReductionNeutralZero>(
           rewriter, loc, llvmType, operand, acc, maskOp.getMask());
       break;
-    case vector::CombiningKind::MINF:
+    case vector::CombiningKind::MINNUMF:
       result = lowerPredicatedReductionWithStartValue<LLVM::VPReduceFMinOp,
                                                       ReductionNeutralFPMax>(
           rewriter, loc, llvmType, operand, acc, maskOp.getMask());
       break;
-    case vector::CombiningKind::MAXF:
+    case vector::CombiningKind::MAXNUMF:
       result = lowerPredicatedReductionWithStartValue<LLVM::VPReduceFMaxOp,
                                                       ReductionNeutralFPMin>(
           rewriter, loc, llvmType, operand, acc, maskOp.getMask());
diff --git a/mlir/lib/Conversion/VectorToSPIRV/VectorToSPIRV.cpp b/mlir/lib/Conversion/VectorToSPIRV/VectorToSPIRV.cpp
index e48f29a4f17029..868a3521e7a0fd 100644
--- a/mlir/lib/Conversion/VectorToSPIRV/VectorToSPIRV.cpp
+++ b/mlir/lib/Conversion/VectorToSPIRV/VectorToSPIRV.cpp
@@ -478,8 +478,8 @@ struct VectorReductionFloatMinMax final
 
         INT_OR_FLOAT_CASE(MAXIMUMF, SPIRVFMaxOp);
         INT_OR_FLOAT_CASE(MINIMUMF, SPIRVFMinOp);
-        INT_OR_FLOAT_CASE(MAXF, SPIRVFMaxOp);
-        INT_OR_FLOAT_CASE(MINF, SPIRVFMinOp);
+        INT_OR_FLOAT_CASE(MAXNUMF, SPIRVFMaxOp);
+        INT_OR_FLOAT_CASE(MINNUMF, SPIRVFMinOp);
 
       default:
         return rewriter.notifyMatchFailure(reduceOp, "not handled here");
diff --git a/mlir/lib/Dialect/GPU/IR/GPUDialect.cpp b/mlir/lib/Dialect/GPU/IR/GPUDialect.cpp
index d31903ea201158..7c3330f4c238f8 100644
--- a/mlir/lib/Dialect/GPU/IR/GPUDialect.cpp
+++ b/mlir/lib/Dialect/GPU/IR/GPUDialect.cpp
@@ -492,7 +492,8 @@ static LogicalResult verifyReduceOpAndType(gpu::AllReduceOperation opName,
                                            Type resType) {
   using Kind = gpu::AllReduceOperation;
   if (llvm::is_contained(
-          {Kind::MINF, Kind::MAXF, Kind::MINIMUMF, Kind::MAXIMUMF}, opName)) {
+          {Kind::MINNUMF, Kind::MAXNUMF, Kind::MINIMUMF, Kind::MAXIMUMF},
+          opName)) {
     if (!isa<FloatType>(resType))
       return failure();
   }
diff --git a/mlir/lib/Dialect/GPU/Transforms/AllReduceLowering.cpp b/mlir/lib/Dialect/GPU/Transforms/AllReduceLowering.cpp
index ecee9a7b45e32b..6f2161fb1b1289 100644
--- a/mlir/lib/Dialect/GPU/Transforms/AllReduceLowering.cpp
+++ b/mlir/lib/Dialect/GPU/Transforms/AllReduceLowering.cpp
@@ -227,13 +227,13 @@ struct GpuAllReduceRewriter {
       return getFactory<arith::MinSIOp>();
     case Kind::MINUI:
       return getFactory<arith::MinUIOp>();
-    case Kind::MINF:
+    case Kind::MINNUMF:
       return getFactory<arith::MinNumFOp>();
     case Kind::MAXSI:
       return getFactory<arith::MaxSIOp>();
     case Kind::MAXUI:
       return getFactory<arith::MaxUIOp>();
-    case Kind::MAXF:
+    case Kind::MAXNUMF:
       return getFactory<arith::MaxNumFOp>();
     case Kind::AND:
       return getFactory<arith::AndIOp>();
diff --git a/mlir/lib/Dialect/Linalg/Transforms/Vectorization.cpp b/mlir/lib/Dialect/Linalg/Transforms/Vectorization.cpp
index d956fd4fdd9bd8..be813df8e782ff 100644
--- a/mlir/lib/Dialect/Linalg/Transforms/Vectorization.cpp
+++ b/mlir/lib/Dialect/Linalg/Transforms/Vectorization.cpp
@@ -2426,11 +2426,11 @@ bool isCastOfBlockArgument(Operation *op) {
 bool isSupportedPoolKind(vector::CombiningKind kind) {
   switch (kind) {
   case vector::CombiningKind::ADD:
-  case vector::CombiningKind::MAXF:
+  case vector::CombiningKind::MAXNUMF:
   case vector::CombiningKind::MAXIMUMF:
   case vector::CombiningKind::MAXSI:
   case vector::CombiningKind::MAXUI:
-  case vector::CombiningKind::MINF:
+  case vector::CombiningKind::MINNUMF:
   case vector::CombiningKind::MINIMUMF:
   case vector::CombiningKind::MINSI:
   case vector::CombiningKind::MINUI:
diff --git a/mlir/lib/Dialect/Vector/IR/VectorOps.cpp b/mlir/lib/Dialect/Vector/IR/VectorOps.cpp
index 9f3e13c90a624d..1d3200bf5c8217 100644
--- a/mlir/lib/Dialect/Vector/IR/VectorOps.cpp
+++ b/mlir/lib/Dialect/Vector/IR/VectorOps.cpp
@@ -140,8 +140,8 @@ static bool isSupportedCombiningKind(CombiningKind combiningKind,
   case CombiningKind::OR:
   case CombiningKind::XOR:
     return elementType.isIntOrIndex();
-  case CombiningKind::MINF:
-  case CombiningKind::MAXF:
+  case CombiningKind::MINNUMF:
+  case CombiningKind::MAXNUMF:
   case CombiningKind::MINIMUMF:
   case CombiningKind::MAXIMUMF:
     return llvm::isa<FloatType>(elementType);
@@ -6233,7 +6233,7 @@ Value mlir::vector::makeArithReduction(OpBuilder &b, Location loc,
     assert(t1.isIntOrIndex() && tAcc.isIntOrIndex() && "expected int values");
     result = b.createOrFold<arith::AndIOp>(loc, v1, acc);
     break;
-  case CombiningKind::MAXF:
+  case CombiningKind::MAXNUMF:
     assert(llvm::isa<FloatType>(t1) && llvm::isa<FloatType>(tAcc) &&
            "expected float values");
     result = b.createOrFold<arith::MaxNumFOp>(loc, v1, acc, fastmath);
@@ -6243,7 +6243,7 @@ Value mlir::vector::makeArithReduction(OpBuilder &b, Location loc,
            "expected float values");
     result = b.createOrFold<arith::MaximumFOp>(loc, v1, acc, fastmath);
     break;
-  case CombiningKind::MINF:
+  case CombiningKind::MINNUMF:
     assert(llvm::isa<FloatType>(t1) && llvm::isa<FloatType>(tAcc) &&
            "expected float values");
     result = b.createOrFold<arith::MinNumFOp>(loc, v1, acc, fastmath);
diff --git a/mlir/lib/Dialect/Vector/Transforms/LowerVectorContract.cpp b/mlir/lib/Dialect/Vector/Transforms/LowerVectorContract.cpp
index 41ff0c18fe6258..6ff4c26763d247 100644
--- a/mlir/lib/Dialect/Vector/Transforms/LowerVectorContract.cpp
+++ b/mlir/lib/Dialect/Vector/Transforms/LowerVectorContract.cpp
@@ -139,7 +139,7 @@ createContractArithOp(Location loc, Value x, Value y, Value acc,
   Value mul;
 
   if (isInt) {
-    if (kind == CombiningKind::MINF || kind == CombiningKind::MAXF ||
+    if (kind == CombiningKind::MINNUMF || kind == CombiningKind::MAXNUMF ||
         kind == CombiningKind::MINIMUMF || kind == CombiningKind::MAXIMUMF)
       // Only valid for floating point types.
       return std::nullopt;
diff --git a/mlir/lib/Dialect/Vector/Transforms/LowerVectorScan.cpp b/mlir/lib/Dialect/Vector/Transforms/LowerVectorScan.cpp
index ef6e6f5264a221..4c08946e7f3810 100644
--- a/mlir/lib/Dialect/Vector/Transforms/LowerVectorScan.cpp
+++ b/mlir/lib/Dialect/Vector/Transforms/LowerVectorScan.cpp
@@ -86,11 +86,11 @@ static Value genOperator(Location loc, Value x, Value y,
   case CombiningKind::XOR:
     combinedResult = rewriter.create<arith::XOrIOp>(loc, x, y);
     break;
-  case CombiningKind::MINF:
+  case CombiningKind::MINNUMF:
   case CombiningKind::MINIMUMF:
     combinedResult = rewriter.create<arith::MinimumFOp>(loc, x, y);
     break;
-  case CombiningKind::MAXF:
+  case CombiningKind::MAXNUMF:
   case CombiningKind::MAXIMUMF:
     combinedResult = rewriter.create<arith::MaximumFOp>(loc, x, y);
     break;
@@ -105,9 +105,9 @@ static bool isValidKind(bool isInt, vector::CombiningKind kind) {
   enum class KindType { FLOAT, INT, INVALID };
   KindType type{KindType::INVALID};
   switch (kind) {
-  case CombiningKind::MINF:
+  case CombiningKind::MINNUMF:
   case CombiningKind::MINIMUMF:
-  case CombiningKind::MAXF:
+  case CombiningKind::MAXNUMF:
   case CombiningKind::MAXIMUMF:
     type = KindType::FLOAT;
     break;
diff --git a/mlir/test/Conversion/GPUToSPIRV/reductions.mlir b/mlir/test/Conversion/GPUToSPIRV/reductions.mlir
index 636078181cae72..af58f4173136f8 100644
--- a/mlir/test/Conversion/GPUToSPIRV/reductions.mlir
+++ b/mlir/test/Conversion/GPUToSPIRV/reductions.mlir
@@ -331,7 +331,7 @@ gpu.module @kernels {
   gpu.func @test(%arg : f32) kernel
     attributes {spirv.entry_point_abi = #spirv.entry_point_abi<workgroup_size = [16, 1, 1]>} {
     // CHECK: %{{.*}} = spirv.GroupFMin <Workgroup> <Reduce> %[[ARG]] : f32
-    %reduced = gpu.all_reduce minf %arg uniform {} : (f32) -> (f32)
+    %reduced = gpu.all_reduce minnumf %arg uniform {} : (f32) -> (f32)
     gpu.return
   }
 }
@@ -351,7 +351,7 @@ gpu.module @kernels {
   gpu.func @test(%arg : f32) kernel
     attributes {spirv.entry_point_abi = #spirv.entry_point_abi<workgroup_size = [16, 1, 1]>} {
     // CHECK: %{{.*}} = spirv.GroupNonUniformFMin "Workgroup" "Reduce" %[[ARG]] : f32
-    %reduced = gpu.all_reduce minf %arg {} : (f32) -> (f32)
+    %reduced = gpu.all_reduce minnumf %arg {} : (f32) -> (f32)
     gpu.return
   }
 }
@@ -414,7 +414,7 @@ gpu.module @kernels {
   gpu.func @test(%arg : f32) kernel
     attributes {spirv.entry_point_abi = #spirv.entry_point_abi<workgroup_size = [16, 1, 1]>} {
     // CHECK: %{{.*}} = spirv.GroupFMin <Subgroup> <Reduce> %[[ARG]] : f32
-    %reduced = gpu.subgroup_reduce minf %arg uniform : (f32) -> (f32)
+    %reduced = gpu.subgroup_reduce minnumf %arg uniform : (f32) -> (f32)
     gpu.return
   }
 }
@@ -434,7 +434,7 @@ gpu.module @kernels {
   gpu.func @test(%arg : f32) kernel
     attributes {spirv.entry_point_abi = #spirv.entry_point_abi<workgroup_size = [16, 1, 1]>} {
     // CHECK: %{{.*}} = spirv.GroupNonUniformFMin "Subgro...
[truncated]

``````````

</details>


https://github.com/llvm/llvm-project/pull/75901


More information about the Mlir-commits mailing list