[Mlir-commits] [mlir] [mlir][amdgpu] Add scaled_ext_packed{8, 16} operations (PR #159830)
Erick Ochoa Lopez
llvmlistbot at llvm.org
Fri Oct 17 08:56:46 PDT 2025
https://github.com/amd-eochoalo updated https://github.com/llvm/llvm-project/pull/159830
>From 9c09c35633e98f47ebe5ad8c15659e01ef6664cc Mon Sep 17 00:00:00 2001
From: Erick Ochoa <erick.ochoalopez at amd.com>
Date: Fri, 19 Sep 2025 14:33:14 -0400
Subject: [PATCH 01/13] [mlir][amdgpu] Add scaled_ext_packed{8,16} operations
---
mlir/include/mlir/Dialect/AMDGPU/IR/AMDGPU.td | 50 ++++++++++++-
mlir/lib/Dialect/AMDGPU/IR/AMDGPUDialect.cpp | 70 +++++++++++++++++++
mlir/test/Dialect/AMDGPU/ops.mlir | 55 +++++++++++++++
3 files changed, 174 insertions(+), 1 deletion(-)
diff --git a/mlir/include/mlir/Dialect/AMDGPU/IR/AMDGPU.td b/mlir/include/mlir/Dialect/AMDGPU/IR/AMDGPU.td
index 8370d350afd1e..5cb1486690464 100644
--- a/mlir/include/mlir/Dialect/AMDGPU/IR/AMDGPU.td
+++ b/mlir/include/mlir/Dialect/AMDGPU/IR/AMDGPU.td
@@ -112,6 +112,54 @@ def AMDGPU_ExtPackedFp8Op :
}];
}
+def IsValidBlockSize: AttrConstraint<
+ CPred<"::llvm::cast<::mlir::IntegerAttr>($_self).getInt() == 16 || ::llvm::cast<::mlir::IntegerAttr>($_self).getInt() == 32">,
+ "whose value is 16 or 32">;
+
+def AMDGPU_ScaledExtPacked816Op
+ : AMDGPU_Op<"scaled_ext_packed816", [Pure]>,
+ Arguments<(
+ ins AnyTypeOf<[VectorOfLengthAndType<[8], [F4E2M1FN,F8E4M3FN,F8E5M2]>,
+ VectorOfLengthAndType<[16], [F6E2M3FN, F6E3M2FN]>]>:$source,
+ FixedVectorOfLengthAndType<[4], [F8E8M0FNU]>:$scale,
+ ConfinedAttr<I32Attr, [IsValidBlockSize]>:$blockSize,
+ ConfinedAttr<I32Attr, [IntMinValue<0>, IntMaxValue<1>]>:$firstScaleLane,
+ ConfinedAttr<I32Attr, [IntMinValue<0>, IntMaxValue<2>]>:$firstScaleByte)>,
+ Results<(
+ outs AnyTypeOf<[FixedVectorOfLengthAndType<[8], [F32]>,
+ FixedVectorOfLengthAndType<[8], [F16]>,
+ FixedVectorOfLengthAndType<[8], [BF16]>,
+ FixedVectorOfLengthAndType<[16], [F32]>,
+ FixedVectorOfLengthAndType<[16], [F16]>,
+ FixedVectorOfLengthAndType<[16], [BF16]>]>:$res)> {
+
+ let summary = "Extend a vector of packed floating point values";
+
+ let description = [{
+ The scales applied to the input microfloats are stored in two bytes taken
+ from the `scale` input, which is provided in a *half* of the wave
+ identified by `firstScaleLane`. The pair of bytes used is selected by
+ `firstScaleByte`. The 16 vectors in consecutive lanes starting from
+ `firstScaleLane` (which we'll call the scale vectors) are used by both
+ halves of the wave (with lane L reading from the L % 16'th scale vector),
+ but each half uses a different byte.
+
+ When the block size is 32, `firstScaleByte` can be either 0 or 2,
+ selecting halves of the scale vectors. Lanes 0-15 read byte
+ `firstScaleByte` and lanes 16-31 read byte `firstScaleByte` + 1.
+
+ However, when the block size is 16, `firstScaleByte` can be 0 or 1.
+ Lanes 0-15 read byte `firstScaleByte` of the scale vectors,
+ while lanes 16-31 read byte `firstScaleByte` + 2.
+
+ Note: the layout for the scales generally mirrors the one the WMMA
+ instructions use for matrix scales. These selection operands allow
+ one to choose portions of the matrix to convert.
+ }];
+
+ let hasCustomAssemblyFormat = 1;
+}
+
def AMDGPU_ScaledExtPackedOp
: AMDGPU_Op<"scaled_ext_packed", [Pure]>,
Arguments<(
@@ -860,7 +908,7 @@ def AMDGPU_MFMAOp :
based on the provided `m`, `k`, `n`, and `nBlks` attributes, along with the
types of the source and destination arguments.
- For information on the layouts of the input and output matrces (which are stored
+ For information on the layouts of the input and output matrices (which are stored
in `sourceA`, `sourceB`, `destC`, and `destD`), see the CDNA ISA documentation.
The `cbsz`, `abid`, and `blgp` parameters control how the lanes of the wave
diff --git a/mlir/lib/Dialect/AMDGPU/IR/AMDGPUDialect.cpp b/mlir/lib/Dialect/AMDGPU/IR/AMDGPUDialect.cpp
index f405d0cc7aa02..33b0131bd4ca9 100644
--- a/mlir/lib/Dialect/AMDGPU/IR/AMDGPUDialect.cpp
+++ b/mlir/lib/Dialect/AMDGPU/IR/AMDGPUDialect.cpp
@@ -338,6 +338,76 @@ void RawBufferAtomicCmpswapOp::getCanonicalizationPatterns(
context);
}
+//===----------------------------------------------------------------------===//
+// ScaledExtPacked816Op
+//===----------------------------------------------------------------------===//
+mlir::ParseResult ScaledExtPacked816Op::parse(mlir::OpAsmParser &parser,
+ mlir::OperationState &result) {
+ // Parse attributes
+ if (parser.parseOptionalAttrDict(result.attributes))
+ return failure();
+
+ // Parse source operand
+ OpAsmParser::UnresolvedOperand source;
+ if (parser.parseOperand(source))
+ return failure();
+
+ if (parser.parseKeyword("scale") || parser.parseLParen())
+ return failure();
+ OpAsmParser::UnresolvedOperand scale;
+ if (parser.parseOperand(scale) || parser.parseRParen())
+ return failure();
+
+ // Parse attributes
+ IntegerAttr blockSize, firstScaleLane, firstScaleByte;
+ if (parser.parseKeyword("blockSize") || parser.parseLParen() ||
+ parser.parseAttribute(blockSize, parser.getBuilder().getI32Type()) ||
+ parser.parseRParen())
+ return failure();
+
+ if (parser.parseKeyword("firstScaleLane") || parser.parseLParen() ||
+ parser.parseAttribute(firstScaleLane, parser.getBuilder().getI32Type()) ||
+ parser.parseRParen())
+ return failure();
+
+ if (parser.parseKeyword("firstScaleByte") || parser.parseLParen() ||
+ parser.parseAttribute(firstScaleByte, parser.getBuilder().getI32Type()) ||
+ parser.parseRParen())
+ return failure();
+
+ Type sourceType, resultType;
+ if (parser.parseColon() || parser.parseType(sourceType) ||
+ parser.parseKeyword("to") || parser.parseType(resultType))
+ return failure();
+
+ // Resolve operands with types
+ Type scaleType =
+ VectorType::get({4}, Float8E8M0FNUType::get(parser.getContext()));
+ if (parser.resolveOperand(source, sourceType, result.operands) ||
+ parser.resolveOperand(scale, scaleType, result.operands))
+ return failure();
+
+ result.addAttribute("blockSize", blockSize);
+ result.addAttribute("firstScaleLane", firstScaleLane);
+ result.addAttribute("firstScaleByte", firstScaleByte);
+
+ result.addTypes(resultType);
+ return success();
+}
+
+void ScaledExtPacked816Op::print(OpAsmPrinter &p) {
+ p << " ";
+ p.printOptionalAttrDict(
+ (*this)->getAttrs(),
+ /*elideAttrs=*/{"blockSize", "firstScaleLane", "firstScaleByte"});
+ p << " " << getSource();
+ p << " scale(" << getScale() << ")";
+ p << " blockSize(" << getBlockSize() << ")";
+ p << " firstScaleLane(" << getFirstScaleLane() << ")";
+ p << " firstScaleByte(" << getFirstScaleByte() << ")";
+ p << " : " << getSource().getType() << " to " << getRes().getType();
+}
+
//===----------------------------------------------------------------------===//
// WMMAOp
//===----------------------------------------------------------------------===//
diff --git a/mlir/test/Dialect/AMDGPU/ops.mlir b/mlir/test/Dialect/AMDGPU/ops.mlir
index 8f427e9d56f45..316a79c03aaba 100644
--- a/mlir/test/Dialect/AMDGPU/ops.mlir
+++ b/mlir/test/Dialect/AMDGPU/ops.mlir
@@ -221,6 +221,61 @@ func.func @scaled_ext_scalar_f4e2m1_bf16(%v: vector<2xf4E2M1FN>, %scale: f32) ->
func.return %ret : vector<2xbf16>
}
+// CHECK-LABEL: func.func @scaled_ext_packed816_fp4
+func.func @scaled_ext_packed816_fp4(%v: vector<8xf4E2M1FN>, %scale: vector<4xf8E8M0FNU>) -> (vector<8xf16>, vector<8xbf16>, vector<8xf32>) {
+ // CHECK: amdgpu.scaled_ext_packed816
+ %ret0 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(32) firstScaleLane(0) firstScaleByte(0) : vector<8xf4E2M1FN> to vector<8xf16>
+ // CHECK: amdgpu.scaled_ext_packed816
+ %ret1 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(32) firstScaleLane(0) firstScaleByte(0) : vector<8xf4E2M1FN> to vector<8xbf16>
+ // CHECK: amdgpu.scaled_ext_packed816
+ %ret2 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(32) firstScaleLane(0) firstScaleByte(0) : vector<8xf4E2M1FN> to vector<8xf32>
+ func.return %ret0, %ret1, %ret2 : vector<8xf16>, vector<8xbf16>, vector<8xf32>
+}
+
+// CHECK-LABEL: func.func @scaled_ext_packed816_fp8
+func.func @scaled_ext_packed816_fp8(%v: vector<8xf8E4M3FN>, %scale: vector<4xf8E8M0FNU>) -> (vector<8xf16>, vector<8xbf16>, vector<8xf32>) {
+ // CHECK: amdgpu.scaled_ext_packed816
+ %ret0 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(32) firstScaleLane(0) firstScaleByte(0) : vector<8xf8E4M3FN> to vector<8xf16>
+ // CHECK: amdgpu.scaled_ext_packed816
+ %ret1 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(32) firstScaleLane(0) firstScaleByte(0) : vector<8xf8E4M3FN> to vector<8xbf16>
+ // CHECK: amdgpu.scaled_ext_packed816
+ %ret2 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(32) firstScaleLane(0) firstScaleByte(0) : vector<8xf8E4M3FN> to vector<8xf32>
+ func.return %ret0, %ret1, %ret2 : vector<8xf16>, vector<8xbf16>, vector<8xf32>
+}
+
+// CHECK-LABEL: func.func @scaled_ext_packed816_bf8
+func.func @scaled_ext_packed816_bf8(%v: vector<8xf8E5M2>, %scale: vector<4xf8E8M0FNU>) -> (vector<8xf16>, vector<8xbf16>, vector<8xf32>) {
+ // CHECK: amdgpu.scaled_ext_packed816
+ %ret0 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(32) firstScaleLane(0) firstScaleByte(0) : vector<8xf8E5M2> to vector<8xf16>
+ // CHECK: amdgpu.scaled_ext_packed816
+ %ret1 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(32) firstScaleLane(0) firstScaleByte(0) : vector<8xf8E5M2> to vector<8xbf16>
+ // CHECK: amdgpu.scaled_ext_packed816
+ %ret2 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(32) firstScaleLane(0) firstScaleByte(0) : vector<8xf8E5M2> to vector<8xf32>
+ func.return %ret0, %ret1, %ret2 : vector<8xf16>, vector<8xbf16>, vector<8xf32>
+}
+
+// CHECK-LABEL: func.func @scaled_ext_packed816_fp6
+func.func @scaled_ext_packed816_fp6(%v: vector<16xf6E2M3FN>, %scale: vector<4xf8E8M0FNU>) -> (vector<16xf16>, vector<16xbf16>, vector<16xf32>) {
+ // CHECK: amdgpu.scaled_ext_packed816
+ %ret0 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(32) firstScaleLane(0) firstScaleByte(0) : vector<16xf6E2M3FN> to vector<16xf16>
+ // CHECK: amdgpu.scaled_ext_packed816
+ %ret1 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(32) firstScaleLane(0) firstScaleByte(0) : vector<16xf6E2M3FN> to vector<16xbf16>
+ // CHECK: amdgpu.scaled_ext_packed816
+ %ret2 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(32) firstScaleLane(0) firstScaleByte(0) : vector<16xf6E2M3FN> to vector<16xf32>
+ func.return %ret0, %ret1, %ret2 : vector<16xf16>, vector<16xbf16>, vector<16xf32>
+}
+
+// CHECK-LABEL: func.func @scaled_ext_packed816_bf16
+func.func @scaled_ext_packed816_bf16(%v: vector<16xf6E3M2FN>, %scale: vector<4xf8E8M0FNU>) -> (vector<16xf16>, vector<16xbf16>, vector<16xf32>) {
+ // CHECK: amdgpu.scaled_ext_packed816
+ %ret0 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(32) firstScaleLane(0) firstScaleByte(0) : vector<16xf6E3M2FN> to vector<16xf16>
+ // CHECK: amdgpu.scaled_ext_packed816
+ %ret1 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(32) firstScaleLane(0) firstScaleByte(0) : vector<16xf6E3M2FN> to vector<16xbf16>
+ // CHECK: amdgpu.scaled_ext_packed816
+ %ret2 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(32) firstScaleLane(0) firstScaleByte(0) : vector<16xf6E3M2FN> to vector<16xf32>
+ func.return %ret0, %ret1, %ret2 : vector<16xf16>, vector<16xbf16>, vector<16xf32>
+}
+
// CHECK-LABEL: func.func @packed_scaled_trunc_f8e4m3_f32
// CHECK: amdgpu.packed_scaled_trunc
func.func @packed_scaled_trunc_f8e4m3_f32(%v: vector<2xf32>, %scale: f32) -> vector<4xf8E4M3FN> {
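The lane/byte selection rules in the op description above can be sketched as a small model. This is an illustrative reading of the documented semantics, not code from the patch; the function name and return convention are hypothetical.

```python
def scale_source(lane, block_size, first_scale_lane, first_scale_byte):
    """Return (lane holding the scale vector, byte index) read by `lane`.

    Models the scaled_ext_packed816 description: lane L reads the
    L % 16'th scale vector from the wave half picked by firstScaleLane,
    and each half of the wave reads a different byte of that vector.
    """
    assert block_size in (16, 32)
    if block_size == 32:
        assert first_scale_byte in (0, 2)
        byte_offset = 0 if lane < 16 else 1
    else:  # block_size == 16
        assert first_scale_byte in (0, 1)
        byte_offset = 0 if lane < 16 else 2
    # The 16 scale vectors live in consecutive lanes starting at the
    # half of the wave selected by firstScaleLane.
    vector_lane = first_scale_lane * 16 + (lane % 16)
    return vector_lane, first_scale_byte + byte_offset
```

For example, with a block size of 32 and `firstScaleByte(2)`, lane 17 reads byte 3 of the scale vector held in lane 1.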
>From f8b11c4affb0d2667f7360582cea0da890803a22 Mon Sep 17 00:00:00 2001
From: Erick Ochoa <erick.ochoalopez at amd.com>
Date: Thu, 16 Oct 2025 10:32:12 -0400
Subject: [PATCH 02/13] Use TypesMatchWith and make the scale a constant type
---
mlir/include/mlir/Dialect/AMDGPU/IR/AMDGPU.td | 20 +++++-
mlir/lib/Dialect/AMDGPU/IR/AMDGPUDialect.cpp | 70 -------------------
2 files changed, 18 insertions(+), 72 deletions(-)
diff --git a/mlir/include/mlir/Dialect/AMDGPU/IR/AMDGPU.td b/mlir/include/mlir/Dialect/AMDGPU/IR/AMDGPU.td
index 5cb1486690464..6f9cf1825c5c2 100644
--- a/mlir/include/mlir/Dialect/AMDGPU/IR/AMDGPU.td
+++ b/mlir/include/mlir/Dialect/AMDGPU/IR/AMDGPU.td
@@ -117,7 +117,9 @@ def IsValidBlockSize: AttrConstraint<
"whose value is 16 or 32">;
def AMDGPU_ScaledExtPacked816Op
- : AMDGPU_Op<"scaled_ext_packed816", [Pure]>,
+ : AMDGPU_Op<"scaled_ext_packed816", [Pure, TypesMatchWith<"scale type is fixed",
+ "source", "scale",
+ "ScaledExtPacked816Op::getScaleType($_self.getContext())">]>,
Arguments<(
ins AnyTypeOf<[VectorOfLengthAndType<[8], [F4E2M1FN,F8E4M3FN,F8E5M2]>,
VectorOfLengthAndType<[16], [F6E2M3FN, F6E3M2FN]>]>:$source,
@@ -157,7 +159,21 @@ def AMDGPU_ScaledExtPacked816Op
one to choose portions of the matrix to convert.
}];
- let hasCustomAssemblyFormat = 1;
+ let assemblyFormat = [{
+ attr-dict $source
+ `scale` `(` $scale `)`
+ `blockSize` `(` $blockSize `)`
+ `firstScaleLane` `(` $firstScaleLane`)`
+ `firstScaleByte` `(` $firstScaleByte `)`
+ `:` type($source) `to` type($res)
+ }];
+
+ let extraClassDeclaration = [{
+ static Type getScaleType(MLIRContext *ctx) {
+ return VectorType::get(4, Float8E8M0FNUType::get(ctx));
+ }
+ }];
+
}
def AMDGPU_ScaledExtPackedOp
diff --git a/mlir/lib/Dialect/AMDGPU/IR/AMDGPUDialect.cpp b/mlir/lib/Dialect/AMDGPU/IR/AMDGPUDialect.cpp
index 33b0131bd4ca9..f405d0cc7aa02 100644
--- a/mlir/lib/Dialect/AMDGPU/IR/AMDGPUDialect.cpp
+++ b/mlir/lib/Dialect/AMDGPU/IR/AMDGPUDialect.cpp
@@ -338,76 +338,6 @@ void RawBufferAtomicCmpswapOp::getCanonicalizationPatterns(
context);
}
-//===----------------------------------------------------------------------===//
-// ScaledExtPacked816Op
-//===----------------------------------------------------------------------===//
-mlir::ParseResult ScaledExtPacked816Op::parse(mlir::OpAsmParser &parser,
- mlir::OperationState &result) {
- // Parse attributes
- if (parser.parseOptionalAttrDict(result.attributes))
- return failure();
-
- // Parse source operand
- OpAsmParser::UnresolvedOperand source;
- if (parser.parseOperand(source))
- return failure();
-
- if (parser.parseKeyword("scale") || parser.parseLParen())
- return failure();
- OpAsmParser::UnresolvedOperand scale;
- if (parser.parseOperand(scale) || parser.parseRParen())
- return failure();
-
- // Parse attributes
- IntegerAttr blockSize, firstScaleLane, firstScaleByte;
- if (parser.parseKeyword("blockSize") || parser.parseLParen() ||
- parser.parseAttribute(blockSize, parser.getBuilder().getI32Type()) ||
- parser.parseRParen())
- return failure();
-
- if (parser.parseKeyword("firstScaleLane") || parser.parseLParen() ||
- parser.parseAttribute(firstScaleLane, parser.getBuilder().getI32Type()) ||
- parser.parseRParen())
- return failure();
-
- if (parser.parseKeyword("firstScaleByte") || parser.parseLParen() ||
- parser.parseAttribute(firstScaleByte, parser.getBuilder().getI32Type()) ||
- parser.parseRParen())
- return failure();
-
- Type sourceType, resultType;
- if (parser.parseColon() || parser.parseType(sourceType) ||
- parser.parseKeyword("to") || parser.parseType(resultType))
- return failure();
-
- // Resolve operands with types
- Type scaleType =
- VectorType::get({4}, Float8E8M0FNUType::get(parser.getContext()));
- if (parser.resolveOperand(source, sourceType, result.operands) ||
- parser.resolveOperand(scale, scaleType, result.operands))
- return failure();
-
- result.addAttribute("blockSize", blockSize);
- result.addAttribute("firstScaleLane", firstScaleLane);
- result.addAttribute("firstScaleByte", firstScaleByte);
-
- result.addTypes(resultType);
- return success();
-}
-
-void ScaledExtPacked816Op::print(OpAsmPrinter &p) {
- p << " ";
- p.printOptionalAttrDict(
- (*this)->getAttrs(),
- /*elideAttrs=*/{"blockSize", "firstScaleLane", "firstScaleByte"});
- p << " " << getSource();
- p << " scale(" << getScale() << ")";
- p << " blockSize(" << getBlockSize() << ")";
- p << " firstScaleLane(" << getFirstScaleLane() << ")";
- p << " firstScaleByte(" << getFirstScaleByte() << ")";
- p << " : " << getSource().getType() << " to " << getRes().getType();
-}
-
//===----------------------------------------------------------------------===//
// WMMAOp
//===----------------------------------------------------------------------===//
>From e71f8d8c85dbe354b0e7142d44de02becfb7c813 Mon Sep 17 00:00:00 2001
From: Erick Ochoa <erick.ochoalopez at amd.com>
Date: Thu, 16 Oct 2025 10:51:36 -0400
Subject: [PATCH 03/13] Add note about availability on gfx1250+
---
mlir/include/mlir/Dialect/AMDGPU/IR/AMDGPU.td | 2 ++
1 file changed, 2 insertions(+)
diff --git a/mlir/include/mlir/Dialect/AMDGPU/IR/AMDGPU.td b/mlir/include/mlir/Dialect/AMDGPU/IR/AMDGPU.td
index 6f9cf1825c5c2..05525d3a061de 100644
--- a/mlir/include/mlir/Dialect/AMDGPU/IR/AMDGPU.td
+++ b/mlir/include/mlir/Dialect/AMDGPU/IR/AMDGPU.td
@@ -157,6 +157,8 @@ def AMDGPU_ScaledExtPacked816Op
     Note: the layout for the scales generally mirrors the one the WMMA
     instructions use for matrix scales. These selection operands allow
     one to choose portions of the matrix to convert.
+
+ Available on gfx1250+.
}];
let assemblyFormat = [{
>From 4f83cd9a8df19ac3ae4ce230c5b401d5f09b2911 Mon Sep 17 00:00:00 2001
From: Erick Ochoa <erick.ochoalopez at amd.com>
Date: Thu, 16 Oct 2025 14:16:50 -0400
Subject: [PATCH 04/13] Add verifier for blockSize and firstScaleByte
---
mlir/include/mlir/Dialect/AMDGPU/IR/AMDGPU.td | 2 ++
mlir/lib/Dialect/AMDGPU/IR/AMDGPUDialect.cpp | 13 +++++++++++++
mlir/test/Dialect/AMDGPU/invalid.mlir | 8 ++++++++
3 files changed, 23 insertions(+)
diff --git a/mlir/include/mlir/Dialect/AMDGPU/IR/AMDGPU.td b/mlir/include/mlir/Dialect/AMDGPU/IR/AMDGPU.td
index 05525d3a061de..54464997931d7 100644
--- a/mlir/include/mlir/Dialect/AMDGPU/IR/AMDGPU.td
+++ b/mlir/include/mlir/Dialect/AMDGPU/IR/AMDGPU.td
@@ -176,6 +176,8 @@ def AMDGPU_ScaledExtPacked816Op
}
}];
+ let hasVerifier = 1;
+
}
def AMDGPU_ScaledExtPackedOp
diff --git a/mlir/lib/Dialect/AMDGPU/IR/AMDGPUDialect.cpp b/mlir/lib/Dialect/AMDGPU/IR/AMDGPUDialect.cpp
index f405d0cc7aa02..06dbf7520c4fd 100644
--- a/mlir/lib/Dialect/AMDGPU/IR/AMDGPUDialect.cpp
+++ b/mlir/lib/Dialect/AMDGPU/IR/AMDGPUDialect.cpp
@@ -338,6 +338,19 @@ void RawBufferAtomicCmpswapOp::getCanonicalizationPatterns(
context);
}
+//===----------------------------------------------------------------------===//
+// ScaledExtPacked816Op
+//===----------------------------------------------------------------------===//
+LogicalResult ScaledExtPacked816Op::verify() {
+ int blockSize = getBlockSize();
+ assert((blockSize == 16 || blockSize == 32) && "invalid block size");
+ int firstScaleByte = getFirstScaleByte();
+ if (blockSize == 16 && firstScaleByte == 2) {
+ return emitOpError("blockSize of 16 cannot have firstScaleByte be 2.");
+ }
+ return success();
+}
+
//===----------------------------------------------------------------------===//
// WMMAOp
//===----------------------------------------------------------------------===//
diff --git a/mlir/test/Dialect/AMDGPU/invalid.mlir b/mlir/test/Dialect/AMDGPU/invalid.mlir
index 66e7dd4014af9..41a5c8dd26676 100644
--- a/mlir/test/Dialect/AMDGPU/invalid.mlir
+++ b/mlir/test/Dialect/AMDGPU/invalid.mlir
@@ -238,3 +238,11 @@ func.func @gather_to_lds_non_lds(%idx1 : index, %mem1 : memref<32xf16>, %mem2 :
amdgpu.gather_to_lds %mem1[%idx1], %mem2[%idx1] : vector<2xf16>, memref<32xf16>, memref<32xf16, strided<[?]>, #gpu.address_space<workgroup>>
func.return
}
+
+// -----
+
+func.func @amdgpu.scaled_ext_packed816_invalid_block_size_and_first_scale_byte(%v: vector<8xf8E5M2>, %scale: vector<4xf8E8M0FNU>) {
+ // expected-error at +1 {{'amdgpu.scaled_ext_packed816' op blockSize of 16 cannot have firstScaleByte be 2.}}
+ %ret0 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(16) firstScaleLane(0) firstScaleByte(2) : vector<8xf8E5M2> to vector<8xf16>
+ func.return
+}
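The attribute checks added in this patch can be summarized as a sketch. Names are illustrative; in the actual patch the bounds checks come from the TableGen `ConfinedAttr`/`IsValidBlockSize` constraints, and only the cross-attribute rule lives in the C++ `verify()` method.

```python
def is_valid_scaled_ext_packed816(block_size, first_scale_byte):
    """Return True iff the attribute combination would pass verification."""
    # IsValidBlockSize constraint on the blockSize attribute.
    if block_size not in (16, 32):
        return False
    # ConfinedAttr bounds on firstScaleByte.
    if not 0 <= first_scale_byte <= 2:
        return False
    # Cross-attribute rule from ScaledExtPacked816Op::verify():
    # blockSize of 16 cannot have firstScaleByte be 2.
    return not (block_size == 16 and first_scale_byte == 2)
```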
>From b7763efe01de8027a1930192fd62273237465804 Mon Sep 17 00:00:00 2001
From: Erick Ochoa <erick.ochoalopez at amd.com>
Date: Thu, 16 Oct 2025 16:05:34 -0400
Subject: [PATCH 05/13] Use ConfinedType
---
mlir/include/mlir/Dialect/AMDGPU/IR/AMDGPU.td | 19 +++++++++----------
1 file changed, 9 insertions(+), 10 deletions(-)
diff --git a/mlir/include/mlir/Dialect/AMDGPU/IR/AMDGPU.td b/mlir/include/mlir/Dialect/AMDGPU/IR/AMDGPU.td
index 54464997931d7..c8a27970a613f 100644
--- a/mlir/include/mlir/Dialect/AMDGPU/IR/AMDGPU.td
+++ b/mlir/include/mlir/Dialect/AMDGPU/IR/AMDGPU.td
@@ -116,14 +116,19 @@ def IsValidBlockSize: AttrConstraint<
CPred<"::llvm::cast<::mlir::IntegerAttr>($_self).getInt() == 16 || ::llvm::cast<::mlir::IntegerAttr>($_self).getInt() == 32">,
"whose value is 16 or 32">;
+
+def Vector4Scales :
+ ConfinedType<FixedVectorOfLengthAndType<[4], [F8E8M0FNU]>, [FixedVectorOfLengthAndType<[4], [F8E8M0FNU]>.predicate],
+ "vector of 4 F8E8M0FNU scales",
+ "::mlir::VectorType">,
+ BuildableType<"::mlir::VectorType::get({4}, $_builder.getType<::mlir::Float8E8M0FNUType>());">;
+
def AMDGPU_ScaledExtPacked816Op
- : AMDGPU_Op<"scaled_ext_packed816", [Pure, TypesMatchWith<"scale type is fixed",
- "source", "scale",
- "ScaledExtPacked816Op::getScaleType($_self.getContext())">]>,
+ : AMDGPU_Op<"scaled_ext_packed816", [Pure]>,
Arguments<(
ins AnyTypeOf<[VectorOfLengthAndType<[8], [F4E2M1FN,F8E4M3FN,F8E5M2]>,
VectorOfLengthAndType<[16], [F6E2M3FN, F6E3M2FN]>]>:$source,
- FixedVectorOfLengthAndType<[4], [F8E8M0FNU]>:$scale,
+ Vector4Scales:$scale,
ConfinedAttr<I32Attr, [IsValidBlockSize]>:$blockSize,
ConfinedAttr<I32Attr, [IntMinValue<0>, IntMaxValue<1>]>:$firstScaleLane,
ConfinedAttr<I32Attr, [IntMinValue<0>, IntMaxValue<2>]>:$firstScaleByte)>,
@@ -170,12 +175,6 @@ def AMDGPU_ScaledExtPacked816Op
`:` type($source) `to` type($res)
}];
- let extraClassDeclaration = [{
- static Type getScaleType(MLIRContext *ctx) {
- return VectorType::get(4, Float8E8M0FNUType::get(ctx));
- }
- }];
-
let hasVerifier = 1;
}
>From d50b6fe5f9b4b773dac981306c13fa464aa6cd2d Mon Sep 17 00:00:00 2001
From: Erick Ochoa <erick.ochoalopez at amd.com>
Date: Thu, 16 Oct 2025 16:08:34 -0400
Subject: [PATCH 06/13] Only use AllOfType
---
mlir/include/mlir/Dialect/AMDGPU/IR/AMDGPU.td | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mlir/include/mlir/Dialect/AMDGPU/IR/AMDGPU.td b/mlir/include/mlir/Dialect/AMDGPU/IR/AMDGPU.td
index c8a27970a613f..0c9a6570a173a 100644
--- a/mlir/include/mlir/Dialect/AMDGPU/IR/AMDGPU.td
+++ b/mlir/include/mlir/Dialect/AMDGPU/IR/AMDGPU.td
@@ -118,7 +118,7 @@ def IsValidBlockSize: AttrConstraint<
def Vector4Scales :
- ConfinedType<FixedVectorOfLengthAndType<[4], [F8E8M0FNU]>, [FixedVectorOfLengthAndType<[4], [F8E8M0FNU]>.predicate],
+ AllOfType<[FixedVectorOfLengthAndType<[4], [F8E8M0FNU]>],
"vector of 4 F8E8M0FNU scales",
"::mlir::VectorType">,
BuildableType<"::mlir::VectorType::get({4}, $_builder.getType<::mlir::Float8E8M0FNUType>());">;
>From 3cdb174797f65fcf67c34b244a12ffe70805f0ee Mon Sep 17 00:00:00 2001
From: Erick Ochoa <erick.ochoalopez at amd.com>
Date: Fri, 17 Oct 2025 10:10:59 -0400
Subject: [PATCH 07/13] Verify shape matches and better type constraint
---
mlir/include/mlir/Dialect/AMDGPU/IR/AMDGPU.td | 32 ++++++++-----------
mlir/include/mlir/IR/CommonTypeConstraints.td | 8 +++++
mlir/lib/Dialect/AMDGPU/IR/AMDGPUDialect.cpp | 1 +
mlir/test/Dialect/AMDGPU/invalid.mlir | 8 +++++
4 files changed, 31 insertions(+), 18 deletions(-)
diff --git a/mlir/include/mlir/Dialect/AMDGPU/IR/AMDGPU.td b/mlir/include/mlir/Dialect/AMDGPU/IR/AMDGPU.td
index 0c9a6570a173a..42965c2b16dca 100644
--- a/mlir/include/mlir/Dialect/AMDGPU/IR/AMDGPU.td
+++ b/mlir/include/mlir/Dialect/AMDGPU/IR/AMDGPU.td
@@ -113,32 +113,28 @@ def AMDGPU_ExtPackedFp8Op :
}
def IsValidBlockSize: AttrConstraint<
- CPred<"::llvm::cast<::mlir::IntegerAttr>($_self).getInt() == 16 || ::llvm::cast<::mlir::IntegerAttr>($_self).getInt() == 32">,
+ CPred<"::llvm::is_contained({16, 32}, ::llvm::cast<::mlir::IntegerAttr>($_self).getInt())">,
"whose value is 16 or 32">;
-
-def Vector4Scales :
- AllOfType<[FixedVectorOfLengthAndType<[4], [F8E8M0FNU]>],
- "vector of 4 F8E8M0FNU scales",
- "::mlir::VectorType">,
- BuildableType<"::mlir::VectorType::get({4}, $_builder.getType<::mlir::Float8E8M0FNUType>());">;
-
def AMDGPU_ScaledExtPacked816Op
- : AMDGPU_Op<"scaled_ext_packed816", [Pure]>,
+ : AMDGPU_Op<"scaled_ext_packed816", [Pure, AllShapesMatch<["source", "res"]>]>,
Arguments<(
- ins AnyTypeOf<[VectorOfLengthAndType<[8], [F4E2M1FN,F8E4M3FN,F8E5M2]>,
- VectorOfLengthAndType<[16], [F6E2M3FN, F6E3M2FN]>]>:$source,
- Vector4Scales:$scale,
+ ins AnyTypeOf<[FixedVectorOfShapeAndType<[8], F4E2M1FN>,
+ FixedVectorOfShapeAndType<[8], F8E4M3FN>,
+ FixedVectorOfShapeAndType<[8], F8E5M2>,
+ FixedVectorOfShapeAndType<[16], F6E2M3FN>,
+ FixedVectorOfShapeAndType<[16], F6E3M2FN>]>:$source,
+ FixedVectorOfShapeAndType<[4], F8E8M0FNU>:$scale,
ConfinedAttr<I32Attr, [IsValidBlockSize]>:$blockSize,
ConfinedAttr<I32Attr, [IntMinValue<0>, IntMaxValue<1>]>:$firstScaleLane,
ConfinedAttr<I32Attr, [IntMinValue<0>, IntMaxValue<2>]>:$firstScaleByte)>,
Results<(
- outs AnyTypeOf<[FixedVectorOfLengthAndType<[8], [F32]>,
- FixedVectorOfLengthAndType<[8], [F16]>,
- FixedVectorOfLengthAndType<[8], [BF16]>,
- FixedVectorOfLengthAndType<[16], [F32]>,
- FixedVectorOfLengthAndType<[16], [F16]>,
- FixedVectorOfLengthAndType<[16], [BF16]>]>:$res)> {
+ outs AnyTypeOf<[FixedVectorOfShapeAndType<[8], F32>,
+ FixedVectorOfShapeAndType<[8], F16>,
+ FixedVectorOfShapeAndType<[8], BF16>,
+ FixedVectorOfShapeAndType<[16], F32>,
+ FixedVectorOfShapeAndType<[16], F16>,
+ FixedVectorOfShapeAndType<[16], BF16>]>:$res)> {
let summary = "Extend a vector of packed floating point values";
diff --git a/mlir/include/mlir/IR/CommonTypeConstraints.td b/mlir/include/mlir/IR/CommonTypeConstraints.td
index 6b4e3dd603198..8427ba560c8aa 100644
--- a/mlir/include/mlir/IR/CommonTypeConstraints.td
+++ b/mlir/include/mlir/IR/CommonTypeConstraints.td
@@ -623,6 +623,14 @@ class VectorOfLengthAndType<list<int> allowedLengths,
VectorOfNonZeroRankOf<allowedTypes>.summary # VectorOfLength<allowedLengths>.summary,
"::mlir::VectorType">;
+class FixedVectorOfShapeAndType<list<int> shape, Type elType>: ShapedContainerType<
+ [elType],
+ And<[IsVectorOfShape<shape>, IsFixedVectorOfAnyRankTypePred]>,
+ "vector<" # !interleave(shape, "x") # "x" # elType # ">",
+ "::mlir::VectorType">,
+ BuildableType<"::mlir::VectorType::get({" # !interleave(shape, " ,") # "} , " # elType.builderCall # " );">;
+
+
// Any fixed-length vector where the number of elements is from the given
// `allowedLengths` list and the type is from the given `allowedTypes` list
class FixedVectorOfLengthAndType<list<int> allowedLengths,
diff --git a/mlir/lib/Dialect/AMDGPU/IR/AMDGPUDialect.cpp b/mlir/lib/Dialect/AMDGPU/IR/AMDGPUDialect.cpp
index 06dbf7520c4fd..d778142d979fe 100644
--- a/mlir/lib/Dialect/AMDGPU/IR/AMDGPUDialect.cpp
+++ b/mlir/lib/Dialect/AMDGPU/IR/AMDGPUDialect.cpp
@@ -348,6 +348,7 @@ LogicalResult ScaledExtPacked816Op::verify() {
if (blockSize == 16 && firstScaleByte == 2) {
return emitOpError("blockSize of 16 cannot have firstScaleByte be 2.");
}
+
return success();
}
diff --git a/mlir/test/Dialect/AMDGPU/invalid.mlir b/mlir/test/Dialect/AMDGPU/invalid.mlir
index 41a5c8dd26676..58c5af3a2a638 100644
--- a/mlir/test/Dialect/AMDGPU/invalid.mlir
+++ b/mlir/test/Dialect/AMDGPU/invalid.mlir
@@ -246,3 +246,11 @@ func.func @amdgpu.scaled_ext_packed816_invalid_block_size_and_first_scale_byte(%
%ret0 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(16) firstScaleLane(0) firstScaleByte(2) : vector<8xf8E5M2> to vector<8xf16>
func.return
}
+
+// -----
+
+func.func @amdgpu.scaled_ext_packed816_invalid_input_output_sizes(%v: vector<8xf8E5M2>, %scale: vector<4xf8E8M0FNU>) {
+ // expected-error at +1 {{'amdgpu.scaled_ext_packed816' op failed to verify that all of {source, res} have same shape}}
+ %ret0 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(16) firstScaleLane(0) firstScaleByte(0) : vector<8xf8E5M2> to vector<16xf16>
+ func.return
+}
\ No newline at end of file
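The new `FixedVectorOfShapeAndType` constraint in patch 07 builds its summary string with `!interleave`. A rough model of that string construction, for illustration only:

```python
def fixed_vector_summary(shape, elt):
    """Model of the summary FixedVectorOfShapeAndType produces, e.g.
    "vector<4xf8E8M0FNU>"; mirrors !interleave(shape, "x") in TableGen."""
    return "vector<" + "x".join(str(d) for d in shape) + "x" + elt + ">"
```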
>From 30dcfea0476a1c0bcbbe918d8b065510459c574c Mon Sep 17 00:00:00 2001
From: Erick Ochoa <erick.ochoalopez at amd.com>
Date: Fri, 17 Oct 2025 10:58:39 -0400
Subject: [PATCH 08/13] Added scale type to the assembly format
---
mlir/include/mlir/Dialect/AMDGPU/IR/AMDGPU.td | 2 +-
mlir/test/Dialect/AMDGPU/invalid.mlir | 6 ++--
mlir/test/Dialect/AMDGPU/ops.mlir | 30 +++++++++----------
3 files changed, 19 insertions(+), 19 deletions(-)
diff --git a/mlir/include/mlir/Dialect/AMDGPU/IR/AMDGPU.td b/mlir/include/mlir/Dialect/AMDGPU/IR/AMDGPU.td
index 42965c2b16dca..39ada4e491908 100644
--- a/mlir/include/mlir/Dialect/AMDGPU/IR/AMDGPU.td
+++ b/mlir/include/mlir/Dialect/AMDGPU/IR/AMDGPU.td
@@ -164,7 +164,7 @@ def AMDGPU_ScaledExtPacked816Op
let assemblyFormat = [{
attr-dict $source
- `scale` `(` $scale `)`
+ `scale` `(` $scale `:` type($scale) `)`
`blockSize` `(` $blockSize `)`
`firstScaleLane` `(` $firstScaleLane`)`
`firstScaleByte` `(` $firstScaleByte `)`
diff --git a/mlir/test/Dialect/AMDGPU/invalid.mlir b/mlir/test/Dialect/AMDGPU/invalid.mlir
index 58c5af3a2a638..0d29c6a41b307 100644
--- a/mlir/test/Dialect/AMDGPU/invalid.mlir
+++ b/mlir/test/Dialect/AMDGPU/invalid.mlir
@@ -243,7 +243,7 @@ func.func @gather_to_lds_non_lds(%idx1 : index, %mem1 : memref<32xf16>, %mem2 :
func.func @amdgpu.scaled_ext_packed816_invalid_block_size_and_first_scale_byte(%v: vector<8xf8E5M2>, %scale: vector<4xf8E8M0FNU>) {
// expected-error at +1 {{'amdgpu.scaled_ext_packed816' op blockSize of 16 cannot have firstScaleByte be 2.}}
- %ret0 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(16) firstScaleLane(0) firstScaleByte(2) : vector<8xf8E5M2> to vector<8xf16>
+ %ret0 = amdgpu.scaled_ext_packed816 %v scale(%scale : vector<4xf8E8M0FNU>) blockSize(16) firstScaleLane(0) firstScaleByte(2) : vector<8xf8E5M2> to vector<8xf16>
func.return
}
@@ -251,6 +251,6 @@ func.func @amdgpu.scaled_ext_packed816_invalid_block_size_and_first_scale_byte(%
func.func @amdgpu.scaled_ext_packed816_invalid_input_output_sizes(%v: vector<8xf8E5M2>, %scale: vector<4xf8E8M0FNU>) {
// expected-error at +1 {{'amdgpu.scaled_ext_packed816' op failed to verify that all of {source, res} have same shape}}
- %ret0 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(16) firstScaleLane(0) firstScaleByte(0) : vector<8xf8E5M2> to vector<16xf16>
+ %ret0 = amdgpu.scaled_ext_packed816 %v scale(%scale : vector<4xf8E8M0FNU>) blockSize(16) firstScaleLane(0) firstScaleByte(0) : vector<8xf8E5M2> to vector<16xf16>
func.return
-}
\ No newline at end of file
+}
diff --git a/mlir/test/Dialect/AMDGPU/ops.mlir b/mlir/test/Dialect/AMDGPU/ops.mlir
index 316a79c03aaba..7192dbbabd06b 100644
--- a/mlir/test/Dialect/AMDGPU/ops.mlir
+++ b/mlir/test/Dialect/AMDGPU/ops.mlir
@@ -224,55 +224,55 @@ func.func @scaled_ext_scalar_f4e2m1_bf16(%v: vector<2xf4E2M1FN>, %scale: f32) ->
// CHECK-LABEL: func.func @scaled_ext_packed816_fp4
func.func @scaled_ext_packed816_fp4(%v: vector<8xf4E2M1FN>, %scale: vector<4xf8E8M0FNU>) -> (vector<8xf16>, vector<8xbf16>, vector<8xf32>) {
// CHECK: amdgpu.scaled_ext_packed816
- %ret0 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(32) firstScaleLane(0) firstScaleByte(0) : vector<8xf4E2M1FN> to vector<8xf16>
+ %ret0 = amdgpu.scaled_ext_packed816 %v scale(%scale : vector<4xf8E8M0FNU>) blockSize(32) firstScaleLane(0) firstScaleByte(0) : vector<8xf4E2M1FN> to vector<8xf16>
// CHECK: amdgpu.scaled_ext_packed816
- %ret1 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(32) firstScaleLane(0) firstScaleByte(0) : vector<8xf4E2M1FN> to vector<8xbf16>
+ %ret1 = amdgpu.scaled_ext_packed816 %v scale(%scale : vector<4xf8E8M0FNU>) blockSize(32) firstScaleLane(0) firstScaleByte(0) : vector<8xf4E2M1FN> to vector<8xbf16>
// CHECK: amdgpu.scaled_ext_packed816
- %ret2 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(32) firstScaleLane(0) firstScaleByte(0) : vector<8xf4E2M1FN> to vector<8xf32>
+ %ret2 = amdgpu.scaled_ext_packed816 %v scale(%scale : vector<4xf8E8M0FNU>) blockSize(32) firstScaleLane(0) firstScaleByte(0) : vector<8xf4E2M1FN> to vector<8xf32>
func.return %ret0, %ret1, %ret2 : vector<8xf16>, vector<8xbf16>, vector<8xf32>
}
// CHECK-LABEL: func.func @scaled_ext_packed816_fp8
func.func @scaled_ext_packed816_fp8(%v: vector<8xf8E4M3FN>, %scale: vector<4xf8E8M0FNU>) -> (vector<8xf16>, vector<8xbf16>, vector<8xf32>) {
// CHECK: amdgpu.scaled_ext_packed816
- %ret0 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(32) firstScaleLane(0) firstScaleByte(0) : vector<8xf8E4M3FN> to vector<8xf16>
+ %ret0 = amdgpu.scaled_ext_packed816 %v scale(%scale : vector<4xf8E8M0FNU>) blockSize(32) firstScaleLane(0) firstScaleByte(0) : vector<8xf8E4M3FN> to vector<8xf16>
// CHECK: amdgpu.scaled_ext_packed816
- %ret1 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(32) firstScaleLane(0) firstScaleByte(0) : vector<8xf8E4M3FN> to vector<8xbf16>
+ %ret1 = amdgpu.scaled_ext_packed816 %v scale(%scale : vector<4xf8E8M0FNU>) blockSize(32) firstScaleLane(0) firstScaleByte(0) : vector<8xf8E4M3FN> to vector<8xbf16>
// CHECK: amdgpu.scaled_ext_packed816
- %ret2 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(32) firstScaleLane(0) firstScaleByte(0) : vector<8xf8E4M3FN> to vector<8xf32>
+ %ret2 = amdgpu.scaled_ext_packed816 %v scale(%scale : vector<4xf8E8M0FNU>) blockSize(32) firstScaleLane(0) firstScaleByte(0) : vector<8xf8E4M3FN> to vector<8xf32>
func.return %ret0, %ret1, %ret2 : vector<8xf16>, vector<8xbf16>, vector<8xf32>
}
// CHECK-LABEL: func.func @scaled_ext_packed816_bf8
func.func @scaled_ext_packed816_bf8(%v: vector<8xf8E5M2>, %scale: vector<4xf8E8M0FNU>) -> (vector<8xf16>, vector<8xbf16>, vector<8xf32>) {
// CHECK: amdgpu.scaled_ext_packed816
- %ret0 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(32) firstScaleLane(0) firstScaleByte(0) : vector<8xf8E5M2> to vector<8xf16>
+ %ret0 = amdgpu.scaled_ext_packed816 %v scale(%scale : vector<4xf8E8M0FNU>) blockSize(32) firstScaleLane(0) firstScaleByte(0) : vector<8xf8E5M2> to vector<8xf16>
// CHECK: amdgpu.scaled_ext_packed816
- %ret1 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(32) firstScaleLane(0) firstScaleByte(0) : vector<8xf8E5M2> to vector<8xbf16>
+ %ret1 = amdgpu.scaled_ext_packed816 %v scale(%scale : vector<4xf8E8M0FNU>) blockSize(32) firstScaleLane(0) firstScaleByte(0) : vector<8xf8E5M2> to vector<8xbf16>
// CHECK: amdgpu.scaled_ext_packed816
- %ret2 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(32) firstScaleLane(0) firstScaleByte(0) : vector<8xf8E5M2> to vector<8xf32>
+ %ret2 = amdgpu.scaled_ext_packed816 %v scale(%scale : vector<4xf8E8M0FNU>) blockSize(32) firstScaleLane(0) firstScaleByte(0) : vector<8xf8E5M2> to vector<8xf32>
func.return %ret0, %ret1, %ret2 : vector<8xf16>, vector<8xbf16>, vector<8xf32>
}
// CHECK-LABEL: func.func @scaled_ext_packed816_fp6
func.func @scaled_ext_packed816_fp6(%v: vector<16xf6E2M3FN>, %scale: vector<4xf8E8M0FNU>) -> (vector<16xf16>, vector<16xbf16>, vector<16xf32>) {
// CHECK: amdgpu.scaled_ext_packed816
- %ret0 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(32) firstScaleLane(0) firstScaleByte(0) : vector<16xf6E2M3FN> to vector<16xf16>
+ %ret0 = amdgpu.scaled_ext_packed816 %v scale(%scale : vector<4xf8E8M0FNU>) blockSize(32) firstScaleLane(0) firstScaleByte(0) : vector<16xf6E2M3FN> to vector<16xf16>
// CHECK: amdgpu.scaled_ext_packed816
- %ret1 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(32) firstScaleLane(0) firstScaleByte(0) : vector<16xf6E2M3FN> to vector<16xbf16>
+ %ret1 = amdgpu.scaled_ext_packed816 %v scale(%scale : vector<4xf8E8M0FNU>) blockSize(32) firstScaleLane(0) firstScaleByte(0) : vector<16xf6E2M3FN> to vector<16xbf16>
// CHECK: amdgpu.scaled_ext_packed816
- %ret2 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(32) firstScaleLane(0) firstScaleByte(0) : vector<16xf6E2M3FN> to vector<16xf32>
+ %ret2 = amdgpu.scaled_ext_packed816 %v scale(%scale : vector<4xf8E8M0FNU>) blockSize(32) firstScaleLane(0) firstScaleByte(0) : vector<16xf6E2M3FN> to vector<16xf32>
func.return %ret0, %ret1, %ret2 : vector<16xf16>, vector<16xbf16>, vector<16xf32>
}
// CHECK-LABEL: func.func @scaled_ext_packed816_bf16
func.func @scaled_ext_packed816_bf16(%v: vector<16xf6E3M2FN>, %scale: vector<4xf8E8M0FNU>) -> (vector<16xf16>, vector<16xbf16>, vector<16xf32>) {
// CHECK: amdgpu.scaled_ext_packed816
- %ret0 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(32) firstScaleLane(0) firstScaleByte(0) : vector<16xf6E3M2FN> to vector<16xf16>
+ %ret0 = amdgpu.scaled_ext_packed816 %v scale(%scale : vector<4xf8E8M0FNU>) blockSize(32) firstScaleLane(0) firstScaleByte(0) : vector<16xf6E3M2FN> to vector<16xf16>
// CHECK: amdgpu.scaled_ext_packed816
- %ret1 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(32) firstScaleLane(0) firstScaleByte(0) : vector<16xf6E3M2FN> to vector<16xbf16>
+ %ret1 = amdgpu.scaled_ext_packed816 %v scale(%scale : vector<4xf8E8M0FNU>) blockSize(32) firstScaleLane(0) firstScaleByte(0) : vector<16xf6E3M2FN> to vector<16xbf16>
// CHECK: amdgpu.scaled_ext_packed816
- %ret2 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(32) firstScaleLane(0) firstScaleByte(0) : vector<16xf6E3M2FN> to vector<16xf32>
+ %ret2 = amdgpu.scaled_ext_packed816 %v scale(%scale : vector<4xf8E8M0FNU>) blockSize(32) firstScaleLane(0) firstScaleByte(0) : vector<16xf6E3M2FN> to vector<16xf32>
func.return %ret0, %ret1, %ret2 : vector<16xf16>, vector<16xbf16>, vector<16xf32>
}
>From 67261e52a578ed51a697ea43f225dd969593c3f4 Mon Sep 17 00:00:00 2001
From: Erick Ochoa <erick.ochoalopez at amd.com>
Date: Fri, 17 Oct 2025 11:11:42 -0400
Subject: [PATCH 09/13] Use functional-type
---
mlir/include/mlir/Dialect/AMDGPU/IR/AMDGPU.td | 4 +--
mlir/test/Dialect/AMDGPU/invalid.mlir | 4 +--
mlir/test/Dialect/AMDGPU/ops.mlir | 30 +++++++++----------
3 files changed, 19 insertions(+), 19 deletions(-)
diff --git a/mlir/include/mlir/Dialect/AMDGPU/IR/AMDGPU.td b/mlir/include/mlir/Dialect/AMDGPU/IR/AMDGPU.td
index 39ada4e491908..58baa28cf1e39 100644
--- a/mlir/include/mlir/Dialect/AMDGPU/IR/AMDGPU.td
+++ b/mlir/include/mlir/Dialect/AMDGPU/IR/AMDGPU.td
@@ -164,11 +164,11 @@ def AMDGPU_ScaledExtPacked816Op
let assemblyFormat = [{
attr-dict $source
- `scale` `(` $scale `:` type($scale) `)`
+ `scale` `(` $scale `)`
`blockSize` `(` $blockSize `)`
`firstScaleLane` `(` $firstScaleLane`)`
`firstScaleByte` `(` $firstScaleByte `)`
- `:` type($source) `to` type($res)
+ `:` functional-type(operands, results)
}];
let hasVerifier = 1;
diff --git a/mlir/test/Dialect/AMDGPU/invalid.mlir b/mlir/test/Dialect/AMDGPU/invalid.mlir
index 0d29c6a41b307..af6f700a03295 100644
--- a/mlir/test/Dialect/AMDGPU/invalid.mlir
+++ b/mlir/test/Dialect/AMDGPU/invalid.mlir
@@ -243,7 +243,7 @@ func.func @gather_to_lds_non_lds(%idx1 : index, %mem1 : memref<32xf16>, %mem2 :
func.func @amdgpu.scaled_ext_packed816_invalid_block_size_and_first_scale_byte(%v: vector<8xf8E5M2>, %scale: vector<4xf8E8M0FNU>) {
// expected-error at +1 {{'amdgpu.scaled_ext_packed816' op blockSize of 16 cannot have firstScaleByte be 2.}}
- %ret0 = amdgpu.scaled_ext_packed816 %v scale(%scale : vector<4xf8E8M0FNU>) blockSize(16) firstScaleLane(0) firstScaleByte(2) : vector<8xf8E5M2> to vector<8xf16>
+ %ret0 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(16) firstScaleLane(0) firstScaleByte(2) : (vector<8xf8E5M2>, vector<4xf8E8M0FNU>) -> vector<8xf16>
func.return
}
@@ -251,6 +251,6 @@ func.func @amdgpu.scaled_ext_packed816_invalid_block_size_and_first_scale_byte(%
func.func @amdgpu.scaled_ext_packed816_invalid_input_output_sizes(%v: vector<8xf8E5M2>, %scale: vector<4xf8E8M0FNU>) {
// expected-error at +1 {{'amdgpu.scaled_ext_packed816' op failed to verify that all of {source, res} have same shape}}
- %ret0 = amdgpu.scaled_ext_packed816 %v scale(%scale : vector<4xf8E8M0FNU>) blockSize(16) firstScaleLane(0) firstScaleByte(0) : vector<8xf8E5M2> to vector<16xf16>
+ %ret0 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(16) firstScaleLane(0) firstScaleByte(0) : (vector<8xf8E5M2>, vector<4xf8E8M0FNU>) -> vector<16xf16>
func.return
}
diff --git a/mlir/test/Dialect/AMDGPU/ops.mlir b/mlir/test/Dialect/AMDGPU/ops.mlir
index 7192dbbabd06b..f96e14b592927 100644
--- a/mlir/test/Dialect/AMDGPU/ops.mlir
+++ b/mlir/test/Dialect/AMDGPU/ops.mlir
@@ -224,55 +224,55 @@ func.func @scaled_ext_scalar_f4e2m1_bf16(%v: vector<2xf4E2M1FN>, %scale: f32) ->
// CHECK-LABEL: func.func @scaled_ext_packed816_fp4
func.func @scaled_ext_packed816_fp4(%v: vector<8xf4E2M1FN>, %scale: vector<4xf8E8M0FNU>) -> (vector<8xf16>, vector<8xbf16>, vector<8xf32>) {
// CHECK: amdgpu.scaled_ext_packed816
- %ret0 = amdgpu.scaled_ext_packed816 %v scale(%scale : vector<4xf8E8M0FNU>) blockSize(32) firstScaleLane(0) firstScaleByte(0) : vector<8xf4E2M1FN> to vector<8xf16>
+ %ret0 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(32) firstScaleLane(0) firstScaleByte(0) : (vector<8xf4E2M1FN>, vector<4xf8E8M0FNU>) -> vector<8xf16>
// CHECK: amdgpu.scaled_ext_packed816
- %ret1 = amdgpu.scaled_ext_packed816 %v scale(%scale : vector<4xf8E8M0FNU>) blockSize(32) firstScaleLane(0) firstScaleByte(0) : vector<8xf4E2M1FN> to vector<8xbf16>
+ %ret1 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(32) firstScaleLane(0) firstScaleByte(0) : (vector<8xf4E2M1FN>, vector<4xf8E8M0FNU>) -> vector<8xbf16>
// CHECK: amdgpu.scaled_ext_packed816
- %ret2 = amdgpu.scaled_ext_packed816 %v scale(%scale : vector<4xf8E8M0FNU>) blockSize(32) firstScaleLane(0) firstScaleByte(0) : vector<8xf4E2M1FN> to vector<8xf32>
+ %ret2 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(32) firstScaleLane(0) firstScaleByte(0) : (vector<8xf4E2M1FN>, vector<4xf8E8M0FNU>) -> vector<8xf32>
func.return %ret0, %ret1, %ret2 : vector<8xf16>, vector<8xbf16>, vector<8xf32>
}
// CHECK-LABEL: func.func @scaled_ext_packed816_fp8
func.func @scaled_ext_packed816_fp8(%v: vector<8xf8E4M3FN>, %scale: vector<4xf8E8M0FNU>) -> (vector<8xf16>, vector<8xbf16>, vector<8xf32>) {
// CHECK: amdgpu.scaled_ext_packed816
- %ret0 = amdgpu.scaled_ext_packed816 %v scale(%scale : vector<4xf8E8M0FNU>) blockSize(32) firstScaleLane(0) firstScaleByte(0) : vector<8xf8E4M3FN> to vector<8xf16>
+ %ret0 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(32) firstScaleLane(0) firstScaleByte(0) : (vector<8xf8E4M3FN>, vector<4xf8E8M0FNU>) -> vector<8xf16>
// CHECK: amdgpu.scaled_ext_packed816
- %ret1 = amdgpu.scaled_ext_packed816 %v scale(%scale : vector<4xf8E8M0FNU>) blockSize(32) firstScaleLane(0) firstScaleByte(0) : vector<8xf8E4M3FN> to vector<8xbf16>
+ %ret1 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(32) firstScaleLane(0) firstScaleByte(0) : (vector<8xf8E4M3FN>, vector<4xf8E8M0FNU>) -> vector<8xbf16>
// CHECK: amdgpu.scaled_ext_packed816
- %ret2 = amdgpu.scaled_ext_packed816 %v scale(%scale : vector<4xf8E8M0FNU>) blockSize(32) firstScaleLane(0) firstScaleByte(0) : vector<8xf8E4M3FN> to vector<8xf32>
+ %ret2 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(32) firstScaleLane(0) firstScaleByte(0) : (vector<8xf8E4M3FN>, vector<4xf8E8M0FNU>) -> vector<8xf32>
func.return %ret0, %ret1, %ret2 : vector<8xf16>, vector<8xbf16>, vector<8xf32>
}
// CHECK-LABEL: func.func @scaled_ext_packed816_bf8
func.func @scaled_ext_packed816_bf8(%v: vector<8xf8E5M2>, %scale: vector<4xf8E8M0FNU>) -> (vector<8xf16>, vector<8xbf16>, vector<8xf32>) {
// CHECK: amdgpu.scaled_ext_packed816
- %ret0 = amdgpu.scaled_ext_packed816 %v scale(%scale : vector<4xf8E8M0FNU>) blockSize(32) firstScaleLane(0) firstScaleByte(0) : vector<8xf8E5M2> to vector<8xf16>
+ %ret0 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(32) firstScaleLane(0) firstScaleByte(0) : (vector<8xf8E5M2>, vector<4xf8E8M0FNU>) -> vector<8xf16>
// CHECK: amdgpu.scaled_ext_packed816
- %ret1 = amdgpu.scaled_ext_packed816 %v scale(%scale : vector<4xf8E8M0FNU>) blockSize(32) firstScaleLane(0) firstScaleByte(0) : vector<8xf8E5M2> to vector<8xbf16>
+ %ret1 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(32) firstScaleLane(0) firstScaleByte(0) : (vector<8xf8E5M2>, vector<4xf8E8M0FNU>) -> vector<8xbf16>
// CHECK: amdgpu.scaled_ext_packed816
- %ret2 = amdgpu.scaled_ext_packed816 %v scale(%scale : vector<4xf8E8M0FNU>) blockSize(32) firstScaleLane(0) firstScaleByte(0) : vector<8xf8E5M2> to vector<8xf32>
+ %ret2 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(32) firstScaleLane(0) firstScaleByte(0) : (vector<8xf8E5M2>, vector<4xf8E8M0FNU>) -> vector<8xf32>
func.return %ret0, %ret1, %ret2 : vector<8xf16>, vector<8xbf16>, vector<8xf32>
}
// CHECK-LABEL: func.func @scaled_ext_packed816_fp6
func.func @scaled_ext_packed816_fp6(%v: vector<16xf6E2M3FN>, %scale: vector<4xf8E8M0FNU>) -> (vector<16xf16>, vector<16xbf16>, vector<16xf32>) {
// CHECK: amdgpu.scaled_ext_packed816
- %ret0 = amdgpu.scaled_ext_packed816 %v scale(%scale : vector<4xf8E8M0FNU>) blockSize(32) firstScaleLane(0) firstScaleByte(0) : vector<16xf6E2M3FN> to vector<16xf16>
+ %ret0 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(32) firstScaleLane(0) firstScaleByte(0) : (vector<16xf6E2M3FN>, vector<4xf8E8M0FNU>) -> vector<16xf16>
// CHECK: amdgpu.scaled_ext_packed816
- %ret1 = amdgpu.scaled_ext_packed816 %v scale(%scale : vector<4xf8E8M0FNU>) blockSize(32) firstScaleLane(0) firstScaleByte(0) : vector<16xf6E2M3FN> to vector<16xbf16>
+ %ret1 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(32) firstScaleLane(0) firstScaleByte(0) : (vector<16xf6E2M3FN>, vector<4xf8E8M0FNU>) -> vector<16xbf16>
// CHECK: amdgpu.scaled_ext_packed816
- %ret2 = amdgpu.scaled_ext_packed816 %v scale(%scale : vector<4xf8E8M0FNU>) blockSize(32) firstScaleLane(0) firstScaleByte(0) : vector<16xf6E2M3FN> to vector<16xf32>
+ %ret2 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(32) firstScaleLane(0) firstScaleByte(0) : (vector<16xf6E2M3FN>, vector<4xf8E8M0FNU>) -> vector<16xf32>
func.return %ret0, %ret1, %ret2 : vector<16xf16>, vector<16xbf16>, vector<16xf32>
}
// CHECK-LABEL: func.func @scaled_ext_packed816_bf16
func.func @scaled_ext_packed816_bf16(%v: vector<16xf6E3M2FN>, %scale: vector<4xf8E8M0FNU>) -> (vector<16xf16>, vector<16xbf16>, vector<16xf32>) {
// CHECK: amdgpu.scaled_ext_packed816
- %ret0 = amdgpu.scaled_ext_packed816 %v scale(%scale : vector<4xf8E8M0FNU>) blockSize(32) firstScaleLane(0) firstScaleByte(0) : vector<16xf6E3M2FN> to vector<16xf16>
+ %ret0 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(32) firstScaleLane(0) firstScaleByte(0) : (vector<16xf6E3M2FN>, vector<4xf8E8M0FNU>) -> vector<16xf16>
// CHECK: amdgpu.scaled_ext_packed816
- %ret1 = amdgpu.scaled_ext_packed816 %v scale(%scale : vector<4xf8E8M0FNU>) blockSize(32) firstScaleLane(0) firstScaleByte(0) : vector<16xf6E3M2FN> to vector<16xbf16>
+ %ret1 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(32) firstScaleLane(0) firstScaleByte(0) : (vector<16xf6E3M2FN>, vector<4xf8E8M0FNU>) -> vector<16xbf16>
// CHECK: amdgpu.scaled_ext_packed816
- %ret2 = amdgpu.scaled_ext_packed816 %v scale(%scale : vector<4xf8E8M0FNU>) blockSize(32) firstScaleLane(0) firstScaleByte(0) : vector<16xf6E3M2FN> to vector<16xf32>
+ %ret2 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(32) firstScaleLane(0) firstScaleByte(0) : (vector<16xf6E3M2FN>, vector<4xf8E8M0FNU>) -> vector<16xf32>
func.return %ret0, %ret1, %ret2 : vector<16xf16>, vector<16xbf16>, vector<16xf32>
}
>From fadf035fb4b99fb51271c455692c62814d1a22a9 Mon Sep 17 00:00:00 2001
From: Erick Ochoa <erick.ochoalopez at amd.com>
Date: Fri, 17 Oct 2025 11:18:17 -0400
Subject: [PATCH 10/13] Use : source_ty, scale_ty -> res_ty
---
mlir/include/mlir/Dialect/AMDGPU/IR/AMDGPU.td | 2 +-
mlir/test/Dialect/AMDGPU/invalid.mlir | 4 +--
mlir/test/Dialect/AMDGPU/ops.mlir | 30 +++++++++----------
3 files changed, 18 insertions(+), 18 deletions(-)
diff --git a/mlir/include/mlir/Dialect/AMDGPU/IR/AMDGPU.td b/mlir/include/mlir/Dialect/AMDGPU/IR/AMDGPU.td
index 58baa28cf1e39..c6338538e2022 100644
--- a/mlir/include/mlir/Dialect/AMDGPU/IR/AMDGPU.td
+++ b/mlir/include/mlir/Dialect/AMDGPU/IR/AMDGPU.td
@@ -168,7 +168,7 @@ def AMDGPU_ScaledExtPacked816Op
`blockSize` `(` $blockSize `)`
`firstScaleLane` `(` $firstScaleLane`)`
`firstScaleByte` `(` $firstScaleByte `)`
- `:` functional-type(operands, results)
+ `:` type($source) `,` type($scale) `->` type($res)
}];
let hasVerifier = 1;
diff --git a/mlir/test/Dialect/AMDGPU/invalid.mlir b/mlir/test/Dialect/AMDGPU/invalid.mlir
index af6f700a03295..de72695c8a433 100644
--- a/mlir/test/Dialect/AMDGPU/invalid.mlir
+++ b/mlir/test/Dialect/AMDGPU/invalid.mlir
@@ -243,7 +243,7 @@ func.func @gather_to_lds_non_lds(%idx1 : index, %mem1 : memref<32xf16>, %mem2 :
func.func @amdgpu.scaled_ext_packed816_invalid_block_size_and_first_scale_byte(%v: vector<8xf8E5M2>, %scale: vector<4xf8E8M0FNU>) {
// expected-error at +1 {{'amdgpu.scaled_ext_packed816' op blockSize of 16 cannot have firstScaleByte be 2.}}
- %ret0 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(16) firstScaleLane(0) firstScaleByte(2) : (vector<8xf8E5M2>, vector<4xf8E8M0FNU>) -> vector<8xf16>
+ %ret0 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(16) firstScaleLane(0) firstScaleByte(2) : vector<8xf8E5M2>, vector<4xf8E8M0FNU> -> vector<8xf16>
func.return
}
@@ -251,6 +251,6 @@ func.func @amdgpu.scaled_ext_packed816_invalid_block_size_and_first_scale_byte(%
func.func @amdgpu.scaled_ext_packed816_invalid_input_output_sizes(%v: vector<8xf8E5M2>, %scale: vector<4xf8E8M0FNU>) {
// expected-error at +1 {{'amdgpu.scaled_ext_packed816' op failed to verify that all of {source, res} have same shape}}
- %ret0 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(16) firstScaleLane(0) firstScaleByte(0) : (vector<8xf8E5M2>, vector<4xf8E8M0FNU>) -> vector<16xf16>
+ %ret0 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(16) firstScaleLane(0) firstScaleByte(0) : vector<8xf8E5M2>, vector<4xf8E8M0FNU> -> vector<16xf16>
func.return
}
diff --git a/mlir/test/Dialect/AMDGPU/ops.mlir b/mlir/test/Dialect/AMDGPU/ops.mlir
index f96e14b592927..f9c6899dadfc1 100644
--- a/mlir/test/Dialect/AMDGPU/ops.mlir
+++ b/mlir/test/Dialect/AMDGPU/ops.mlir
@@ -224,55 +224,55 @@ func.func @scaled_ext_scalar_f4e2m1_bf16(%v: vector<2xf4E2M1FN>, %scale: f32) ->
// CHECK-LABEL: func.func @scaled_ext_packed816_fp4
func.func @scaled_ext_packed816_fp4(%v: vector<8xf4E2M1FN>, %scale: vector<4xf8E8M0FNU>) -> (vector<8xf16>, vector<8xbf16>, vector<8xf32>) {
// CHECK: amdgpu.scaled_ext_packed816
- %ret0 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(32) firstScaleLane(0) firstScaleByte(0) : (vector<8xf4E2M1FN>, vector<4xf8E8M0FNU>) -> vector<8xf16>
+ %ret0 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(32) firstScaleLane(0) firstScaleByte(0) : vector<8xf4E2M1FN>, vector<4xf8E8M0FNU> -> vector<8xf16>
// CHECK: amdgpu.scaled_ext_packed816
- %ret1 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(32) firstScaleLane(0) firstScaleByte(0) : (vector<8xf4E2M1FN>, vector<4xf8E8M0FNU>) -> vector<8xbf16>
+ %ret1 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(32) firstScaleLane(0) firstScaleByte(0) : vector<8xf4E2M1FN>, vector<4xf8E8M0FNU> -> vector<8xbf16>
// CHECK: amdgpu.scaled_ext_packed816
- %ret2 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(32) firstScaleLane(0) firstScaleByte(0) : (vector<8xf4E2M1FN>, vector<4xf8E8M0FNU>) -> vector<8xf32>
+ %ret2 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(32) firstScaleLane(0) firstScaleByte(0) : vector<8xf4E2M1FN>, vector<4xf8E8M0FNU> -> vector<8xf32>
func.return %ret0, %ret1, %ret2 : vector<8xf16>, vector<8xbf16>, vector<8xf32>
}
// CHECK-LABEL: func.func @scaled_ext_packed816_fp8
func.func @scaled_ext_packed816_fp8(%v: vector<8xf8E4M3FN>, %scale: vector<4xf8E8M0FNU>) -> (vector<8xf16>, vector<8xbf16>, vector<8xf32>) {
// CHECK: amdgpu.scaled_ext_packed816
- %ret0 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(32) firstScaleLane(0) firstScaleByte(0) : (vector<8xf8E4M3FN>, vector<4xf8E8M0FNU>) -> vector<8xf16>
+ %ret0 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(32) firstScaleLane(0) firstScaleByte(0) : vector<8xf8E4M3FN>, vector<4xf8E8M0FNU> -> vector<8xf16>
// CHECK: amdgpu.scaled_ext_packed816
- %ret1 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(32) firstScaleLane(0) firstScaleByte(0) : (vector<8xf8E4M3FN>, vector<4xf8E8M0FNU>) -> vector<8xbf16>
+ %ret1 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(32) firstScaleLane(0) firstScaleByte(0) : vector<8xf8E4M3FN>, vector<4xf8E8M0FNU> -> vector<8xbf16>
// CHECK: amdgpu.scaled_ext_packed816
- %ret2 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(32) firstScaleLane(0) firstScaleByte(0) : (vector<8xf8E4M3FN>, vector<4xf8E8M0FNU>) -> vector<8xf32>
+ %ret2 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(32) firstScaleLane(0) firstScaleByte(0) : vector<8xf8E4M3FN>, vector<4xf8E8M0FNU> -> vector<8xf32>
func.return %ret0, %ret1, %ret2 : vector<8xf16>, vector<8xbf16>, vector<8xf32>
}
// CHECK-LABEL: func.func @scaled_ext_packed816_bf8
func.func @scaled_ext_packed816_bf8(%v: vector<8xf8E5M2>, %scale: vector<4xf8E8M0FNU>) -> (vector<8xf16>, vector<8xbf16>, vector<8xf32>) {
// CHECK: amdgpu.scaled_ext_packed816
- %ret0 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(32) firstScaleLane(0) firstScaleByte(0) : (vector<8xf8E5M2>, vector<4xf8E8M0FNU>) -> vector<8xf16>
+ %ret0 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(32) firstScaleLane(0) firstScaleByte(0) : vector<8xf8E5M2>, vector<4xf8E8M0FNU> -> vector<8xf16>
// CHECK: amdgpu.scaled_ext_packed816
- %ret1 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(32) firstScaleLane(0) firstScaleByte(0) : (vector<8xf8E5M2>, vector<4xf8E8M0FNU>) -> vector<8xbf16>
+ %ret1 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(32) firstScaleLane(0) firstScaleByte(0) : vector<8xf8E5M2>, vector<4xf8E8M0FNU> -> vector<8xbf16>
// CHECK: amdgpu.scaled_ext_packed816
- %ret2 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(32) firstScaleLane(0) firstScaleByte(0) : (vector<8xf8E5M2>, vector<4xf8E8M0FNU>) -> vector<8xf32>
+ %ret2 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(32) firstScaleLane(0) firstScaleByte(0) : vector<8xf8E5M2>, vector<4xf8E8M0FNU> -> vector<8xf32>
func.return %ret0, %ret1, %ret2 : vector<8xf16>, vector<8xbf16>, vector<8xf32>
}
// CHECK-LABEL: func.func @scaled_ext_packed816_fp6
func.func @scaled_ext_packed816_fp6(%v: vector<16xf6E2M3FN>, %scale: vector<4xf8E8M0FNU>) -> (vector<16xf16>, vector<16xbf16>, vector<16xf32>) {
// CHECK: amdgpu.scaled_ext_packed816
- %ret0 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(32) firstScaleLane(0) firstScaleByte(0) : (vector<16xf6E2M3FN>, vector<4xf8E8M0FNU>) -> vector<16xf16>
+ %ret0 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(32) firstScaleLane(0) firstScaleByte(0) : vector<16xf6E2M3FN>, vector<4xf8E8M0FNU> -> vector<16xf16>
// CHECK: amdgpu.scaled_ext_packed816
- %ret1 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(32) firstScaleLane(0) firstScaleByte(0) : (vector<16xf6E2M3FN>, vector<4xf8E8M0FNU>) -> vector<16xbf16>
+ %ret1 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(32) firstScaleLane(0) firstScaleByte(0) : vector<16xf6E2M3FN>, vector<4xf8E8M0FNU> -> vector<16xbf16>
// CHECK: amdgpu.scaled_ext_packed816
- %ret2 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(32) firstScaleLane(0) firstScaleByte(0) : (vector<16xf6E2M3FN>, vector<4xf8E8M0FNU>) -> vector<16xf32>
+ %ret2 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(32) firstScaleLane(0) firstScaleByte(0) : vector<16xf6E2M3FN>, vector<4xf8E8M0FNU> -> vector<16xf32>
func.return %ret0, %ret1, %ret2 : vector<16xf16>, vector<16xbf16>, vector<16xf32>
}
// CHECK-LABEL: func.func @scaled_ext_packed816_bf16
func.func @scaled_ext_packed816_bf16(%v: vector<16xf6E3M2FN>, %scale: vector<4xf8E8M0FNU>) -> (vector<16xf16>, vector<16xbf16>, vector<16xf32>) {
// CHECK: amdgpu.scaled_ext_packed816
- %ret0 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(32) firstScaleLane(0) firstScaleByte(0) : (vector<16xf6E3M2FN>, vector<4xf8E8M0FNU>) -> vector<16xf16>
+ %ret0 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(32) firstScaleLane(0) firstScaleByte(0) : vector<16xf6E3M2FN>, vector<4xf8E8M0FNU> -> vector<16xf16>
// CHECK: amdgpu.scaled_ext_packed816
- %ret1 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(32) firstScaleLane(0) firstScaleByte(0) : (vector<16xf6E3M2FN>, vector<4xf8E8M0FNU>) -> vector<16xbf16>
+ %ret1 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(32) firstScaleLane(0) firstScaleByte(0) : vector<16xf6E3M2FN>, vector<4xf8E8M0FNU> -> vector<16xbf16>
// CHECK: amdgpu.scaled_ext_packed816
- %ret2 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(32) firstScaleLane(0) firstScaleByte(0) : (vector<16xf6E3M2FN>, vector<4xf8E8M0FNU>) -> vector<16xf32>
+ %ret2 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(32) firstScaleLane(0) firstScaleByte(0) : vector<16xf6E3M2FN>, vector<4xf8E8M0FNU> -> vector<16xf32>
func.return %ret0, %ret1, %ret2 : vector<16xf16>, vector<16xbf16>, vector<16xf32>
}
>From 03831000cc808490624ec751624f3a5437c2f9df Mon Sep 17 00:00:00 2001
From: Erick Ochoa <erick.ochoalopez at amd.com>
Date: Fri, 17 Oct 2025 11:45:01 -0400
Subject: [PATCH 11/13] Adds examples and better verification
---
mlir/include/mlir/Dialect/AMDGPU/IR/AMDGPU.td | 40 +++++++++++++++++++
mlir/lib/Dialect/AMDGPU/IR/AMDGPUDialect.cpp | 8 +++-
mlir/test/Dialect/AMDGPU/invalid.mlir | 12 +++++-
3 files changed, 56 insertions(+), 4 deletions(-)
diff --git a/mlir/include/mlir/Dialect/AMDGPU/IR/AMDGPU.td b/mlir/include/mlir/Dialect/AMDGPU/IR/AMDGPU.td
index c6338538e2022..a99b17777e537 100644
--- a/mlir/include/mlir/Dialect/AMDGPU/IR/AMDGPU.td
+++ b/mlir/include/mlir/Dialect/AMDGPU/IR/AMDGPU.td
@@ -150,10 +150,50 @@ def AMDGPU_ScaledExtPacked816Op
When the block size is 32, `firstScaleByte` can be either 0 or 2,
selecting halves of the scale vectors. Lanes 0-15 will read from
`firstScaleByte` and lanes 16-31 will read from `firstScaleByte` + 1.
+ For example:
+ ```mlir
+ // Input: 8-element vector of F8E4M3FN, converting to F32
+ // Lanes 0-15 read from byte 0, lanes 16-31 read from byte 1
+ %result = amdgpu.scaled_ext_packed816 %source
+ scale(%scales)
+ blockSize(32)
+ firstScaleLane(0)
+ firstScaleByte(0)
+ : vector<8xf8E4M3FN>, vector<4xf8E8M0FNU> -> vector<8xf32>
+
+ // Input: 16-element vector of F6E2M3FN, converting to F16
+ // Lanes 0-15 read from byte 2, lanes 16-31 read from byte 3
+ %result = amdgpu.scaled_ext_packed816 %source
+ scale(%scales)
+ blockSize(32)
+ firstScaleLane(1)
+ firstScaleByte(2)
+ : vector<16xf6E2M3FN>, vector<4xf8E8M0FNU> -> vector<16xf16>
+ ```
However, when the block size is 16, `firstScaleByte` can be 0 or 1.
Lanes 0-15 read from the `firstScaleByte`th element of the scale vectors,
while lanes 16-31 read from `firstScaleByte` + 2.
+ For example:
+ ```
+ // Input: 8-element vector of F8E5M2, converting to BF16
+ // Lanes 0-15 read from byte 0, lanes 16-31 read from byte 2 (0+2)
+ %result = amdgpu.scaled_ext_packed816 %source
+ scale(%scales)
+ blockSize(16)
+ firstScaleLane(0)
+ firstScaleByte(0)
+ : vector<8xf8E5M2>, vector<4xf8E8M0FNU> -> vector<8xbf16>
+
+ // Input: 16-element vector of F6E3M2FN, converting to F32
+ // Lanes 0-15 read from byte 1, lanes 16-31 read from byte 3 (1+2)
+ %result = amdgpu.scaled_ext_packed816 %source
+ scale(%scales)
+ blockSize(16)
+ firstScaleLane(1)
+ firstScaleByte(1)
+ : vector<16xf6E3M2FN>, vector<4xf8E8M0FNU> -> vector<16xf32>
+ ```
    Note: the layout for the scales generally mirrors the one the WMMA
    instructions use for matrix scales. These selection operands allow
diff --git a/mlir/lib/Dialect/AMDGPU/IR/AMDGPUDialect.cpp b/mlir/lib/Dialect/AMDGPU/IR/AMDGPUDialect.cpp
index d778142d979fe..7fa78f976d9e7 100644
--- a/mlir/lib/Dialect/AMDGPU/IR/AMDGPUDialect.cpp
+++ b/mlir/lib/Dialect/AMDGPU/IR/AMDGPUDialect.cpp
@@ -345,8 +345,12 @@ LogicalResult ScaledExtPacked816Op::verify() {
int blockSize = getBlockSize();
assert((blockSize == 16 || blockSize == 32) && "invalid block size");
int firstScaleByte = getFirstScaleByte();
- if (blockSize == 16 && firstScaleByte == 2) {
- return emitOpError("blockSize of 16 cannot have firstScaleByte be 2.");
+ if (blockSize == 16 && !::llvm::is_contained({0, 1}, firstScaleByte)) {
+ return emitOpError(
+ "blockSize of 16 can only have firstScaleByte be 0 or 1.");
+ } else if (blockSize == 32 && !::llvm::is_contained({0, 2}, firstScaleByte)) {
+ return emitOpError(
+ "blockSize of 32 can only have firstScaleByte be 0 or 2.");
}
return success();
diff --git a/mlir/test/Dialect/AMDGPU/invalid.mlir b/mlir/test/Dialect/AMDGPU/invalid.mlir
index de72695c8a433..a8256b16ed8a1 100644
--- a/mlir/test/Dialect/AMDGPU/invalid.mlir
+++ b/mlir/test/Dialect/AMDGPU/invalid.mlir
@@ -241,14 +241,22 @@ func.func @gather_to_lds_non_lds(%idx1 : index, %mem1 : memref<32xf16>, %mem2 :
// -----
-func.func @amdgpu.scaled_ext_packed816_invalid_block_size_and_first_scale_byte(%v: vector<8xf8E5M2>, %scale: vector<4xf8E8M0FNU>) {
- // expected-error at +1 {{'amdgpu.scaled_ext_packed816' op blockSize of 16 cannot have firstScaleByte be 2.}}
+func.func @amdgpu.scaled_ext_packed816_invalid_block_size_and_first_scale_byte_16(%v: vector<8xf8E5M2>, %scale: vector<4xf8E8M0FNU>) {
+ // expected-error at +1 {{'amdgpu.scaled_ext_packed816' op blockSize of 16 can only have firstScaleByte be 0 or 1.}}
%ret0 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(16) firstScaleLane(0) firstScaleByte(2) : vector<8xf8E5M2>, vector<4xf8E8M0FNU> -> vector<8xf16>
func.return
}
// -----
+func.func @amdgpu.scaled_ext_packed816_invalid_block_size_and_first_scale_byte_32(%v: vector<8xf8E5M2>, %scale: vector<4xf8E8M0FNU>) {
+ // expected-error at +1 {{'amdgpu.scaled_ext_packed816' op blockSize of 32 can only have firstScaleByte be 0 or 2.}}
+ %ret0 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(32) firstScaleLane(0) firstScaleByte(1) : vector<8xf8E5M2>, vector<4xf8E8M0FNU> -> vector<8xf16>
+ func.return
+}
+
+// -----
+
func.func @amdgpu.scaled_ext_packed816_invalid_input_output_sizes(%v: vector<8xf8E5M2>, %scale: vector<4xf8E8M0FNU>) {
// expected-error at +1 {{'amdgpu.scaled_ext_packed816' op failed to verify that all of {source, res} have same shape}}
%ret0 = amdgpu.scaled_ext_packed816 %v scale(%scale) blockSize(16) firstScaleLane(0) firstScaleByte(0) : vector<8xf8E5M2>, vector<4xf8E8M0FNU> -> vector<16xf16>
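The lane-to-scale-byte mapping that the documentation added in this patch describes (lanes 0-15 read `firstScaleByte`; lanes 16-31 read an offset copy, +1 for blockSize 32 and +2 for blockSize 16) can be sketched as a small Python helper. This is an illustration only — the function name and signature are hypothetical and not part of the dialect:

```python
def scale_byte_for_lane(block_size: int, first_scale_byte: int, lane: int) -> int:
    """Which byte of the packed scale vector a given wave lane reads,
    per the scaled_ext_packed816 documentation (assumed semantics)."""
    # Lanes 0-15 read firstScaleByte directly; lanes 16-31 read an
    # offset copy: +1 when blockSize is 32, +2 when blockSize is 16.
    offset = 0 if lane < 16 else (1 if block_size == 32 else 2)
    return first_scale_byte + offset
```

For example, with `blockSize(32)` and `firstScaleByte(0)`, lane 3 reads byte 0 while lane 20 reads byte 1, matching the comments in the examples above.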
>From 7aa6169697b3d5dd2668245b32d347c76699c6d0 Mon Sep 17 00:00:00 2001
From: Erick Ochoa <erick.ochoalopez at amd.com>
Date: Fri, 17 Oct 2025 11:53:33 -0400
Subject: [PATCH 12/13] no else after return and remove global resolution
---
mlir/lib/Dialect/AMDGPU/IR/AMDGPUDialect.cpp | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/mlir/lib/Dialect/AMDGPU/IR/AMDGPUDialect.cpp b/mlir/lib/Dialect/AMDGPU/IR/AMDGPUDialect.cpp
index 7fa78f976d9e7..1c1794d5a1826 100644
--- a/mlir/lib/Dialect/AMDGPU/IR/AMDGPUDialect.cpp
+++ b/mlir/lib/Dialect/AMDGPU/IR/AMDGPUDialect.cpp
@@ -345,10 +345,11 @@ LogicalResult ScaledExtPacked816Op::verify() {
int blockSize = getBlockSize();
assert((blockSize == 16 || blockSize == 32) && "invalid block size");
int firstScaleByte = getFirstScaleByte();
- if (blockSize == 16 && !::llvm::is_contained({0, 1}, firstScaleByte)) {
+ if (blockSize == 16 && !llvm::is_contained({0, 1}, firstScaleByte)) {
return emitOpError(
"blockSize of 16 can only have firstScaleByte be 0 or 1.");
- } else if (blockSize == 32 && !::llvm::is_contained({0, 2}, firstScaleByte)) {
+ }
+ if (blockSize == 32 && !llvm::is_contained({0, 2}, firstScaleByte)) {
return emitOpError(
"blockSize of 32 can only have firstScaleByte be 0 or 2.");
}
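The final verifier logic above (after the early-return cleanup) can be mirrored as a short Python sketch — hypothetical names, returning the diagnostic string on failure and `None` on success, just as `ScaledExtPacked816Op::verify()` emits an op error or succeeds:

```python
def verify_scaled_ext_packed816(block_size: int, first_scale_byte: int):
    """Python sketch of the C++ verifier in the patch above."""
    assert block_size in (16, 32), "invalid block size"
    if block_size == 16 and first_scale_byte not in (0, 1):
        return "blockSize of 16 can only have firstScaleByte be 0 or 1."
    if block_size == 32 and first_scale_byte not in (0, 2):
        return "blockSize of 32 can only have firstScaleByte be 0 or 2."
    return None  # success
```

This matches the two invalid.mlir tests: `blockSize(16) firstScaleByte(2)` and `blockSize(32) firstScaleByte(1)` both diagnose, while the combinations used in ops.mlir pass.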
>From 7a5fea59e71551863c5ed4bc573f10a430fcf8bb Mon Sep 17 00:00:00 2001
From: Erick Ochoa <erick.ochoalopez at amd.com>
Date: Fri, 17 Oct 2025 11:56:27 -0400
Subject: [PATCH 13/13] indentation and syntax highlighting
---
mlir/include/mlir/Dialect/AMDGPU/IR/AMDGPU.td | 38 +++++++------------
1 file changed, 13 insertions(+), 25 deletions(-)
diff --git a/mlir/include/mlir/Dialect/AMDGPU/IR/AMDGPU.td b/mlir/include/mlir/Dialect/AMDGPU/IR/AMDGPU.td
index a99b17777e537..7184de93bfacb 100644
--- a/mlir/include/mlir/Dialect/AMDGPU/IR/AMDGPU.td
+++ b/mlir/include/mlir/Dialect/AMDGPU/IR/AMDGPU.td
@@ -154,45 +154,33 @@ def AMDGPU_ScaledExtPacked816Op
```mlir
// Input: 8-element vector of F8E4M3FN, converting to F32
// Lanes 0-15 read from byte 0, lanes 16-31 read from byte 1
- %result = amdgpu.scaled_ext_packed816 %source
- scale(%scales)
- blockSize(32)
- firstScaleLane(0)
- firstScaleByte(0)
- : vector<8xf8E4M3FN>, vector<4xf8E8M0FNU> -> vector<8xf32>
+ %result = amdgpu.scaled_ext_packed816 %source scale(%scales)
+ blockSize(32) firstScaleLane(0) firstScaleByte(0)
+ : vector<8xf8E4M3FN>, vector<4xf8E8M0FNU> -> vector<8xf32>
// Input: 16-element vector of F6E2M3FN, converting to F16
// Lanes 0-15 read from byte 2, lanes 16-31 read from byte 3
- %result = amdgpu.scaled_ext_packed816 %source
- scale(%scales)
- blockSize(32)
- firstScaleLane(1)
- firstScaleByte(2)
- : vector<16xf6E2M3FN>, vector<4xf8E8M0FNU> -> vector<16xf16>
+ %result = amdgpu.scaled_ext_packed816 %source scale(%scales)
+ blockSize(32) firstScaleLane(1) firstScaleByte(2)
+ : vector<16xf6E2M3FN>, vector<4xf8E8M0FNU> -> vector<16xf16>
```
However, when the block size is 16, `firstScaleByte` can be 0 or 1.
Lanes 0-15 read from the `firstScaleByte`th element of the scale vectors,
while lanes 16-31 read from `firstScaleByte` + 2.
For example:
- ```
+ ```mlir
// Input: 8-element vector of F8E5M2, converting to BF16
// Lanes 0-15 read from byte 0, lanes 16-31 read from byte 2 (0+2)
- %result = amdgpu.scaled_ext_packed816 %source
- scale(%scales)
- blockSize(16)
- firstScaleLane(0)
- firstScaleByte(0)
- : vector<8xf8E5M2>, vector<4xf8E8M0FNU> -> vector<8xbf16>
+ %result = amdgpu.scaled_ext_packed816 %source scale(%scales)
+ blockSize(16) firstScaleLane(0) firstScaleByte(0)
+ : vector<8xf8E5M2>, vector<4xf8E8M0FNU> -> vector<8xbf16>
// Input: 16-element vector of F6E3M2FN, converting to F32
// Lanes 0-15 read from byte 1, lanes 16-31 read from byte 3 (1+2)
- %result = amdgpu.scaled_ext_packed816 %source
- scale(%scales)
- blockSize(16)
- firstScaleLane(1)
- firstScaleByte(1)
- : vector<16xf6E3M2FN>, vector<4xf8E8M0FNU> -> vector<16xf32>
+ %result = amdgpu.scaled_ext_packed816 %source scale(%scales)
+ blockSize(16) firstScaleLane(1) firstScaleByte(1)
+ : vector<16xf6E3M2FN>, vector<4xf8E8M0FNU> -> vector<16xf32>
```
Note: the layout for the scales generally mirrors how the WMMA
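Reading the four examples in the updated docs together, the per-lane scale-byte selection can be sketched as follows. This is a hedged illustration derived only from the doc comments above (the function name is hypothetical): lanes 0-15 read `scales[firstScaleByte]`, and lanes 16-31 read `firstScaleByte + 1` when the block size is 32, or `firstScaleByte + 2` when it is 16.

```python
def scale_byte_for_lane(lane: int, block_size: int, first_scale_byte: int) -> int:
    """Index into the 4-byte scale vector that a given wave lane reads."""
    assert 0 <= lane < 32
    # Upper half of the wave reads at a fixed offset from firstScaleByte.
    offset = 1 if block_size == 32 else 2
    return first_scale_byte if lane < 16 else first_scale_byte + offset
```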