[Mlir-commits] [mlir] [mlir][amdgpu] Define an amdgpu.scaling_mfma wrapper (PR #137498)

Krzysztof Drewniak llvmlistbot at llvm.org
Tue Apr 29 23:52:29 PDT 2025


================
@@ -687,6 +687,11 @@ def MFMAOutTypes : AnyTypeOf<[F64,
                               VectorOfLengthAndType<[4, 16, 32], [F32]>,
                               VectorOfLengthAndType<[4, 16, 32], [I32]>,
                               VectorOfLengthAndType<[4], [F64]>]>;
+// scaled_mfma
+def ScaledMFMAInTypes : AnyTypeOf<[VectorOfLengthAndType<[8], [F8E5M2FNUZ, F8E4M3FNUZ]>,
----------------
krzysz00 wrote:

I'm pretty sure only 32x{the various types} is supported? Unless I missed one?
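For illustration only, a minimal sketch of what the constraint would look like if it were restricted to 32-element vectors (the element-type list is copied from the quoted line; whether other lengths or element types belong here is exactly the open question):

    // scaled_mfma
    def ScaledMFMAInTypes : AnyTypeOf<[
        VectorOfLengthAndType<[32], [F8E5M2FNUZ, F8E4M3FNUZ]>]>;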

https://github.com/llvm/llvm-project/pull/137498

