[Mlir-commits] [mlir] [mlir][tosa] Add FP8 support (PR #127730)

Umang Yadav llvmlistbot at llvm.org
Wed Feb 19 06:15:24 PST 2025


================
@@ -459,16 +472,123 @@ LogicalResult tosa::AvgPool2dOp::verify() {
   if (inputETy.isF32() && !accType.isF32())
     return emitOpError("accumulator type for f32 tensor is not f32");
 
+  if ((llvm::isa<Float8E5M2Type>(inputETy) ||
+       llvm::isa<Float8E4M3FNType>(inputETy)) &&
+      !accType.isF16())
+    return emitOpError("accumulator type for f8 tensor is not f16");
----------------
umangyadav wrote:

This seems too restrictive. The max finite value of Float8E5M2 is 57344, while the max finite value of an fp16 accumulator is only ~65504, so even summing two fp8-max inputs already overflows. FP8 requires an fp32 accumulator, not fp16.
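A quick numpy sketch of the overflow concern (numpy stands in for the hardware types here; it has no fp8 dtype, so the fp8-max value is written out as a literal):

```python
import numpy as np

# Max finite value of a Float8E5M2 element: 2^15 * 1.75 = 57344.
FP8_E5M2_MAX = 57344.0

# Summing just two fp8-max values in an fp16 accumulator overflows,
# since fp16's max finite value is only 65504.
acc16 = np.float16(FP8_E5M2_MAX) + np.float16(FP8_E5M2_MAX)
print(np.isinf(acc16))  # fp16 accumulator overflows to inf

# An fp32 accumulator represents the sum exactly.
acc32 = np.float32(FP8_E5M2_MAX) + np.float32(FP8_E5M2_MAX)
print(acc32)  # 114688.0
```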

https://github.com/llvm/llvm-project/pull/127730
