[Mlir-commits] [mlir] [mlir][AMDGPU] Add support for AMD f16 math library calls (PR #108809)

Daniel Hernandez-Juarez llvmlistbot at llvm.org
Fri Sep 20 03:41:00 PDT 2024


================
@@ -89,7 +91,14 @@ struct OpToFuncCallLowering : public ConvertOpToLLVMPattern<SourceOp> {
 private:
   Value maybeCast(Value operand, PatternRewriter &rewriter) const {
     Type type = operand.getType();
-    if (!isa<Float16Type>(type))
+    if (!isa<FloatType>(type))
+      return operand;
+
+    // if there's a f16 function, no need to cast f16 values
+    if (!f16Func.empty() && isa<Float16Type>(type))
+      return operand;
+
+    if (isa<Float64Type>(type) || isa<Float32Type>(type))
----------------
dhernandez0 wrote:

After looking into this, it turns out all f8 types are already represented as i8 by the time this pass runs, so the only types we might want to cast are f16 and bf16. I've kept the same logic, just adding bf16; I've also added a test to math-to-rocdl.mlir and updated the comment (rough sketch of the resulting logic below).
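
For illustration only, a minimal sketch (not the exact PR code) of how maybeCast could look once bf16 is handled alongside f16, assuming f16Func holds the optional name of a dedicated f16 library function:

    Value maybeCast(Value operand, PatternRewriter &rewriter) const {
      Type type = operand.getType();
      // Only f16 and bf16 ever need widening here; f8 types arrive as i8.
      if (!isa<Float16Type, BFloat16Type>(type))
        return operand;

      // If a dedicated f16 function is available, f16 operands can be passed
      // through without widening.
      if (isa<Float16Type>(type) && !f16Func.empty())
        return operand;

      // Otherwise widen f16/bf16 to f32 before calling the library function.
      return rewriter.create<LLVM::FPExtOp>(
          operand.getLoc(), Float32Type::get(rewriter.getContext()), operand);
    }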

https://github.com/llvm/llvm-project/pull/108809

