[llvm] [AMDGPU] Handle CreateBinOp not returning BinaryOperator (PR #137791)

via llvm-commits llvm-commits at lists.llvm.org
Tue Apr 29 04:49:37 PDT 2025


https://github.com/anjenner created https://github.com/llvm/llvm-project/pull/137791

AMDGPUCodeGenPrepareImpl::visitBinaryOperator() calls Builder.CreateBinOp() and unconditionally casts the resulting Value to a BinaryOperator, which leads to an assertion failure in a case found by fuzzing: when both operands are constant, CreateBinOp constant-folds the expression and returns a Constant instead of a BinaryOperator.
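For illustration, here is a minimal standalone sketch of the pattern the fix adopts (this is not the PR's code; the helper name and the worklist parameter are hypothetical): check the Value returned by IRBuilder::CreateBinOp with dyn_cast before treating it as a BinaryOperator, since constant operands may be folded away.

#include "llvm/ADT/SmallVector.h"
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/Instructions.h"

using namespace llvm;

// Hypothetical helper: build Opc(LHS, RHS) and queue it for later expansion
// only if it actually materialized as an instruction.
static void createAndDefer(IRBuilder<> &Builder, Instruction::BinaryOps Opc,
                           Value *LHS, Value *RHS,
                           SmallVectorImpl<BinaryOperator *> &Worklist) {
  Value *NewVal = Builder.CreateBinOp(Opc, LHS, RHS);
  // With constant operands, CreateBinOp may fold the expression and return a
  // Constant; cast<BinaryOperator> would then assert. Use dyn_cast and skip
  // the deferred expansion when folding happened.
  if (auto *BO = dyn_cast<BinaryOperator>(NewVal))
    Worklist.push_back(BO);
}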

From 4342e6aa5156fc2b7b7a3a881f99ebe142d4e797 Mon Sep 17 00:00:00 2001
From: Andrew Jenner <Andrew.Jenner at amd.com>
Date: Tue, 29 Apr 2025 06:52:50 -0400
Subject: [PATCH] [AMDGPU] Handle CreateBinOp not returning BinaryOperator

AMDGPUCodeGenPrepareImpl::visitBinaryOperator() calls Builder.CreateBinOp()
and unconditionally casts the resulting Value to a BinaryOperator, which
leads to an assertion failure in a case found by fuzzing: when both operands
are constant, CreateBinOp constant-folds the expression and returns a
Constant instead of a BinaryOperator.
---
 llvm/lib/Target/AMDGPU/AMDGPUCodeGenPrepare.cpp       |  5 ++++-
 .../print-pipeline-passes.AFLCustomIRMutator.ll       | 11 +++++++++++
 2 files changed, 15 insertions(+), 1 deletion(-)
 create mode 100644 llvm/test/CodeGen/AMDGPU/print-pipeline-passes.AFLCustomIRMutator.ll

diff --git a/llvm/lib/Target/AMDGPU/AMDGPUCodeGenPrepare.cpp b/llvm/lib/Target/AMDGPU/AMDGPUCodeGenPrepare.cpp
index 6617373f89c8b..53bea99a2e98c 100644
--- a/llvm/lib/Target/AMDGPU/AMDGPUCodeGenPrepare.cpp
+++ b/llvm/lib/Target/AMDGPU/AMDGPUCodeGenPrepare.cpp
@@ -1685,7 +1685,10 @@ bool AMDGPUCodeGenPrepareImpl::visitBinaryOperator(BinaryOperator &I) {
             // return the new value. Just insert a scalar copy and defer
             // expanding it.
             NewElt = Builder.CreateBinOp(Opc, NumEltN, DenEltN);
-            Div64ToExpand.push_back(cast<BinaryOperator>(NewElt));
+            // CreateBinOp does constant folding. If the operands are constant,
+            // it will return a Constant instead of a BinaryOperator.
+            if (auto *NewEltBO = dyn_cast<BinaryOperator>(NewElt))
+              Div64ToExpand.push_back(NewEltBO);
           }
         }
 
diff --git a/llvm/test/CodeGen/AMDGPU/print-pipeline-passes.AFLCustomIRMutator.ll b/llvm/test/CodeGen/AMDGPU/print-pipeline-passes.AFLCustomIRMutator.ll
new file mode 100644
index 0000000000000..583ef3a8bb7c7
--- /dev/null
+++ b/llvm/test/CodeGen/AMDGPU/print-pipeline-passes.AFLCustomIRMutator.ll
@@ -0,0 +1,11 @@
+; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py UTC_ARGS: --version 5
+; RUN: llc -mtriple=amdgcn-amd-amdhsa -mcpu=gfx90a -O1 < %s | FileCheck -check-prefix=GCN %s
+
+define amdgpu_kernel void @kernel() {
+; GCN-LABEL: kernel:
+; GCN:       ; %bb.0: ; %entry
+; GCN-NEXT:    s_endpgm
+entry:
+  %B = srem <32 x i64> zeroinitializer, zeroinitializer
+  ret void
+}

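(For anyone reproducing this locally: assuming a typical CMake build tree, the new test can be run on its own from the build directory with the lit driver, e.g. "bin/llvm-lit -v <llvm-project>/llvm/test/CodeGen/AMDGPU/print-pipeline-passes.AFLCustomIRMutator.ll", or by invoking llc directly as in the RUN line above.)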