[llvm] [AMDGPU] Use correct number of bits needed for div/rem shrinking (PR #80622)

via llvm-commits llvm-commits at lists.llvm.org
Mon Feb 5 06:25:40 PST 2024


================
@@ -1213,7 +1213,10 @@ Value *AMDGPUCodeGenPrepareImpl::expandDivRem24(IRBuilder<> &Builder,
                                                 BinaryOperator &I, Value *Num,
                                                 Value *Den, bool IsDiv,
                                                 bool IsSigned) const {
-  int DivBits = getDivNumBits(I, Num, Den, 9, IsSigned);
+  unsigned SSBits = Num->getType()->getScalarSizeInBits();
+  // If Num bits <= 24, assume 0 signbits.
+  unsigned AtLeast = (SSBits <= 24) ? 0 : (SSBits - 24);
----------------
choikwa wrote:

AtLeast is measuring the number of sign bits, and a value of 9 would mean that for i32 at most 23 bits are used in the div/rem. So if computeNumSignBits returned 8, it would be disqualified. I think the code in expandDivRem24Impl should be able to handle the 24-bit case, though.
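
To make the arithmetic above concrete, here is a minimal standalone sketch of the threshold check being discussed. It is not the actual AMDGPUCodeGenPrepare code; the helper name `divNumBitsModel` and its return convention are illustrative only, and it just models "reject if either operand has fewer than AtLeast sign bits, otherwise report width minus shared sign bits":

```cpp
#include <algorithm>
#include <cstdio>

// Hypothetical model of the check: a W-bit value with S known sign bits
// occupies at most W - S value bits, so requiring at least `AtLeast` sign
// bits caps the div/rem width at W - AtLeast bits.
static int divNumBitsModel(unsigned ScalarBits, unsigned NumSignBits,
                           unsigned DenSignBits, unsigned AtLeast) {
  if (NumSignBits < AtLeast || DenSignBits < AtLeast)
    return -1; // disqualified: operands may need more bits than allowed
  unsigned SignBits = std::min(NumSignBits, DenSignBits);
  return ScalarBits - SignBits;
}

int main() {
  // i32 operands where computeNumSignBits reports 8 sign bits, i.e. the
  // values fit in 24 bits. With the old threshold of 9 (32 - 9 = 23 bits),
  // the 24-bit expansion is disqualified; with AtLeast = 32 - 24 = 8 it
  // is accepted and reports 24 bits.
  printf("old threshold (AtLeast=9): %d\n", divNumBitsModel(32, 8, 8, 9));
  printf("new threshold (AtLeast=8): %d\n", divNumBitsModel(32, 8, 8, 8));
  return 0;
}
```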

https://github.com/llvm/llvm-project/pull/80622
