[llvm] [SCEV] Support larger than 64-bit types in ashr(add(shl(x, n), c), m) (PR #71600)

Björn Pettersson via llvm-commits llvm-commits at lists.llvm.org
Tue Nov 7 15:11:59 PST 2023


https://github.com/bjope created https://github.com/llvm/llvm-project/pull/71600

In commit 5a9a02f67b771fb2edcf06, scalar evolution gained support for computing SCEVs for (ashr(add(shl(x, n), c), m)) constructs. However, that code used APInt::getZExtValue without first checking that the APInt fits in a uint64_t. With 128-bit types, for example, this led to assertion failures (or possibly miscompiles in non-assert builds).
This patch simply avoids converting from APInt to uint64_t when creating the truncated constant; we can truncate the APInt instead.
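
For illustration only (not part of the patch), here is a minimal standalone sketch, assuming nothing beyond the documented llvm::APInt interface, of why the old code asserted and why truncating the APInt directly is safe:

    #include "llvm/ADT/APInt.h"
    using llvm::APInt;

    int main() {
      // Mirrors the i128 test case below: c = -55, AShrAmt = 4.
      APInt AddOperand = APInt(128, -55, /*isSigned=*/true).ashr(4); // -4
      // All 128 bits of a negative value are "active", so this would
      // trip APInt's "too many bits for uint64_t" assertion:
      //   uint64_t Bad = AddOperand.getZExtValue();
      // Truncating the APInt itself avoids the uint64_t detour entirely:
      APInt Truncated = AddOperand.trunc(128 - 4); // -4 as an i124 value
      return Truncated.isNegative() ? 0 : 1;
    }

This matches what the new i128 test exercises: -55 ashr 4 is -4, which the fixed code truncates to i124, giving the (sext i124 (-4 + (trunc i128 %a to i124)) to i128) expression seen in the CHECK lines.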

From 5bebcbf5110c8036c5a4a5ddb755b841d32b8d25 Mon Sep 17 00:00:00 2001
From: Bjorn Pettersson <bjorn.a.pettersson at ericsson.com>
Date: Wed, 8 Nov 2023 00:10:30 +0100
Subject: [PATCH] [SCEV] Support larger than 64-bit types in ashr(add(shl(x,
 n), c), m)

In commit 5a9a02f67b771fb2edcf06, scalar evolution gained support for
computing SCEVs for (ashr(add(shl(x, n), c), m)) constructs. However,
that code used APInt::getZExtValue without first checking that the
APInt fits in a uint64_t. With 128-bit types, for example, this led
to assertion failures (or possibly miscompiles in non-assert builds).
This patch simply avoids converting from APInt to uint64_t when
creating the truncated constant; we can truncate the APInt instead.
---
 llvm/lib/Analysis/ScalarEvolution.cpp         |  3 +--
 .../ScalarEvolution/sext-add-inreg.ll         | 25 ++++++++++++++++---
 2 files changed, 22 insertions(+), 6 deletions(-)

diff --git a/llvm/lib/Analysis/ScalarEvolution.cpp b/llvm/lib/Analysis/ScalarEvolution.cpp
index de24aa986688a1e..a57a16600ca1e69 100644
--- a/llvm/lib/Analysis/ScalarEvolution.cpp
+++ b/llvm/lib/Analysis/ScalarEvolution.cpp
@@ -7888,8 +7888,7 @@ const SCEV *ScalarEvolution::createSCEV(Value *V) {
             // same type, so create a new Constant with type same as TruncTy.
             // Also, the Add constant should be shifted right by AShr amount.
             APInt AddOperand = AddOperandCI->getValue().ashr(AShrAmt);
-            AddConstant = getConstant(TruncTy, AddOperand.getZExtValue(),
-                                      AddOperand.isSignBitSet());
+            AddConstant = getConstant(AddOperand.trunc(BitWidth - AShrAmt));
             // we model the expression as sext(add(trunc(A), c << n)), since the
             // sext(trunc) part is already handled below, we create a
             // AddExpr(TruncExp) which will be used later.
diff --git a/llvm/test/Analysis/ScalarEvolution/sext-add-inreg.ll b/llvm/test/Analysis/ScalarEvolution/sext-add-inreg.ll
index 8aac64ea4f6dc4b..1a51172edf0f3e3 100644
--- a/llvm/test/Analysis/ScalarEvolution/sext-add-inreg.ll
+++ b/llvm/test/Analysis/ScalarEvolution/sext-add-inreg.ll
@@ -1,19 +1,36 @@
 ; NOTE: Assertions have been autogenerated by utils/update_analyze_test_checks.py UTC_ARGS: --version 2
 ; RUN: opt < %s -disable-output "-passes=print<scalar-evolution>" 2>&1 | FileCheck %s
 
-define i64 @test(i64 %a) {
-; CHECK-LABEL: 'test'
-; CHECK-NEXT:  Classifying expressions for: @test
+define i64 @test64(i64 %a) {
+; CHECK-LABEL: 'test64'
+; CHECK-NEXT:  Classifying expressions for: @test64
 ; CHECK-NEXT:    %add = shl i64 %a, 8
 ; CHECK-NEXT:    --> (256 * %a) U: [0,-255) S: [-9223372036854775808,9223372036854775553)
 ; CHECK-NEXT:    %shl = add i64 %add, 256
 ; CHECK-NEXT:    --> (256 + (256 * %a)) U: [0,-255) S: [-9223372036854775808,9223372036854775553)
 ; CHECK-NEXT:    %ashr = ashr exact i64 %shl, 8
 ; CHECK-NEXT:    --> (sext i56 (1 + (trunc i64 %a to i56)) to i64) U: [-36028797018963968,36028797018963968) S: [-36028797018963968,36028797018963968)
-; CHECK-NEXT:  Determining loop execution counts for: @test
+; CHECK-NEXT:  Determining loop execution counts for: @test64
 ;
   %add = shl i64 %a, 8
   %shl = add i64 %add, 256
   %ashr = ashr exact i64 %shl, 8
   ret i64 %ashr
 }
+
+define i128 @test128(i128 %a) {
+; CHECK-LABEL: 'test128'
+; CHECK-NEXT:  Classifying expressions for: @test128
+; CHECK-NEXT:    %shl = shl i128 %a, 4
+; CHECK-NEXT:    --> (16 * %a) U: [0,-15) S: [-170141183460469231731687303715884105728,170141183460469231731687303715884105713)
+; CHECK-NEXT:    %add = add i128 %shl, -55
+; CHECK-NEXT:    --> (-55 + (16 * %a)) U: [-55,-70) S: [170141183460469231731687303715884105673,170141183460469231731687303715884105658)
+; CHECK-NEXT:    %ashr = ashr i128 %add, 4
+; CHECK-NEXT:    --> (sext i124 (-4 + (trunc i128 %a to i124)) to i128) U: [-10633823966279326983230456482242756608,10633823966279326983230456482242756608) S: [-10633823966279326983230456482242756608,10633823966279326983230456482242756608)
+; CHECK-NEXT:  Determining loop execution counts for: @test128
+;
+  %shl = shl i128 %a, 4
+  %add = add i128 %shl, -55
+  %ashr = ashr i128 %add, 4
+  ret i128 %ashr
+}
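
For reference, the updated test can be exercised on its own; lit normally substitutes %s in the RUN line with the test file's path, so the equivalent manual invocation is roughly:

    opt < llvm/test/Analysis/ScalarEvolution/sext-add-inreg.ll -disable-output \
        "-passes=print<scalar-evolution>" 2>&1 | \
        FileCheck llvm/test/Analysis/ScalarEvolution/sext-add-inreg.ll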


