[PATCH] D46649: [AggressiveInstCombine] convert a chain of 'and-shift' bits into masked compare

Roman Lebedev via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Wed May 9 11:07:49 PDT 2018


lebedev.ri added inline comments.


================
Comment at: lib/Transforms/AggressiveInstCombine/AggressiveInstCombine.cpp:75
+
+/// This is a recursive helper for foldAnyOrAllBitsSet() that walks through a
+/// chain of 'and' or 'or' instructions looking for shift ops of a common source
----------------
spatel wrote:
> lebedev.ri wrote:
> > This looks recursive.
> > I think this is a stack overflow waiting to happen.
> > 
> > Can you try to come up with a degenerate test case
> > where it has to self-recurse 20-40 times,
> > just so we know the overflow does not happen?
> > 
> Yes - it's definitely recursive. :)
> Could convert to a worklist pattern.
> But there is a practical cap on that recursion: you'd never recurse more times than the number of bits in the value. So a 40-bit mask would be:
> 
> ```
> define i64 @allset_40_bit_mask(i64 %x) {
>   %t1 = lshr i64 %x, 1
>   %t2 = lshr i64 %x, 2
>   %t3 = lshr i64 %x, 3
>   %t4 = lshr i64 %x, 4
>   %t5 = lshr i64 %x, 5
>   %t6 = lshr i64 %x, 6
>   %t7 = lshr i64 %x, 7
>   %t8 = lshr i64 %x, 8
>   %t9 = lshr i64 %x, 9
>   %t10 = lshr i64 %x, 10
>   %t11 = lshr i64 %x, 11
>   %t12 = lshr i64 %x, 12
>   %t13 = lshr i64 %x, 13
>   %t14 = lshr i64 %x, 14
>   %t15 = lshr i64 %x, 15
>   %t16 = lshr i64 %x, 16
>   %t17 = lshr i64 %x, 17
>   %t18 = lshr i64 %x, 18
>   %t19 = lshr i64 %x, 19
>   %t20 = lshr i64 %x, 20
>   %t21 = lshr i64 %x, 21
>   %t22 = lshr i64 %x, 22
>   %t23 = lshr i64 %x, 23
>   %t24 = lshr i64 %x, 24
>   %t25 = lshr i64 %x, 25
>   %t26 = lshr i64 %x, 26
>   %t27 = lshr i64 %x, 27
>   %t28 = lshr i64 %x, 28
>   %t29 = lshr i64 %x, 29
>   %t30 = lshr i64 %x, 30
>   %t31 = lshr i64 %x, 31
>   %t32 = lshr i64 %x, 32
>   %t33 = lshr i64 %x, 33
>   %t34 = lshr i64 %x, 34
>   %t35 = lshr i64 %x, 35
>   %t36 = lshr i64 %x, 36
>   %t37 = lshr i64 %x, 37
>   %t38 = lshr i64 %x, 38
>   %t39 = lshr i64 %x, 39
>   %t40 = lshr i64 %x, 40
> 
>   %a1 = and i64 %t1, 1
>   %a2 = and i64 %t2, %a1
>   %a3 = and i64 %t3, %a2
>   %a4 = and i64 %t4, %a3
>   %a5 = and i64 %t5, %a4
>   %a6 = and i64 %t6, %a5
>   %a7 = and i64 %t7, %a6
>   %a8 = and i64 %t8, %a7
>   %a9 = and i64 %t9, %a8
>   %a10 = and i64 %t10, %a9
>   %a11 = and i64 %t11, %a10
>   %a12 = and i64 %t12, %a11
>   %a13 = and i64 %t13, %a12
>   %a14 = and i64 %t14, %a13
>   %a15 = and i64 %t15, %a14
>   %a16 = and i64 %t16, %a15
>   %a17 = and i64 %t17, %a16
>   %a18 = and i64 %t18, %a17
>   %a19 = and i64 %t19, %a18
>   %a20 = and i64 %t20, %a19
>   %a21 = and i64 %t21, %a20
>   %a22 = and i64 %t22, %a21
>   %a23 = and i64 %t23, %a22
>   %a24 = and i64 %t24, %a23
>   %a25 = and i64 %t25, %a24
>   %a26 = and i64 %t26, %a25
>   %a27 = and i64 %t27, %a26
>   %a28 = and i64 %t28, %a27
>   %a29 = and i64 %t29, %a28
>   %a30 = and i64 %t30, %a29
>   %a31 = and i64 %t31, %a30
>   %a32 = and i64 %t32, %a31
>   %a33 = and i64 %t33, %a32
>   %a34 = and i64 %t34, %a33
>   %a35 = and i64 %t35, %a34
>   %a36 = and i64 %t36, %a35
>   %a37 = and i64 %t37, %a36
>   %a38 = and i64 %t38, %a37
>   %a39 = and i64 %t39, %a38
>   %a40 = and i64 %t40, %a39
> 
>   ret i64 %a40
> }
> 
> ```
> That'll get folded down to:
> 
> ```
> define i64 @allset_40_bit_mask(i64 %x) local_unnamed_addr #0 {
>   %1 = and i64 %x, 2199023255550
>   %2 = icmp eq i64 %1, 2199023255550
>   %3 = zext i1 %2 to i64
>   ret i64 %3
> }
> 
> ```
> 
> I'll add a test like that.
> you'd never recurse more times than the number of bits in the value.

Yes, I realize that, and thank you.
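
Regarding the worklist suggestion above, here is a minimal standalone sketch (a hypothetical model only, using no LLVM API; it is not the patch's actual foldAnyOrAllBitsSet() code) of how the recursive walk can be replaced by an explicit worklist, so traversal depth is bounded by a heap-allocated vector instead of the call stack:

```
#include <cstdint>
#include <optional>
#include <vector>

// Hypothetical model of the matched pattern: a leaf represents
// "(X >> ShiftAmt) & 1"; an inner node represents an 'and' of two subtrees.
struct Node {
  bool IsAnd = false;        // inner 'and' node?
  const Node *LHS = nullptr; // children, valid when IsAnd
  const Node *RHS = nullptr;
  unsigned ShiftAmt = 0;     // leaf: (X >> ShiftAmt) & 1
};

// Collects one mask bit per matched shift, iteratively. Returns nullopt
// if a shift amount is out of range for the 64-bit model.
std::optional<uint64_t> matchAndChain(const Node *Root) {
  uint64_t Mask = 0;
  std::vector<const Node *> Worklist{Root};
  while (!Worklist.empty()) {
    const Node *N = Worklist.back();
    Worklist.pop_back();
    if (N->IsAnd) {
      // Defer both operands; no self-recursion, so no stack overflow.
      Worklist.push_back(N->LHS);
      Worklist.push_back(N->RHS);
    } else {
      if (N->ShiftAmt >= 64)
        return std::nullopt;
      Mask |= uint64_t(1) << N->ShiftAmt;
    }
  }
  return Mask;
}
```

With the collected mask, the whole chain folds to the all-bits-set compare shown above: (X & Mask) == Mask, zero-extended back to the original type.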

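And as a quick sanity check on the folded constant above (a standalone snippet, not from the patch): shift amounts 1 through 40 contribute bits 1..40, so the mask is 2^41 - 2:

```
#include <cassert>
#include <cstdint>

int main() {
  // Bits 1..40 set: take the low 41 bits, then drop bit 0, matching
  // the shift amounts 1..40 in the test case.
  uint64_t Mask = ((uint64_t(1) << 41) - 1) & ~uint64_t(1);
  assert(Mask == 2199023255550ULL); // the constant in the folded IR
  return 0;
}
```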

https://reviews.llvm.org/D46649
