[llvm] r308503 - [X86] Don't try to scale down if that exceeds the bitwidth.
Hans Wennborg via llvm-commits
llvm-commits at lists.llvm.org
Fri Jul 21 01:02:17 PDT 2017
Merged in r308718.
On Thu, Jul 20, 2017 at 3:48 PM, Hans Wennborg <hans at chromium.org> wrote:
> On Wed, Jul 19, 2017 at 8:15 PM, Davide Italiano <davide at freebsd.org> wrote:
>> On Wed, Jul 19, 2017 at 11:09 AM, Davide Italiano via llvm-commits
>> <llvm-commits at lists.llvm.org> wrote:
>>> Author: davide
>>> Date: Wed Jul 19 11:09:46 2017
>>> New Revision: 308503
>>>
>>> URL: http://llvm.org/viewvc/llvm-project?rev=308503&view=rev
>>> Log:
>>> [X86] Don't try to scale down if that exceeds the bitwidth.
>>>
>>> Fixes the crash reported in PR33844.
>>>
>>> Added:
>>> llvm/trunk/test/CodeGen/X86/pr33844.ll
>>> Modified:
>>> llvm/trunk/lib/Target/X86/X86ISelDAGToDAG.cpp
>>>
>>> Modified: llvm/trunk/lib/Target/X86/X86ISelDAGToDAG.cpp
>>> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/X86/X86ISelDAGToDAG.cpp?rev=308503&r1=308502&r2=308503&view=diff
>>> ==============================================================================
>>> --- llvm/trunk/lib/Target/X86/X86ISelDAGToDAG.cpp (original)
>>> +++ llvm/trunk/lib/Target/X86/X86ISelDAGToDAG.cpp Wed Jul 19 11:09:46 2017
>>> @@ -1055,7 +1055,10 @@ static bool foldMaskAndShiftToScale(Sele
>>>
>>> // Scale the leading zero count down based on the actual size of the value.
>>> // Also scale it down based on the size of the shift.
>>> - MaskLZ -= (64 - X.getSimpleValueType().getSizeInBits()) + ShiftAmt;
>>> + unsigned ScaleDown = (64 - X.getSimpleValueType().getSizeInBits()) + ShiftAmt;
>>> + if (MaskLZ < ScaleDown)
>>> + return true;
>>> + MaskLZ -= ScaleDown;
>>>
>>> // The final check is to ensure that any masked out high bits of X are
>>> // already known to be zero. Otherwise, the mask has a semantic impact
>>>
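For context, here is a minimal standalone sketch (not the LLVM sources) of the
underflow the new guard avoids: MaskLZ is unsigned, so subtracting a ScaleDown
larger than MaskLZ would wrap around to a huge value instead of going negative.
The helper name and the example values below are hypothetical, chosen only to
illustrate the check.

  #include <cstdio>

  // Hypothetical stand-in for the scaling step in foldMaskAndShiftToScale.
  // Returns true to signal "give up on the fold", mirroring the patch above.
  static bool scaleLeadingZeros(unsigned &MaskLZ, unsigned ValueBits,
                                unsigned ShiftAmt) {
    unsigned ScaleDown = (64 - ValueBits) + ShiftAmt;
    if (MaskLZ < ScaleDown)
      return true; // subtracting would underflow the unsigned count
    MaskLZ -= ScaleDown;
    return false;
  }

  int main() {
    // Illustrative values: a 32-bit value and a shift of 31 give
    // ScaleDown = (64 - 32) + 31 = 63, which exceeds a MaskLZ of 26.
    unsigned MaskLZ = 26;
    if (scaleLeadingZeros(MaskLZ, /*ValueBits=*/32, /*ShiftAmt=*/31))
      std::printf("bail out: scaling would underflow MaskLZ\n");
    else
      std::printf("scaled MaskLZ = %u\n", MaskLZ);
    return 0;
  }
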
>>> Added: llvm/trunk/test/CodeGen/X86/pr33844.ll
>>> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/pr33844.ll?rev=308503&view=auto
>>> ==============================================================================
>>> --- llvm/trunk/test/CodeGen/X86/pr33844.ll (added)
>>> +++ llvm/trunk/test/CodeGen/X86/pr33844.ll Wed Jul 19 11:09:46 2017
>>> @@ -0,0 +1,38 @@
>>> +; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py
>>> +; RUN: llc -o - %s | FileCheck %s
>>> +
>>> +target datalayout = "e-m:e-i64:64-f80:128-n8:16:32:64-S128"
>>> +target triple = "x86_64-unknown-linux-gnu"
>>> +
>>> +@global = external global i32
>>> +@global.1 = external global i64
>>> +
>>> +define void @patatino() {
>>> +; CHECK-LABEL: patatino:
>>> +; CHECK: # BB#0: # %bb
>>> +; CHECK-NEXT: movl {{.*}}(%rip), %eax
>>> +; CHECK-NEXT: movl %eax, %ecx
>>> +; CHECK-NEXT: shrl $31, %ecx
>>> +; CHECK-NEXT: addl $2147483647, %ecx # imm = 0x7FFFFFFF
>>> +; CHECK-NEXT: shrl $31, %ecx
>>> +; CHECK-NEXT: andl $62, %ecx
>>> +; CHECK-NEXT: andl $-536870912, %eax # imm = 0xE0000000
>>> +; CHECK-NEXT: orl %ecx, %eax
>>> +; CHECK-NEXT: movl %eax, {{.*}}(%rip)
>>> +; CHECK-NEXT: retq
>>> +bb:
>>> + %tmp = load i32, i32* @global
>>> + %tmp1 = lshr i32 %tmp, 31
>>> + %tmp2 = add nuw nsw i32 %tmp1, 2147483647
>>> + %tmp3 = load i64, i64* @global.1
>>> + %tmp4 = shl i64 %tmp3, 23
>>> + %tmp5 = add nsw i64 %tmp4, 8388639
>>> + %tmp6 = trunc i64 %tmp5 to i32
>>> + %tmp7 = lshr i32 %tmp2, %tmp6
>>> + %tmp8 = load i32, i32* @global
>>> + %tmp9 = and i32 %tmp7, 62
>>> + %tmp10 = and i32 %tmp8, -536870912
>>> + %tmp11 = or i32 %tmp9, %tmp10
>>> + store i32 %tmp11, i32* @global
>>> + ret void
>>> +}
>>>
>>>
>>> _______________________________________________
>>> llvm-commits mailing list
>>> llvm-commits at lists.llvm.org
>>> http://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-commits
>>
>> Hans, we might consider backporting it to 5.0 once this has baked in
>> the tree for a bit.
>> I marked https://bugs.llvm.org/show_bug.cgi?id=33844 as "blocker" just
>> to make sure it's on your radar.
>
> Noted, thanks.