[llvm] r267232 - Revert r267210, it makes clang assert (PR27490).

Hans Wennborg via llvm-commits llvm-commits at lists.llvm.org
Fri Apr 22 15:58:55 PDT 2016


I've attached an IR repro to the bug.
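
In case it helps anyone triage without opening the bug: the failing pattern
is presumably in the family r267210 started combining (see the removed
test18 below). Here is a minimal hypothetical sketch in that style, not the
actual repro (that one is attached to PR27490):

  declare void @clobber()

  define i32 @sketch(float* %p) {
    ; r267210 rewrote unordered atomic float load/store pairs like this as
    ; the equivalent i32 operations; with the revert, such IR is left
    ; untouched again.
    %x = load atomic float, float* %p unordered, align 4
    call void @clobber() ;; keep the load around
    store atomic float %x, float* %p unordered, align 4
    ret i32 0
  }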

Thanks,
Hans

On Fri, Apr 22, 2016 at 3:37 PM, Philip Reames via llvm-commits
<llvm-commits at lists.llvm.org> wrote:
> Also, I'm assuming this is https://llvm.org/bugs/show_bug.cgi?id=27490?  If
> so, could you please attach an IR reproducer?  I appreciate the C
> reproducer, but I do not have a recent build of clang.  I work only on LLVM.
>
> Philip
>
>
> On 04/22/2016 03:35 PM, Philip Reames via llvm-commits wrote:
>>
>> Nico,
>>
>> I have no objection to this being reverted, but in the future, please make
>> sure to reply to the original commit thread so that the author knows
>> something is going on.  There was a discussion on IRC, but I got dragged
>> into that quite late, and email is our system of record.
>>
>> Philip
>>
>> On 04/22/2016 03:08 PM, Nico Weber via llvm-commits wrote:
>>>
>>> Author: nico
>>> Date: Fri Apr 22 17:08:42 2016
>>> New Revision: 267232
>>>
>>> URL: http://llvm.org/viewvc/llvm-project?rev=267232&view=rev
>>> Log:
>>> Revert r267210, it makes clang assert (PR27490).
>>>
>>> Modified:
>>>     llvm/trunk/lib/Transforms/InstCombine/InstCombineLoadStoreAlloca.cpp
>>>     llvm/trunk/test/Transforms/InstCombine/atomic.ll
>>>
>>> Modified: llvm/trunk/lib/Transforms/InstCombine/InstCombineLoadStoreAlloca.cpp
>>> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Transforms/InstCombine/InstCombineLoadStoreAlloca.cpp?rev=267232&r1=267231&r2=267232&view=diff
>>>
>>> ==============================================================================
>>> --- llvm/trunk/lib/Transforms/InstCombine/InstCombineLoadStoreAlloca.cpp (original)
>>> +++ llvm/trunk/lib/Transforms/InstCombine/InstCombineLoadStoreAlloca.cpp Fri Apr 22 17:08:42 2016
>>> @@ -327,8 +327,6 @@ static LoadInst *combineLoadToNewType(In
>>>     LoadInst *NewLoad = IC.Builder->CreateAlignedLoad(
>>>         IC.Builder->CreateBitCast(Ptr, NewTy->getPointerTo(AS)),
>>>         LI.getAlignment(), LI.getName() + Suffix);
>>> -  NewLoad->setAtomic(LI.getOrdering(), LI.getSynchScope());
>>> -  assert(!LI.isVolatile() && "volatile unhandled here");
>>>     MDBuilder MDB(NewLoad->getContext());
>>>     for (const auto &MDPair : MD) {
>>>       unsigned ID = MDPair.first;
>>> @@ -401,8 +399,6 @@ static StoreInst *combineStoreToNewValue
>>>     StoreInst *NewStore = IC.Builder->CreateAlignedStore(
>>>         V, IC.Builder->CreateBitCast(Ptr, V->getType()->getPointerTo(AS)),
>>>         SI.getAlignment());
>>> -  NewStore->setAtomic(SI.getOrdering(), SI.getSynchScope());
>>> -  assert(!SI.isVolatile() && "volatile unhandled here");
>>>     for (const auto &MDPair : MD) {
>>>       unsigned ID = MDPair.first;
>>>       MDNode *N = MDPair.second;
>>> @@ -460,9 +456,9 @@ static StoreInst *combineStoreToNewValue
>>>   /// later. However, it is risky in case some backend or other part of LLVM is
>>>   /// relying on the exact type loaded to select appropriate atomic operations.
>>>   static Instruction *combineLoadToOperationType(InstCombiner &IC, LoadInst &LI) {
>>> -  // FIXME: We could probably with some care handle both volatile and ordered
>>> -  // atomic loads here but it isn't clear that this is important.
>>> -  if (!LI.isUnordered())
>>> +  // FIXME: We could probably with some care handle both volatile and atomic
>>> +  // loads here but it isn't clear that this is important.
>>> +  if (!LI.isSimple())
>>>       return nullptr;
>>>
>>>     if (LI.use_empty())
>>> @@ -896,7 +892,6 @@ Instruction *InstCombiner::visitLoadInst
>>>           V1->setAtomic(LI.getOrdering(), LI.getSynchScope());
>>>           V2->setAlignment(Align);
>>>           V2->setAtomic(LI.getOrdering(), LI.getSynchScope());
>>> -        assert(!LI.isVolatile() && "volatile unhandled here");
>>>           return SelectInst::Create(SI->getCondition(), V1, V2);
>>>         }
>>>
>>> @@ -939,9 +934,9 @@ Instruction *InstCombiner::visitLoadInst
>>>   /// the store instruction as otherwise there is no way to signal whether it was
>>>   /// combined or not: IC.EraseInstFromFunction returns a null pointer.
>>>   static bool combineStoreToValueType(InstCombiner &IC, StoreInst &SI) {
>>> -  // FIXME: We could probably with some care handle both volatile and ordered
>>> -  // atomic stores here but it isn't clear that this is important.
>>> -  if (!SI.isUnordered())
>>> +  // FIXME: We could probably with some care handle both volatile and atomic
>>> +  // stores here but it isn't clear that this is important.
>>> +  if (!SI.isSimple())
>>>       return false;
>>>
>>>     Value *V = SI.getValueOperand();
>>>
>>> Modified: llvm/trunk/test/Transforms/InstCombine/atomic.ll
>>> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Transforms/InstCombine/atomic.ll?rev=267232&r1=267231&r2=267232&view=diff
>>>
>>> ==============================================================================
>>> --- llvm/trunk/test/Transforms/InstCombine/atomic.ll (original)
>>> +++ llvm/trunk/test/Transforms/InstCombine/atomic.ll Fri Apr 22 17:08:42 2016
>>> @@ -173,45 +173,6 @@ define i32 @test17(i1 %cnd) {
>>>     ret i32 %x
>>>   }
>>>
>>> -declare void @clobber()
>>> -
>>> -define i32 @test18(float* %p) {
>>> -; CHECK-LABEL: define i32 @test18(
>>> -; CHECK: load atomic i32, i32* [[A:%.*]] unordered, align 4
>>> -; CHECK: store atomic i32 [[B:%.*]], i32* [[C:%.*]] unordered, align 4
>>> -  %x = load atomic float, float* %p unordered, align 4
>>> -  call void @clobber() ;; keep the load around
>>> -  store atomic float %x, float* %p unordered, align 4
>>> -  ret i32 0
>>> -}
>>> -
>>> -; TODO: probably also legal in this case
>>> -define i32 @test19(float* %p) {
>>> -; CHECK-LABEL: define i32 @test19(
>>> -; CHECK: load atomic float, float* %p seq_cst, align 4
>>> -; CHECK: store atomic float %x, float* %p seq_cst, align 4
>>> -  %x = load atomic float, float* %p seq_cst, align 4
>>> -  call void @clobber() ;; keep the load around
>>> -  store atomic float %x, float* %p seq_cst, align 4
>>> -  ret i32 0
>>> -}
>>> -
>>> -define i32 @test20(i32** %p, i8* %v) {
>>> -; CHECK-LABEL: define i32 @test20(
>>> -; CHECK: store atomic i8* %v, i8** [[D:%.*]] unordered, align 4
>>> -  %cast = bitcast i8* %v to i32*
>>> -  store atomic i32* %cast, i32** %p unordered, align 4
>>> -  ret i32 0
>>> -}
>>> -; TODO: probably also legal in this case
>>> -define i32 @test21(i32** %p, i8* %v) {
>>> -; CHECK-LABEL: define i32 @test21(
>>> -; CHECK: store atomic i32* %cast, i32** %p monotonic, align 4
>>> -  %cast = bitcast i8* %v to i32*
>>> -  store atomic i32* %cast, i32** %p monotonic, align 4
>>> -  ret i32 0
>>> -}
>>> -
>>>   define i32 @test22(i1 %cnd) {
>>>   ; CHECK-LABEL: define i32 @test22(
>>>   ; CHECK: [[PHI:%.*]] = phi i32
>>>
>>>
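
PS, for anyone skimming the diff: the functional part of the revert is the
guard change from isUnordered() back to isSimple() at the two entry points,
plus dropping the setAtomic() calls and the asserts. isSimple() requires a
load or store to be neither atomic nor volatile, while isUnordered() also
admits unordered atomics. A hypothetical illustration (not from the repro):

  define void @orderings(i32* %p) {
    ; Simple: not atomic, not volatile. isSimple() and isUnordered() both
    ; hold; eligible for the type combines before and after the revert.
    %a = load i32, i32* %p, align 4
    ; Unordered atomic: isUnordered() holds but isSimple() does not.
    ; r267210 let the combines fire here; the revert excludes it again.
    %b = load atomic i32, i32* %p unordered, align 4
    ; Volatile: neither predicate holds, so it is never combined.
    %c = load volatile i32, i32* %p, align 4
    ret void
  }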