[LLVMdev] why are volatile memory accesses ordered?

Sanjoy Das sanjoy at playingwithpointers.com
Thu Apr 2 18:20:49 PDT 2015


Daniel, you're right.  This is embarrassing -- I started the whole
discussion based on a typo.

Sorry for the noise!
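
For the record, the example below was presumably meant to have both
plain loads reading from %a.  With that one-operand fix the two
non-volatile loads are identical, and, as Daniel notes below, GVN
should then eliminate the second one even across the intervening
volatile load.  An untested sketch of the (presumably) intended IR:

declare void @escape(i32*)

define i32 @f(i1* %c) {
 entry:
  %a = alloca i32
  %b = alloca i32
  call void @escape(i32* %a)
  call void @escape(i32* %b)

  %a0 = load i32, i32* %a, align 4   ; was "i32* %b" in the original post
  %lv = load volatile i32, i32* %b
  %a1 = load i32, i32* %a, align 4

  %result = add i32 %a0, %a1         ; with identical loads, -O3 should
  ret i32 %result                    ; fold this to "shl i32 %a0, 1"
}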

-- Sanjoy

On Thu, Apr 2, 2015 at 6:18 PM, Daniel Berlin <dberlin at dberlin.org> wrote:
> Dunno whether it's worthwhile, but we definitely test for this case,
> and if his two non-volatile loads were identical, we'd definitely
> eliminate the redundant one in GVN.
>
>
>
> On Thu, Apr 2, 2015 at 6:09 PM, Chandler Carruth <chandlerc at google.com> wrote:
>> Could you explain why you think it is worthwhile to optimize code involving
>> a volatile access?
>>
>> On Thu, Apr 2, 2015 at 6:08 PM Sanjoy Das <sanjoy at playingwithpointers.com>
>> wrote:
>>>
>>> Marking volatile accesses as !unordered seems excessively
>>> conservative.  For instance, LLVM is not able to optimize
>>>
>>> declare void @escape(i32*)
>>> declare i32 @read_only(i32) readonly
>>>
>>> define i32 @f(i1* %c) {
>>>  entry:
>>>   %a = alloca i32
>>>   %b = alloca i32
>>>   call void @escape(i32* %a)
>>>   call void @escape(i32* %b)
>>>
>>>   %a0 = load i32, i32* %b, align 4
>>>   %lv = load volatile i32, i32* %b
>>>   %a1 = load i32, i32* %a, align 4
>>>
>>>   %result = add i32 %a0, %a1
>>>   ret i32 %result
>>> }
>>>
>>> to "%result = %a0 << 1" via -O3.
>>>
>>> NB: changing the volatile load to a volatile store triggers the
>>> optimization (via -instcombine) -- llvm::FindAvailableLoadedValue just
>>> skips over noalias ordered stores.
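>>>
>>> Roughly, an untested sketch of the store variant that does get
>>> cleaned up (with both plain loads reading from the same pointer):
>>>
>>>   %a0 = load i32, i32* %a, align 4
>>>   store volatile i32 0, i32* %b     ; noalias with %a, so it is skipped
>>>   %a1 = load i32, i32* %a, align 4  ; forwarded from %a0 by -instcombine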
>>>
>>> -- Sanjoy