[PATCH] [X86] Use the generic AtomicExpandPass instead of X86AtomicExpandPass
morisset at google.com
Wed Sep 3 10:50:57 PDT 2014
There were already a lot of tests for atomics on X86 (test/CodeGen/X86/atomic*), so I did not modify them in this patch; I just checked that they still pass.
I agree that preserving the synchronization scope might be good, but that would be a different patch (the code in X86AtomicExpandPass does not look like it was preserving the synchronization scope either).
Comment at: lib/CodeGen/AtomicExpandPass.cpp:134
@@ +133,3 @@
+ // atomic swap, that can be implemented for example as a ldrex/strex on ARM
+ // or cmpxchg8/16b on X86, as these are atomic for larger sizes.
+ // It is the responsibility of the target to only return true in
> IIRC they both need a `lock` prefix to be atomic.
Indeed, I will update the comment.
Comment at: lib/CodeGen/AtomicExpandPass.cpp:298
@@ +297,3 @@
+ LoadInst *InitLoaded = Builder.CreateLoad(Addr);
> Could you also explain the rationale behind the alignment (C11/C++11 memory model requires at least natural alignment, and the IR has the same guarantee).
All of this code was just moved from X86AtomicExpandPass; I am not entirely sure why it does things in this specific way.
But is it not providing exactly the natural alignment that you mention?
Comment at: lib/Target/X86/X86ISelLowering.cpp:16547
@@ +16546,3 @@
+ if (OpWidth == 64)
+ return !Subtarget.is64Bit(); // FIXME this should be Subtarget.hasCmpxchg8b
+ else if (OpWidth == 128)
> Fix the FIXME?
This code was moved from X86AtomicExpandPass. Because fixing this FIXME would not be entirely trivial (need to add hasCmpxchg8b to the subtarget, find which ones offer it, and so on), and is orthogonal to this change, I chose not to tackle it in this patch.