[PATCH] D18200: Add __atomic_* lowering to AtomicExpandPass.

Philip Reames via llvm-commits llvm-commits at lists.llvm.org
Thu Mar 17 18:20:41 PDT 2016

reames requested changes to this revision.
reames added a reviewer: reames.
reames added a comment.
This revision now requires changes to proceed.

More comments.  This is still a long way from submission on code quality and test coverage alone.

Comment at: lib/CodeGen/AtomicExpandPass.cpp:68
@@ +67,3 @@
+    bool expandAtomicOpToLibcall(Instruction *I, unsigned Size, unsigned Align,
+                                 Value *PointerOperand, Value *ValueOperand,
This function seems overly and falsely generic.  If I'm reading this correctly, the CASExpected and Ordering2 arguments are only used for CAS operations.  I suggest just specializing this based on whether this is a CAS or not.
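The suggested specialization can be sketched roughly as follows. This is illustrative only: the wrapper names and the shared worker are hypothetical stand-ins for `expandAtomicOpToLibcall`, not code from the patch; the libcall names follow the `__atomic_*` scheme discussed later in the review.

```cpp
#include <cassert>
#include <string>

enum class Ordering { Monotonic, Acquire, Release, SeqCst };

// Shared worker kept private; with two honest entry points, a non-CAS
// caller can no longer pass a meaningless expected-value pointer or a
// second (failure) ordering.
static std::string buildCallName(bool IsCAS, unsigned Size) {
  return (IsCAS ? "__atomic_compare_exchange_" : "__atomic_exchange_") +
         std::to_string(Size);
}

// Non-CAS operations: one ordering, no expected value.
std::string expandNonCASToLibcall(unsigned Size, Ordering /*O*/) {
  return buildCallName(/*IsCAS=*/false, Size);
}

// CAS: success and failure orderings, plus (elided here) the expected value.
std::string expandCASToLibcall(unsigned Size, Ordering /*Success*/,
                               Ordering /*Failure*/) {
  return buildCallName(/*IsCAS=*/true, Size);
}
```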

Comment at: lib/CodeGen/AtomicExpandPass.cpp:172
@@ -95,1 +171,3 @@
   for (auto I : AtomicInsts) {
+    if (isa<FenceInst>(I))
+      continue;
a) move this filter into the previous loop
b) separate and land this refactoring without further review
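The shape of suggestion (a), filtering at collection time rather than re-checking in the processing loop, modeled here with ints standing in for instructions (even values play the role of `FenceInst`); purely illustrative, not the pass's real types.

```cpp
#include <vector>

// Skip "fences" while collecting, so the processing loop below never
// needs an isa<FenceInst> re-check.
std::vector<int> collectAtomicInsts(const std::vector<int> &All) {
  std::vector<int> AtomicInsts;
  for (int I : All)
    if (I % 2 != 0) // filter applied once, at collection time
      AtomicInsts.push_back(I);
  return AtomicInsts;
}
```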

Comment at: lib/CodeGen/AtomicExpandPass.cpp:181
@@ +180,3 @@
+    // If the Size/Alignment is not supported, replace with a libcall.
+    if (LI) {
The current structure of this code will break floating point and pointer atomic canonicalization.  The previous code fell through, the new code does not.  I suspect you need to swap the order of the two steps.  
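The fall-through structure being asked for can be sketched abstractly (all names and types here are hypothetical stand-ins, not the pass's code): canonicalize FP/pointer atomics to integer form first, then decide on a libcall, so a float atomic of supported size is still rewritten even when no libcall is needed.

```cpp
#include <string>

struct Atomic {
  std::string Ty;
  unsigned Size;
};

// Bitcast-style canonicalization: FP and pointer atomics become integer.
void canonicalizeToInteger(Atomic &A) {
  if (A.Ty == "float" || A.Ty == "ptr")
    A.Ty = "int";
}

bool needsLibcall(const Atomic &A, unsigned MaxSupported) {
  return A.Size > MaxSupported;
}

// Canonicalize first and fall through; only then check for a libcall.
std::string process(Atomic A) {
  canonicalizeToInteger(A);
  return needsLibcall(A, 8) ? "libcall" : "native:" + A.Ty;
}
```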

Comment at: lib/CodeGen/AtomicExpandPass.cpp:970
@@ +969,3 @@
+  }
+  llvm_unreachable("Unknown atomic memory order.");
This should be the default in the switch.
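The requested shape, sketched with a plain `assert` standing in for `llvm_unreachable` so the snippet is self-contained; the enum and function here are illustrative, not the patch's code.

```cpp
#include <cassert>
#include <string>

enum class MemOrder { Monotonic, Acquire, Release, SeqCst };

// The unreachable lives in the switch's default case, so a corrupt enum
// value is caught exactly where the dispatch happens.
std::string orderName(MemOrder O) {
  switch (O) {
  case MemOrder::Monotonic: return "monotonic";
  case MemOrder::Acquire:   return "acquire";
  case MemOrder::Release:   return "release";
  case MemOrder::SeqCst:    return "seq_cst";
  default:
    assert(false && "Unknown atomic memory order.");
    return "";
  }
}
```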

Comment at: lib/CodeGen/AtomicExpandPass.cpp:980
@@ +979,3 @@
+                                  const DataLayout &DL) {
+  // TODO: "LargestSize" is an approximation for "largest type that
+  // you can express in C". It seems to be the case that int128 is
Wait, what?  Why do we need this?  We can simply test whether we have a particular sized libcall.  The enable logic should be strictly contained there.

This strongly hints to me we should split the libcall part out into its own change.  This would also make it more natural to write tests for the lowering.  (Such tests are missing from the current patch and are required.)
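The direct check being proposed might look like this; the helper name is hypothetical, but the supported byte sizes match the sized `__atomic_*_N` libcall family the patch targets.

```cpp
// Ask directly whether a sized __atomic_* libcall exists for this byte
// size, instead of approximating "largest type expressible in C".
bool hasSizedLibcall(unsigned SizeInBytes) {
  switch (SizeInBytes) {
  case 1: case 2: case 4: case 8: case 16:
    return true;  // __atomic_*_N exists for N = 1, 2, 4, 8, 16
  default:
    return false; // fall back to the generic, length-taking form
  }
}
```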

Comment at: lib/CodeGen/AtomicExpandPass.cpp:1000
@@ +999,3 @@
+  if (!expandAtomicOpToLibcall(I, Size, Align, I->getPointerOperand(), nullptr,
+                               nullptr, I->getOrdering(),
Use an assert.
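The assert idiom being requested, sketched with a dummy expansion routine standing in for `expandAtomicOpToLibcall` (the real call takes the operands shown in the hunk above):

```cpp
#include <cassert>

// Hypothetical stand-in for the expansion routine.
bool expandToLibcall(unsigned Size) { return Size <= 16; }

// When failure would violate a pass invariant rather than be a
// recoverable condition, assert instead of branching.  The (void) cast
// keeps NDEBUG (release) builds warning-free.
void expandLoad(unsigned Size) {
  bool Ok = expandToLibcall(Size);
  (void)Ok;
  assert(Ok && "expandAtomicOpToLibcall shouldn't fail for loads");
}
```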

Comment at: lib/CodeGen/AtomicExpandPass.cpp:1013
@@ +1012,3 @@
+  if (!expandAtomicOpToLibcall(I, Size, Align, I->getPointerOperand(),
+                               I->getValueOperand(), nullptr, I->getOrdering(),

Comment at: lib/CodeGen/AtomicExpandPass.cpp:1110
@@ +1109,3 @@
+  // CAS libcall, via a CAS loop, instead.
+  if (!Success) {
+    expandAtomicRMWToCmpXchg(I, [this](IRBuilder<> &Builder, Value *Addr,
This would be much more naturally expressed at the callsite by having this function simply return false when the expansion doesn't work, and then applying the fallback along with the second call.  
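The callsite-driven control flow being suggested, sketched with stand-in bodies (the function names mirror the patch's, but the size test is illustrative only):

```cpp
#include <string>

// Reports failure instead of applying the fallback itself.
bool tryExpandAtomicRMWToLibcall(unsigned Size) {
  return Size == 1 || Size == 2 || Size == 4 || Size == 8 || Size == 16;
}

// The caller chooses the CAS-loop fallback (expandAtomicRMWToCmpXchg in
// the patch) when the libcall expansion declines.
std::string expandAtomicRMW(unsigned Size) {
  if (tryExpandAtomicRMWToLibcall(Size))
    return "libcall";
  return "cmpxchg-loop";
}
```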

Comment at: lib/CodeGen/AtomicExpandPass.cpp:1137
@@ +1136,3 @@
+    Value *ValueOperand, Value *CASExpected, AtomicOrdering Ordering,
+    AtomicOrdering Ordering2, const RTLIB::Libcall *Libcalls) {
+  LLVMContext &Ctx = I->getContext();
Please use an ArrayRef for the last argument.
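The point of `ArrayRef` here is that it bundles pointer and length, so the table size travels with the table. Sketched below with a minimal stand-in so the snippet is self-contained (real code would just use `llvm::ArrayRef<RTLIB::Libcall>`):

```cpp
#include <cassert>
#include <cstddef>

// Minimal stand-in for llvm::ArrayRef: pointer + length, with an
// implicit constructor from a C array that captures the element count.
template <typename T> struct MiniArrayRef {
  const T *Data = nullptr;
  std::size_t Length = 0;
  template <std::size_t N>
  MiniArrayRef(const T (&Arr)[N]) : Data(Arr), Length(N) {}
  const T &operator[](std::size_t I) const {
    assert(I < Length && "index out of bounds");
    return Data[I];
  }
  std::size_t size() const { return Length; }
};

enum Libcall { LC_Load, LC_Store, LC_CAS };

// Unlike a bare const Libcall*, the callee can now bounds-check.
Libcall pickLibcall(MiniArrayRef<Libcall> Libcalls, std::size_t Index) {
  return Libcalls[Index];
}
```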

Comment at: lib/CodeGen/AtomicExpandPass.cpp:1178
@@ +1177,3 @@
+  // Build up the function call. There's two kinds. First, the sized
+  // variants.  These calls are going to be one of the following (with
+  // N=1,2,4,8,16):
What I was suggesting was that you first found the generic libcall, and then given the size information, selected the sized version if one was available.  I find the mixing of sized and unsized functions in your current array structure very confusing and hard to follow.

I hadn't realized there were no generic versions for some of them.  That seems unfortunately inconsistent.

Also, I strongly disagree with your position that we should optimize manually written generic calls into specific ones.  If it truly is an optimization, we should perform it.  If nothing else, it removes a lot of complexity from the potential frontends.  
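The two-step selection described above (pick the generic call, then upgrade to the sized variant when one exists) might look like this. The helper is illustrative; the names follow the `__atomic_*` / `__atomic_*_N` libcall scheme, and as noted above, some operations have no generic form, which this sketch does not model.

```cpp
#include <string>

// First the generic __atomic_<op> form (which takes an explicit length
// argument), upgraded to the sized __atomic_<op>_N variant when N is
// one of the supported sizes.
std::string selectAtomicLibcall(const std::string &Op, unsigned Size) {
  switch (Size) {
  case 1: case 2: case 4: case 8: case 16:
    return "__atomic_" + Op + "_" + std::to_string(Size);
  default:
    return "__atomic_" + Op;
  }
}
```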

Comment at: lib/CodeGen/AtomicExpandPass.cpp:1243
@@ +1242,3 @@
+    } else {
+      AllocaValue = AllocaBuilder.CreateAlloca(ValueOperand->getType());
+      AllocaValue->setAlignment(AllocaAlignment);
The difference between generic (allocas) and specific call signatures is large enough that trying to share the code is actively confusing.  Please just split this method based on the UseSizedLibcall variable or extract helper functions.
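The split along `UseSizedLibcall` could be sketched as two extracted helpers; everything here is a hypothetical stand-in (the strings merely describe the two argument-passing conventions, they are not IR):

```cpp
#include <string>

// Sized calls pass the value operand directly, by value.
std::string emitSizedCallArgs(unsigned Size) {
  return "ptr, val:i" + std::to_string(Size * 8);
}

// Generic calls pass operands indirectly through stack slots (allocas)
// plus an explicit length argument.
std::string emitGenericCallArgs(unsigned Size) {
  return "size:" + std::to_string(Size) + ", ptr, alloca(val)";
}

// The top-level builder just dispatches; neither path is entangled
// with the other's alloca handling.
std::string buildAtomicCall(bool UseSizedLibcall, unsigned Size) {
  return UseSizedLibcall ? emitSizedCallArgs(Size)
                         : emitGenericCallArgs(Size);
}
```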

