[PATCH] D18200: Add __atomic_* lowering to AtomicExpandPass.

JF Bastien via llvm-commits llvm-commits at lists.llvm.org
Thu Mar 24 14:48:09 PDT 2016


jfb added inline comments.

================
Comment at: docs/Atomics.rst:424
@@ +423,3 @@
+
+When the target implements atomic cmpxchg or LL/SC instructions (as most do)
+this is trivial: all the other operations can be implemented on top of those
----------------
Code-quote ``cmpxchg`` here and below.

================
Comment at: docs/Atomics.rst:553
@@ +552,3 @@
+on function-call boundaries. For example, MIPS supports the MIPS16 ISA, which is
+has a smaller instruction encoding than the usual MIPS32 ISA. ARM, similarly,
+has the Thumb ISA. In MIPS16 and earlier versions of Thumb, the atomic
----------------
"which is has"

================
Comment at: docs/Atomics.rst:559
@@ +558,3 @@
+Additionally, a few OS/target pairs provide kernel-supported lock-free
+atomics. ARM/Linux is an example of this: the kernel provides a function which
+on older CPUs contains a "magically-restartable" atomic sequence (which looks
----------------
Link to the kernel user helpers doc: https://www.kernel.org/doc/Documentation/arm/kernel_user_helpers.txt

================
Comment at: include/llvm/Target/TargetLowering.h:1053
@@ +1052,3 @@
+  /// AtomicExpandPass into an __atomic_* library call.
+  unsigned getMaxAtomicSizeSupported() const { return MaxAtomicSizeSupported; }
+
----------------
Size is usually in bytes for LLVM, and `SizeInBits` is used in the name when bits are expected. I think you should rename the getter / setter and data member accordingly.

================
Comment at: lib/CodeGen/AtomicExpandPass.cpp:131
@@ +130,3 @@
+unsigned getAtomicOpAlign(AtomicRMWInst *RMWI) {
+  // TODO: This instruction has no alignment attribute, but unlike the
+  // default alignment for load/store, the default here is to assume
----------------
Reference the bug number.

================
Comment at: lib/CodeGen/AtomicExpandPass.cpp:970
@@ +969,3 @@
+  }
+  llvm_unreachable("Unknown atomic memory order.");
+}
----------------
jyknight wrote:
> reames wrote:
> > This should be the default in the switch.
> No it shouldn't; written this way, if someone adds new cases it will emit a compiler warning. That's not terribly important here, but it's a good idiom.
Agree with @jyknight on this one. You still need the `llvm_unreachable` for compilers that aren't smart enough to see that all paths return.
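
For anyone following along, the idiom looks roughly like this (the enum and return values here are illustrative, loosely mirroring the `__ATOMIC_*` constants, not the patch's actual code):

```cpp
#include <cassert>
#include <cstdlib>

// Illustrative ordering enum; the real code switches over
// AtomicOrdering.
enum class AtomicOrdering { Monotonic, Acquire, Release, SeqCst };

// No `default:` case: with -Wswitch (on by default with -Wall in
// Clang/GCC), adding a new enumerator produces a warning for every
// switch that doesn't handle it.
static int toLibcallOrder(AtomicOrdering O) {
  switch (O) {
  case AtomicOrdering::Monotonic: return 0;
  case AtomicOrdering::Acquire:   return 2;
  case AtomicOrdering::Release:   return 3;
  case AtomicOrdering::SeqCst:    return 5;
  }
  // Still needed after the switch: some compilers can't prove that
  // all paths above return. Stand-in for llvm_unreachable.
  std::abort();
}
```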

================
Comment at: lib/CodeGen/AtomicExpandPass.cpp:1094
@@ +1093,3 @@
+    break;
+  default:
+    llvm_unreachable("Unexpected RMW operation");
----------------
Change `default` to `FIRST_BINOP`, `LAST_BINOP`, `BAD_BINOP` and put the `llvm_unreachable` after the switch.
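
Sketched with an illustrative subset of the `BinOp` enum (the enumerators and libcall names below are assumptions for the sketch, not the patch's actual mapping):

```cpp
#include <cassert>
#include <cstdlib>
#include <string>

// Illustrative subset: FIRST_BINOP/LAST_BINOP are aliases of real
// operations, BAD_BINOP is a distinct sentinel.
enum BinOp { Xchg, Add, Sub, And,
             FIRST_BINOP = Xchg, LAST_BINOP = And, BAD_BINOP };

static std::string rmwLibcallName(BinOp Op) {
  switch (Op) {
  case Xchg: return "__atomic_exchange";
  case Add:  return "__atomic_fetch_add";
  case Sub:  return "__atomic_fetch_sub";
  case And:  return "__atomic_fetch_and";
  case BAD_BINOP: break;
  }
  // The unreachable lives after the switch, not in a `default:`
  // case, so -Wswitch still warns when a new operation is added.
  std::abort(); // stand-in for llvm_unreachable
}
```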

================
Comment at: lib/CodeGen/AtomicExpandPass.cpp:1144
@@ +1143,3 @@
+
+  unsigned AllocaAlignment = std::min(Size, 16u);
+  bool UseSizedLibcall = canUseSizedAtomicCall(Size, Align, DL);
----------------
Why 16? Could use `DL->StackNaturalAlign`?
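
For reference, a minimal sketch of the kind of predicate a `canUseSizedAtomicCall`-style check performs (the exact conditions in the patch may differ; this is an assumption for illustration):

```cpp
#include <cassert>

// Sketch: the sized __atomic_*_N libcalls exist only for N in
// {1, 2, 4, 8, 16}, and per the libatomic contract the object must
// be at least naturally aligned for that size.
static bool canUseSizedAtomicCallSketch(unsigned Size, unsigned Align) {
  bool SizeOK =
      Size == 1 || Size == 2 || Size == 4 || Size == 8 || Size == 16;
  return SizeOK && Align >= Size;
}
```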


http://reviews.llvm.org/D18200
