[PATCH] D30416: [BitfieldShrinking] Shrink Bitfields load/store when the bitfields are legal to access independently

Chandler Carruth via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Sat Mar 25 07:25:09 PDT 2017


chandlerc added a comment.

Thanks a lot for working on this; first round of comments!



================
Comment at: include/llvm/CodeGen/Passes.h:67-68
 
+  /// This pass shrinks some bitfields load/store to legal type accesses.
+  FunctionPass *createBitfieldShrinkingPass(const TargetMachine *TM = nullptr);
+
----------------
Since this is a CodeGen pass, the code should live in lib/CodeGen rather than in lib/Transforms.


================
Comment at: lib/CodeGen/TargetPassConfig.cpp:480-482
+  if (!DisableBitfieldShrinking)
+    addPass(createBitfieldShrinkingPass(TM));
+
----------------
This should probably be predicated on `getOptLevel()` much like below?


================
Comment at: lib/Transforms/Scalar/BitfieldShrinking.cpp:1
+//===- BitfieldShrinking.cpp - Shrink the type of bitfield accesses -------===//
+//
----------------
I understand that the motivation here is bitfield accesses, but that isn't how we should describe the pass IMO.

This is a generic pass to narrow memory accesses, and I think you should name it and document it accordingly.

Naturally, you can still mention bitfields as one of the motivations and to help explain the specific patterns that are handled here. But if we have some other memory access shrinking we want to do, I would imagine that we would want to add it to this pass.

This will probably need to be propagated through many of the comments here.


================
Comment at: lib/Transforms/Scalar/BitfieldShrinking.cpp:10-19
+// Consecutive bitfields are now wrapped as a group and represented as a
+// large integer. To access a specific bitfield, wide load/store plus a
+// series of bit operations are needed. However, if the bitfield is of legal
+// type, it is more efficient to access it directly. The pass analyzes the
+// load/store and bit operations, tries to find out such widened bitfield
+// accesses and shrink them. The pass is expected to run in the late
+// pipeline so that possible wide load/store coalescing opportunities for
----------------
See above about re-focusing the documentation here on the generic memory access narrowing, and making the details about bitfields part of the motivation.

I would also make sure to include here a high level overview of the approach / algorithm used. Things like the fact that this uses MemorySSA and is specifically designed to handle shrinking across control flow seem important.

I'd also suggest making this a \file doxygen comment.
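
For example, a `\file` doxygen header in the usual LLVM shape might look like the sketch below (the file name and the wording are illustrative, following the renaming suggested above):

```
//===- MemAccessShrinking.cpp - Narrow the width of memory accesses -------===//
//
// ... (standard LLVM license header) ...
//
//===----------------------------------------------------------------------===//
///
/// \file
/// This pass narrows wide memory accesses to smaller legal-type accesses
/// when only part of the loaded or stored value is actually used. Bitfield
/// accesses are the primary motivation. It uses MemorySSA and is designed
/// to shrink accesses across control flow.
///
//===----------------------------------------------------------------------===//
```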


================
Comment at: lib/Transforms/Scalar/BitfieldShrinking.cpp:54
+namespace {
+typedef struct {
+  unsigned Shift;
----------------
Use C++ struct naming rather than a C-style typedef of an anonymous struct.


================
Comment at: lib/Transforms/Scalar/BitfieldShrinking.cpp:232-235
+  while (NewWidth < TBits && (!isStoreLegalType() || !coverOldRange())) {
+    NewWidth = NextPowerOf2(NewWidth);
+    NewTy = Type::getIntNTy(*Context, NewWidth);
+  }
----------------
While I generally like the use of lambdas to help factor this code, I find it really confusing that the parameters which change with each loop iteration are captured by reference and so change implicitly.

I would prefer to pass parameters that are fundamentally the input to the lambda as actual parameters, and use capture more for common context that isn't really specific to a particular call to the lambda. Does that make sense?


================
Comment at: lib/Transforms/Scalar/BitfieldShrinking.cpp:266-269
+  if (isa<BitCastInst>(V1))
+    NV1 = cast<BitCastInst>(V1)->getOperand(0);
+  if (isa<BitCastInst>(V2))
+    NV2 = cast<BitCastInst>(V2)->getOperand(0);
----------------
We try to avoid doing `isa<Foo>(...)` and then `cast<Foo>(...)` in LLVM (it adds overhead to asserts builds that can really add up and is a bit redundant). Instead, use `dyn_cast` here?


================
Comment at: lib/Transforms/Scalar/BitfieldShrinking.cpp:488-489
+    switch (BOP->getOpcode()) {
+    default:
+      break;
+    case Instruction::And:
----------------
Is the `default` case actually reachable at this point? It seems like it shouldn't be able to happen. I'd either use `llvm_unreachable` to mark that or add a comment explaining what is happening here.


================
Comment at: lib/Transforms/Scalar/BitfieldShrinking.cpp:568
+  SmallPtrSet<const BasicBlock *, 4> BBSet;
+  SmallVector<const BasicBlock *, 4> WorkSet;
+  BBSet.insert(FBB);
----------------
`Worklist` is a more common name for this kind of vector in LLVM.


================
Comment at: lib/Transforms/Scalar/BitfieldShrinking.cpp:887-889
+    DEBUG(dbgs() << "********** Function after Bitfield Shrinking: "
+                 << Fn.getName() << '\n');
+    DEBUG(dbgs() << Fn);
----------------
The pass manager already provides facilities for printing the IR before and after passes -- is this needed?


Repository:
  rL LLVM

https://reviews.llvm.org/D30416




