[PATCH] D72146: [DSE] Add first version of MemorySSA-backed DSE.

Alina Sbirlea via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Mon Jan 13 17:09:13 PST 2020


asbirlea added a comment.

Some comments inline.

Re-adding the inline comment: downward walks are inefficient, and the current approach also misses optimization opportunities. To not miss anything, all Defs would need to be examined, not only the uses, which adds even more overhead. So it's possible that replacing MemDepAnalysis with MemorySSA will not bring many benefits here.
Maybe do some preliminary performance testing to compare the trade-offs between this recursive approach, the iterative approach in Tyker's patch, and a bottom-up approach?
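
For reference, a rough sketch of what the upward direction looks like (`MSSA` and `KillingDef` are just placeholders here, not names from the patch): instead of scanning every use of a Def (and every later Def), a single walker query returns the nearest dominating clobber, and the result is cached:
```
// Upward walk: the walker skips non-aliasing Defs on the way up and caches
// the result, so repeated queries stay cheap.
MemoryAccess *Clobber =
    MSSA.getWalker()->getClobberingMemoryAccess(KillingDef);
```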



================
Comment at: llvm/lib/Transforms/Scalar/DeadStoreElimination.cpp:1372
+// or Phis. The Uses will not be noalias of the Def they are a use of (unless
+// we hit the memoryssa optimistaion limit). Unfortately we cannot say the same
+// things of stores. So we walk forward from one store to the next, looking
----------------
s/optimistaion/optimisation
s/Unfortately/Unfortunately


================
Comment at: llvm/lib/Transforms/Scalar/DeadStoreElimination.cpp:1383
+// There are numerous other things we need to handle along the way.
+//  - Any instruction that may throw usually acts a dse barrier (So long as the
+//      memory is non-escaping)
----------------
s/acts a/acts as a


================
Comment at: llvm/lib/Transforms/Scalar/DeadStoreElimination.cpp:1393
+
+enum class WalkType { Bad, Next };
+struct WalkResult {
----------------
Could you use more informative names here? `Bad` is vague, and there's a `Next` in the `WalkResult` below which will make the code using these confusing.
Perhaps `StopWalk` and `ContinueWalk`?
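
Roughly, just to illustrate the naming (everything else stays as in the patch):
```
enum class WalkType { StopWalk, ContinueWalk };
```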


================
Comment at: llvm/lib/Transforms/Scalar/DeadStoreElimination.cpp:1401
+
+Optional<MemoryLocation> getLocForWriteEx(Instruction *I,
+                                          const TargetLibraryInfo &TLI) {
----------------
Nit: there is potential to reduce code duplication between this function (and its call sites) and hasAnalyzableMemoryWrite.


================
Comment at: llvm/lib/Transforms/Scalar/DeadStoreElimination.cpp:1442
+    MR = AA.callCapturesBefore(UseInst, DefLoc, &DT);
+  MR = clearMust(MR);
+  return isRefSet(MR);
----------------
The `clearMust` call is not necessary here.
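
That is, the function can simply end with:
```
  MR = AA.callCapturesBefore(UseInst, DefLoc, &DT); // guarded as in the patch
  return isRefSet(MR);
```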


================
Comment at: llvm/lib/Transforms/Scalar/DeadStoreElimination.cpp:1465
+
+// Returns true if \p M is an intrisnic that does not read or write memory.
+bool isNoopIntrinsic(MemoryUseOrDef *M) {
----------------
s/intrisnic/intrinsic


================
Comment at: llvm/lib/Transforms/Scalar/DeadStoreElimination.cpp:1476
+    case Intrinsic::dbg_label:
+    case Intrinsic::dbg_value:
+      return true;
----------------
Debug info intrinsics no longer have associated MemoryDefs.
Even with a non-standard AA pipeline, MemorySSA will not create accesses for them.


================
Comment at: llvm/lib/Transforms/Scalar/DeadStoreElimination.cpp:1486
+// Find the next MemoryDef clobbering Current, if there is any, skipping things
+// that do not alias. WalkType::Bad indiciates that the walk hit one of the
+// following:
----------------
s/indiciates/indicates


================
Comment at: llvm/lib/Transforms/Scalar/DeadStoreElimination.cpp:1500
+  // Scan the uses to see if we have a single interesting MemoryAccess.
+  for (Use &U : Current->uses()) {
+    MemoryAccess *UseAccess = cast<MemoryAccess>(U.getUser());
----------------
The list of uses will not always include all clobbers, even transitively.
For example, before MemorySSA optimizations, the Defs form a chain:
```
1 = Def(LoE)
2 = Def(1)
3 = Def(2)
```
Let's say that 1 and 3 are two stores where we could eliminate either one of them, and the Def at 2 does not alias either of them.
Now suppose some earlier pass that uses MemorySSA queries the clobbering access for 2, finds no alias, and optimizes it to `2 = Def(LoE)`.
At that point 1 has no uses, and this code takes the "bail out" path.
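To make the resulting shape concrete, after that optimization the accesses look roughly like:
```
1 = Def(LoE)   ; no uses left
2 = Def(LoE)
3 = Def(2)
```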

This should only be the case for missed optimizations, not a correctness problem.
However, as a general comment: with MemorySSA it is efficient to walk upwards looking for clobbering accesses, not downwards.


================
Comment at: llvm/lib/Transforms/Scalar/DeadStoreElimination.cpp:1675
+
+  // Collect blocks with throwing instructions not modeled in MemroySSA and
+  // alloc-like objects.
----------------
s/MemroySSA/MemorySSA


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D72146/new/

https://reviews.llvm.org/D72146




