[llvm-commits] [llvm] r60314 - /llvm/trunk/lib/Transforms/Scalar/GVN.cpp
Chris Lattner
sabre at nondot.org
Sun Nov 30 17:31:36 PST 2008
Author: lattner
Date: Sun Nov 30 19:31:36 2008
New Revision: 60314
URL: http://llvm.org/viewvc/llvm-project?rev=60314&view=rev
Log:
Make GVN more intelligent about redundant load
elimination: when finding dependent loads/stores, recognize that
they access the same memory if alias analysis reports MustAlias,
instead of relying on the pointers being exactly equal. This makes
load elimination more aggressive. For example, on 403.gcc, we had:
< 68 gvn - Number of instructions PRE'd
< 152718 gvn - Number of instructions deleted
< 49699 gvn - Number of loads deleted
< 6153 memdep - Number of dirty cached non-local responses
< 169336 memdep - Number of fully cached non-local responses
< 162428 memdep - Number of uncached non-local responses
now we have:
> 64 gvn - Number of instructions PRE'd
> 153623 gvn - Number of instructions deleted
> 49856 gvn - Number of loads deleted
> 5022 memdep - Number of dirty cached non-local responses
> 159030 memdep - Number of fully cached non-local responses
> 162443 memdep - Number of uncached non-local responses
That's an extra 157 loads deleted and an extra 905 other instructions nuked.
This slows down GVN very slightly, from 3.91 to 3.96s.
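As a sketch of the kind of case this enables (a hypothetical example, not from the commit): two distinct address computations for the same struct field are different IR Values, so pointer-equality matching misses them, but BasicAA reports MustAlias and the load can now be forwarded:

```c
#include <assert.h>

struct S { int a, b; };

/* Hypothetical example: at the IR level, the store below and the
 * final load may go through two distinct getelementptr instructions
 * that both compute &s->b.  Pointer-equality matching misses this;
 * a MustAlias answer from alias analysis lets GVN forward the
 * stored value to the load. */
int read_after_write(struct S *s) {
  s->b = 10;       /* store to s->b through one address computation */
  int *p = &s->b;  /* a second computation of the same address */
  return *p;       /* redundant load, forwardable to 10 */
}
```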
Modified:
llvm/trunk/lib/Transforms/Scalar/GVN.cpp
Modified: llvm/trunk/lib/Transforms/Scalar/GVN.cpp
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Transforms/Scalar/GVN.cpp?rev=60314&r1=60313&r2=60314&view=diff
==============================================================================
--- llvm/trunk/lib/Transforms/Scalar/GVN.cpp (original)
+++ llvm/trunk/lib/Transforms/Scalar/GVN.cpp Sun Nov 30 19:31:36 2008
@@ -915,11 +915,28 @@
}
if (StoreInst* S = dyn_cast<StoreInst>(DepInfo.getInst())) {
- if (S->getPointerOperand() != L->getPointerOperand())
+ // Reject loads and stores that are to the same address but are of
+ // different types.
+ // NOTE: 403.gcc does have this case (e.g. in readonly_fields_p) because
+ // of bitfield access, it would be interesting to optimize for it at some
+ // point.
+ if (S->getOperand(0)->getType() != L->getType())
+ return false;
+
+ if (S->getPointerOperand() != L->getPointerOperand() &&
+ VN.getAliasAnalysis()->alias(S->getPointerOperand(), 1,
+ L->getPointerOperand(), 1)
+ != AliasAnalysis::MustAlias)
return false;
repl[DepBB] = S->getOperand(0);
} else if (LoadInst* LD = dyn_cast<LoadInst>(DepInfo.getInst())) {
- if (LD->getPointerOperand() != L->getPointerOperand())
+ if (LD->getType() != L->getType())
+ return false;
+
+ if (LD->getPointerOperand() != L->getPointerOperand() &&
+ VN.getAliasAnalysis()->alias(LD->getPointerOperand(), 1,
+ L->getPointerOperand(), 1)
+ != AliasAnalysis::MustAlias)
return false;
repl[DepBB] = LD;
} else {
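The NOTE in the diff refers to same-address accesses of different types, which the new check conservatively rejects. A hedged sketch of how that can arise (hypothetical, modeled on the bitfield accesses the commit mentions in 403.gcc's readonly_fields_p): bitfield reads and writes operate on the underlying storage unit with masking, so the IR types of the loads and stores involved need not match a plain field access:

```c
#include <assert.h>

/* Hypothetical sketch of the bitfield case noted in the diff:
 * bitfield accesses lower to loads/stores of the storage word plus
 * mask/shift operations, so the types of overlapping accesses to the
 * same address can differ; forwarding between them would need extra
 * logic, which the commit leaves as a future optimization. */
struct Flags { unsigned readonly : 1; unsigned shared : 1; };

int readonly_fields_p(struct Flags *f) {
  f->readonly = 1;    /* read-modify-write of the storage word */
  return f->readonly; /* load of the same word, then mask/shift */
}
```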