[llvm] 23b7609 - CodeGenPrepare: Reorder check for cold and shouldOptimizeForSize
Matt Arsenault via llvm-commits
llvm-commits at lists.llvm.org
Tue Feb 4 11:23:51 PST 2020
Author: Matt Arsenault
Date: 2020-02-04T11:23:13-08:00
New Revision: 23b76096b7d9b3a81ebfb1579471329a7d6b9c03
URL: https://github.com/llvm/llvm-project/commit/23b76096b7d9b3a81ebfb1579471329a7d6b9c03
DIFF: https://github.com/llvm/llvm-project/commit/23b76096b7d9b3a81ebfb1579471329a7d6b9c03.diff
LOG: CodeGenPrepare: Reorder check for cold and shouldOptimizeForSize
shouldOptimizeForSize is showing up in a profile, accounting for around 10%
of the pass time in one function. It probably should not be that slow, but the
much cheaper cold-attribute check should be done first anyway, so the expensive
profile query is only evaluated when the call is actually marked cold (see the
standalone sketch after the patch below).
Added:
Modified:
llvm/lib/CodeGen/CodeGenPrepare.cpp
Removed:
################################################################################
diff --git a/llvm/lib/CodeGen/CodeGenPrepare.cpp b/llvm/lib/CodeGen/CodeGenPrepare.cpp
index 38563b5f256d..efcf7ba39009 100644
--- a/llvm/lib/CodeGen/CodeGenPrepare.cpp
+++ b/llvm/lib/CodeGen/CodeGenPrepare.cpp
@@ -1937,8 +1937,8 @@ bool CodeGenPrepare::optimizeCallInst(CallInst *CI, bool &ModifiedDT) {
   // cold block. This interacts with our handling for loads and stores to
   // ensure that we can fold all uses of a potential addressing computation
   // into their uses. TODO: generalize this to work over profiling data
-  bool OptForSize = OptSize || llvm::shouldOptimizeForSize(BB, PSI, BFI.get());
-  if (!OptForSize && CI->hasFnAttr(Attribute::Cold))
+  if (CI->hasFnAttr(Attribute::Cold) &&
+      !OptSize && !llvm::shouldOptimizeForSize(BB, PSI, BFI.get()))
     for (auto &Arg : CI->arg_operands()) {
       if (!Arg->getType()->isPointerTy())
         continue;
@@ -4587,12 +4587,14 @@ static bool FindAllMemoryUses(
     }

     if (CallInst *CI = dyn_cast<CallInst>(UserI)) {
-      // If this is a cold call, we can sink the addressing calculation into
-      // the cold path. See optimizeCallInst
-      bool OptForSize = OptSize ||
+      if (CI->hasFnAttr(Attribute::Cold)) {
+        // If this is a cold call, we can sink the addressing calculation into
+        // the cold path. See optimizeCallInst
+        bool OptForSize = OptSize ||
           llvm::shouldOptimizeForSize(CI->getParent(), PSI, BFI);
-      if (!OptForSize && CI->hasFnAttr(Attribute::Cold))
-        continue;
+        if (!OptForSize)
+          continue;
+      }

       InlineAsm *IA = dyn_cast<InlineAsm>(CI->getCalledValue());
       if (!IA) return true;
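
A minimal standalone sketch of the reordering above, for anyone skimming the
patch: this is not LLVM code, and hasColdAttr/shouldOptimizeForSize below are
illustrative stand-ins for CI->hasFnAttr(Attribute::Cold) and
llvm::shouldOptimizeForSize(BB, PSI, BFI.get()).

#include <iostream>

// Stand-in for the cheap query: an attribute lookup on the call.
static bool hasColdAttr() { return false; }

// Stand-in for the expensive query: the PSI/BFI-backed profile lookup.
static bool shouldOptimizeForSize() {
  std::cout << "expensive profile query ran\n";
  return false;
}

int main() {
  bool OptSize = false;

  // Before: the profile query runs for every call site, cold or not.
  bool OptForSize = OptSize || shouldOptimizeForSize();
  if (!OptForSize && hasColdAttr()) {
    // sink the addressing computation into the cold path
  }

  // After: && short-circuits, so the profile query is only evaluated
  // when the call is actually marked cold (rare in practice).
  if (hasColdAttr() && !OptSize && !shouldOptimizeForSize()) {
    // sink the addressing computation into the cold path
  }
}

Compiled and run, the first form prints "expensive profile query ran" even
though the call is not cold; the reordered form prints nothing, because the
cheap check fails and short-circuits the rest of the condition.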