[PATCH] D46039: Fix compile time hang in LSR
Jun Bum Lim via Phabricator via llvm-commits
llvm-commits at lists.llvm.org
Wed Apr 25 12:37:55 PDT 2018
junbuml added inline comments.
================
Comment at: lib/Transforms/Scalar/LoopStrengthReduce.cpp:3605-3606
+ // x to Depth.
+ GenerateReassociations(LU, LUIdx, LU.Formulae.back(),
+                        Depth + 1 + (Log2_32(AddOps.size()) >> 2));
}
----------------
evstupac wrote:
> junbuml wrote:
> > IIUC, this change works such that if there are many reassociations at one level, then we limit the depth at the next levels, right? If we want to guarantee a certain number of reassociations at each level, we may need to limit the number of function calls here at each level. I'm not sure which is more beneficial for performance in general: limiting the depth, or limiting the number of reassociations at each level with a fixed depth.
> Yes. However, the change does not limit the linear case (the 1st level). If we wanted to limit the 1st level, we would need to decide which operands to use and which to drop, making the algorithm more complicated for a questionable compile-time gain. Doing nothing once we exceed some limit can hurt performance. I think that for linear cases we should leave the number of reassociations as is. If you have an example showing we should limit this as well, please share.
Thanks for the explanation. I agree. We may not want to simply limit the AddOps considered at each level without knowing the compile-time gain. I don't have a specific example, but I also wanted to watch for any regression caused by limiting the depth. If this function handles linear cases in general, this seems reasonable.
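
(For reference, a minimal standalone sketch of the arithmetic, not part of the patch: the extra-depth term Log2_32(AddOps.size()) >> 2 grows very slowly with the fan-out, so only formulae with many add operands spend additional depth budget. The Log2_32 below is a local stand-in for llvm::Log2_32 from llvm/Support/MathExtras.h, so the snippet compiles without LLVM headers; the threshold values are implied by the formula, not spelled out in the patch.)

// Standalone sketch (not part of the patch): how much extra depth the
// "Log2_32(AddOps.size()) >> 2" term charges for a given fan-out.
#include <cstdio>

// Local stand-in for llvm::Log2_32 (llvm/Support/MathExtras.h):
// returns floor(log2(V)) for V >= 1.
static unsigned Log2_32(unsigned V) {
  unsigned R = 0;
  while (V >>= 1)
    ++R;
  return R;
}

int main() {
  // Extra depth is floor(log2(N)) / 4: 0 for N < 16, 1 for
  // 16 <= N <= 255, 2 for 256 <= N <= 4095, and so on.
  const unsigned Sizes[] = {2, 15, 16, 255, 256, 4096};
  for (unsigned N : Sizes)
    std::printf("AddOps.size() = %4u -> extra depth = %u\n", N,
                Log2_32(N) >> 2);
  return 0;
}

Note that the penalty only shrinks the remaining depth budget for the recursive calls; all operands at the current level are still processed, which matches the point above about leaving the linear (1st-level) case unlimited.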
Repository:
rL LLVM
https://reviews.llvm.org/D46039