[PATCH] D31239: [WIP] Add Caching of Known Bits in InstCombine

Hal Finkel via llvm-commits llvm-commits at lists.llvm.org
Sat Mar 25 05:28:01 PDT 2017


On 03/24/2017 06:00 PM, Daniel Berlin wrote:
>
>
> On Fri, Mar 24, 2017 at 3:44 PM, Daniel Berlin <dberlin at dberlin.org 
> <mailto:dberlin at dberlin.org>> wrote:
>
>     In which i repeat my claim that trying to keep caches up to date
>     is significantly harder to get right and verify than incremental
>     recomputation :)
>
>
> and to be super clear here, the practical difference between the two 
> is that incremental computation is really "build an analysis that can 
> be run at any point, but is a full analysis that computes all answers 
> and not on demand. The speed at which the analysis runs depends on how 
> much data has changed since last time.".

How exactly do you propose this works in this case (given that the 
recomputation cost is proportional to the amount the IR has changed)? It 
sounds like caching with the default behavior on RAUW flipped: instead 
of assuming by default that the replacement instruction has the same 
known bits as the original, we assume that it does not (and then 
recursively clear out the cached results for all of its users).
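
As a rough sketch (plain Python, with illustrative names -- this is not 
the patch's actual API), the flipped-default scheme would look something 
like:

```python
# Sketch of the conservative invalidation scheme: on replaceAllUsesWith
# (RAUW), assume the replacement does NOT have the same known bits, and
# recursively drop cached results for every transitive user.
# KnownBitsCache and the users map are illustrative, not LLVM's real API.

class KnownBitsCache:
    def __init__(self, users):
        self.users = users          # value -> list of values that use it
        self.cache = {}             # value -> cached known-bits result

    def invalidate(self, value):
        """Clear cached known bits for value and all transitive users."""
        seen = set()
        stack = [value]
        while stack:
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            self.cache.pop(v, None)             # drop the stale entry, if any
            stack.extend(self.users.get(v, ()))

    def on_rauw(self, old, new):
        # The replacement may have different known bits than the original,
        # so conservatively flush everything computed from the old value.
        self.invalidate(old)


# %b uses %a, %c uses %b; %d is unrelated.
kbc = KnownBitsCache({"a": ["b"], "b": ["c"]})
kbc.cache = {"a": 1, "b": 2, "c": 3, "d": 4}
kbc.on_rauw("a", "a.new")
assert kbc.cache == {"d": 4}     # a, b, c flushed; unrelated entry survives
```

The cost of this is proportional to the size of the affected use 
subgraph, which is exactly the "recomputation cost proportional to the 
amount the IR has changed" question above.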

  -Hal

> The compile time hit depends on how often you want answers.  Usually 
> people try to say "i want them always up to date" (often without any 
> quantification of cost or other trade-offs), which leads to trying to 
> do it on demand.
>
> Caching is really "build something that is always computed on 
> demand. Try to make it faster by storing the answers it gives 
> somewhere, and hoping that the rest of the algorithm retains the 
> ability to give correct answers on demand when you remove some of the 
> stored answers".
>
> One can obviously argue that incremental recomputation is a form of 
> caching, so i'm trying to be concrete about my definitions.
>
> Usually, caching additionally requires that clients be able to correctly 
> predict and remove the set of answers necessary for correctness (i.e., 
> know enough about how the algorithm works), but this isn't strictly the 
> case if designed well.
>
> Caching also often ends up with degraded answers over time, while most 
> full analyses can be made not to. See, for example, memdep, which has 
> unresolvable cache invalidation issues. This means that opt -gvn -gvn 
> gives different answers than opt -gvn | opt -gvn.
>
> Both approaches obviously require information about what has changed 
> in the IR.
>
>      I additionally claim that we should actually quantify the
>     performance benefits of doing either over "recomputing once or
>     twice during pipeline" before going down this route.
>
>     We are *already* trading off compile time performance vs generated
>     code performance.
>
>     All other things being equal, we should trade it off in a way that
>     makes maintenance of code and verification of correctness as easy
>     as possible.
>
>     Asserting that a good tradeoff to reduce compile time is "cache"
>     instead of "stop doing it on demand" (or incrementally recompute),
>     without any data other than compile time performance seems ... not
>     particularly scientific.
>
>     It's certainly true that caching is often posited as "easier", but
>     i think a trip through bugzilla would put this idea to rest.
>
>
>
>     On Fri, Mar 24, 2017 at 3:33 PM, Craig Topper via Phabricator
>     <reviews at reviews.llvm.org <mailto:reviews at reviews.llvm.org>> wrote:
>
>         craig.topper added a comment.
>
>         I'm seeing more problems than just nsw/nuw flags here.
>
>         The sext.ll test is failing because SimplifyDemandedInstructionBits
>         determined that this
>
>         %and = and i32 %x, 16
>         %shl = shl i32 %and, 27
>
>         simplified to just the shl, because only the MSB of the shift
>         result was demanded. This occurred after we had cached the known
>         bits for the shl as having its 31 LSBs zero. But without the
>         "and" in there, we can no longer guarantee that the lower bits
>         of the shift result are 0.
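
To make that hazard concrete, the bit arithmetic of the case can be 
reproduced in plain Python (32-bit masks; this is just the arithmetic, 
not InstCombine itself):

```python
# Reproduce the bit arithmetic of the failing sext.ll case (32 bits).
MASK32 = 0xFFFFFFFF

def shl_and(x):   # shl (and i32 %x, 16), 27 -- the original expression
    return ((x & 16) << 27) & MASK32

def shl(x):       # shl i32 %x, 27 -- after the 'and' has been dropped
    return (x << 27) & MASK32

# With the 'and', only bit 31 can ever be set, so the 31 low bits are
# known zero -- which is what had been cached for the shl.
assert all(shl_and(x) in (0, 1 << 31) for x in range(1 << 8))

# After the 'and' is removed (only the MSB was demanded, and bit 4 of %x
# is exactly what lands in bit 31), the lower bits of the shift are no
# longer guaranteed to be zero, so the cached known bits are stale:
assert shl(3) == 0x18000000   # bits 27 and 28 set below the MSB
```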
>
>         I also got a failure on shift.ll, not reported here. This was
>         caused by getShiftedValue rewriting operands and changing the
>         constants of other shifts. This effectively shifts the known
>         bits, and thus the cache needs to be invalidated.
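
The shift.ll case can be illustrated the same way: if a transform 
rewrites a shift constant in place, a known-bits result cached for the 
old constant is simply wrong (again a plain-Python sketch, not the 
actual getShiftedValue code):

```python
# A known-bits result for 'shl %x, C' includes C low zero bits. If a
# transform rewrites C in place (as happens when the constants of other
# shifts are adjusted), a result cached for the old C is stale.
def low_zero_mask(shift_amount):
    # Mask of low bits guaranteed zero in the result of shl by shift_amount.
    return (1 << shift_amount) - 1

cached = low_zero_mask(3)      # cached while the instruction was 'shl %x, 3'
after  = low_zero_mask(1)      # the shift constant was rewritten to 1
assert cached == 0b111 and after == 0b1
assert cached != after         # the cached entry must be invalidated
```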
>
>
>         https://reviews.llvm.org/D31239 <https://reviews.llvm.org/D31239>

-- 
Hal Finkel
Lead, Compiler Technology and Programming Languages
Leadership Computing Facility
Argonne National Laboratory
