[llvm] r268452 - PM: Port LoopRotation to the new loop pass manager

Xinliang David Li via llvm-commits llvm-commits at lists.llvm.org
Tue May 10 16:56:30 PDT 2016


On Tue, May 10, 2016 at 4:43 PM, Chandler Carruth via llvm-commits <
llvm-commits at lists.llvm.org> wrote:

> On Tue, May 10, 2016 at 5:07 PM Sean Silva via llvm-commits <
> llvm-commits at lists.llvm.org> wrote:
>
>> On Tue, May 10, 2016 at 2:59 PM, Hal Finkel <hfinkel at anl.gov> wrote:
>>
>>>
>>>
>>> ------------------------------
>>>
>>> From: "Sean Silva via llvm-commits" <llvm-commits at lists.llvm.org>
>>> To: "Justin Bogner" <mail at justinbogner.com>
>>> Cc: "llvm-commits" <llvm-commits at lists.llvm.org>
>>> Sent: Tuesday, May 10, 2016 4:55:08 PM
>>> Subject: Re: [llvm] r268452 - PM: Port LoopRotation to the new loop pass manager
>>>
>>>
>>>
>>>
>>> On Tue, May 10, 2016 at 12:46 PM, Justin Bogner <mail at justinbogner.com>
>>> wrote:
>>>
>>>> Sean Silva <chisophugis at gmail.com> writes:
>>>> > On Tue, May 10, 2016 at 11:27 AM, Justin Bogner <mail at justinbogner.com>
>>>> > wrote:
>>>> >
>>>> >> Sean Silva <chisophugis at gmail.com> writes:
>>>> >> > On Tue, May 3, 2016 at 3:02 PM, Justin Bogner via llvm-commits <
>>>> >> > llvm-commits at lists.llvm.org> wrote:
>>>> >> >
>>>> >> >> Author: bogner
>>>> >> >> Date: Tue May  3 17:02:31 2016
>>>> >> >> New Revision: 268452
>>>> >> >>
>>>> >> >> URL: http://llvm.org/viewvc/llvm-project?rev=268452&view=rev
>>>> >> >> Log:
>>>> >> >> PM: Port LoopRotation to the new loop pass manager
>>>> >>  ...
>>>> >> >> --- llvm/trunk/test/Transforms/LoopRotate/basic.ll (original)
>>>> >> >> +++ llvm/trunk/test/Transforms/LoopRotate/basic.ll Tue May  3 17:02:31 2016
>>>> >> >> @@ -1,4 +1,6 @@
>>>> >> >>  ; RUN: opt -S -loop-rotate < %s | FileCheck %s
>>>> >> >> +; RUN: opt -S -passes='require<loops>,require<targetir>,require<assumptions>,loop(rotate)' < %s | FileCheck %s
>>>> >> >>
>>>> >> >
>>>> >> > Sorry if this is a stupid question, but why do we need to explicitly
>>>> >> > "require" the passes when the loop-rotate already declares the
>>>> >> > dependency? (I feel like there's some part of the bigger picture
>>>> >> > that I'm missing here)
>>>> >>
>>>> >> To be clear, there are no dependencies in the new pass manager -
>>>> >> passes request analysis results from an analysis manager and they're
>>>> >> calculated or returned from the cache as appropriate. Naturally, the
>>>> >> next question one would ask is "okay then, why aren't these calculated
>>>> >> on demand when loop-rotate asks for them?".
>>>> >>
>>>> >
>>>> > Ah, yeah. The whole "schedule vs. cache" is one of the main differences
>>>> > of the new PM.
>>>> >
>>>> >
>>>> >>
>>>> >> The mechanical answer is that loop-rotate uses the `getCachedResult`
>>>> >> API, not the `getResult` one - the difference being that
>>>> >> `getCachedResult` only returns results if they're already calculated,
>>>> >> and null otherwise. Hence, without the require<> in the test, there
>>>> >> are no results.
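
To make that distinction concrete, the two query paths look roughly like
this (just a sketch, not code copied from the tree; queryExample is an
illustrative helper, not anything in LLVM):

    #include "llvm/Analysis/LoopInfo.h"
    #include "llvm/IR/Dominators.h"
    #include "llvm/IR/PassManager.h"
    using namespace llvm;

    void queryExample(Function &F, FunctionAnalysisManager &FAM) {
      // getResult may compute the analysis on the spot if it isn't cached.
      DominatorTree &DT = FAM.getResult<DominatorTreeAnalysis>(F);
      (void)DT;

      // getCachedResult only consults the cache and returns null if nothing
      // computed LoopInfo earlier (e.g. no require<loops> in the pipeline).
      LoopInfo *LI = FAM.getCachedResult<LoopAnalysis>(F);
      if (!LI)
        return; // the caller has to bail (or assert) here.
    }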
>>>> >
>>>> >
>>>> > Ah, makes perfect sense!
>>>> >
>>>> >
>>>> >> This of course leads to "why does loop-rotate use
>>>> >> getCachedResult?"
>>>> >>
>>>> >> It has to. These analyses are function analyses, and a loop pass isn't
>>>> >> allowed to cause a function analysis to run, much like a function pass
>>>> >> isn't allowed to cause a module analysis to be run. They have to stick
>>>> >> to their own level. This enforces correct layering and acts as a
>>>> >> safeguard against accidentally doing extra work.
>>>> >>
>>>> >> That said, the "require<loops>" is redundant, since a LoopPassManager
>>>> >> can't operate without having calculated that. There may also be an
>>>> >> argument to be made that the LPM should implicitly calculate some
>>>> >> other analyses, but we'd have to be careful not to do too much.
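
Putting those pieces together, the shape a loop pass ends up with is roughly
the following. This is paraphrased from memory rather than copied from
r268452, so the proxy and analysis names may not match the tree exactly
(headers and the transform body elided):

    PreservedAnalyses run(Loop &L, AnalysisManager<Loop> &AM) {
      Function &F = *L.getHeader()->getParent();
      // Reach the function-level analysis manager through a proxy; from a
      // loop pass only *cached* results are reachable this way.
      auto &FAM =
          AM.getResult<FunctionAnalysisManagerLoopProxy>(L).getManager();

      auto *LI = FAM.getCachedResult<LoopAnalysis>(F);
      auto *TTI = FAM.getCachedResult<TargetIRAnalysis>(F);
      auto *AC = FAM.getCachedResult<AssumptionAnalysis>(F);
      // The assert under discussion: it fires if the pipeline did not
      // require<> these analyses before running the loop pass.
      assert((LI && TTI && AC) && "Analyses for loop rotation not available");

      // ... perform the rotation using LI, TTI and AC ...
      return PreservedAnalyses::none();
    }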
>>>> >>
>>>> >
>>>> > Makes sense. But then `+  assert((LI && TTI && AC) && "Analyses for loop
>>>> > rotation not available");` is not really appropriate, right? Can't we hit
>>>> > that assert just by passing the right arguments to opt? (by dropping the
>>>> > `require<*>` analyses from the passes= specification?)
>>>> >
>>>> > (sorry, I'm away from the office and don't have an easy way to test
>>>> > locally)
>>>>
>>>> Yes, you can hit it that way. The other option is to just bail on the transform in that
>>>> case, which would be a better option for things like fuzzers and
>>>> bisection scripts, but makes it harder to ensure we're doing everything
>>>> correctly while we're bringing this up. For now I think the assert is
>>>> best, but we may want to revisit this at some point.
>>>>
>>>
>>> I'm uncomfortable with the assert because it won't be there in Release
>>> and I don't want to be running into UB in my release build from a mistyped
>>> command line. I think report_fatal_error is more appropriate.
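
Concretely, that would replace the assert with something along these lines
(a sketch):

    if (!LI || !TTI || !AC)
      report_fatal_error("LoopRotatePass: required analyses were not cached; "
                         "add the corresponding require<> passes to the "
                         "pipeline");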
>>>
>>> +1
>>>
>>> Also, we might want to consider moving the error-reporting logic into
>>> the pass manager itself. I could see having a 'Required' parameter to
>>> getCachedResult such that it would be a fatal error should no cached result
>>> be available.
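
In other words, an overload shaped something like this -- purely
hypothetical, just to illustrate the suggestion, not existing API:

    // Hypothetical extension to AnalysisManager<IRUnitT>: a cache miss
    // becomes fatal when Required is set.
    template <typename PassT>
    typename PassT::Result *getCachedResult(IRUnitT &IR, bool Required) {
      typename PassT::Result *R = getCachedResult<PassT>(IR);
      if (!R && Required)
        report_fatal_error(Twine("required analysis not available: ") +
                           PassT::name());
      return R;
    }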
>>>
>>
>> Using a function called "getCachedResult" for this purpose seems a bit
>> off in the first place. I think it would make sense to call it
>> `getRequiredResult` or something like that so that it expresses the
>> semantic intent ("as a loop pass, I can't force this to be computed, but I
>> require it so it better be there").
>>
>
> I think "cached" is still really important -- it signifies that unlike
> other query paths it won't go and do work all of a sudden. =]
>
> That said, I agree that the pattern of requiring a cached result from an
> outer pass manager is very common and we should streamline this.
>
> The reason I hadn't actually done it is that I'm actually *really* torn
> about whether it would be better to have passes just skip their logic or
> fail hard. In almost *every* case I prefer the assert or fatal error...
> except when it comes down to bisecting a failing pass or similar techniques.
>


How about this: getCachedResult can either return null or fail hard,
depending on an internal option. The client code shares the same boilerplate
-- it skips/bails when null is returned from getCachedResult. In a normal
build you would get the assertion/failure, but bisecting tools can disable
the assert.
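
Something like this, say (flag and variable names made up, just to sketch
the idea):

    // Inside the analysis manager implementation:
    static cl::opt<bool> AbortOnMissingCachedResult(
        "abort-on-missing-cached-result", cl::init(true), cl::Hidden,
        cl::desc("Fail hard when a cached analysis result is requested but "
                 "was never computed, instead of returning null"));

    // ... and in getCachedResult, after the cache lookup:
    if (!Result && AbortOnMissingCachedResult)
      report_fatal_error("cached analysis result requested but not available");
    // Null when the option is off; bisection tools flip the flag and let
    // the pass skip itself.
    return Result;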

David




>
> -Chandler
>
>
>>
>> -- Sean Silva
>>
>>
>>>
>>>  -Hal
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> --
>>> Hal Finkel
>>> Assistant Computational Scientist
>>> Leadership Computing Facility
>>> Argonne National Laboratory
>>>
>>
>
>
>