[llvm] r268452 - PM: Port LoopRotation to the new loop pass manager

Justin Bogner via llvm-commits llvm-commits at lists.llvm.org
Tue May 10 22:08:10 PDT 2016


Xinliang David Li via llvm-commits <llvm-commits at lists.llvm.org> writes:
> On Tue, May 10, 2016 at 4:43 PM, Chandler Carruth via llvm-commits <llvm-commits at lists.llvm.org> wrote:
>> On Tue, May 10, 2016 at 5:07 PM Sean Silva via llvm-commits <llvm-commits at lists.llvm.org> wrote:
>>> Using a function called "getCachedResult" for this purpose seems a bit
>>> off in the first place. I think it would make sense to call it
>>> `getRequiredResult` or something like that so that it expresses the
>>> semantic intent ("as a loop pass, I can't force this to be computed,
>>> but I require it so it better be there").
>>
>> I think "cached" is still really important -- it signifies that unlike
>> other query paths it won't go and do work all of a sudden. =]
>>
>> That said, I agree that the pattern of requiring a cached result from an
>> outer pass manager is very common and we should streamline this.
>>
>> The reason I hadn't actually done it is that I'm actually *really* torn
>> about whether it would be better to have passes just skip their logic or
>> fail hard. In almost *every* case I prefer the assert or fatal error...
>> except when it comes down to bisecting a failing pass or similar
>> techniques.
>
> How about this: getCachedResult can either return null or fail hard
> depending on an internal option. The client code shares the same boilerplate
> -- it skips/bails when null is returned from getCachedResult. In a normal
> build, you would get an assertion failure, but bisecting tools can disable
> the assert.

That doesn't quite work - there are certainly cases where we use
getCachedResult because the result really is optional.

We might be able to combine this idea with adding a second API that
says "I really need this result from the cache and I can't progress
without it".
