[llvm-dev] Move InlineCost.cpp out of Analysis?
Chandler Carruth via llvm-dev
llvm-dev at lists.llvm.org
Mon Apr 18 15:20:59 PDT 2016
On Mon, Apr 18, 2016 at 3:16 PM Hal Finkel <hfinkel at anl.gov> wrote:
>
>
> ------------------------------
>
> *From: *"Chandler Carruth" <chandlerc at gmail.com>
> *To: *"Hal Finkel" <hfinkel at anl.gov>
> *Cc: *"Mehdi Amini" <mehdi.amini at apple.com>, "via llvm-dev" <
> llvm-dev at lists.llvm.org>, "Xinliang David Li" <davidxl at google.com>
> *Sent: *Monday, April 18, 2016 4:54:36 PM
>
>
> *Subject: *Re: [llvm-dev] Move InlineCost.cpp out of Analysis?
>
> On Mon, Apr 18, 2016 at 2:46 PM Hal Finkel <hfinkel at anl.gov> wrote:
>
>>
>>
>> ------------------------------
>>
>> *From: *"Chandler Carruth" <chandlerc at gmail.com>
>> *To: *"Xinliang David Li" <davidxl at google.com>
>> *Cc: *"Mehdi Amini" <mehdi.amini at apple.com>, "Hal Finkel" <
>> hfinkel at anl.gov>, "via llvm-dev" <llvm-dev at lists.llvm.org>
>> *Sent: *Monday, April 18, 2016 4:31:05 PM
>> *Subject: *Re: [llvm-dev] Move InlineCost.cpp out of Analysis?
>>
>>
>>
>> On Mon, Apr 18, 2016 at 2:28 PM Xinliang David Li <davidxl at google.com>
>> wrote:
>>
>>> On Mon, Apr 18, 2016 at 2:18 PM, Chandler Carruth <chandlerc at gmail.com>
>>> wrote:
>>>
>>>> The difference between Analysis and Transforms is *not* about passes,
>>>> but about what the code *does*.
>>>>
>>>> Code for mutating the IR should be in Transforms, and code that
>>>> analyzes the IR without mutating it should be in Analysis. This is why, for
>>>> example, InstructionSimplify is in Analysis -- it does not mutate the IR in
>>>> any way.
>>>>
>>>> So I think InlineCost and similar things should stay in the Analysis
>>>> library regardless of whether they are passes or not.
>>>>
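(To make that distinction concrete, here is a rough sketch -- the names and
the size threshold are made up for illustration, not real LLVM interfaces.
The first helper only reads the IR, so it would belong in Analysis; the
second rewrites the IR, so it would belong in Transforms.)

  #include "llvm/IR/Function.h"
  #include "llvm/IR/Instructions.h"
  using namespace llvm;

  // Analysis-style: takes the IR by const reference and only reports a result.
  static bool calleeLooksSmall(const Function &Callee) {
    size_t NumInsts = 0;
    for (const BasicBlock &BB : Callee)
      NumInsts += BB.size();
    return NumInsts < 20; // illustrative size threshold
  }

  // Transform-style: actually mutates the IR (here, deleting a call site).
  static void removeCall(CallInst *Call) {
    Call->eraseFromParent();
  }
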
>>>
>>> Is that the only criterion (IR modification or not)?
>>>
>>
>> Given a public API, yes, that should be the only criterion, IMO.
>>
>>
>>> Most of the transformations have pass-specific analyses (that are not
>>> shared with other clients) -- that code stays with the transformation -- it
>>> does not make sense to move that code into Analysis.
>>>
>>
>> But I would also expect these not to expose public APIs for those analyses.
>> Provided the public API is limited to the transformation, I think keeping
>> the code closely located makes sense.
>>
>> I'm not sure the situation is as clear-cut as you make it out to be. We
>> can factor common components out of different transformations (e.g. out of
>> different inliner implementations), including components that don't
>> modify the IR, without those components forming what I would generally
>> consider an analysis. In this case, we're really talking about the
>> encapsulation of the inliner's cost heuristic, and the generality of this
>> information is questionable. To put it another way, while I'm fine with
>> someone reusing this logic to build a new inliner, I'd be very skeptical of
>> using it for any other purpose. That's why it does not really feel like an
>> analysis to me. This is not a strong feeling, however, and I'm also fine
>> with it staying in Analysis.
>>
>
> I don't think that everything that is an analysis needs to be completely
> general though. Certainly, we need to be very clear and careful in what
> terms we describe the API to make the quality and nature of the results
> unsurprising, but I think that the API around the inline cost is reasonably
> good about that -- it seems to pretty clearly indicate that this is a
> threshold determining mechanism and not something that provides a holistic
> view of the impact of a potential inlining decision.
>
> I actually think it would be good if essentially *everything* we can put a
> reasonable public API around and which doesn't mutate the IR were moved to
> the analysis library. As an example of why I think this: without it, it is
> very hard to know what code can be re-used in analysis passes. And the
> constraint there is really exactly this: it has a public API that can be
> re-used, and it doesn't mutate the IR. Nothing more or less, as far as I can see.
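(As a concrete, purely hypothetical example of that reuse constraint: an
analysis pass like the sketch below can only call into inline-cost-style
logic cleanly if that logic lives in the Analysis library, because Analysis
must not depend on the Transforms libraries. The pass name and structure here
are made up for illustration, not an existing pass.)

  #include "llvm/Analysis/InlineCost.h"
  #include "llvm/IR/Function.h"
  #include "llvm/Pass.h"
  using namespace llvm;

  namespace {
  // Hypothetical analysis pass that would like to reuse inline-cost information.
  struct CallSiteSizeEstimator : public FunctionPass {
    static char ID;
    CallSiteSizeEstimator() : FunctionPass(ID) {}
    bool runOnFunction(Function &F) override {
      // ... query inline-cost-style information here, strictly read-only ...
      return false; // an analysis never modifies the IR
    }
  };
  } // end anonymous namespace
  char CallSiteSizeEstimator::ID = 0;
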
>
>
> Naturally, we don't need to go on a campaign to move everything *right
> now*, and we can even be fairly lazy about it, but I do think that this
> direction is the correct long-term design of the libraries here.
>
> In general I agree, but the inliner's cost heuristic is a bit special in
> this regard. It is special because it contains tuned thresholds that are
> meaningful only in the context of inlining as performed by the current
> inliner. Were we to change the inliner's overall scheme (e.g. make it more
> top-down), those would need to change. If someone wanted to build some
> higher-level analysis that made use of the InlineCost information, the fact
> that they had to move things from Transforms/Utils to Analysis would be a
> good red flag that they're probably using the wrong tool. Obviously it
> might be the right tool, but it is probably something we'd want to think
> about. A second inliner, however, might be sufficiently close in design to
> the current one that the tuned thresholds might be applicable.
>
> In short, I like your proposal, but I'd further propose that we say that
> anything that does not mutate the IR and does not make assumptions about
> its ultimate client should be an analysis.
>
But everything makes assumptions about its client? That's the whole point
of having a public API -- you need to be able to document your assumptions
well enough to have a reasonable public API. But that's not about analysis
vs. transforms, that's really true for *any* public API. For example, I
would expect anything in Transforms/Utils to document very clearly what
assumptions they make about their callers.
And I'm not even sure I buy that our inline cost is *that* specialized...
It uses TTI, another analysis, for estimating cost. It is essentially a
size cost estimation tool. The closest thing to this kind of assumption is
the *thresholds* it uses. Those should really be clear input parameters so
that callers can customize it.
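To sketch what I mean (this is a hypothetical shape for the interface, not
the existing one, and the numbers are only placeholders):

  #include "llvm/Analysis/InlineCost.h"
  #include "llvm/Analysis/TargetTransformInfo.h"
  #include "llvm/IR/CallSite.h"

  namespace llvm {

  // Hypothetical: the thresholds become explicit inputs rather than values
  // baked into the implementation, so a different inliner (or another tool)
  // can tune them for its own pass ordering.
  struct InlineThresholds {
    int DefaultThreshold = 225; // placeholder numbers, not the tuned values
    int HintThreshold = 325;
    int ColdThreshold = 45;
  };

  // Hypothetical entry point: callers hand the thresholds in explicitly.
  InlineCost getInlineCostWith(CallSite CS, const InlineThresholds &Thresholds,
                               TargetTransformInfo &CalleeTTI);

  } // end namespace llvm

That would keep the cost logic itself in one place while making it obvious
which parts are tuning knobs that the caller is expected to own.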
Anyway, all of this is to say that I don't think the inline cost's public
API is in great shape today, but I don't think it is inherently incapable of
having a reasonable public API. So I suspect we should focus more on improving
the clarity and configurability of its API than on moving things around.
But all of this is a digression. Let's get back to the meat of the subject
around ProfileData.
>
> -Hal
>
>
> --
> Hal Finkel
> Assistant Computational Scientist
> Leadership Computing Facility
> Argonne National Laboratory
>