[llvm-dev] Move InlineCost.cpp out of Analysis?

Xinliang David Li via llvm-dev llvm-dev at lists.llvm.org
Mon Apr 18 16:30:44 PDT 2016


On Mon, Apr 18, 2016 at 3:57 PM, Hal Finkel <hfinkel at anl.gov> wrote:

>
> ------------------------------
>
> *From: *"Xinliang David Li" <davidxl at google.com>
> *To: *"Chandler Carruth" <chandlerc at gmail.com>
> *Cc: *"Hal Finkel" <hfinkel at anl.gov>, "via llvm-dev" <
> llvm-dev at lists.llvm.org>, "Mehdi Amini" <mehdi.amini at apple.com>
> *Sent: *Monday, April 18, 2016 5:45:21 PM
> *Subject: *Re: [llvm-dev] Move InlineCost.cpp out of Analysis?
>
>
>
> On Mon, Apr 18, 2016 at 3:00 PM, Chandler Carruth <chandlerc at gmail.com>
> wrote:
>
>> On Mon, Apr 18, 2016 at 2:48 PM Hal Finkel <hfinkel at anl.gov> wrote:
>>
>>>
>>>
>>> ------------------------------
>>>
>>> *From: *"Xinliang David Li" <davidxl at google.com>
>>>
>>> On Mon, Apr 18, 2016 at 2:33 PM, Mehdi Amini <mehdi.amini at apple.com>
>>> wrote:
>>>>
>>>> In the current case at stake: the issue is that we can't make the
>>>> Analysis library using anything from the ProfileData library. Conceptually
>>>> there is a problem IMO.
>>>>
>>>
>>>
>>> Yes -- this is a very good point.
>>>
>>> Independent of anything else, +1.
>>>
>>
>> The design of ProfileData and reading profile information in the entire
>> middle end had a really fundamental invariant that folks seem to have lost
>> track of:
>>
>
> Not sure what you mean by 'lost track of'.
>
>>
>> a) There is exactly *one* way to get at profile information from general
>> analyses and transforms: a dedicated analysis pass that manages access to
>> the profile info.
>>
>
>
> This is not the case as of today.  BPI is a dedicated analysis pass that
> manages branch probability profile information, but this pass is only used
> in limited situations (e.g., for BFI, and for profile updates in jump
> threading) -- using it requires more memory as well as incremental update
> interfaces.  Many transformation passes simply skip it and directly access
> the metadata in the IR.
>
> Really? Which ones? I see a number of passes that know about profiling
> metadata so they can preserve it, or transfer it across restructuring, but
> nothing that really interprets it on its own in a non-trivial way.
>

In a lot of cases, the client code simply sets the metadata, but clients
that also read it include:

SimplifyCFG.cpp, Local.cpp, CodeGenPrepare.cpp, etc.

David
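
For illustration, here is roughly what the two access paths being contrasted
above look like. The weights and helper names are made up for the example,
but MDBuilder, LLVMContext::MD_prof and
BranchProbabilityInfo::getEdgeProbability are the usual interfaces:

    #include "llvm/Analysis/BranchProbabilityInfo.h"
    #include "llvm/IR/Instructions.h"
    #include "llvm/IR/LLVMContext.h"
    #include "llvm/IR/MDBuilder.h"
    using namespace llvm;

    // Direct route (what SimplifyCFG-style clients do): attach branch
    // weights straight onto the terminator as !prof metadata.
    void annotateBranch(BranchInst *BI, uint32_t Taken, uint32_t NotTaken) {
      MDBuilder MDB(BI->getContext());
      BI->setMetadata(LLVMContext::MD_prof,
                      MDB.createBranchWeights(Taken, NotTaken));
    }

    // Managed route: go through the dedicated analysis instead of reading
    // or writing the metadata yourself.
    BranchProbability edgeProb(BranchProbabilityInfo &BPI,
                               const BasicBlock *Src,
                               const BasicBlock *Dst) {
      return BPI.getEdgeProbability(Src, Dst);
    }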


>
> I'm not sure this is desirable regardless.
>
>
>
>>
>> b) There is exactly *one* way for this analysis to compute this
>> information from an *external* profile source: profile metadata attached to
>> the IR.
>>
>
>
> This is the case already -- all profile data are annotated onto the IR via
> an analysis pass (or, in the FE-based instrumentation case, by the FE
> during LLVM code gen).
>
>
>
>>
>> c) There could be many external profile sources, but all of them should
>> be read and then translated into metadata annotations on the IR so that
>> serialization / deserialization preserve them in a common format and we can
>> reason about how they work.
>>
>>
> This should be the case already -- for instance, sample-based and
> instrumentation-based profiles share the same IR annotations for branch
> probability, entry count, and profile summary.
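
A rough sketch of the source-agnostic view a consumer gets once that shared
annotation is in place. Function::getEntryCount has changed shape across
LLVM versions, and the "ProfileSummary" module-flag name is from memory, so
treat the details as approximate:

    #include "llvm/IR/Function.h"
    #include "llvm/IR/Module.h"
    using namespace llvm;

    // Whether the counts came from sampling or from instrumentation is
    // invisible here -- both sources land in the same annotations.
    uint64_t entryCountOrZero(const Function &F) {
      if (auto Count = F.getEntryCount())  // function_entry_count metadata
        return *Count;
      return 0;
    }

    // The whole-program summary travels the same way, as module-level
    // metadata rather than a side channel into ProfileData.
    Metadata *profileSummary(Module &M) {
      return M.getModuleFlag("ProfileSummary");
    }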
>
>
>>
>> This layering is why it is only a transform that accesses ProfileData --
>> it is responsible for annotating the IR and nothing else. Then the
>> analysis uses these annotations and never reads the data directly.
>>
>> I think this is a really important separation of concerns as it ensures
>> that we don't get an explosion of different analyses supporting various
>> different subsets of profile sources.
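
Put concretely, the layering rules out a consumer shaped like the
commented-out sketch below -- one branch per profile source -- and allows
only the single metadata query; the reader class names in the comment are
stand-ins, not real APIs:

    #include "llvm/IR/Instruction.h"
    #include "llvm/IR/LLVMContext.h"
    using namespace llvm;

    // Ruled out by the layering: every analysis growing per-source logic.
    //   if (HaveInstrProfile)       W = InstrReader.weightsFor(I);
    //   else if (HaveSampleProfile) W = SampleReader.weightsFor(I);
    //   ...
    //
    // Required instead: one question, answered from the IR annotation,
    // regardless of which source produced it.
    MDNode *profileFor(const Instruction &I) {
      return I.getMetadata(LLVMContext::MD_prof);
    }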
>>
>>
>> Now, the original design only accounted for profile information *within*
>> a function body; clearly it needs to be extended to support
>> interprocedural information.
>>
>
>
> Not sure what you mean.  Profile data in general does not extend to IPA
> (we will reopen discussion on that soon), but the profile summary is
> 'invariant'/read-only data, which should be available to IPA already.
>
> IPA-level profiling data might be invariant, but inside the function it
> certainly needs to change because the code inside functions changes
> (branches are eliminated, transformed into selects, etc.).
>
>  -Hal
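
A small sketch of the point about selects: once a transform folds a
conditional branch into a select, the branch -- and the !prof branch
weights hanging off it -- is gone, so the intra-function profile has to be
updated by whoever did the rewrite. The names below are illustrative:

    #include "llvm/IR/Instructions.h"
    using namespace llvm;

    // Fold a two-way branch into a select; the branch_weights metadata on
    // BI describes edges that are about to stop existing.
    void foldToSelect(BranchInst *BI, Value *TrueV, Value *FalseV,
                      PHINode *Merge) {
      Value *Sel = SelectInst::Create(BI->getCondition(), TrueV, FalseV,
                                      "sel", BI);
      Merge->replaceAllUsesWith(Sel);
      // ...CFG cleanup elided; the surviving block frequencies now need to
      // be recomputed or explicitly updated.
    }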
>
>
> David
>
>
>
>> But I would still expect that to follow a similar layering where we first
>> read the data into IR annotations, then have an analysis pass (this time a
>> module analysis pass in all likelihood) that brokers access to these
>> annotations through an API that can do intelligent things like synthesizing
>> it from the "cold" attribute or whatever when missing.
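
A sketch of what such a broker's query might look like; this interface is
hypothetical rather than an existing pass, and the exact shape of
Function::getEntryCount has varied across LLVM versions:

    #include "llvm/IR/Attributes.h"
    #include "llvm/IR/Function.h"
    using namespace llvm;

    enum class Hotness { Hot, Cold, Unknown };

    // Hypothetical module-level broker: answer from profile annotations
    // when present, synthesize from attributes (e.g. "cold") when missing.
    Hotness queryHotness(const Function &F, uint64_t HotCountThreshold) {
      if (auto Count = F.getEntryCount())
        return *Count >= HotCountThreshold ? Hotness::Hot : Hotness::Cold;
      if (F.hasFnAttribute(Attribute::Cold))
        return Hotness::Cold;
      return Hotness::Unknown;
    }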
>>
>
>
>>
>> -Chandler
>>
>
>
>
>
> --
> Hal Finkel
> Assistant Computational Scientist
> Leadership Computing Facility
> Argonne National Laboratory
>