[llvm-dev] Move InlineCost.cpp out of Analysis?
Xinliang David Li via llvm-dev
llvm-dev at lists.llvm.org
Mon Apr 18 15:45:21 PDT 2016
On Mon, Apr 18, 2016 at 3:00 PM, Chandler Carruth <chandlerc at gmail.com> wrote:
> On Mon, Apr 18, 2016 at 2:48 PM Hal Finkel <hfinkel at anl.gov> wrote:
>> From: "Xinliang David Li" <davidxl at google.com>
>> On Mon, Apr 18, 2016 at 2:33 PM, Mehdi Amini <mehdi.amini at apple.com> wrote:
>>> In the current case at stake: the issue is that we can't have the
>>> Analysis library use anything from the ProfileData library. Conceptually
>>> there is a problem, IMO.
>> Yes -- this is a very good point.
>> Independent of anything else, +1.
> The design of ProfileData and reading profile information in the entire
> middle end had a really fundamental invariant that folks seem to have lost
> track of:
Not sure what you mean by 'lost track of'.
> a) There is exactly *one* way to get at profile information from general
> analyses and transforms: a dedicated analysis pass that manages access to
> the profile info.
This is not the case as of today. BPI is a dedicated analysis pass that
manages branch-probability profile information, but it is only used in
limited situations (e.g., by BFI, and for profile updates in jump threading)
-- using it requires more memory as well as incremental-update interfaces.
Many transformation passes simply skip it and access the metadata in the IR
directly.
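To make the two access paths concrete, here is a rough sketch (illustrative
only; the helper functions are made up, and exact signatures vary by release)
of querying BPI versus reading the !prof metadata straight off the IR, which
is what many transforms do today:

    // Sketch only: the two ways a pass can consume branch profile data.
    #include "llvm/Analysis/BranchProbabilityInfo.h"
    #include "llvm/IR/Constants.h"
    #include "llvm/IR/Instructions.h"
    #include "llvm/IR/LLVMContext.h"
    #include "llvm/IR/Metadata.h"
    using namespace llvm;

    // Path 1: go through the dedicated analysis pass (BPI).
    BranchProbability probViaBPI(BranchProbabilityInfo &BPI,
                                 const BasicBlock *BB, unsigned SuccIdx) {
      return BPI.getEdgeProbability(BB, SuccIdx);
    }

    // Path 2: read the "branch_weights" !prof metadata directly off the IR.
    bool weightsDirect(const BranchInst *BI, uint64_t &TrueW, uint64_t &FalseW) {
      MDNode *Prof = BI->getMetadata(LLVMContext::MD_prof);
      if (!Prof || Prof->getNumOperands() < 3)
        return false;
      auto *Name = dyn_cast<MDString>(Prof->getOperand(0));
      if (!Name || Name->getString() != "branch_weights")
        return false;
      auto *T = mdconst::dyn_extract<ConstantInt>(Prof->getOperand(1));
      auto *F = mdconst::dyn_extract<ConstantInt>(Prof->getOperand(2));
      if (!T || !F)
        return false;
      TrueW = T->getZExtValue();
      FalseW = F->getZExtValue();
      return true;
    }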
> b) There is exactly *one* way for this analysis to compute this
> information from an *external* profile source: profile metadata attached to
> the IR.
This is the case already -- all profile data is annotated onto the IR via an
analysis pass (or, in the FE-based instrumentation case, by the FE during
LLVM code generation).
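Roughly, the annotation step looks like the sketch below (illustrative only;
MDBuilder::createBranchWeights is the real helper, the surrounding function is
invented): the profile reader or FE is the only thing that touches the raw
profile, and all it leaves behind is metadata.

    // Sketch: how an annotation pass/FE might attach branch weights to the IR.
    #include "llvm/IR/Instructions.h"
    #include "llvm/IR/LLVMContext.h"
    #include "llvm/IR/MDBuilder.h"
    using namespace llvm;

    void annotateBranch(BranchInst *BI, uint32_t Taken, uint32_t NotTaken) {
      MDBuilder MDB(BI->getContext());
      // Leaves !prof !{!"branch_weights", i32 Taken, i32 NotTaken} on the branch.
      BI->setMetadata(LLVMContext::MD_prof,
                      MDB.createBranchWeights(Taken, NotTaken));
    }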
> c) There could be many external profile sources, but all of them should be
> read and then translated into metadata annotations on the IR so that
> serialization / deserialization preserve them in a common format and we can
> reason about how they work.
This should be the case already -- for instance, sample-based and
instrumentation-based PGO share the same IR annotations for branch
probabilities, entry counts, and so on.
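For example, the entry-count annotation is queried the same way regardless of
which profile source produced it (sketch only, with an invented helper name;
this assumes the current Optional<uint64_t> form of Function::getEntryCount):

    // Sketch: consumers see one uniform annotation, whatever the source.
    #include "llvm/IR/Function.h"
    using namespace llvm;

    uint64_t entryCountOrZero(const Function &F) {
      if (auto Count = F.getEntryCount())  // written by sample or instr profile loader
        return *Count;
      return 0;
    }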
> This layering is why it is only a transform that accesses ProfileData --
> it is responsible for annotating the IR and nothing else. Then the
> analysis uses these annotations and never reads the data directly.
> I think this is a really important separation of concerns as it ensures
> that we don't get an explosion of different analyses supporting various
> different subsets of profile sources.
> Now, the original design only accounted for profile information *within* a
> function body; clearly it needs to be extended to support interprocedural
> analyses as well.
Not sure what you mean. Profile data in general does not extend to IPA (we
will reopen discussion on that soon), but the profile summary is
'invariant'/read-only data, which should be available to IPA already.
> But I would still expect that to follow a similar layering where we first
> read the data into IR annotations, then have an analysis pass (this time a
> module analysis pass in all likelihood) that brokers access to these
> annotations through an API that can do intelligent things like synthesizing
> it from the "cold" attribute or whatever when missing.
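Such a broker could look something like the sketch below (names invented, not
an existing LLVM API) -- prefer the real profile annotation and synthesize an
answer from the "cold" attribute only when the profile is missing:

    // Hypothetical helper (invented name), not an existing LLVM API.
    #include "llvm/IR/Function.h"
    using namespace llvm;

    bool isProbablyCold(const Function &F) {
      if (auto Count = F.getEntryCount())       // real profile annotation, if present
        return *Count == 0;
      return F.hasFnAttribute(Attribute::Cold); // synthesized fallback
    }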