[llvm-dev] Move InlineCost.cpp out of Analysis?

Chandler Carruth via llvm-dev llvm-dev at lists.llvm.org
Mon Apr 18 17:46:04 PDT 2016


On Mon, Apr 18, 2016 at 5:36 PM Xinliang David Li <davidxl at google.com>
wrote:

> On Mon, Apr 18, 2016 at 5:14 PM, Chandler Carruth <chandlerc at gmail.com>
> wrote:
>
>> On Mon, Apr 18, 2016 at 4:38 PM Xinliang David Li <davidxl at google.com>
>> wrote:
>>
>>>
>>>>>> Now, the original design only accounted for profile information
>>>>>> *within* a function body; clearly it needs to be extended to support
>>>>>> interprocedural information.
>>>>>>
>>>>>
>>>>>
>>>>> Not sure what you mean.  Profile data in general does not extend to
>>>>> IPA (we will reopen discussion on that soon), but profile summary is
>>>>> 'invariant'/readonly data, which should be available to IPA already.
>>>>>
>>>>
>>>> I don't know what you mean by "invariant" or readonly data here. I
>>>> think that whether or not the profile information is mutated shouldn't
>>>> influence the design invariants I described above.
>>>>
>>>
>>> I do not disagree with this. What I was saying is that the information
>>> can be made available to IPA in some form due to its readonly nature.
>>>
>>
>> While it can be made available, it is very hard to make it available even
>> in a readonly form in the current pass manager.
>>
>> You essentially have to avoid caching anything and make the API always
>> re-examine the IR annotations in order to reflect changes to them. There
>> are a few other "Immutable" analysis passes that abuse the legacy pass
>> manager in this way.
>>
>
> Can you explain? Why does caching have to be avoided? What are these abuses?
>

There is no invalidation of an immutable analysis pass in the old pass
manager. So if you cache something and it is changed, the cache won't get
updated unless you have a way to guarantee that everything that updates
the information *always* invalidates your cache (such as the ValueHandle
machinery). But that style of cache invalidation often causes as many
problems as it solves because it is very expensive.
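
Concretely, the no-caching style looks roughly like the sketch below
(hypothetical pass name; the !prof read just stands in for whatever
annotation the analysis exposes): the pass holds no state, so every query
simply sees whatever is currently in the IR.

// A minimal sketch of an immutable analysis that caches nothing: since
// nothing will ever invalidate it, every query re-examines the IR
// annotation directly, so later rewrites are still observed.
#include "llvm/IR/Function.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Metadata.h"
#include "llvm/Pass.h"

using namespace llvm;

namespace {
class ProfileAnnotationQuery : public ImmutablePass {
public:
  static char ID;
  ProfileAnnotationQuery() : ImmutablePass(ID) {}

  // No state is held here; if a transform pass rewrites the !prof metadata,
  // the next query simply sees the new value.
  MDNode *getProfileMD(const Function &F) const {
    return F.getMetadata(LLVMContext::MD_prof);
  }
};
} // end anonymous namespace

char ProfileAnnotationQuery::ID = 0;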

As a concrete example, if you want to walk the call graph to compute
transitive profile information on call graph edges, you'll have no way to
update this when the call graph changes other than to always recompute it
when queried.
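
A rough sketch of that recompute-on-query fallback, with a hypothetical
helper and assuming the current Function::getEntryCount() that returns an
Optional count:

// Rebuild and re-walk the call graph on every query, because nothing tells
// an immutable analysis that the graph has changed in the meantime.
#include "llvm/ADT/SmallPtrSet.h"
#include "llvm/ADT/SmallVector.h"
#include "llvm/Analysis/CallGraph.h"
#include "llvm/IR/Function.h"
#include "llvm/IR/Module.h"
#include <cstdint>

using namespace llvm;

static uint64_t transitiveEntryCount(Module &M, Function &F) {
  CallGraph CG(M); // rebuilt from scratch for every query
  SmallPtrSet<CallGraphNode *, 16> Seen;
  SmallVector<CallGraphNode *, 16> Worklist;
  uint64_t Total = 0;
  Worklist.push_back(CG[&F]);
  while (!Worklist.empty()) {
    CallGraphNode *N = Worklist.pop_back_val();
    if (!Seen.insert(N).second)
      continue;
    if (Function *Callee = N->getFunction())
      if (auto Count = Callee->getEntryCount())
        Total += *Count; // accumulate counts over everything reachable from F
    for (auto &Edge : *N)
      Worklist.push_back(Edge.second);
  }
  return Total;
}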


>
>
>
>> That seems fine as a temporary thing. I don't *think* this kind of
>> information would pose a significant compile time cost to recompute on each
>> query, but I've not looked at the nature of the IPA queries you would want
>> to make.
>>
>
> Note that for profile summary, since it is at the program level (and thus
> identical across modules), recomputing (re-reading from metadata) seems a
> waste -- though not too big a cost, as measured by Easwaran.
>

The above only applies to things that change (function counts, etc.).
Genuinely immutable stuff is fine here.
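
To make the distinction concrete, a small sketch with assumed names (a
"ProfileSummary" module flag and !prof function metadata): the genuinely
immutable program-level summary is read once and kept, while the mutable
per-function counts always go back to the IR.

#include "llvm/IR/Function.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Metadata.h"
#include "llvm/IR/Module.h"

using namespace llvm;

struct ProfileQueries {
  Metadata *Summary; // genuinely immutable -> safe to read once and keep

  explicit ProfileQueries(const Module &M)
      : Summary(M.getModuleFlag("ProfileSummary")) {} // assumed flag name

  // Counts can be rewritten by transform passes, so always re-read them
  // from the IR rather than caching.
  static MDNode *functionCounts(const Function &F) {
    return F.getMetadata(LLVMContext::MD_prof);
  }
};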

However, there is another problem -- even if the data won't be changed while
the optimizer is running, immutable analyses run before *all* transform
passes, including any pass that reads profile data and writes it into the
IR. =/