[llvm-commits] [llvm] r173148 - in /llvm/trunk: include/llvm/Analysis/TargetTransformInfo.h lib/Analysis/CodeMetrics.cpp lib/Analysis/IPA/InlineCost.cpp lib/Analysis/TargetTransformInfo.cpp lib/Transforms/Scalar/TailRecursionElimination.cpp

Chandler Carruth chandlerc at gmail.com
Fri Feb 21 15:58:19 PST 2014


On Fri, Feb 21, 2014 at 11:34 AM, Hal Finkel <hfinkel at anl.gov> wrote:

> ----- Original Message -----
> > From: "Chandler Carruth" <chandlerc at gmail.com>
> > To: llvm-commits at cs.uiuc.edu
> > Sent: Tuesday, January 22, 2013 5:26:02 AM
> > Subject: [llvm-commits] [llvm] r173148 - in /llvm/trunk:
> include/llvm/Analysis/TargetTransformInfo.h
> > lib/Analysis/CodeMetrics.cpp lib/Analysis/IPA/InlineCost.cpp
> lib/Analysis/TargetTransformInfo.cpp
> > lib/Transforms/Scalar/TailRecursionElimination.cpp
> >
> > Author: chandlerc
> > Date: Tue Jan 22 05:26:02 2013
> > New Revision: 173148
> >
> > URL: http://llvm.org/viewvc/llvm-project?rev=173148&view=rev
> > Log:
> > Begin fleshing out an interface in TTI for modelling the costs of
> > generic function calls and intrinsics. This is somewhat overlapping
> > with
> > an existing intrinsic cost method, but that one seems targeted at
> > vector intrinsics. I'll merge them or separate their names and use
> > cases
> > in a separate commit.
> >
> > This sinks the test of 'callIsSmall' down into TTI where targets can
> > control it. The whole thing feels very hack-ish to me though. I've
> > left
> > a FIXME comment about the fundamental design problem this presents.
> > It
> > isn't yet clear to me what the users of this function *really* care
> > about. I'll have to do more analysis to figure that out. Putting this
> > here at least provides it access to proper analysis pass tools and
> > other
> > such. It also allows us to more cleanly implement the baseline cost
> > interfaces in TTI.
> >
> > With this commit, it is now theoretically possible to simplify much
> > of
> > the inline cost analysis's handling of calls by calling through to
> > this
> > interface. That conversion will have to happen in subsequent commits
> > as
> > it requires more extensive restructuring of the inline cost analysis.
> >
> > The CodeMetrics class is now really only in the business of running
> > over
> > a block of code and aggregating the metrics on that block of code,
> > with
> > the actual cost evaluation done entirely in terms of TTI.
>
> Looking at this in detail,
>
> +    NumInsts += TTI.getUserCost(&*II);
>
> I'm not sure this is doing what we want. The issue is that the cost model
> is designed to be used with the vectorizer, and the 'costs' produced by
> that model don't really measure instructions, or even uop counts, but
> relative throughput. This means, for example, that if we can dispatch two
> integer adds per cycle, and 1 floating-point add per cycle, then the
> floating point add will have twice the cost of the integer add (which
> probably has a cost of 1). But it is not clear to me at all that this is
> the appropriate measure for inlining and unrolling (especially for
> unrolling, for OOO cores with loop dispatch buffers, we almost certainly
> want uop counts).
>
> So the question is: what to do about this? The straightforward thing seems
> to be to add some kind of 'cost-type' to all of the TTI cost APIs. There
> seem to be three potentially relevant kinds of costs:
>  - Scaled throughput
>  - uop counts
>  - instruction counts (or, perhaps instead: in-memory code size)
>
> The advantage of doing this is, for the most part, none of the target
> independent code would need to care about what kind of cost we needed
> (because the scalarization logic probably does not care), and just the
> target parts need to change. But maybe there is a better way. Opinions?
>

So, what Arnold said, but more generally I think we should formalize this
more.

The inliner doesn't actually *care* what is counted though. It's not clear
that we actually have any users that want the third measurement. Instead,
we have users that want to have an incredibly coarse proxy. As such, I
would focus on better interfaces for scaled throughput and uop counts, and
then we may find that one of these can easily be used by the inliner as a
"good enough" approximation of code size.

To be more specific about what the inliner cares about: it doesn't really
care much about the exact size, throughput, or anything else. Instead, it
cares about recognizing IR patterns which literally *disappear* on the
target (zext and bitcast are huge here), and IR patterns that *explode* on
the target like floating point math that we have to lower to library calls.
Everything else is essentially noise.

-Chandler.