[cfe-dev] costing optimisations

Sean Silva silvas at purdue.edu
Thu Nov 22 22:46:48 PST 2012


Adding LLVMdev, since this is intimately related to the optimization passes.

> I think this is roughly because some function level optimisations are
> worse than O(N) in the number of instructions.

Please profile this and mail llvmdev regarding passes with
significantly superlinear behavior (e.g. O(n^2)). My understanding is
that these kinds of algorithmic problems are generally considered bugs
in the optimizer and will be fixed.
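
For concreteness, here is one way to gather that data: generate C
functions of doubling size and wall-clock the compiles, then look at
per-pass numbers with `clang -ftime-report` (or `opt -time-passes` on
the IR) for any size where the total blows up. The script below is a
hypothetical sketch, not anything shipped with LLVM; it assumes `clang`
is on your PATH.

```python
import os
import subprocess
import tempfile
import time

def make_big_function(n):
    """Emit a C function whose body is n dependent statements."""
    body = "\n".join("  x = x * 3 + %d;" % i for i in range(n))
    return "int f(int x) {\n%s\n  return x;\n}\n" % body

def time_compile(source, cc="clang", opt="-O2"):
    """Wall-clock one compile of `source`; assumes `cc` is on PATH."""
    with tempfile.NamedTemporaryFile("w", suffix=".c",
                                     delete=False) as f:
        f.write(source)
        path = f.name
    try:
        t0 = time.time()
        subprocess.check_call([cc, opt, "-c", path, "-o", os.devnull])
        return time.time() - t0
    finally:
        os.unlink(path)

if __name__ == "__main__":
    # A pass that is O(n^2) shows roughly 4x time per doubling here,
    # versus roughly 2x for a linear pass.
    for n in (1000, 2000, 4000, 8000):
        print(n, time_compile(make_big_function(n)))
```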

> Roughly, a pass would calculate the cost of running on some
> unit (translation unit, function), and a "goodness" estimating
> how much improvement might be expected. The optimiser
> then tries to simultaneously maximise goodness and minimise
> cost by selecting some sequence of passes.

I doubt there is a way to compute the expected benefit that is
significantly cheaper than actually performing the optimization
itself. You also have to consider that passes interact with each other
in complicated and "poorly understood" ways, so it's hard to
extrapolate one pass's estimated benefit to overall code improvement
in the end. I say "poorly understood" because, as far as I know, there
is no formal or scientific underpinning to the choice and ordering of
the passes run as the "standard compile opts" (e.g. -O1, -O2, -O3)
beyond very high-level, extremely rough heuristics.
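
To make the proposal concrete, it amounts to a budgeted selection
problem: pick the passes with the best benefit-per-cost ratio until
the compile-time budget is spent. The sketch below is purely
illustrative; the pass names, costs, and "goodness" numbers are all
made up, and nothing like this exists in LLVM's pass manager.

```python
def select_passes(passes, budget):
    """Greedy sketch: take (name, cost, goodness) triples in order of
    goodness/cost ratio until the compile-time budget is exhausted."""
    chosen = []
    spent = 0.0
    for name, cost, goodness in sorted(
            passes, key=lambda p: p[2] / p[1], reverse=True):
        if spent + cost <= budget:
            chosen.append(name)
            spent += cost
    return chosen

# Hypothetical (name, estimated cost, estimated benefit) triples.
passes = [
    ("inline", 5.0, 8.0),
    ("gvn",    4.0, 3.0),
    ("licm",   2.0, 4.0),
    ("sroa",   1.0, 3.0),
]
print(select_passes(passes, budget=8.0))
# -> ['sroa', 'licm', 'inline']
```

Even granting such estimates, this is exactly where the
pass-interaction problem bites: "goodness" numbers for passes in
isolation say little about what a sequence of them achieves.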

> Another way to do this would be to calculate a time limit on
> a pass and abort it if it exceeded the limit.

No way. A wall-clock limit would make the optimizations
nondeterministic: the same input could compile to different code
depending on machine load.
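
A deterministic variant of the same idea is to cap units of work
rather than seconds, so identical inputs always yield identical
output. The sketch below is hypothetical; `fold` stands in for any
local rewrite a pass might attempt.

```python
def bounded_simplify(insts, budget=10000):
    """Deterministic cutoff sketch: count rewrite attempts instead of
    wall-clock time. Once the budget is spent, remaining items pass
    through unchanged, so results never depend on machine speed."""
    def fold(x):
        # Stand-in for a local rewrite: halve even values.
        return x // 2 if x % 2 == 0 else x
    out = []
    work = 0
    for x in insts:
        if work < budget:
            x = fold(x)
            work += 1
        out.append(x)
    return out
```

This is roughly what real passes do with hard iteration caps; the
cap still makes output depend on an arbitrary threshold, but at least
reproducibly so.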

-- Sean Silva

On Thu, Nov 22, 2012 at 11:50 PM, john skaller
<skaller at users.sourceforge.net> wrote:
> Hi, I've been having some of the usual problems with C compiler optimisations:
> they work fine on small functions but drop dead on big ones.
>
> I think this is roughly because some function level optimisations are
> worse than O(N) in the number of instructions.
>
> Changing the command line -O level is not a solution.
> The optimiser should adapt to ensure reasonable compile times.
>
> Has anyone thought about costing optimisation passes and using
> a linear optimisation formula to select when to apply them?
>
> TeX does something like this to calculate the best places to put
> line breaks when typesetting.
>
> Roughly, a pass would calculate the cost of running on some
> unit (translation unit, function), and a "goodness" estimating
> how much improvement might be expected. The optimiser
> then tries to simultaneously maximise goodness and minimise
> cost by selecting some sequence of passes.
>
> Another way to do this would be to calculate a time limit on
> a pass and abort it if it exceeded the limit.
>
> Some optimisations could be coded to do this kind of thing
> internally, i.e. partition a function and optimise the pieces,
> rather than the whole function, to avoid time overruns.
>
> FYI: I have a product which generates big functions.
> It does this for two reasons: one is performance, and the
> other is C's lack of suitable semantics, roughly coroutine
> calls (exchange of control) and probably continuation
> passing. LLVM also lacks these fundamentals and is too
> high level to implement them, so the implementation is
> done natively, but the result is big functions.
>
> --
> john skaller
> skaller at users.sourceforge.net
> http://felix-lang.org
