[cfe-dev] costing optimisations

john skaller skaller at users.sourceforge.net
Thu Nov 22 20:50:21 PST 2012


Hi, I've been having some of the usual problems with C compiler optimisations:
they work fine on small functions but drop dead on big ones.

I think this is roughly because some function-level optimisations are
worse than O(N) in the number of instructions.

Changing the command line -O level is not a solution.
The optimiser should adapt to ensure reasonable compile times.

Has anyone thought about costing optimisation passes and using
a linear optimisation formula to select when to apply them?

TeX does something like this to calculate the best places to put
line breaks when typesetting. 

Roughly, each pass would estimate the cost of running on some
unit (translation unit, function) and a "goodness" score estimating
how much improvement might be expected. The optimiser would
then try to simultaneously maximise goodness and minimise
cost by selecting some sequence of passes.

Another way to do this would be to set a time limit on
a pass and abort it if the limit is exceeded.

Some optimisations could be coded to do this kind of thing
internally, i.e. partition a function and optimise the pieces
rather than the whole function, to avoid time overruns.

FYI: I have a product which generates big functions.
It does this for two reasons: one is performance;
the other is the lack of suitable semantics in C, roughly
coroutine calls (exchange of control) and probably
continuation passing. LLVM also lacks these fundamentals
and is too high level to implement them, so the implementation
is done natively, but the result is big functions.

--
john skaller
skaller at users.sourceforge.net
http://felix-lang.org