[LLVMdev] Advice - llvm as binary to binary translator ?
Jonathan S. Shapiro
shap at eros-os.com
Tue Jun 24 11:33:50 PDT 2008
This is off-topic. I would reply directly to Erick, but I seem to have
missed the note that had his reply address.
The evidence in the literature supporting non-trivial optimization in a
dynamic binary-to-binary translator should be viewed with caution. When
we started the HDTrans work, our suspicion was that in most cases the
fancy optimizations merely (almost) make up for the computational cost
of generating the IR in the first place. The numbers we achieved on
IA32->IA32 translation seem to support that view. There
are special cases where this might not be true, and of course there are
specific counter-examples, but the HDTrans numbers apparently came as a
shock to some people in the dynamic translator world.
Erick mentioned in particular a number of control flow optimizations
that are very easily handled in a direct emitter (i.e. with no IR). Dead
code is also eliminated naturally by any dynamic translator, since blocks
are translated only when they are actually executed. Redundant
expressions are not, but the performance advantage of modern processors
over legacy machines is so large that eliminating those may not be
worthwhile. Newer machines typically have more registers and/or fast L1
caches, both of which are your friend in a dynamic translator.
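To illustrate the dead-code point: a dynamic translator decodes and
translates a block only when control actually reaches it, so code on
never-taken paths never enters the code cache at all. A minimal sketch
in Python (the block table, block names, and interpreter loop here are
invented for illustration, not HDTrans internals):

```python
# Toy demand-driven translator: the guest "program" is a dict of basic
# blocks. Each block is (work, next_block); next_block=None means halt.
guest = {
    "entry": ("do_entry", "loop"),
    "loop":  ("do_loop",  "exit"),
    "dead":  ("never_run", "exit"),   # unreachable: never translated
    "exit":  ("do_exit",  None),
}

code_cache = {}   # translated blocks, filled lazily

def translate(name):
    """Translate one guest block on first use
    (stand-in for decoding + emitting host code)."""
    code_cache[name] = guest[name]
    return code_cache[name]

def run(start):
    trace = []
    pc = start
    while pc is not None:
        block = code_cache.get(pc) or translate(pc)  # translate on demand
        trace.append(block[0])
        pc = block[1]
    return trace

run("entry")
# The "dead" block was never decoded or translated:
print(sorted(code_cache))   # ['entry', 'exit', 'loop']
```

The elimination is free: no analysis pass ever looks at the unreachable
block, because the translation loop simply never asks for it.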
Given this, if performance actually matters, your time and effort might
be better spent crafting a really tight binary decoder that pays careful
attention to cache utilization, and then implementing a relatively
modest number of branch-to-branch and call/return optimizations that are
fairly simple to do with a low-level direct translator.
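One way to read the branch-to-branch optimization mentioned above is
simple jump-target forwarding: when a translated branch targets a block
that is itself just an unconditional jump, emit a branch to the final
destination instead. A hedged sketch (the jump table and the `limit`
parameter are invented for illustration):

```python
# Toy branch-to-branch collapsing: chase chains of trivial
# jump-only blocks so the emitted branch goes straight to the
# final destination.
jumps = {        # block -> unconditional jump target (None = real work)
    "A": "B",
    "B": "C",
    "C": None,
}

def resolve(target, jumps, limit=64):
    """Follow jump-to-jump blocks to their final destination.
    `limit` guards against cycles of jumps in the guest code."""
    for _ in range(limit):
        nxt = jumps.get(target)
        if nxt is None:
            return target
        target = nxt
    return target  # give up on pathological chains/cycles

print(resolve("A", jumps))  # C
```

This needs no IR at all: it is a table lookup done at emission time,
which is exactly why such optimizations sit comfortably in a low-level
direct translator.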
Regardless of whether you accept my argument about the value of
optimization, it's clear from the HDTrans numbers that you can lose a
lot more performance in decoding and IR generation than the published
research work has generally acknowledged.
Please don't hesitate to contact me directly if I can be helpful.