[LLVMdev] thinking about timing-test-driven scheduler

orthochronous orthochronous at gmail.com
Wed Jun 9 08:30:53 PDT 2010


Hi,

I've been thinking about how to implement a framework for attempting
instruction scheduling of small blocks of code by using (GA/simulated
annealing/etc) controlled timing-test evaluations of various
orderings. (I'm particularly interested in small-ish numerical
inner-loop code on low-power CPUs like Atom and various ARMs, where
the CPU doesn't have the ability to "massage" the compiler's
scheduling.) I had been thinking that the minimal-changes approach
would be to attach the desired ordering to the bitcode as metadata
nodes, use the standard JIT interface, and just add a tiny bit of code
in the scheduler to recognise the metadata and add the orderings as
extra dependencies via addPred().
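
To make that first idea concrete, the IR side would look roughly like
the sketch below. This is purely illustrative: the "sched.order" key
and the helper name are placeholders I've made up, and the exact
headers and MDNode::get signature depend on the LLVM version in use.

    #include "llvm/Constants.h"
    #include "llvm/Instructions.h"
    #include "llvm/LLVMContext.h"
    #include "llvm/Metadata.h"
    #include "llvm/Type.h"

    using namespace llvm;

    // Tag an instruction with its desired slot within its block, so a
    // small scheduler hook can later recover the requested ordering.
    static void tagWithDesiredSlot(Instruction *I, unsigned Slot) {
      LLVMContext &Ctx = I->getContext();
      Value *SlotVal = ConstantInt::get(Type::getInt32Ty(Ctx), Slot);
      I->setMetadata("sched.order", MDNode::get(Ctx, &SlotVal, 1));
    }

    // On the scheduler side the hook would look the metadata back up and
    // turn consecutive slots into artificial order edges on the SUnits,
    // roughly Later->addPred(SDep(Earlier, SDep::Order, ...)).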

However, looking at things in detail, it looks like instruction
selection does things that would make propagating bitcode-instruction
ordering through to MachineInstr ordering pretty invasive. It looks
like it may actually be less invasive overall to put the "try
different orderings" controller code at the point where the
SelectionDAG is just about to be converted to a list of MachineInstrs,
essentially repeatedly calling reg-alloc, executing the result, and
using the run-time info to suggest a new candidate ordering. Are there
any obvious reasons why this is actually a bad approach to the
problem? On the assumption that it's a reasonable idea, it involves
repeatedly invoking reg-alloc and machine-code execution from an
unexpected point in the LLVM pipeline, so is there any documentation
about "destructive updating" or "single-use" assumptions in the
compilation-and-execution codepaths? (I haven't found anything obvious
on the LLVM website.)
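
For the second approach, the controller I have in mind is essentially
a standard annealing loop wrapped around "schedule, reg-alloc, JIT,
time". Something like the sketch below; every helper here is a
hypothetical stand-in for those steps, not existing LLVM API.

    #include <cmath>
    #include <cstdlib>
    #include <vector>

    typedef std::vector<unsigned> Ordering; // permutation of the SDNodes

    // Hypothetical stand-ins: the real versions would linearise the
    // SelectionDAG in the given order, run reg-alloc, JIT the function
    // and time it on the target.
    static double timeCandidate(const Ordering &O) { return double(O.size()); }
    static Ordering randomNeighbour(Ordering O) {
      if (O.size() > 1)
        std::swap(O[std::rand() % O.size()], O[std::rand() % O.size()]);
      return O; // the real version would only swap dependency-legal pairs
    }

    // Simulated-annealing controller: always accept improvements, and
    // accept regressions with probability exp(-delta/T) so the search
    // can escape local minima.
    static Ordering anneal(Ordering Cur, unsigned Steps) {
      double CurCost = timeCandidate(Cur);
      double T = 1.0;
      for (unsigned i = 0; i != Steps; ++i, T *= 0.95) {
        Ordering Next = randomNeighbour(Cur);
        double NextCost = timeCandidate(Next);
        double Delta = NextCost - CurCost;
        if (Delta < 0 ||
            std::exp(-Delta / T) > double(std::rand()) / RAND_MAX) {
          Cur = Next;
          CurCost = NextCost;
        }
      }
      return Cur;
    }

My question above is really about what those stand-in steps are
allowed to assume, i.e. whether the SelectionDAG / MachineFunction
state can survive being scheduled and reg-alloc'd repeatedly like
this.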

Many thanks for any suggestions,
David Steven Tweed


