[LLVMdev] Pre-Allocation Schedulers in LLVM

Andrew Trick atrick at apple.com
Tue Sep 20 20:54:09 PDT 2011


On Sep 17, 2011, at 10:07 AM, Ghassan Shobaki wrote:
> Hi,
> 
> I am currently writing a paper documenting a research project that we have done on pre-allocation instruction scheduling to balance ILP and register pressure. In the paper we compare the pre-allocation scheduler that we have developed to LLVM's default schedulers for two targets: x86-64 and x86-32. We would like to include in our paper some brief descriptions of the two LLVM schedulers that we are comparing against and some information about the machine model that they are scheduling for.  So, it would be great if you could confirm or correct the following information and answer my questions below:
> 
> The default scheduler for the x86-32 target is the bottom-up register-pressure reduction (BURR) scheduler, while for the x86-64 target it is the ILP Scheduler. According to the brief documentation in the source file ScheduleDAGRRList, the BURR is a register pressure reduction scheduler, while the ILP is a register-pressure aware scheduler that tries to balance ILP and register pressure. 

Yes. For those wondering how to find out, grep for 'setSchedulingPreference'.
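For reference, each target picks its preference in its TargetLowering constructor. The x86 selection looks roughly like the sketch below (paraphrased from X86ISelLowering.cpp from memory, so treat the exact condition as an approximation; Sched::ILP and Sched::RegPressure are the real enumerators):

    // Approximate sketch of the X86 TargetLowering constructor:
    if (Subtarget->is64Bit())
      setSchedulingPreference(Sched::ILP);         // x86-64: balance ILP and pressure
    else
      setSchedulingPreference(Sched::RegPressure); // x86-32: reduce register pressure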

> My questions are:
> 
> -Are there any references (such as published research) that describe each/any of these scheduling algorithms? 

The LLVM ILP scheduler is a mix of heuristics, some of which may be standard practice while others are LLVM inventions. The combination of heuristics can only be described as ad hoc.

> - By examining the source code, it appears that neither scheduler has a machine model describing the functional units and the mapping of instructions to functional units on the target x86 machine. Is that right?

The x86 LLVM target does not have a machine pipeline model (instruction itinerary).
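Concretely, when there is no itinerary the scheduler has nothing to consult and falls back to a uniform default. A minimal sketch of the idea (not the actual LLVM code, though InstrItineraryData::isEmpty() and getStageLatency() are real queries):

    // Sketch: fall back to unit latency when no pipeline model exists.
    unsigned computeNodeLatency(const InstrItineraryData *Itins,
                                unsigned SchedClass) {
      if (!Itins || Itins->isEmpty())
        return 1;                                // no itinerary: assume unit latency
      return Itins->getStageLatency(SchedClass); // otherwise read the model
    }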

> - Based on the test cases that I have analyzed, it looks like the BURR scheduler sets all latencies to 1, which essentially eliminates any scheduling for ILP and makes register pressure reduction the only objective of this scheduler. Can you please confirm or correct this?

You're obviously wondering why this is called an "ILP" scheduler, which is understandable. I would even argue that the term "scheduler" is not entirely accurate.

At any rate, this scheduler does not directly increase IPC by avoiding pipeline stalls as you might expect. What we mean by ILP scheduling is increasing the number of "in-flight" instructions, a goal that is often at odds with reducing register pressure. The key is carefully tracking register pressure (locally) and dependence height. The heuristics decide whether an instruction is likely to increase register pressure; if not, then we attempt to overlap chains of dependent instructions.
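If it helps, here is a hypothetical sketch of that priority decision; the struct and field names are illustrative, not LLVM's actual comparator:

    struct Candidate {
      unsigned Height;    // dependence height below this node
      bool GrowsPressure; // locally tracked: would scheduling it raise pressure?
    };

    // Prefer candidates that do not grow register pressure; among those,
    // prefer the taller dependence chain so that independent chains overlap
    // and more instructions are in flight.
    bool preferLeft(const Candidate &L, const Candidate &R) {
      if (L.GrowsPressure != R.GrowsPressure)
        return !L.GrowsPressure;
      return L.Height > R.Height;
    }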

Note that LLVM's "hybrid" scheduler works much the same as the ILP scheduler but uses a mix of heuristics that happens to work better for targets with an instruction itinerary.

> - Again based on analyzing test cases, it appears that the ILP scheduler sets the latencies of DIV and SQRT (both INT and FP) to 10, while the latencies of all other instructions are set to 1. Can you please confirm or correct this observation?

The scheduler is aware that certain operations are "very high latency", but frankly that knowledge isn't crucial. 10 is used because we want a number that is much greater than 1 but still within a reasonable expected latency. Without a pipeline model, it just doesn't make much difference.
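The effect is roughly the following (an illustrative sketch, not the exact code; I believe the real mechanism is a target hook along the lines of isHighLatencyDef on machine opcodes, and the opcode list below is an assumption based on your observation):

    // Coarse latency model: a few "very high latency" opcodes get a large
    // constant, everything else gets unit latency.
    unsigned roughLatency(unsigned Opcode) {
      switch (Opcode) {
      case ISD::SDIV: case ISD::UDIV:
      case ISD::FDIV: case ISD::FSQRT:
        return 10;   // much greater than 1, within reasonable expected latency
      default:
        return 1;
      }
    }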

> Apparently, the developers of the ILP scheduler assumed that this rough latency model would be sufficient to do ILP scheduling on the x86 target, because the x86 hardware has a good dynamic scheduler. Our testing, however, shows that this is the case for most but not all programs. For one particular benchmark with a high degree of ILP, using more precise latency info significantly improved performance. Would the LLVM developers be interested in adding more precise latency info for the x86 target?

Free performance? Sure. If you're talking about adding an instruction itinerary for X86, that should be no problem. A major change to the scheduler may be harder to swallow. Either way, it would be nice to see the patch and benchmark data or specific examples of code sequences that improve.

Thanks,
-Andy