[LLVMdev] LLVM's Pre-allocation Scheduler Tested against a Branch-and-Bound Scheduler

Ghassan Shobaki ghassan_shobaki at yahoo.com
Sat Sep 29 14:46:05 PDT 2012


Duncan,

Yes, there is significant random variance among runs for some (but not all) SPEC benchmarks (for example, libquantum, bwaves, and cactus). However, we run a complete SPEC test with three or five iterations after every significant change we make to the code and make sure that we reproduce all previously measured differences. If we cannot reproduce a previously seen result, that flags a bug in our latest changes, which we then trace and fix. For the production test that was used to generate the results for the paper, we ran SPEC using 9 iterations (as documented in the paper). Since we have been working on this for over a year, making multiple significant changes every week, most of the differences reported here have been reproduced tens, if not hundreds, of times. Any difference that cannot be reproduced many times is reported as a zero difference, hence the many zero differences in the tables.

Furthermore, we have analyzed most of the benchmarks on which significant differences have been measured (lbm, gromacs, cactus, sphinx, etc.) and identified the cause of the performance difference in each case. In most cases, the cause is a reduction in register pressure in some hot basic block that leads to a significant reduction in spill code, as reported by the register allocator. For example, on the lbm benchmark, using LLVM's scheduler causes the register allocator to spill 12 virtual registers in the hottest function, which accounts for 99% of the execution time, while using the branch-and-bound scheduler causes the register allocator to spill only 2 registers in that hot function.


So, we are quite certain that the reported differences are real and reproducible.

Evan,

Please see my inlined answers below:

Thanks
-Ghassan




________________________________
 From: Evan Cheng <evan.cheng at apple.com>
To: Ghassan Shobaki <ghassan_shobaki at yahoo.com> 
Cc: "llvmdev at cs.uiuc.edu" <llvmdev at cs.uiuc.edu> 
Sent: Saturday, September 29, 2012 9:37 PM
Subject: Re: [LLVMdev] LLVM's Pre-allocation Scheduler Tested against a Branch-and-Bound Scheduler
 



On Sep 29, 2012, at 2:43 AM, Ghassan Shobaki <ghassan_shobaki at yahoo.com> wrote:

>Hi,
>
>We are currently working on revising a journal article that describes our work on pre-allocation scheduling using LLVM and have some questions about LLVM's pre-allocation scheduler. The answers to these questions will help us better document and analyze the results of our benchmark tests that compare our algorithm with LLVM's pre-allocation scheduling algorithm.
>
>First, here is a brief description of our work:
>
>We have developed a combinatorial algorithm for balancing instruction-level parallelism (ILP) and register pressure (RP) during pre-allocation scheduling. The algorithm is based on a branch-and-bound (B&B) approach, wherein the objective function is a linear combination of schedule length and register pressure. We have implemented this algorithm and integrated it into LLVM 2.9 as an alternate pre-allocation scheduler. We then compared the performance of our B&B scheduler with that of LLVM's default scheduler on x86 (the BURR scheduler on x86-32 and the ILP scheduler on x86-64) using SPEC CPU2006. The results show that our B&B scheduler improves the performance of some benchmarks by up to 21% relative to LLVM's default scheduler. The geometric-mean speedup on FP2006 is about 2.4% across the entire suite. We then observed that LLVM's ILP scheduler on x86-64 uses "rough" latency values, so we added the precise latency values published by Agner (http://www.agner.org/optimize/), which led to further speedup relative to LLVM's ILP scheduler on some benchmarks. The most significant gain from adding precise latencies was on the gromacs benchmark, which has a high degree of ILP. The benchmarking results on x86-64 using both LLVM's rough latencies and Agner's precise latencies are attached:
>
>This work makes two points:
>
>
>- A B&B algorithm can discover significantly better schedules than a heuristic can for some larger, hard-to-schedule blocks, and if such blocks happen to occur in hot code, their scheduling will have a substantial impact on performance.
>- A B&B algorithm is generally slower than a heuristic, but it is not as slow as most people think. By applying such an algorithm selectively to the hot blocks that are likely to benefit from it and setting a compile-time budget, a significant performance gain may be achieved with a relatively small increase in compile time.
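To make the objective concrete, here is a toy sketch of scheduling with a linear combination of schedule length and peak register pressure. This is purely illustrative, not our actual implementation: the DAG, latencies, and weights are invented, and exhaustive enumeration over topological orders stands in for a real branch-and-bound with lower-bound pruning.

```python
import itertools

# Invented four-node dependence DAG: c depends on a and b; d depends on c.
NODES = ["a", "b", "c", "d"]
PREDS = {"a": [], "b": [], "c": ["a", "b"], "d": ["c"]}
LATENCY = {"a": 3, "b": 1, "c": 1, "d": 1}
W_LENGTH, W_PRESSURE = 1.0, 2.0  # weights of the linear objective

def evaluate(order):
    """Issue one instruction per cycle (stalling for latencies) and return
    (schedule length, peak register pressure), assuming each value has a
    single use that closes its live range."""
    finish, live = {}, set()
    cycle = peak = 0
    for n in order:
        ready = max((finish[p] for p in PREDS[n]), default=0)
        cycle = max(cycle + 1, ready)      # stall until operands are ready
        finish[n] = cycle + LATENCY[n]
        live.difference_update(PREDS[n])   # the last use closes a live range
        live.add(n)                        # the definition opens one
        peak = max(peak, len(live))
    return max(finish.values()), peak

def best_schedule():
    """Enumerate all topological orders and keep the cheapest one. A real
    B&B would prune subtrees using lower bounds instead of enumerating."""
    best, best_cost = None, float("inf")
    for order in itertools.permutations(NODES):
        pos = {n: i for i, n in enumerate(order)}
        if any(pos[p] > pos[n] for n in NODES for p in PREDS[n]):
            continue                       # violates a dependence edge
        length, pressure = evaluate(order)
        cost = W_LENGTH * length + W_PRESSURE * pressure
        if cost < best_cost:
            best, best_cost = order, cost
    return best, best_cost
```

The hot-block selection and compile-time budget described above would sit around such a search: fall back to the heuristic schedule whenever the budget expires before the search completes.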
>
>
>
>My questions are:
>
>
>1. Our current experimental results are based on LLVM 2.9. We definitely plan on upgrading to the latest LLVM version in our future work, but is there a fundamentally compelling reason for us to upgrade now to 3.1 for the sake of making the above points in the publication?
>
Yes, there is. While the pre-allocation scheduler has not had algorithmic changes during the past year, it has received minor tweaks which can impact performance. Also note that the scheduler is on its way out. I don't know when the article will be published, but it's possible that by the time the paper is published, you would essentially be comparing against deprecated technology.

Ghassan: Are there any benchmarking results that quantify the impact of these tweaks?
I remember reading on this list that the current scheduler will be replaced by a new scheduler, but will the new scheduler use any new algorithms for reducing register pressure and balancing register pressure against ILP? I mean some new algorithmic technique that is fundamentally different from the current greedy heuristic approach, which uses the Sethi-Ullman number as a priority scheme for register pressure and the critical-path distance as a priority scheme for ILP, and makes a greedy choice between the two schemes depending on whether the current register pressure is above or below a certain threshold. If so, can you point me to some LLVM documents or academic publications that describe the new algorithm?
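For reference, the Sethi-Ullman number mentioned above can be computed with a short recursion. This is the textbook formulation on a plain expression tree; the tuple encoding is illustrative, not LLVM's SUnit representation.

```python
# Textbook Sethi-Ullman numbering: the minimum number of registers needed
# to evaluate an expression tree without spilling. A node is either a leaf
# (a string) or a tuple (op, left, right); this toy encoding is not LLVM's.
def sethi_ullman(node):
    if isinstance(node, str):
        return 1                      # a leaf occupies one register
    _, left, right = node
    l, r = sethi_ullman(left), sethi_ullman(right)
    # Evaluate the costlier subtree first: its result then sits in a single
    # register while the cheaper subtree reuses the remaining registers.
    return max(l, r) if l != r else l + 1

# (a + b) * (c + d): each operand subtree needs 2 registers, so the product
# needs 3 no matter which side is evaluated first.
expr = ("*", ("+", "a", "b"), ("+", "c", "d"))
```

In the scheduler's priority queue, a higher number marks a subtree that is more profitable to finish early so that its registers are freed sooner.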


>
>2. The BURR scheduler on x86-32 appears to set all latencies to one (which makes it a pure RR scheduler with no ILP), while the ILP scheduler on x86-64 appears to set all latencies to 10 except for a few long-latency instructions. For the sake of documenting this in the paper, does anyone know (or can anyone point me to) a precise description of how the scheduler sets latency values? In the revised paper, I will add experimental results based on precise latency values (see the attached spreadsheet) and would like to clearly document how LLVM's rough latencies for x86 are determined.
>
I don't think your information is correct. The ILP scheduler is not setting the latencies to 10. LLVM does not have machine models for x86 (except for Atom), so it is using a uniform latency model (one cycle).

Ghassan: I know that LLVM does not have a machine model for x86, but as far as latency is concerned, setting all latencies to one means totally ignoring ILP and scheduling for the sole objective of reducing register pressure. That's what the BURR scheduler does on x86-32, but on x86-64 the default scheduler is the ILP scheduler, which tries to do some minimal ILP scheduling. I am 100% sure that this ILP scheduler sets some latency values to 10, because I actually print the latency values and look at them. I have not attempted to do a thorough study that identifies the latency value used for each instruction, but apparently most latencies are set to one except a few long-latency instructions such as DIV. I am looking for a more precise documentation of how the ILP scheduler sets these latencies.
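To illustrate why the latency model matters for the ILP priority scheme, here is a toy example with an invented DAG and made-up latency values (not LLVM's actual tables): the critical-path distance is latency-weighted, so moving from a uniform model to precise latencies can reorder the scheduler's priorities.

```python
# Invented DAG and latencies, purely to show that the critical-path priority
# (latency-weighted longest path to a leaf) changes with the latency model.
SUCCS = {"load": ["mul"], "mul": ["add"], "div": ["add"], "add": []}
ROUGH = {n: 1 for n in SUCCS}                   # uniform one-cycle model
PRECISE = {"load": 4, "mul": 5, "div": 22, "add": 1}

def critical_path(latency):
    """Map each node to its longest latency-weighted path to a leaf."""
    memo = {}
    def cp(n):
        if n not in memo:
            memo[n] = latency[n] + max((cp(s) for s in SUCCS[n]), default=0)
        return memo[n]
    return {n: cp(n) for n in SUCCS}

# Under the rough model the load looks most urgent and mul ties with div;
# under precise latencies the div dominates every other node.
```

A heuristic that breaks ties by critical-path distance would therefore schedule these nodes in a different order under the two models, which is consistent with the gromacs result mentioned above.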




>
>3. Was the choice to use rough latency values in the ILP scheduler based on the fact that using precise latencies makes it much harder for a heuristic, non-backtracking scheduler to balance ILP and RP, or was the choice made simply because nobody bothered to write an x86 itinerary?

No one has bothered to write the itinerary.



>
>4. Does the ILP scheduler ever consider scheduling a stall (leaving a cycle empty) when there are ready instructions? Here is a small hypothetical example that explains what I mean:
>
>Suppose that at Cycle C the register pressure (RP) is equal to the physical limit and all ready instructions in that cycle start new live ranges, thus increasing the RP above the physical register limit. However, in a later cycle C+Delta, some instruction X that closes a currently open live range will become ready. If the objective is minimizing RP, the right choice to make in this case is leaving Cycles C through C+Delta-1 empty and scheduling Instruction X in Cycle C+Delta. Otherwise, we will be increasing the RP. Does the ILP scheduler ever make such a choice, or will it always schedule an instruction when the ready list is not empty?
>
I don't believe so.

Ghassan: That can lead to significant increases in register pressure in basic blocks with many long-latency instructions, do you agree? Do you know whether the new scheduling algorithm will resolve this problem? It is a very hard problem to solve.
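The trade-off in the hypothetical example above can be made concrete with invented numbers: a limit of 4 registers, two ready instructions that each open a live range, and an instruction X that closes one live range but only becomes ready after a delay.

```python
# Toy model of question 4 above, with invented numbers: at cycle C the
# register pressure (RP) already equals the physical limit, every ready
# instruction opens a new live range, and instruction X, which closes one
# live range, only becomes ready at cycle C + Delta.
LIMIT = 4

def peak_pressure(stall_for_x, ready_openers=2):
    rp = peak = LIMIT                  # pressure starts at the limit
    if not stall_for_x:
        rp += ready_openers            # greedy issue: each opens a range
        peak = max(peak, rp)
    rp -= 1                            # X finally closes one live range
    return peak

# Stalling keeps the peak at the limit; greedy issue pushes it above the
# limit, which the register allocator must later pay for with spills.
```

A scheduler that never stalls while the ready list is non-empty can never take the first branch, so its peak pressure in such blocks is bounded below by the greedy outcome.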


Evan



>Thank you in advance!
>
>-Ghassan  
>
>
>
>Ghassan Shobaki
>Assistant Professor
>Department of Computer Science
>Princess Sumaya University for Technology
>Amman, Jordan
>
>
>Attachments inlined:
>
>
>Rough Latencies
>
>
>Benchmark        Branch-and-Bound  LLVM         % Score Difference
>400.perlbench    21.2              20.2          4.95%
>401.bzip2        13.9              13.6          2.21%
>403.gcc          19.5              19.8         -1.52%
>429.mcf          20.5              20.5          0.00%
>445.gobmk        18.6              18.6          0.00%
>456.hmmer        11.1              11.1          0.00%
>458.sjeng        19.3              19.3          0.00%
>462.libquantum   39.5              39.5          0.00%
>464.h264ref      28.5              28.5          0.00%
>471.omnetpp      15.6              15.6          0.00%
>473.astar        13                13            0.00%
>483.xalancbmk    21.9              21.9          0.00%
>GEOMEAN          19.0929865        19.00588287   0.46%
>410.bwaves       15.2              15.2          0.00%
>416.gamess       CE                CE            N/A
>433.milc         19                18.6          2.15%
>434.zeusmp       14.2              14.2          0.00%
>435.gromacs      11.6              11.3          2.65%
>436.cactusADM    8.31              7.89          5.32%
>437.leslie3d     11                11            0.00%
>444.namd         16                16            0.00%
>447.dealII       25.4              25.4          0.00%
>450.soplex       26.1              26.1          0.00%
>453.povray       20.5              20.5          0.00%
>454.calculix     8.44              8.3           1.69%
>459.GemsFDTD     10.7              10.7          0.00%
>465.tonto        CE                CE            N/A
>470.lbm          38.1              31.5         20.95%
>481.wrf          11.6              11.6          0.00%
>482.sphinx3      28.2              26.9          4.83%
>GEOMEAN          15.91486307       15.54419555   2.38%
>
>
>Precise Latencies
>
>
>Benchmark        Branch-and-Bound  LLVM         % Score Difference
>400.perlbench    21.2              20.2          4.95%
>401.bzip2        13.9              13.6          2.21%
>403.gcc          19.6              19.8         -1.01%
>429.mcf          20.8              20.5          1.46%
>445.gobmk        18.8              18.6          1.08%
>456.hmmer        11.1              11.1          0.00%
>458.sjeng        19.3              19.3          0.00%
>462.libquantum   39.5              39.5          0.00%
>464.h264ref      28.5              28.5          0.00%
>471.omnetpp      15.6              15.6          0.00%
>473.astar        13                13            0.00%
>483.xalancbmk    21.9              21.9          0.00%
>GEOMEAN          19.14131861       19.00588287   0.71%
>410.bwaves       15.5              15.2          1.97%
>416.gamess       CE                CE            N/A
>433.milc         19.3              18.6          3.76%
>434.zeusmp       14.2              14.2          0.00%
>435.gromacs      12.4              11.3          9.73%
>436.cactusADM    7.7               7.89         -2.41%
>437.leslie3d     11                11            0.00%
>444.namd         16.2              16            1.25%
>447.dealII       25.4              25.4          0.00%
>450.soplex       26.1              26.1          0.00%
>453.povray       20.5              20.5          0.00%
>454.calculix     8.55              8.3           3.01%
>459.GemsFDTD     10.5              10.7         -1.87%
>465.tonto        CE                CE            N/A
>470.lbm          38.8              31.5         23.17%
>481.wrf          11.6              11.6          0.00%
>482.sphinx3      28                26.9          4.09%
>GEOMEAN          15.96082174       15.54419555   2.68%
>
>

