[LLVMdev] MI Scheduler vs SD Scheduler?

Ghassan Shobaki ghassan_shobaki at yahoo.com
Fri Jun 28 12:38:57 PDT 2013


Hi,

We are currently in the process of upgrading from LLVM 2.9 to LLVM 3.3. We are working on instruction scheduling (mainly for register pressure reduction). I have been following the llvmdev mailing list and have learned that a machine instruction (MI) scheduler has been implemented to replace (or work with?) the selection DAG (SD) scheduler. However, I could not find any document that describes the new MI scheduler, how it differs from the SD scheduler, and how the two relate. So, I would appreciate any pointers to documents (or blog posts) that may help us understand the difference and the relation between the two schedulers and figure out how to deal with them. We are trying to answer the following questions:


- A comment at the top of ScheduleDAGInstrs.cpp says that the file implements re-scheduling of machine instructions. What does re-scheduling mean here? Does it mean that the real scheduling algorithms (such as register pressure reduction) are currently implemented in the SD scheduler, while the MI scheduler does complementary fine-tuning on a lower-level representation of the code? And what is the future plan? Is it to move the real scheduling algorithms into the MI scheduler and retire the SD scheduler? Will that happen in 3.4 or later?


- Based on our initial investigation of the default behavior at -O3 on x86-64, it appears that the SD scheduler runs while the MI scheduler does not. That is consistent with the above interpretation of re-scheduling, but I'd appreciate any advice on what we should do at this point: should we integrate our work (an alternate register-pressure-reduction scheduler) into the SD scheduler or the MI scheduler?
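For what it's worth, here is how we have been checking which schedulers run. The flag names below are taken from LLVM 3.3's llc and may differ in other releases, so treat this as a sketch rather than an authoritative recipe:

```shell
# Print the pass pipeline. The MI scheduler shows up as a separate
# pass ("Machine Instruction Scheduler") when enabled; the SD
# scheduler runs inside the instruction-selection pass and is not
# listed separately.
llc -O3 -debug-pass=Structure test.ll -o /dev/null

# Explicitly enable the MI scheduler pass (not on by default for
# all targets in 3.3):
llc -O3 -enable-misched test.ll -o test.s

# Select a particular SelectionDAG scheduling heuristic, e.g. the
# bottom-up register-reduction list scheduler:
llc -O3 -pre-RA-sched=list-burr test.ll -o test.s
```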


- Our SPEC testing on x86-64 has shown a significant performance improvement in LLVM 3.3 relative to LLVM 2.9 (about 5% geomean on INT2006 and 15% geomean on FP2006), but our spill-code measurements show that LLVM 3.3 generates significantly more spill code on most benchmarks. We will be doing more investigation on this, but are there any known facts that explain this behavior? Is it caused by a known regression in scheduling and/or allocation (which I doubt), or by the implementation (or enabling) of some new optimization(s) that naturally increase register pressure?
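In case it helps frame the question, this is roughly how we gather the spill numbers. The exact statistic names printed by -stats vary between LLVM versions (and "spill" may appear under several register-allocator counters), so the grep pattern here is an assumption:

```shell
# Compile with -stats and filter the register-allocator statistics
# for spill-related counters (e.g. spills/reloads inserted). The
# statistic names depend on the LLVM version, so inspect the full
# -stats output first.
llc -O3 -stats bench.ll -o /dev/null 2>&1 | grep -i spill
```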

Thank you in advance!

Ghassan Shobaki
Assistant Professor
Department of Computer Science
Princess Sumaya University for Technology
Amman, Jordan