[LLVMdev] [RFC] Scheduler Rework

Andrew Trick atrick at apple.com
Tue May 8 19:06:12 PDT 2012


On Apr 24, 2012, at 8:59 AM, dag at cray.com wrote:

> Andrew Trick <atrick at apple.com> writes:
> 
>> We plan to move to the MachineScheduler by 3.2. The structure is:
> 
> How hard will this be to backport to 3.1?  Has work on this started
> yet?

In my previous message I outlined the steps I would take to bring up the new scheduler. I'm about to check in the register-pressure-reducing scheduler. The next step will be plugging in the target itinerary.
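To give a feel for what "register pressure reducing" means here: the pick heuristic compares how each ready candidate would change the tracked pressure, roughly like the toy sketch below (made-up types, not the in-tree interface):

  #include <vector>

  // Toy stand-ins; not the in-tree register pressure interface.
  struct Candidate {
    int PressureDelta;  // estimated change in live registers if scheduled now
    unsigned Height;    // critical-path height, used as a tie breaker
  };

  // Prefer the candidate that reduces pressure the most; fall back to the
  // longer critical path when pressure is a wash.
  static const Candidate *pickNode(const std::vector<Candidate> &Ready) {
    const Candidate *Best = 0;
    for (unsigned i = 0, e = Ready.size(); i != e; ++i) {
      const Candidate &C = Ready[i];
      if (!Best || C.PressureDelta < Best->PressureDelta ||
          (C.PressureDelta == Best->PressureDelta && C.Height > Best->Height))
        Best = &C;
    }
    return Best;
  }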

>> ScheduleDAG: Abstract DAG of SUnits and SDeps
>>  |
>>  v
>> ScheduleDAGInstrs: Build the DAG from MachineInstrs, each SUnit tied to an MI
>>                   Delimit the current "region" of code being scheduled.
>>  |
>>  v
>> ScheduleDAGMI: Concrete implementation that supports both top-down and bottom-up scheduling
>>               with live interval update. It divides the region into three zones:
>>               Top-scheduled, bottom-scheduled, and unscheduled.
> 
> So does this happen after SDNodes are converted to MachineInstrs?  It's
> not clear to me given your description of ScheduleDAGInstrs.  I assume
> it uses the current SDNode->SUnit mechanism but I just want to make
> sure.

Machine scheduling occurs in the vicinity of register allocation. It uses the existing MachineInstr->SUnit mechanism.
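In other words, the hierarchy above boils down to roughly this skeleton (simplified; member lists and constructors omitted, so don't expect it to match the tree line for line):

  // Abstract DAG of SUnits and SDeps.
  class ScheduleDAG { /* ... */ };

  // Builds the DAG from MachineInstrs: each SUnit is tied to an MI, and the
  // current scheduling "region" is a [begin, end) range of instructions.
  class ScheduleDAGInstrs : public ScheduleDAG { /* ... */ };

  // Concrete implementation used by the MachineScheduler pass: schedules
  // top-down and/or bottom-up with live interval update, keeping the region
  // split into top-scheduled, unscheduled, and bottom-scheduled zones.
  class ScheduleDAGMI : public ScheduleDAGInstrs { /* ... */ };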

> ...
> I'm glad to hear the top-down scheduler will get some attention.  We'll
> be wanting to use that.

Out of curiosity what about top-down works better for your target?

> 
>> Start by composing your scheduler from the pieces that are available,
>> e.g.  HazardChecker, RegisterPressure... (There's not much value
>> providing a scheduling queue abstraction on top of vector or
>> priority_queue).
> 
> What do you mean by this last point?  We absolutely want to be able to
> swap out different queue implementations.  There is a significant
> compile time tradeoff to be made here.

Use whatever data structure you like for your queue. I don't have plans to make a reusable one yet. They're not complicated.
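For example, a bottom-up ready queue can be as little as a std::priority_queue over SUnit pointers with whatever comparator encodes your priority; the "taller node first" ordering below is only a placeholder:

  #include "llvm/CodeGen/ScheduleDAG.h"
  #include <queue>
  #include <vector>
  using namespace llvm;

  // Placeholder priority: prefer nodes with the longer critical path below
  // them (greater height); break ties by original node number.
  struct TallerFirst {
    bool operator()(const SUnit *A, const SUnit *B) const {
      if (A->getHeight() != B->getHeight())
        return A->getHeight() < B->getHeight();
      return A->NodeNum > B->NodeNum;
    }
  };

  typedef std::priority_queue<SUnit*, std::vector<SUnit*>, TallerFirst>
    ReadyQueue;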

>> 2. Division of the target-defined resources into "interlocked"
>> vs. "buffered". The HazardChecker will continue to handle the
>> interlocks for the resources that the hardware handles in
>> order. 
> 
> So by "interlocks" you mean hardware-implemented interlocks?  So that
> the scheduler will attempt to avoid them.  Not that we have a machine
> like this, but what about machines with no interlocks where software is
> responsible for correctness (VLIW, early MIPS, etc.)?  I suppose they
> can be handled with a similar mechanism but the latter is much more
> strict than the former.

I'm not designing a mechanism for inserting nops to pad latency. If someone needs that, it's easy to add.
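If it ever comes up, such a pass could be a simple post-scheduling walk, roughly like this (a sketch only: computeLatencyGap is a made-up helper standing in for whatever itinerary query the target would use; TargetInstrInfo::insertNoop is the existing hook for emitting a target nop):

  #include "llvm/CodeGen/MachineBasicBlock.h"
  #include "llvm/Target/TargetInstrInfo.h"
  using namespace llvm;

  // Made-up helper: cycles of latency from MI to its earliest consumer that
  // the schedule does not already cover. Not a real LLVM API.
  unsigned computeLatencyGap(MachineBasicBlock::iterator MI);

  // Pad each uncovered latency cycle with a target nop.
  static void padLatencyWithNops(MachineBasicBlock &MBB,
                                 const TargetInstrInfo *TII) {
    for (MachineBasicBlock::iterator I = MBB.begin(); I != MBB.end(); ++I) {
      unsigned Gap = computeLatencyGap(I);
      MachineBasicBlock::iterator InsertPos = I;
      ++InsertPos;
      for (unsigned i = 0; i != Gap; ++i)
        TII->insertNoop(MBB, InsertPos);
    }
  }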

>> Buffered resources, which the hardware can handle out of order. These
>> will be considered separately for scheduling priority. We will also
>> make a distinction between minimum and expected operation latency.
> 
> Does this mean you want to model the various queues, hardware scheduling
> windows, etc.?  I'm just trying to get a clearer idea of what this
> means.

I don't intend to directly model microarchitectural features of OOO processors at the level of buffer sizes, although a highly motivated developer could do that.

I do intend to allow a target to identify arbitrary categories of resources, how many are available in each cycle on average, and indicate which resources are used by an operation. I'll initially piggyback on the existing functional units to avoid rewriting target itineraries.
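To make the split concrete: interlocked resources gate whether an op can issue in the current cycle, while overusing a buffered resource only adds cost that biases priority. A toy model (made-up types, not a proposed interface):

  #include <vector>

  // Toy resource table entry; not a proposed LLVM interface.
  struct ResourceKind {
    unsigned UnitsPerCycle; // available per cycle, on average
    bool Interlocked;       // the hardware issues these in order
  };

  struct ResourceUse {
    unsigned Kind;  // index into the ResourceKind table
    unsigned Units; // units consumed by this operation
  };

  // Interlocked: refuse to issue if the current cycle cannot absorb the use.
  static bool canIssueThisCycle(const std::vector<ResourceKind> &Kinds,
                                const std::vector<unsigned> &UsedThisCycle,
                                const std::vector<ResourceUse> &Uses) {
    for (unsigned i = 0, e = Uses.size(); i != e; ++i) {
      const ResourceKind &K = Kinds[Uses[i].Kind];
      if (K.Interlocked &&
          UsedThisCycle[Uses[i].Kind] + Uses[i].Units > K.UnitsPerCycle)
        return false;
    }
    return true;
  }

  // Buffered: overuse never blocks issue; it only accumulates a cost that
  // the scheduler can weigh against other priorities over the region.
  static unsigned bufferedOveruseCost(const std::vector<ResourceKind> &Kinds,
                                      const std::vector<unsigned> &UsedThisCycle,
                                      const std::vector<ResourceUse> &Uses) {
    unsigned Cost = 0;
    for (unsigned i = 0, e = Uses.size(); i != e; ++i) {
      const ResourceKind &K = Kinds[Uses[i].Kind];
      unsigned Total = UsedThisCycle[Uses[i].Kind] + Uses[i].Units;
      if (!K.Interlocked && Total > K.UnitsPerCycle)
        Cost += Total - K.UnitsPerCycle;
    }
    return Cost;
  }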

> As I said, this is a time-critical thing for us.  Is there any way I can
> help to move this along?

In general, fixing LLVM bugs and keeping build bots green is always helpful.

As far as the scheduler goes, pieces are starting to fall into place. People are starting to use those pieces and contribute. This is a pluggable pass, so there's no reason you can't develop your own machine scheduler in parallel and gradually take advantage of features as I add them. Please expect to do a little code copy-pasting into your target until the infrastructure is mature enough to expose more target interfaces. I'm not going to waste time redesigning APIs until we have working components.
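Concretely, plugging in today means registering a factory with MachineSchedRegistry, roughly like this (a sketch; the DAG subclass itself is elided, and the exact constructor arguments may shift while the interfaces settle):

  #include "llvm/CodeGen/MachineScheduler.h"
  using namespace llvm;

  // Factory the MachineScheduler pass calls when this scheduler is selected.
  static ScheduleDAGInstrs *createMyTargetSched(MachineSchedContext *C) {
    // Construct and return your ScheduleDAGInstrs-derived scheduler here.
    return 0; // elided in this sketch
  }

  // Registers the scheduler so it can be selected from the command line
  // (e.g. -misched=mytarget) instead of the default.
  static MachineSchedRegistry
  MyTargetSchedRegistry("mytarget", "My target's machine scheduler",
                        createMyTargetSched);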

It would be very useful to have people testing new features, finding bugs early, and hopefully fixing those bugs. I would also like people to give me interesting bits of code with performance issues that can work as test cases. That's hard if I can't run your target.

-Andy

