[LLVMdev] Questions about MachineScheduler

Andrew Trick atrick at apple.com
Mon Jul 22 19:12:04 PDT 2013

On Jul 22, 2013, at 11:50 AM, Tom Stellard <tom at stellard.net> wrote:

> Hi,
> I'm working on defining a SchedMachineModel for the Southern Islands
> family of GPUs, and I have two questions related to the
> MachineScheduler.
> 1. I have a resource that can process 15 instructions at the same time.
> In the TableGen definitions, should I do:
> def HWVMEM   : ProcResource<15>;
> or
> let BufferSize = 15 in {
> def HWVMEM : ProcResource<1>;
> }

For in-order processors you always want BufferSize = 0. In the current generic scheduler (ConvergingScheduler) it is effectively a boolean that selects in-order vs. out-of-order modeling. (I have code that models the buffers in an OOO processor, but I think it’s too heavyweight to go in the scheduler. Maybe someday it can be an analysis tool.)

let BufferSize = 0 in {
def HWVMEM   : ProcResource<15>;
}

Now, since you’ll want to plug in your own scheduling strategy, how you interpret the machine model is mostly up to you. What the TargetSchedModel interface does for you is normalize the resources to processor cycles. This is exposed through scaling factors (to avoid division): getResourceFactor, getMicroOpFactor, getLatencyFactor.

So, for example, if you have

def HW1   : ProcResource<15>;
def HW2   : ProcResource<3>;

then getResourceFactor will report a larger factor for HW2, since each HW2 unit represents a larger fraction of the machine’s per-cycle capacity.
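To make the normalization concrete, here is a minimal sketch (not the LLVM code itself; the function names here are invented) of how per-resource factors can be derived from a common multiple of the unit counts and issue width, so resource usage can be compared on one scale without division:

```cpp
#include <numeric> // std::lcm (C++17)
#include <vector>

// Hypothetical sketch of factor computation, modeled on the idea that
// TargetSchedModel normalizes all resources to a common cycle scale.
// The common scale is the LCM of the issue width and every unit count.
unsigned resourceLCM(unsigned IssueWidth, const std::vector<unsigned> &Units) {
  unsigned L = IssueWidth;
  for (unsigned U : Units)
    L = std::lcm(L, U);
  return L;
}

// A resource with fewer units gets a larger factor: one use of it
// consumes a bigger share of the machine per cycle.
unsigned resourceFactor(unsigned LCM, unsigned Units) {
  return LCM / Units;
}
```

With HW1 = 15 units and HW2 = 3 units (and issue width 1), the common scale is 15, so HW1’s factor is 1 and HW2’s is 5: each HW2 use costs five times an HW1 use on the normalized scale.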


> 2. Southern Islands has 256 registers, but there is a significant
> performance penalty if you use more than a certain amount.  Do any of
> the MachineSchedulers support switching into an 'optimize for register
> pressure mode' once it detects register pressure above a certain limit?

The code in ConvergingScheduler (I’ll rename it to GenericScheduler soon) is meant to demonstrate most of the features, so developers can copy what they need into their own strategy, add heuristics, and change the underlying data structures, which often makes sense. You can decide whether you want bottom-up only, top-down only, or both.

For an in-order processor, I think this becomes much simpler. You do away with most of the complexity in ConvergingScheduler::SchedBoundary and implement a straightforward reservation table. If it’s fully pipelined then you just count resource units for the current cycle until one reaches the latency factor. If it’s not fully pipelined, then you need to define ResourceCycles in the machine’s SchedWrite definitions and implement a simple reservation table (mark earliest cycle at which a resource is used for bottom-up scheduling). Some of this can be made a generic utility, but it’s not much to implement.
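As a rough illustration of that reservation table (a toy sketch, not LLVM code; the class and method names are invented), all you really need is the earliest cycle at which each resource is next free:

```cpp
#include <algorithm>
#include <vector>

// Toy reservation table: one entry per processor resource, recording the
// next cycle at which that resource is free. For bottom-up scheduling the
// cycle numbering is mirrored, but the bookkeeping is the same.
class ReservationTable {
  std::vector<unsigned> NextFree; // indexed by resource ID
public:
  explicit ReservationTable(unsigned NumResources)
      : NextFree(NumResources, 0) {}

  // Earliest cycle >= Cycle at which Res can accept a new op.
  unsigned earliestAvailable(unsigned Res, unsigned Cycle) const {
    return std::max(Cycle, NextFree[Res]);
  }

  // Occupy Res for ResourceCycles consecutive cycles, starting no earlier
  // than Cycle; returns the cycle actually assigned.
  unsigned reserve(unsigned Res, unsigned Cycle, unsigned ResourceCycles) {
    unsigned Start = earliestAvailable(Res, Cycle);
    NextFree[Res] = Start + ResourceCycles;
    return Start;
  }
};
```

A fully pipelined resource always has ResourceCycles = 1, which degenerates into simple per-cycle unit counting as described above.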

Since the strategy defines the priority queues, you can do whatever you want for your register pressure heuristics: scan the full queue each time with dynamic heuristics, re-sort it, dynamically defer nodes, and so on.
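For instance (a toy sketch with made-up names, not the LLVM API), a strategy that switches into a pressure-first mode past some limit can just scan its queue with a different comparison:

```cpp
#include <vector>

// Hypothetical per-node candidate record: whatever the strategy tracks.
struct Cand {
  int Priority;      // normal heuristic (higher is better)
  int PressureDelta; // estimated pressure change if scheduled (lower is better)
};

// Scan the whole queue each pick. Above the pressure limit, prefer the
// candidate that reduces pressure most; otherwise use normal priority.
int pickIndex(const std::vector<Cand> &Queue, int CurPressure, int Limit) {
  int Best = 0;
  for (int I = 1, E = (int)Queue.size(); I != E; ++I) {
    bool Better = (CurPressure > Limit)
                      ? Queue[I].PressureDelta < Queue[Best].PressureDelta
                      : Queue[I].Priority > Queue[Best].Priority;
    if (Better)
      Best = I;
  }
  return Best;
}
```

An O(n) scan per pick like this is often acceptable for in-order targets with modest region sizes; re-sorting or deferral only pays off when the queue gets large.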

Note that the register pressure tracking is handled outside of the strategy, in ScheduleDAGMI. So you get this for free without duplication.

However, querying the pressure change for a candidate is done by the strategy. The generic interface, getMaxPressureDelta(), is very clunky now. I’m going to improve it, but if you’re writing a target-specific strategy, it’s probably easier to directly query a pressure set for a specific register class.

RC = TRI->getRegClass(R);
PSetIDs = TRI->getRegClassPressureSets(RC);

Now you have a raw pointer right into the machine model’s null terminated array of PSetIDs that are affected by a register class (targets often have several overlapping register classes). You can choose one of those sets to track or track them all. I’m about to commit a patch that will have them sorted by number of regs in the set, so you can easily grab the largest (end of the list).
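Walking that list to grab the largest set then looks something like this (a sketch; in the generated register info the lists are terminated with a -1 sentinel):

```cpp
// Sketch: walk a sentinel-terminated list of pressure-set IDs and return
// the last entry. With the sets sorted by number of registers, the last
// one is the largest. Returns -1 for an empty list.
int largestPSet(const int *PSets) {
  int Last = -1;
  for (; *PSets != -1; ++PSets)
    Last = *PSets;
  return Last;
}
```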

Then you can directly query pressure for a specific set...

P = RPTracker.getPressureAfterInst(I);
Diff = P[PSetID] - RPTracker.getRegSetPressureAtPos()[PSetID];

Note that how you define your target’s registers can make a big difference in the pressure set formation. Yours don’t look too bad, but in general remember to use isAllocatable = 0 for any classes that don’t take part in regalloc.
