[llvm-dev] OptBisect implementation for new pass manager

Kaylor, Andrew via llvm-dev llvm-dev at lists.llvm.org
Thu Sep 27 14:25:38 PDT 2018


> At this point it makes no sense to worry about the code generation pipeline. As long as there is no new-PM design for that,
> this point is just moot. Designing opt-bisect against code generation without actually understanding what we're designing
> against is guaranteed to produce a bad architecture. So let's figure out the optimizer bisect first, and incrementally upgrade
> that once we've ironed out codegen.

I agree that we shouldn’t be speculatively designing for something that doesn’t exist, but the ability to have the compilation result in executable code is a firm requirement for me. We’re using the legacy code generation mechanism now, so let’s design something that hooks up with that (as Sergeev suggested in his original message). Where we go from there can certainly be deferred.

> Mixing OptNone and bisect is a software engineering bug: it's mixing different layers of abstraction. Bisect is something that's
> at pass manager scope: run passes until whatever. OptNone in turn doesn't belong in the pass manager layer. It only concerns
> function passes, and crumbles quickly considering other IRUnits or even codegen. I don't have a clear vision what OptNone
> handling should look like, but at this point I consider it entirely orthogonal to bisect handling.

In one sense you are absolutely right: OptNone and opt-bisect are completely different things, but from the perspective of the pass they pose exactly the same question: should I optimize this IR unit? It’s also true that the pass shouldn’t be making that decision, and it really isn’t in the legacy implementation. It’s calling a function to ask whether or not it should perform the optimization. That’s also less than ideal. In a perfect world, the pass wouldn’t be called at all if it shouldn’t be run. I think we’re actually in agreement here with regard to OptBisect.

OptNone is out of the scope of the current proposal, but I agree that in principle the pass manager shouldn’t be handling it. On the other hand, it’s not entirely clear to me that the pass itself should be handling it either. There is nothing pass-specific in the interpretation of the OptNone attribute. The OptNone attribute says “don’t optimize this function.” So should we really be calling the run() method of an optimization pass on a function that we aren’t supposed to be optimizing? It feels to me like there’s a missing player here. As I said, that’s really outside the scope of the current discussion, except to say that the relevant question is what component should make the decision about whether or not a pass should be run.

So getting back to OptBisect, the problem is the same. What component should decide whether or not a pass should be run? The pass manager doesn’t know what the pass does, so it can’t decide whether or not it can be skipped. The pass itself should have no idea that the OptBisect process even exists. So the pass manager needs some way of discovering whether or not a pass can be skipped.

I don’t have strong feelings about how this happens. Off the cuff, it could be added to the pass registration information or it could be a function provided by the pass, preferably through one of the mixins so that pass developers don’t need to think about it.
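For illustration, here is a rough sketch of the mixin idea, using purely hypothetical names (none of this is existing LLVM API): the pass advertises whether it may be skipped, and the pass manager alone decides whether to invoke run().

    // Illustrative only: hypothetical names, not existing LLVM API.
    // The mixin gives every pass a sensible default, so authors who never
    // think about opt-bisect still get correct behavior.
    template <typename DerivedT> struct OptBisectSupportMixin {
      // Ordinary optimization passes may be skipped by opt-bisect.
      static bool isSkippable() { return true; }
    };

    // A pass that must run even past the bisect limit (e.g. a required
    // lowering step) overrides the default in its own definition.
    struct RequiredLoweringPass : OptBisectSupportMixin<RequiredLoweringPass> {
      static bool isSkippable() { return false; }
    };

    // Pass manager side: never call run() on a skippable pass once the
    // bisect limit has been reached.
    template <typename PassT> bool shouldRunPass(bool BisectLimitReached) {
      return !BisectLimitReached || !PassT::isSkippable();
    }

Whether the query lives in the registration information or on the pass class itself is secondary; the point is that it is a static property of the pass and the actual decision stays in the pass manager.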

-Andy

From: Philip Pfaffe [mailto:philip.pfaffe at gmail.com]
Sent: Thursday, September 27, 2018 2:46 AM
To: Kaylor, Andrew <andrew.kaylor at intel.com>
Cc: Fedor Sergeev <fedor.sergeev at azul.com>; llvm-dev <llvm-dev at lists.llvm.org>; zhizhouy at google.com; dag at cray.com; David Blaikie <dblaikie at gmail.com>; Chandler Carruth <chandlerc at gmail.com>
Subject: Re: [llvm-dev] OptBisect implementation for new pass manager

Hi Andrew,

> We absolutely need to be able to generate executable programs using opt-bisect, so some mechanism for not skipping required passes is needed. It might be nice to have a mode where no passes are skipped and the IR/MIR is dumped when the bisect limit is reached, but I don't see that as a requirement.
At this point it makes no sense to worry about the code generation pipeline. As long as there is no new-PM design for that, this point is just moot. Designing opt-bisect against code generation without actually understanding what we're designing against is guaranteed to produce a bad architecture. So let's figure out the optimizer bisect first, and incrementally upgrade that once we've ironed out codegen.


> Regarding opt-in versus opt-out, I think we want to make this as easy and transparent to pass developers as possible. It would be nice to have the mechanism be opt-out so that passes that were added with no awareness of opt-bisect would be automatically included. However, there is a small wrinkle to this. I can't defend this as a reasonable design choice, but the SelectionDAGISel pass has a sort of hybrid behavior. It can't actually be skipped, but if OptBisect says it should be skipped it drops the optimization level to OptNone. That's a machine function pass, so it doesn't matter so much right now. It's just something to think about.

> One of the reasons that we combined the optnone handling and the opt-bisect handling is that we specifically wanted these two behaviors to be linked. The exact rule we would like to use for opt-bisect is that no pass which runs at O0 is skipped by opt-bisect. There's a test that verifies this. Conversely, if a pass is able to respect the optnone attribute then it should also be skippable by opt-bisect. Of course, I would be open to considering a use case where this reasoning isn't desirable.

Mixing OptNone and bisect is a software engineering bug: it's mixing different layers of abstraction. Bisect is something that's at pass manager scope: run passes until whatever. OptNone in turn doesn't belong in the pass manager layer. It only concerns function passes, and crumbles quickly considering other IRUnits or even codegen. I don't have a clear vision what OptNone handling should look like, but at this point I consider it entirely orthogonal to bisect handling.

From a software architecture perspective I don't see a reason why passes should even _know_ about something like bisect happening. That is simply not their domain. If a pass shouldn't be skipped for whatever reason, that's not something the pass should worry about, that's the bisect driver's problem! My proposal here would be to make it an opt-out design, but let the driver control that. E.g., for skipping, let the user provide a list of passes they don't want skipped.
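To illustrate the driver-controlled opt-out idea, here is a rough sketch built on the new PassInstrumentation callbacks. The -opt-bisect-never-skip option and the OptBisectSketch class are made-up names, and the exact callback name and signature may differ from what eventually lands:

    #include "llvm/ADT/Any.h"
    #include "llvm/ADT/StringRef.h"
    #include "llvm/ADT/StringSet.h"
    #include "llvm/IR/PassInstrumentation.h"
    #include "llvm/Support/CommandLine.h"

    using namespace llvm;

    // Hypothetical driver option: passes the user never wants skipped.
    static cl::list<std::string> NeverSkipPasses(
        "opt-bisect-never-skip", cl::CommaSeparated,
        cl::desc("Passes that opt-bisect must always run"));

    // The skip policy lives entirely in this instrumentation; the passes
    // themselves never learn that bisection is happening.
    struct OptBisectSketch {
      int Limit;               // e.g. from -opt-bisect-limit, -1 = unlimited
      int Count = 0;           // passes seen so far in this pipeline
      StringSet<> NeverSkip;   // pass names the driver marked as required

      explicit OptBisectSketch(int Limit) : Limit(Limit) {
        for (const std::string &Name : NeverSkipPasses)
          NeverSkip.insert(Name);
      }

      void registerCallbacks(PassInstrumentationCallbacks &PIC) {
        PIC.registerBeforePassCallback([this](StringRef PassID, Any IR) {
          ++Count;
          if (Limit < 0 || Count <= Limit)
            return true;                      // under the limit: run everything
          return NeverSkip.count(PassID) > 0; // past it: only required passes
        });
      }
    };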


> With regard to there being one OptBisect object per compilation pipeline, I have some concerns. Specifically, the behavior of opt-bisect depends on the sequence of passes run before the limit is reached being consistent and repeatable. My inclination would be to not allow parallel compilation when opt-bisect is enabled.
I don't have a strong opinion here, but just as a data point: my mental model is that bisect should produce a deterministic outcome _per module_. That model isn't threatened by parallel execution.


Cheers,
Philip

> I can imagine cases where you might specifically want to debug something that only happens in a parallel build, but it's more difficult to imagine something that only happens in a parallel build and doesn't depend on interactions between threads. In such a case, would we be able to guarantee that the sequence of passes and any interaction between pipelines was repeatable? Basically, I feel like I'm exploring a hypothetical here while other people may have specific use cases in mind. If so, please explain the use case to me.

> -Andy

-----Original Message-----
From: Fedor Sergeev [mailto:fedor.sergeev at azul.com]
Sent: Wednesday, September 26, 2018 9:54 AM
To: llvm-dev <llvm-dev at lists.llvm.org>; Zhizhou Yang <zhizhouy at google.com>; David Greene <dag at cray.com>; David Blaikie <dblaikie at gmail.com>; Kaylor, Andrew <andrew.kaylor at intel.com>; Chandler Carruth <chandlerc at gmail.com>
Subject: OptBisect implementation for new pass manager

Greetings!

As the generic Pass Instrumentation framework for the new pass manager is finally *in*, I'm glad to start the discussion on the implementation of -opt-bisect through that framework.

As has already been discovered while porting other features (namely -time-passes), blindly copying the existing legacy implementation is most likely not the best way forward. Now is a chance to take a fresh look at the overall approach and perhaps do better, without the restrictions that the legacy pass manager framework imposed on the implementation.

Kind of a summary of what we have now:
   - There is a single OptBisect object, requested through LLVMContext
     (managed as ManagedStatic).

   - OptBisect is defined in lib/IR, but does use analyses,
     which is a known layering issue

   - The Pass hierarchy provides skipModule() and similar helper functions

   - Individual passes opt in to OptBisect by manually calling the skip* helper functions
     whenever appropriate (see the sketch below)
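For reference, the legacy opt-in pattern mentioned in the last bullet looks roughly like this (MyLegacyPass is just a placeholder name):

    #include "llvm/IR/Function.h"
    #include "llvm/Pass.h"

    using namespace llvm;

    namespace {
    // Placeholder pass; the point is the skipFunction() call, which consults
    // the global OptBisect gate and the optnone attribute.
    struct MyLegacyPass : FunctionPass {
      static char ID;
      MyLegacyPass() : FunctionPass(ID) {}

      bool runOnFunction(Function &F) override {
        if (skipFunction(F)) // honors -opt-bisect-limit and optnone
          return false;      // report "no change" and do nothing
        bool Changed = false;
        // ... the actual transformation would go here ...
        return Changed;
      }
    };
    } // namespace

    char MyLegacyPass::ID = 0;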

With the current state of new-PM PassInstrumentation, a potential OptBisect implementation will have the following properties/issues:
   - an OptBisect object that exists per compilation pipeline, managed similarly to PassBuilder/PassManagers
     (which makes it more suitable for use in parallel compilations)

   - no more layering issues imposed by the implementation, since instrumentations by design
     can live anywhere - lib/Analysis, lib/Passes, etc.

   - since Codegen is still legacy-only, we will have to make a joint implementation that
     provides sequential pass numbering through both the new-PM IR and legacy Codegen pipelines
     (see the sketch after this list)

   - as of right now there is no mechanism for opt-in/opt-out, so it needs to be designed/implemented.
     Here I would like to ask:
         - what would be preferable - opt-in or opt-out?

          - with the legacy implementation, passes opt in to both bisect and attribute-optnone support at once.
            Do we need to follow that in the new-PM implementation?
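As a purely illustrative sketch of the joint-numbering point above (none of these names are existing LLVM API): a single counter object owned by the compilation could be consulted both by a new-PM instrumentation callback for the IR pipeline and by the legacy codegen pipeline's gate, so the numbering stays sequential across both.

    #include <atomic>
    #include <cstdio>
    #include <string>

    struct SharedBisectCounter {
      int Limit = -1;            // from -opt-bisect-limit, -1 = unlimited
      std::atomic<int> Count{0}; // shared by the IR and codegen pipelines

      // Returns true if the pass at this position should still run, printing
      // a "BISECT: [NOT] running pass (N) ..." style message either way.
      bool checkPass(const std::string &PassName, const std::string &IRDesc) {
        int N = ++Count;
        bool ShouldRun = Limit < 0 || N <= Limit;
        std::fprintf(stderr, "BISECT: %srunning pass (%d) %s on %s\n",
                     ShouldRun ? "" : "NOT ", N, PassName.c_str(),
                     IRDesc.c_str());
        return ShouldRun;
      }
    };

The new-PM side would call checkPass() from a before-pass callback; the legacy codegen side would call it from its existing skip*/gate hooks, continuing the count where the IR pipeline left off.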

Also, I would like to ask whether people find the current user interface for opt-bisect limiting.
Do we need better controls for more sophisticated bisection?
Basically, I'm looking for any ideas on improving the opt-bisect user experience that might affect the design approaches we take in the initial implementation.

regards,
   Fedor.


