[llvm-dev] OptBisect implementation for new pass manager

Kaylor, Andrew via llvm-dev llvm-dev at lists.llvm.org
Thu Oct 4 16:10:59 PDT 2018


We (Intel) use opt-bisect most often in conjunction with clang, and frequently with applications that have non-trivial compilation mechanisms. If there isn't a way to hook something up through code generation, it doesn't really meet the needs of the way we use it. As an interim solution, it would be tolerable to have two separate bisect-like options, one that works its way through the pre-codegen passes and a different one that bisects codegen.

Skipping "necessary" passes is a somewhat different matter, because you can't skip something like register allocation on your way to executable code. Most of the passes that are really significant in this way are codegen passes, so we might still be able to work something out. I believe there are a few pre-codegen passes that are required, but they do trivial things like removing intrinsics that don't get lowered. Perhaps we could have some mechanism that builds a set of cleanup passes that get run under some very limited circumstances. Is that what you were suggesting in your comment about running "some minimal second set of passes" after the bisection?

The current opt-bisect mechanism really doesn't do much more than increment a counter, but it does produce output that indicates what was run and what wasn't. This is very useful for finding the proper owner for a bug once the failure has been diagnosed with opt-bisect. I don't really care how the mechanism is implemented, but I'd really like to know which optimization ran last in a failing case and which IR unit it ran on. Knowing the highest pass count that was used is also helpful for establishing a starting point.
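
For reference, the core of the existing mechanism is roughly the following. This is a paraphrased sketch rather than the exact code in lib/IR/OptBisect.cpp, so names and the exact message wording may differ from what is in tree:

  #include "llvm/ADT/StringRef.h"
  #include "llvm/Support/raw_ostream.h"

  // Paraphrased sketch of the legacy gate: every candidate pass execution
  // bumps a counter, the pass runs only while the counter is at or below the
  // -opt-bisect-limit value, and a line is printed either way so the log
  // records what ran, with which number, and on which IR unit.
  static int LastBisectNum = 0;   // running count of candidate pass executions
  static int OptBisectLimit = -1; // value of -opt-bisect-limit; -1 = run all

  static bool checkPass(llvm::StringRef PassName, llvm::StringRef IRDescription) {
    int CurBisectNum = ++LastBisectNum;
    bool ShouldRun = (OptBisectLimit == -1 || CurBisectNum <= OptBisectLimit);
    llvm::errs() << "BISECT: " << (ShouldRun ? "running" : "NOT running")
                 << " pass (" << CurBisectNum << ") " << PassName
                 << " on " << IRDescription << "\n";
    return ShouldRun;
  }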

-Andy


From: Chandler Carruth [mailto:chandlerc at gmail.com]
Sent: Thursday, October 04, 2018 12:59 AM
To: Fedor Sergeev <fedor.sergeev at azul.com>
Cc: llvm-dev <llvm-dev at lists.llvm.org>; Zhizhou Yang <zhizhouy at google.com>; David Greene <dag at cray.com>; David Blaikie <dblaikie at gmail.com>; Kaylor, Andrew <andrew.kaylor at intel.com>; Chandler Carruth <chandlerc at gmail.com>
Subject: Re: [llvm-dev] OptBisect implementation for new pass manager

Sorry I'm late to the thread (conference + vacation delayed me). I've tried to skim the thread, but haven't found firm conclusions on a few points I'd like to make. If any of the below re-hashes stuff that was already covered, my apologies; feel free to just mention by whom or on what date and I'll read more carefully.


I feel like the design of this is being made unnecessarily complex and could be simplified in a few ways. These all stem from a key aspect of bisection: this is a *development* activity. It doesn't have to hit the specific quality bar that `optnone` and -O0 (which are both exposed to users) need to...

Some immediate simplifications:

1) I don't think we need to go out of our way to connect the IR pass bisection (in the new PM) with codegen's IR pass bisection. We already have two tools (`opt` and `llc`) and can drive them separately IMO.

2) I also don't think we need the subtle *correctness* guarantees of `optnone` which really and truly IMO require *passes* to make the decision rather than some abstract pass pipeline system.

3) I think we really do want high *resolution* of bisection even if it isn't 100% functional. Let's imagine that this skips a "necessary" pass for some behavior. The IR will still be valid, and this step of bisection can still be very useful for reducing crashes and assert failures.

4) I believe we can re-use the debug counter infrastructure, which did not exist when OptBisect was first introduced, rather than rolling a custom version. It's possible there are use cases this cannot handle, but it might be worth trying to avoid inventing another thing here.

Given the above, I'd really be interested in seeing how far we can get with a simple debug counter wired up to the new instrumentation framework, and nothing more. Could we get that working? Can we see where that would be genuinely insufficient for developers (as opposed to simply producing a slightly different workflow or command sequence)?
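
To make that concrete, here is a minimal sketch of what such wiring might look like, assuming the BeforePass instrumentation callback's boolean return is honored as a skip request for non-required passes. The counter name opt-bisect-counter and the helper registerBisectInstrumentation are made up for illustration:

  #include "llvm/ADT/Any.h"
  #include "llvm/ADT/StringRef.h"
  #include "llvm/IR/PassInstrumentation.h"
  #include "llvm/Support/DebugCounter.h"

  using namespace llvm;

  // Hypothetical counter; the flag name is made up for illustration.
  DEBUG_COUNTER(BisectCounter, "opt-bisect-counter",
                "Counts pass executions considered for bisection");

  // Register a callback that lets DebugCounter decide whether each pass runs.
  // Returning false asks the pass manager to skip the pass.
  void registerBisectInstrumentation(PassInstrumentationCallbacks &PIC) {
    PIC.registerBeforePassCallback([](StringRef PassID, Any IR) {
      return DebugCounter::shouldExecute(BisectCounter);
    });
  }

With something like that in place, a command line along the lines of -debug-counter=opt-bisect-counter-count=N should approximate the existing -opt-bisect-limit=N behavior (again, the counter name here is hypothetical).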


Regarding #2, which is I think the most surprising thing... Keep in mind that *after* we finish bisection, we can still run some minimal second set of passes in order to generate "correct" code. Also, I believe debug counters have the ability to disable passes only for a range of counts and then re-enable them. Well-designed bisection test scripts should be able to preserve and/or synthesize the necessary bits to keep the IR "working" for whatever constraints a particular reduction has.


Hope all of this makes some sense.
-Chandler

On Wed, Sep 26, 2018 at 9:54 AM Fedor Sergeev via llvm-dev <llvm-dev at lists.llvm.org> wrote:
Greetings!

As the generic Pass Instrumentation framework for the new pass manager is finally *in*, I'm glad to start the discussion on implementing -opt-bisect through that framework.

As has already been discovered while porting other features (namely, -time-passes), blindly copying the existing legacy implementation is most likely not the best way forward. Now is a chance to take a fresh look at the overall approach and perhaps do better, without the restrictions that the legacy pass manager framework imposed on the implementation.

Kind of a summary of what we have now:
   - There is a single OptBisect object, requested through LLVMContext (managed as a ManagedStatic).

   - OptBisect is defined in lib/IR but uses analyses, which is a known layering issue.

   - The Pass hierarchy provides skipModule etc. helper functions.

   - Individual passes opt in to OptBisect activities by manually calling the skip* helper functions whenever appropriate (roughly the pattern sketched below).
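
For example, a typical legacy function pass opts in with a guard at the top of runOnFunction, roughly like this (MyPass is a placeholder name, not an existing pass):

  #include "llvm/IR/Function.h"
  #include "llvm/Pass.h"

  using namespace llvm;

  namespace {
  // Placeholder pass used only to illustrate the opt-in guard.
  struct MyPass : public FunctionPass {
    static char ID;
    MyPass() : FunctionPass(ID) {}

    bool runOnFunction(Function &F) override {
      // skipFunction() consults OptBisect (and the optnone attribute);
      // bailing out here is how a pass honors -opt-bisect-limit.
      if (skipFunction(F))
        return false;
      // ... the actual transformation would go here ...
      return false;
    }
  };
  } // end anonymous namespace

  char MyPass::ID = 0;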

With the current state of new-PM PassInstrumentation, a potential OptBisect implementation will have the following properties/issues:
   - an OptBisect object exists per compilation pipeline, managed similarly to PassBuilder/PassManagers (which makes it more suitable for use in parallel compilations)

   - no more layering issues imposed by the implementation, since instrumentations by design can live anywhere - lib/Analysis, lib/Passes, etc.

   - since Codegen is still legacy-only, we will have to make a joint implementation that provides sequential pass numbering through both the new-PM IR and legacy Codegen pipelines

   - as of right now there is no mechanism for opt-in/opt-out, so it needs to be designed/implemented. Here I would like to ask:

         - what would be preferable - opt-in or opt-out?

         - with the legacy implementation, passes opt in to both bisect and attribute-optnone support at once. Do we need to follow that in the new-PM implementation?

Also, I would like to ask whether people find the current opt-bisect user interface limiting. Do we need better controls for more sophisticated bisection? Basically, I'm looking for any ideas on improving the opt-bisect user experience that might affect the design approaches we take in the initial implementation.

regards,
   Fedor.

_______________________________________________
LLVM Developers mailing list
llvm-dev at lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev