[llvm-dev] [RFC] Introducing classes for the codegen driven by new pass manager

Arthur Eubanks via llvm-dev llvm-dev at lists.llvm.org
Mon Jul 13 12:49:18 PDT 2020


>
> While we're still working towards using NPM for the optimizer pipeline by
> default, we still don't have a machine pass interface and the corresponding
> machine pass manager using NPM. The potential benefits of using NPM aside,
> this inhibits us from making any progress on deprecating LPM for the
> codegen pipeline, which blocks removing LPM altogether. The purpose of this
> series of patches is to (1) let pass developers write or port machine
> passes to a new machine pass interface and be able to test them with `llc`,
> and (2) let a target have the choice of implementing the codegen pipeline
> using NPM (work in progress). Maybe it is obvious already, but I also want
> to mention that these patches do not intend to force a target to migrate to
> NPM right away.
>
Awesome!

It would be awesome to delete all the LPM infra at some point in the
future. But even just deleting all the optimizer pipeline LPM infra would
be a big win, and that shouldn't be tied to codegen.

>
> * Goal-1 *
> https://reviews.llvm.org/D67687
>
> Four member methods of a machine pass are recognized by the machine pass
> manager:
> (1) `PreservedAnalyses run(MachineFunction &,
> MachineFunctionAnalysisManager &)`. The majority of machine passes use this.
> (2) `Error doInitialization(Module &, MachineFunctionAnalysisManager &)`.
> Passes like AsmPrinter need a hook to lower/transform global constructs.
> (Invoked before all passes' `run` methods.)
> (3) `Error doFinalization(Module &, MachineFunctionAnalysisManager &)`.
> Client: PrintMIRPass. This is also for completeness. (Invoked after all
> passes' `run` methods.)
> (4) `Error run(Module &, MachineFunctionAnalysisManager &)`. Clients:
> MachineOutliner, DebugifyMachineModule. I would call this a machine module
> pass, which needs a global scope. It is like (1) but subject to pass
> ordering. Currently, a pass defines either this or (1), but not both.
>
> (doInitialization/doFinalization is currently not supported by the NPM
> optimizer pipeline because there is no real need for it.
> http://lists.llvm.org/pipermail/llvm-dev/2018-September/126504.html)
>
Are doInitialization/doFinalization really necessary? As mentioned in the
previous discussion, the things done in doInitialization/doFinalization
usually aren't logically in the right place.
For example, maybe PrintMIRPass should just be a module pass, like
PrintModulePass.
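
For my own understanding, a pass written against interface (1) above would
look roughly like the sketch below. This is just my reading of the proposed
interface: the pass name is made up, and the use of PassInfoMixin and the
MachinePassManager.h header are assumptions on my part rather than something
the patch guarantees.

  // Minimal sketch of a machine function pass under the proposed interface.
  // Only the per-function `run` from (1) is shown; doInitialization,
  // doFinalization, and the module-scope `run` from (2)-(4) would be
  // declared alongside it by passes that need them.
  #include "llvm/CodeGen/MachineFunction.h"
  #include "llvm/CodeGen/MachinePassManager.h" // assumed header from D67687
  #include "llvm/IR/PassManager.h"

  namespace {
  // Hypothetical pass name, for illustration only.
  class MyMachinePass : public llvm::PassInfoMixin<MyMachinePass> {
  public:
    llvm::PreservedAnalyses run(llvm::MachineFunction &MF,
                                llvm::MachineFunctionAnalysisManager &MFAM) {
      bool Changed = false;
      // ... transform MF here ...
      return Changed ? llvm::PreservedAnalyses::none()
                     : llvm::PreservedAnalyses::all();
    }
  };
  } // end anonymous namespace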

>
>
> * Goal-2 *
> https://reviews.llvm.org/D67687
>
> Unlike IR, where a `has-a` relationship exists among
> module/function/loop/cgscc etc., MIR does not have a `has-a` relationship
> with any kind of IR unit. It does have a one-to-one relationship with an IR
> function. So, transforming MIR does not change any IR unit at all.
> Invalidating a MIR analysis result also does not invalidate any IR analysis
> result.
>
> Based on the above observation, the machine pass manager runs standalone,
> i.e. combining it with other types of pass managers using adaptor passes
> is not supported. There is also no proxy defined between the machine
> analysis manager and any other type of analysis manager. The machine
> analysis manager does provide an API to register and query IR analyses,
> because machine passes generally need IR analysis results.
>
Maybe this is my lack of familiarity with codegen, but why doesn't a Module
contain all its MachineFunctions? It seems like that adaptor would be
necessary.
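
For what it's worth, here is roughly how I'd expect a machine pass to get at
an IR analysis under that model. This is only a guess at the shape of the
API: I'm assuming the MachineFunctionAnalysisManager grows a getResult
overload that takes the IR Function (since there is no proxy to a
FunctionAnalysisManager), and the pass name and header are again made up.

  #include "llvm/CodeGen/MachineFunction.h"
  #include "llvm/CodeGen/MachinePassManager.h" // assumed header, as above
  #include "llvm/IR/Dominators.h"
  #include "llvm/IR/PassManager.h"

  // Hypothetical pass, for illustration only.
  class MachinePassUsingIRAnalysis
      : public llvm::PassInfoMixin<MachinePassUsingIRAnalysis> {
  public:
    llvm::PreservedAnalyses run(llvm::MachineFunction &MF,
                                llvm::MachineFunctionAnalysisManager &MFAM) {
      // A MachineFunction maps one-to-one onto its IR Function.
      const llvm::Function &F = MF.getFunction();
      // Assumed API: query an IR analysis directly through MFAM; const_cast
      // only because NPM analysis managers take non-const IR units.
      auto &DT = MFAM.getResult<llvm::DominatorTreeAnalysis>(
          const_cast<llvm::Function &>(F));
      (void)DT; // ... use DT while transforming MF ...
      return llvm::PreservedAnalyses::all();
    }
  };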

>
> ** Testing **
> - Since the `llc` options are compatible, as passes are ported to NPM and
> various issues get resolved, we should see more tests passing when `llc
> -enable-new-pm` is turned on implicitly, perhaps via a knob similar to
> `cmake -DENABLE_EXPERIMENTAL_NEW_PASS_MANAGER`.
> - A buildbot to make sure there are no regressions when `llc -enable-new-pm`
> is implicitly on?
> - Any ideas in this regard are much appreciated.
>
Manually running tests once in a while might be good enough; I'm not sure
the cost of setting up a bot that maintains some sort of list of tests that
have passed in the past is worth it. In my limited experience, tests don't
really tend to regress under NPM as long as you have some tests explicitly
exercising NPM sprinkled around.