[llvm-dev] Common abstraction over SSA form IRs (LLVM IR, SSA-MIR, MLIR) for analyses

Nicolai Hähnle via llvm-dev llvm-dev at lists.llvm.org
Fri Oct 30 11:58:02 PDT 2020


I've talked about this a bit offline with Alina and David, and
am going to follow up with a new from-scratch RFC on llvm-dev.

Cheers,
Nicolai

On Tue, Oct 27, 2020 at 9:34 PM Nicolai Hähnle <nhaehnle at gmail.com> wrote:
>
> Hi David,
>
> Thanks for taking the time to clear up some things more directly. I've
> reverted the change(s) for now.
>
>
> On Tue, Oct 27, 2020 at 4:16 AM David Blaikie <dblaikie at gmail.com> wrote:
> >> One issue I'm having with this discussion if I'm being honest is that
> >> despite all the back and forth I _still_ barely know what your opinion
> >> is, or whether you even have one.
> >
> >
> > Ah, sorry - I'll try to state a few parts of my views more directly:
> >
> > Most importantly, this seems like a new fairly significant abstraction intended for use when writing LLVM analyses (but not transformations, I assume?)
>
> Yes, that's correct. My thinking is that (1) trying to use this for
> transformations would increase the surface area by far too large an
> amount, and (2) the different IRs correspond roughly to different
> parts of the compiler pipeline, and transformations tend to have a
> specific point in the pipeline where they "want" to live. (There are
> arguable exceptions like InstCombine, but that's far too big a rock to
> address here.)
>
>
> > - more heavyweight (no matter the implementation choice) than GraphTraits, which already is itself showing its weight (I don't think it would've been designed the way it is if the use cases/amount of use were known when it was implemented - and I think it's probably worth a bit of a revisit at this point, as Alina and I have discussed a fair bit around the DomTree work). Given its intended use I think it merits more scrutiny than it has had - that's why I'd like it reverted so we can do that.
> >
> > I think the RFC and patch probably didn't receive as much review/attention as they were due, perhaps in part because of the thread title/naming - as we've noted in this thread, "CFG" doesn't, I think, capture enough of what this abstraction is intended to cover (a CFG is a graph with certain semantics about what it represents - a CFG doesn't necessarily have values/SSA/etc). I think if a thread were proposed as "A common abstraction over LLVM IR and MLIR for analyses" it might garner more attention from relevant developers (not necessarily, but at least a better chance/easier to explain to people why they should care about the design of this API).
> >
> > My initial thinking was that this was closer to GraphTraits - and some of the descriptions around what makes GraphTraits difficult to work with and why that motivates a solution involving dynamic polymorphism to me, sounded not especially C++ idiomatic - not that I think we should write everything with generic templates and layers of C++ metaprogramming - but I do want to make sure the API is fairly idiomatic to C++ rather than, by the sounds of part of the description, butting up against those design concepts and abstractions.
> >
> > To the actual design: I think traits are probably not the best idea - I think we've seen that with GraphTraits being hard to use/unwieldy/limiting (not entirely a generic problem with traits, partly a problem with the way GraphTraits are implemented (eg: edge queries only taking a node, instead of a graph and a node)). They're useful when an API is immutable and you need to adapt things (see iterator traits), but LLVM is quite mutable and moving LLVM IR and MIR to have more common API syntax seems both good for generic algorithms and good for developers even when they're not writing template-generic code.
> >
> > With a syntactically common API (ie: not shared functions, but an API with the same names, etc - ie: LLVM IR and MIR implementing the same C++ concept, insofar as is needed for analyses), I think it'd be fairly sensible/easy enough to write analyses in terms of that concept (same as writing C++ container-generic algorithms such as std::sort, etc). I'd also be open to discussing a runtime polymorphic API on top, though I do want to push back on that a fair bit because of the inherent complexity needed to balance performance (eg: the cost of manifesting edge lists/use lists rather than being able to iterate them directly) against API complexity (eg: the same thing from a usability perspective - not having the more common iterator API but instead a "populate this vector" API; iterator solutions here also probably wouldn't be great due to the runtime overhead of type-erased iterators, though one could potentially use "narrower" iterators that don't implement the C++ iterator concept and instead expose just a single virtual "Optional<Value> Next()" function, reducing the number of virtual calls compared to type-erased iterators).
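> >
> > For illustration only, a rough sketch of the kind of "narrower" iterator I mean (all names hypothetical):
> >
> >   // One virtual call per element, rather than one each for increment,
> >   // inequality test and dereference as with a fully type-erased iterator.
> >   class SuccessorStream {
> >   public:
> >     virtual ~SuccessorStream() = default;
> >     // Returns None once the successor list is exhausted.
> >     virtual Optional<CfgBlockRef> next() = 0;
> >   };
> >
> >   void visitSuccessors(SuccessorStream &succs) {
> >     while (Optional<CfgBlockRef> succ = succs.next()) {
> >       // ... process *succ ...
> >     }
> >   }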
> >
> > As I think I asked in one of my early rounds of review, it'd be really helpful to have simple examples (not real-world algorithms, but trivial code snippets) of how code walking the CFG's features would look under different implementations, eg:
> > 1) raw writing against the two abstractions, LLVM IR and MIR - two completely separate functions that exercise all the common features that are of interest
> > 2) a potential common API concept and the trivial walking algorithm in the form of a template over both IR types
> > 3) CfgTraits
> > 4) CfgInterface
> >
> > I think that'd help ground the discussion in the proposal(s) and make it easier to see the tradeoffs.
>
> I'm going to try to take this block as a whole rather than cut it to pieces.
>
> First, the motivation for doing dynamic polymorphism has always been
> about tooling, programmer convenience, and maintainability. Here's an
> example:
>
>   struct DynamicExample {
>     SmallVector<CfgValueRef, 4> vec;
>
>     void foo(CfgValueRef v, unsigned w) {
>       vec.push_back(w); // error: 'w' is not a CfgValueRef - an IDE flags this immediately
>     }
>   };
>
>   template <typename SomeTraits> struct StaticExample {
>     using ValueRef = typename SomeTraits::ValueRef;
>
>     SmallVector<ValueRef, 4> vec;
>
>     void foo(ValueRef v, unsigned w) {
>       vec.push_back(w); // same mistake; only diagnosed (if at all) at instantiation
>     }
>   };
>
> DynamicExample and StaticExample are trying to do the same thing and
> contain the same error. A decent IDE will flag the error in the
> DynamicExample but not in the StaticExample, where the earliest point
> where a tool will inform you about the error is when the compiler
> tries to instantiate the template. I say the earliest point, because
> StaticExample can be successfully instantiated if SomeTraits refers to
> MachineIR, in which case ValueRef is a Register and so an implicit
> conversion from unsigned is possible. The error message you get from
> the compiler in the StaticExample case is also worse (admittedly not
> by much in this simple example).
>
> Obviously one can have different opinions about how bad these
> downsides of the static approach are. I personally found it a huge
> relief to be able to focus on the actual algorithms.
>
> One thing I'd like to emphasize is that the example doesn't contain
> any _real_ surface area of the traits. That is on purpose: in the code
> I'm writing using this machinery, the fraction of code that directly
> invokes the API of the IR is very small. It's grating when a large
> body of code is forced to pay the price of worse tooling because of a
> comparatively small part of it.
>
> I definitely appreciate the problem of iteration. At one point in the
> development of the various algorithms, I wanted to do a DFS of the
> block graph. llvm::depth_first obviously can't be used on CfgBlockRef
> as-is. This could be addressed if the GraphTraits API was changed to
> be aware of a graph object, as you alluded to. I didn't go down that
> path because I didn't want to change too much at once. As of right
> now, I don't even need (unmodified) DFS any more, but it's worth
> considering.
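>
> Purely as an illustration (this is not the current GraphTraits API), a
> graph-aware child query might look roughly like this, where CfgAdapter
> and its interface() accessor are made-up names:
>
>   template <> struct GraphTraits<CfgAdapter> {
>     using NodeRef = CfgBlockRef;
>     // The extra graph argument is the key difference from today's
>     // child_begin(NodeRef)/child_end(NodeRef).
>     static SmallVector<NodeRef, 8> children(const CfgAdapter &G, NodeRef N) {
>       SmallVector<NodeRef, 8> succs;
>       G.interface().appendSuccessors(N, succs);
>       return succs;
>     }
>   };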
>
> That said, I do agree with you that just directly being able to use
> llvm::successor-style iteration (or block->successors()) would be
> best. My problem with it is that this is only compatible with dynamic
> polymorphism / type erasure if all relevant basic block types have a
> common base class. You hedged your language a bit when you talked
> about this further down, but if you think this is an option we can
> seriously pursue, then I'd actually be very happy to do so because
> it's clearly a better solution. This would also significantly reduce
> the surface area where dynamic method dispatch is needed in the first
> place, so perhaps it would make dynamic polymorphism more palatable to
> you.
>
> Specifically, I think it's plausible that we could introduce common
> base classes for basic block and instruction concepts for all of {LLVM
> IR, MLIR, MachineIR}. This would be far more invasive than the change
> I proposed, but it _would_ be better. I don't think having a common
> base class for values is feasible in a reasonable timeframe, because
> the IRs are just way too different in that area. The surface area
> could be reasonably limited though, to basically:
>
> - Value -> Instruction that defines it
> - Instruction -> operand Values
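>
> Purely as a sketch (all names hypothetical, and whether these methods
> end up virtual or rely on a shared layout is exactly the open design
> question):
>
>   // Hypothetical common base for llvm::BasicBlock, MachineBasicBlock
>   // and mlir::Block, exposing only what analyses need.
>   class GenericBlock {
>   public:
>     virtual ~GenericBlock() = default;
>     virtual void appendSuccessors(SmallVectorImpl<GenericBlock *> &out) const = 0;
>     virtual void appendPredecessors(SmallVectorImpl<GenericBlock *> &out) const = 0;
>   };
>
>   // Hypothetical common base for instructions; values stay behind an
>   // opaque handle because the IRs differ too much there.
>   class GenericInstruction {
>   public:
>     virtual ~GenericInstruction() = default;
>     virtual GenericBlock *getParent() const = 0;
>     // Instruction -> operand Values.
>     virtual void appendOperands(SmallVectorImpl<CfgValueRef> &out) const = 0;
>   };
>
> (Value -> defining Instruction would still have to go through the
> interface, since values keep no common base class.)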
>
> Speaking of reasonable timeframes, do you have any thoughts on that?
> We have a lot of important changes riding on this ability, including
> superficially unrelated ones like GlobalISel, as I mentioned before. I
> don't want to rush things artificially, but at the same time I'd
> appreciate it if we didn't let the perfect be the enemy of the good.
> For example, further down you mention massaging LLVM to use something
> more like the mlir::Value structs. I'd be totally on board with that,
> having had the same idea myself already; but we can't wait on such a
> large change.
>
>
> > We may be better off just stopping this email thread at this point ^ and discussing that stuff, rather than going down to the point-by-point discussion later in this email
>
> Yeah, that's reasonable.
>
> Cheers,
> Nicolai
>
>
> (I've debated not replying to the rest of the email to try to focus
> the discussion - these really long emails get exponentially costlier
> for me, both from a practical time perspective and, on a deeper
> level, in terms of how it feels to try to write another long-winded
> reply).
> >
> >>
> >> This puts me in a difficult situation. I have no indication of whether
> >> there is _any_ version of this that you would accept, and if so, what
> >> it would look like, so there is no way forward for me. In the
> >> meantime, there is a substantial (and increasing) amount of work that
> >> builds on what we're talking about here.
> >
> >
> > Depends what you mean by "version of this" - if by that you mean "some abstraction over the general SSA graph nature of LLVM IR and MIR" yes, I believe I understand the value in having such an abstraction & think there are options within that space that are acceptable to me. It's possible that they don't involve type traits or dynamic polymorphism, though.
> >
> >>
> >> > Other folks weighing in would be great & certainly there's a line at which my opinion isn't the gatekeeper here if other folks are in favor.
> >>
> >> Agreed. Part of the dysfunction here is that it's difficult to get
> >> people to pay attention in the first place. The few folks who replied
> >> to my llvm-dev thread in July did have questions but seemed to be
> >> generally okay with the direction.
> >
> >
> > Yeah, can be a bit like pulling teeth, for sure. I think some of the issue might've been the subject line - people probably filter pretty aggressively by that & might've missed some of the purpose of the proposal because of it (I think I did). But also we're all just busy people with our own stuff to do too - so sometimes takes actively seeking out people who might have a vested interest in pieces of common infrastructure, etc.
> >
> >>
> >>
> >>
> >> >> [0] With that in mind, let me "import"
> >> >> the last comment on the Phabricator review:
> >> >>
> >> >>
> >> >> > > > Ah, I see, the "append" functions are accessors, of a sort. Returning a container might be more clear than using an out parameter - alternatively, a functor parameter (ala std::for_each) that is called for each element, that can then be used to populate an existing container if desired, or to do immediate processing without the need for an intermediate container.
> >> >> > >
> >> >> > > The code is trying to strike a balance here in terms of performance. Since dynamic polymorphism is used, a functor-based traversal can't be inlined and so the number of indirect function calls increases quite a bit. There are a number of use cases where you really do want to just append successors or predecessors to a vector, like during a graph traversal. An example graph traversal is here: https://github.com/nhaehnle/llvm-project/blob/controlflow-wip-v7/llvm/lib/Analysis/GenericConvergenceUtils.cpp#L329
> >> >> >
> >> >> > One way to simplify the dynamic polymorphism overhead of iteration would be to invert/limit the API - such as having a "node.forEachEdge([](const Edge& E) { ... });" or the like.
> >> >>
> >> >> I assume you're not talking about runtime overhead here? If we're
> >> >> talking about that, I'd expect that variant to often be slower,
> >> >> although I obviously don't have numbers.
> >> >
> >> >
> >> > I think I was referring to runtime overhead - you mentioned the increase in indirect calls. A callback-like API can reduce the number of such calls. (rather than one virtual call per increment, inequality test, and dereference - there would be only one polymorphic call per iteration)
> >>
> >> CfgInterface::appendSuccessors requires only a single indirect call
> >> for the entire loop, which is why I chose to do that over some
> >> getSuccessorByIndex-type interface.
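> >>
> >> To make that concrete, a typical call site looks roughly like this
> >> (sketch only):
> >>
> >>   SmallVector<CfgBlockRef, 8> succs;
> >>   iface.appendSuccessors(block, succs); // one indirect call in total
> >>   for (CfgBlockRef succ : succs) {
> >>     // ... per-successor work, no further indirect calls needed ...
> >>   }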
> >
> >
> > Oh, indeed - hadn't mentioned that tradeoff, but that one then comes at a memory overhead cost of having to populate that memory/destroy it/etc.
> >
> >>
> >> A CfgInterface::forEachSuccessor method is a trade-off: it's an
> >> interface that is more in line with patterns you see elsewhere
> >> (std::for_each), at the cost of more indirect calls.
> >
> >
> > Yep - and even then it's limiting in that you can't partially iterate, or save where you're up to (without incurring more memory usage by giving the resulting container an even longer lifetime). (I've seen both of those use-cases with GraphTraits, for instance).
> >
> >>
> >> [snip]
> >> >> > I don't mean to treat CfgGraph to be thought of like the template parameter to, say, std::vector - I meant thinking of CfgGraph as something like std::vector itself. Rather than using traits to access containers in the C++ standard library, the general concept of a container is used to abstract over a list, a vector, etc.
> >> >> >
> >> >> > eg, if you want to print the elements of any C++ container, the code looks like:
> >> >> >
> >> >> > template<typename Container>
> >> >> > void print(const Container &C, std::ostream &out) {
> >> >> >  out << '{';
> >> >> >  bool first = true;
> >> >> >  for (const auto &E : C) {
> >> >> >    if (!first)
> >> >> >      out << ", ";
> >> >> >    first = false;
> >> >> >    out << E;
> >> >> >  }
> >> >> >  out << '}';
> >> >> > }
> >> >> >
> >> >> > Which, yes, is much more legible than what one could imagine a GraphTraits-esque API over
> >> >> > containers might be:
> >> >> >
> >> >> > template<typename Container, typename Traits = ContainerTraits<Container>>
> >> >> > void print(const Container &C, std::ostream &out) {
> >> >> >   out << '{';
> >> >> >  bool first = true;
> >> >> >  for (const auto &E : Traits::children(C)) {
> >> >> >    if (!first)
> >> >> >      out << ", ";
> >> >> >    first = false;
> >> >> >    out << Traits::element(E);
> >> >> >  }
> >> >> >  out << '}';
> >> >> > }
> >> >> >
> >> >> > Or something like that - and the features you'd gain from that would be the ability to sort of "decorate" your container without having to create an actual container decorator - instead implementing a custom trait type that, say, iterates container elements in reverse. But generally a thin decorator using the first non-traits API would be nicer (eg: llvm::reverse(container) which gives you a container decorator that reverses order).
> >> >>
> >> >> I agree that the first form is nicer. I'd raise two points.
> >> >>
> >> >> First, the first form works nicely if the concept of container is
> >> >> sufficiently well understood and all relevant containers really are
> >> >> sufficiently similar.
> >> >
> >> >
> >> > I'm not sure how a dynamic interface is likely to be significantly different/better in this regard.
> >>
> >> It's mostly orthogonal. Though dynamic polymorphism does end up having
> >> better compiler support for _enforcing_ concepts. This is a common
> >> theme in this discussion: the tooling support for dynamic polymorphism
> >> is far better. That matters.
> >
> >
> > Agreed, it does. I'm a bit concerned it's being too heavily weighted though.
> >
> >>
> >> >> Neither is the case here. We're trying to cover
> >> >> different pre-existing IRs, and even though all of them are SSA IRs,
> >> >> the differences are substantial.
> >> >
> >> >
> >> > I guess that's another thing that might benefit from further clarity: Currently we have generic graph traits, notionally that's for any graph (collection of nodes and edges). The proposed abstraction is for CFG graphs, but due to the added layers of abstraction complexity, dynamic and static APIs, etc, it's not been especially clear to me what makes this abstraction specific to/only for CFG graphs[1] (& then how that relates to IR I'm not sure either - control flow seems like it would be fairly independent of the values within that graph).
> >>
> >> Maybe it would help if CfgTraits was renamed to SsaTraits? Or
> >> SsaCfgTraits? Being able to do some limited inspection of the SSA
> >> values (and instructions, really) is key to what we're trying to do
> >> here -- there's an example below.
> >
> >
> > SsaTraits, maybe? Not sure. But that is probably closer than CfgTraits, I think/guess.
> >
> >>
> >>
> >> If this was only about nodes and edges, I would have added dynamic
> >> polymorphism on top of the existing GraphTraits.
> >>
> >>
> >>
> >> >> Currently, we only really care about
> >> >> LLVM IR and MachineIR in SSA form; their main difference is that the
> >> >> concept of "value" is a C++ object in LLVM IR, while in MIR-SSA it's a
> >> >> plain unsigned int (now wrapped in a Register). MLIR would add yet
> >> >> another way of thinking about values. The fact that we have to work
> >> >> with those pre-existing IRs means we have to compromise.
> >> >
> >> >
> >> > Not sure I follow what compromises you're referring to - if we have a container abstraction, some containers are over ints, some over objects, etc - the container can specify via typedefs, for instance, what type its elements are.
> >>
> >> Simple example: Let's say we have an SSA value and need to find the
> >> basic block in which it is defined.
> >>
> >> In LLVM IR, that's `value->getParent()`, where value is an `llvm::Value*`.
> >>
> >> In SSA-form MachineIR, that's
> >> `m_machineRegInfo->getVRegDef(value)->getParent()`, where value is an
> >> `llvm::Register`.
> >>
> >> In MLIR, that's `value.getParentBlock()`, where value is an `mlir::Value`.
> >>
> >> This kind of operation is needed for a complete divergence analysis,
> >> and so it needs to be abstracted somehow.
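> >>
> >> That's what getValueDefBlock in the proposed CfgInterface is for. As a
> >> sketch of the bodies of the per-IR implementations (wrapRef/unwrapRef
> >> are hypothetical helpers converting between the opaque handles and the
> >> concrete types):
> >>
> >>   // LLVM IR
> >>   CfgBlockRef getValueDefBlock(CfgValueRef value) const override {
> >>     auto *def = dyn_cast<Instruction>(unwrapRef<Value>(value));
> >>     return def ? wrapRef(def->getParent()) : CfgBlockRef();
> >>   }
> >>
> >>   // SSA-form MachineIR
> >>   CfgBlockRef getValueDefBlock(CfgValueRef value) const override {
> >>     MachineInstr *def = m_machineRegInfo->getVRegDef(unwrapRef<Register>(value));
> >>     return def ? wrapRef(def->getParent()) : CfgBlockRef();
> >>   }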
> >
> >
> > Yeah, I understand their APIs are different now - but I think there might be value in trying to make them more common. If it weren't for LLVM IR handling Value*s all the time, I'd suggest moving to structs for the value handles and having everything look roughly like MLIR. As that API change is probably too invasive for LLVM IR, having some kind of "module.getParent(value)" on each of these 3 entities seems feasible.
> >
> > For this specific example - perhaps we could have a Module::getParent(Value*) function and a MIRModule (whatever that is) getParent(value) too. I agree it's not perfect/maybe the LLVM IR one at least would only end up being used by SSA-generic code (though the MIR one might actually be more convenient/readable than the current version) so maybe it's not so far from traits (if it's only used for generic code, then it being in a generic code utility like a traits class wouldn't be too surprising - but if it can make the MIR and LLVM IR more commonly usable then I think that's an improvement for generic code and for generic developers moving between these abstractions with less mismatch between the APIs/having to remember which set of APIs they're using).
> >
> >>
> >> >> Second, none of this applies to dynamic polymorphism, which is really
> >> >> the main goal here at least for me.
> >> >
> >> >
> >> > And one I'm trying to push back on - my hope is that a common set of abstractions (much like the container concepts in the STL) would be suitable here. To potentially subsume the existing GraphTraits (could have a "GraphTraits adapter" that gives a Graph-concept-implementing-container given GraphTraits) and potentially has layers to add on more expressive/narrower abstractions (whatever extra features are relevant to CFG graphs that aren't relevant to all graphs in general([1] above - not super sure what those are)) - much like generic C++ containers, to sequential containers, random access containers, etc - various refinements of the general concept of a container.
> >> >
> >> > What I'm especially interested in is not having two distinct concepts of graphs and graph algorithms with no plan to merge/manage these together, since as you say, the algorithms are complicated enough already - having two distinct and potentially competing abstractions in LLVM seems harmful to the codebase.
> >>
> >> It seems to me that there are two orthogonal issues here.
> >>
> >> One is whether dynamic polymorphism can be used or not. If you're in
> >> fundamental opposition to that, then we have a serious problem (a bit
> >> on that further down). Dynamic polymorphism doesn't contradict the
> >> goal of a common set of abstractions.
> >
> >
> > Yep, indeed - I agree they're two issues. If the 3 IRs implemented the same C++ concept we could still talk about whether or not to add a dynamically polymorphic API on top of them. (& conversely, we could have traits without dynamic polymorphism - ala GraphTraits).
> >
> >>
> >> The other is how _exactly_ the abstractions are factored. For example,
> >> on the dynamic polymorphism side we could have:
> >>
> >> - a `GraphInterface` class which provides methods for iterating
> >> predecessors and successors of `CfgBlockRef` (presumably we'd rename
> >> that to something like NodeRef?), and
> >> - an `SsaCfgInterface` class which provides additional methods for
> >> instructions and values.
> >>
> >> The question is whether you'd be willing to support something like
> >> that eventually, once we've hashed it out.
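> >>
> >> As a rough sketch of that layering (simplified; the full CfgInterface
> >> is further down in this mail):
> >>
> >>   class GraphInterface {
> >>   public:
> >>     virtual ~GraphInterface() = default;
> >>     virtual void appendPredecessors(CfgBlockRef block,
> >>                                     SmallVectorImpl<CfgBlockRef> &list) const = 0;
> >>     virtual void appendSuccessors(CfgBlockRef block,
> >>                                   SmallVectorImpl<CfgBlockRef> &list) const = 0;
> >>   };
> >>
> >>   class SsaCfgInterface : public GraphInterface {
> >>   public:
> >>     virtual CfgBlockRef getValueDefBlock(CfgValueRef value) const = 0;
> >>     virtual void appendBlockDefs(CfgBlockRef block,
> >>                                  SmallVectorImpl<CfgValueRef> &list) const = 0;
> >>   };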
> >
> >
> > I don't think I'd say I'm fundamentally opposed to the introduction of a dynamically polymorphic abstraction over IR - but I have serious doubts/concerns, and I think adding it requires buy-in from a lot more folks before it should be accepted in LLVM. (Coming back to edit this: there may not be many more folks to reach out to for opinions - I've already poked a few, so we might at least get some more voices, but not many more, I'm guessing.)
> >
> >>
> >> >> Not sure where that leaves us. We need something like CfgTraits to
> >> >> cover the substantial differences between the IRs, but perhaps we can
> >> >> work over time to reduce those differences? Over time, instead of
> >> >> relying on CfgTraits for _everything_ in CfgInterfaceImpl, we could
> >> >> instead use unified functionalities directly?
> >> >>
> >> >> Some of that could probably already be done today. For example, we
> >> >> could probably just use llvm::successors
> >> >
> >> >
> >> > By way of example, I /think/ the future we'd probably all want for llvm::successors would be to move towards entities having their own successors() function rather than the non-member/ADL-based lookup. (the same way that it's far more common/readable/expected to use x.begin() rather than "using std::begin; begin(x);" construct, or wrapping that in some std::adl_begin(x) function)
> >>
> >> Makes sense, though it doesn't really respond to what I was writing
> >> here. My point was about how the bridge between the world of dynamic
> >> polymorphism and that of static polymorphism works.
> >
> >
> > Given how long GraphTraits has lived (admittedly without any particular assessment that the direction was to deprecate/remove it by building in more API compatibility - though it's abstracting over more different kinds of graphs than the new thing is likely to (owing to having fewer semantics and so broader applicability), so it's a bit more viable in its situation, even if it's still a bit of a pain to work with), I'd be hesitant to start by adding a new thing like that with the expectation that it would be eliminated over time - I think it's feasible/reasonable to frontload the work of making the relevant CFG graphs API compatible.
> >
> >>
> >> >> directly in
> >> >> CfgInterfaceImpl::appendSuccessors (and your proposed
> >> >> forEachSuccessor) instead of going via CfgTraits.
> >> >>
> >> >>
> >> >> > If you had a runtime polymorphic API over containers in C++, then it might look something
> >> >> > like this:
> >> >> >
> >> >> > void print(const ContainerInterface& C, std::ostream& out) {
> >> >> >  out << '{';
> >> >> >  bool first = true;
> >> >> >  C.for_each([&](const auto &E) {
> >> >> >    if (!first)
> >> >> >      out << ", ";
> >> >> >    first = false;
> >> >> >    E.print(out);
> >> >> >  });
> >> >> >  out << '}';
> >> >> > }
> >> >>
> >> >> Yes, though it wouldn't quite work like that with the IRs we have in
> >> >> LLVM. Most of them go to great length to avoid embedding virtual
> >> >> methods in the IR classes themselves. That's why you'd more likely see
> >> >> something like the following:
> >> >>
> >> >> void print(const CfgInterface& iface, CfgBlockRef block, raw_ostream& out) {
> >> >>   out << "Successors of " << iface.printableName(block) << ':';
> >> >>   bool first = true;
> >> >>   iface.forEachSuccessor(block, [&](CfgBlockRef succ) {
> >> >>     if (first)
> >> >>       out << ' ';
> >> >>     else
> >> >>       out << ", ";
> >> >>     first = false;
> >> >>     iface.printName(succ, out);
> >> >>   });
> >> >>   out << '\n';
> >> >> }
> >> >>
> >> >> The CfgBlockRef / CfgValueRef / etc. appear in the design to avoid
> >> >> having an abstract base class for the different IRs, and also to cover
> >> >> the difference between LLVM IR `Value *` and MIR-SSA `Register`.
> >> >
> >> >
> >> > If a dynamically polymorphic API is needed/ends up being chosen, I'd /probably/ be inclined to try to add an abstract base class for blocks too (unclear about values) - but I'm not wedded to it. In general I'm pushing back more on the dynamically polymorphic API itself than on what it might look like if it exists, in part because of the complexity of having opaque handle objects that are passed out of and back into APIs to model things, rather than being able to model them more directly. The APIs become a bit unwieldy because of that, in my opinion. (Again, looking at GraphTraits' awkwardness of load-bearing node pointers - though having the graph as a parameter ('this' parameter or otherwise) to the edge/successor/etc queries, rather than having the queries based only on the node, would go a long way to addressing that particular shortcoming if such a token/ref-passing API were needed.)
> >>
> >> In the end, it boils down to a tooling issue, which in turn is a C++
> >> language issue. When you're writing template code, you're basically
> >> throwing the type system out of the window. Sure, it comes back in
> >> once you _instantiate_ the template, but that doesn't help if all the
> >> complexity is in the template code. The kinds of algorithms we're
> >> trying to write have enough inherent complexity as it is. Asking us to
> >> simultaneously fight the accidental complexity caused by how C++
> >> templates work is unreasonable. The sentiment that C++ templates hurt
> >> here has been shared in this discussion by people with experience
> >> working on the dominator tree implementation (which is arguably a
> >> simpler problem than what we're trying to solve).
> >>
> >>
> >> Keep in mind that it's not just a question of current development but
> >> also of maintainability going forward.
> >>
> >> It would help to understand _which_ APIs you think are becoming
> >> unwieldy, because you may be imagining something misleading. For
> >> example, consider the updated generic dominator tree. It has a
> >> `GenericDominatorTreeBase` class which is type-erased and provides all
> >> the usual query functions in terms of opaque handles. Then there is a
> >> derived `DominatorTreeBase<NodeT>` class. If you're writing a
> >> non-generic algorithm, or a template-generic algorithm, you never
> >> interact with GenericDominatorTreeBase directly. Instead, you interact
> >> with `DominatorTreeBase<NodeT>`, which has all the usual methods where
> >> you pass in `NodeT*` instead of opaque handles. You only interact with
> >> GenericDominatorTreeBase directly if you're writing a
> >> dynamically-generic algorithm, where you consciously chose to enter
> >> the world of opaque handles.
> >>
> >> There ends up being some boilerplate for wrapping/unwrapping opaque
> >> handles in `DominatorTreeBase<NodeT>`, but it's pretty straightforward
> >> and hidden from users of the class.
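> >>
> >> The boilerplate is essentially of this shape (sketch, not the actual
> >> code; wrapRef is a hypothetical helper):
> >>
> >>   template <typename NodeT>
> >>   class DominatorTreeBase : public GenericDominatorTreeBase {
> >>   public:
> >>     bool dominates(const NodeT *A, const NodeT *B) const {
> >>       // Wrap the typed nodes into opaque handles and forward to the
> >>       // type-erased query.
> >>       return GenericDominatorTreeBase::dominates(wrapRef(A), wrapRef(B));
> >>     }
> >>   };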
> >>
> >> (FWIW, the updated dominator tree analysis itself is still implemented
> >> as a template. I converted it to dynamic polymorphism as an
> >> experiment, and there was one outlier test case with 1-2% compile time
> >> performance loss. I haven't pursued this further, and I don't want to
> >> convert this particular algorithm just for the sake of it. The update
> >> to dominator trees is only intended to allow other, more complex
> >> algorithms to be built on top of it using dynamic polymorphism.)
> >>
> >> If you're looking for an example of an algorithm that is written using
> >> dynamic polymorphism, you could take a look here:
> >> https://github.com/nhaehnle/llvm-project/blob/controlflow-wip-v7/llvm/lib/Analysis/GenericUniformAnalysis.cpp
> >>
> >>
> >>
> >>
> >> >> > I'd really like to see examples like this ^ using the different abstractions under consideration here (classic GraphTraits, CfgTraits dynamic and static typed, perhaps what a static API would look like if it wasn't trying to be dynamic API compatible).
> >> >>
> >> >> I don't really care that much about what the static side of things
> >> >> looks like. CfgTraits is the way it is on purpose because I felt it
> >> >> would be useful to have a homogenous view of what generic aspects of
> >> >> IR will ultimately be exposed by the CfgInterface. In the absence of
> >> >> compiler-enforced concepts, that feels helpful to me. If you put
> >> >> yourself in the shoes of an author of an algorithm that is supposed to
> >> >> be generic over IRs,
> >> >
> >> >
> >> > Perhaps that's part of my confusion - generic over IRs sounds like a far richer/more complicated API than generic over CFGs.
> >>
> >> Yes, indeed :)
> >>
> >>
> >> >> you'll realize that it's great to have some easy
> >> >> way of discovering which things really _are_ generic over IRs.
> >> >
> >> >
> >> > Writing out the concept would be a good start, I think, same as the C++ standard does for containers. (not to the kind of C++ language-lawyerly level of detail, but the basic summary of the conceptual API - independent of any of the implementation choices so far considered - this is why I'd really like to see especially trivial/mock use cases, yeah, they won't be representative of the full complexity of the API in use, but give a sense of what the semantics of the API are so we can consider different syntactic/implementation choices)
> >>
> >> The complete set of operations we seem to need is in CfgInterface and
> >> CfgPrinter. I do have some further development on it, where they look
> >> as follows:
> >>
> >> class CfgInterface {
> >>   virtual void anchor();
> >>
> >> public:
> >>   virtual ~CfgInterface() = default;
> >>
> >>   /// Escape-hatch for obtaining a printer e.g. in debug code. Prefer to
> >>   /// explicitly pass a CfgPrinter where possible.
> >>   virtual std::unique_ptr<CfgPrinter> makePrinter() const = 0;
> >>
> >>   virtual CfgParentRef getBlockParent(CfgBlockRef block) const = 0;
> >>
> >>   virtual void appendBlocks(CfgParentRef parent,
> >>                             SmallVectorImpl<CfgBlockRef> &list) const = 0;
> >>
> >>   virtual bool comesBefore(CfgInstructionRef lhs,
> >>                            CfgInstructionRef rhs) const = 0;
> >>
> >>   virtual void appendPredecessors(CfgBlockRef block,
> >>                                   SmallVectorImpl<CfgBlockRef> &list) const = 0;
> >>   virtual void appendSuccessors(CfgBlockRef block,
> >>                                 SmallVectorImpl<CfgBlockRef> &list) const = 0;
> >>   virtual ArrayRef<CfgBlockRef>
> >>   getPredecessors(CfgBlockRef block,
> >>                   SmallVectorImpl<CfgBlockRef> &store) const = 0;
> >>   virtual ArrayRef<CfgBlockRef>
> >>   getSuccessors(CfgBlockRef block,
> >>                 SmallVectorImpl<CfgBlockRef> &store) const = 0;
> >>
> >>   virtual void appendBlockDefs(CfgBlockRef block,
> >>                                SmallVectorImpl<CfgValueRef> &list) const = 0;
> >>   virtual CfgBlockRef getValueDefBlock(CfgValueRef value) const = 0;
> >> };
> >>
> >> class CfgPrinter {
> >>   virtual void anchor();
> >>
> >> protected:
> >>   const CfgInterface &m_iface;
> >>
> >>   CfgPrinter(const CfgInterface &iface) : m_iface(iface) {}
> >>
> >> public:
> >>   virtual ~CfgPrinter() {}
> >>
> >>   const CfgInterface &interface() const { return m_iface; }
> >>
> >>   virtual void printBlockName(raw_ostream &out, CfgBlockRef block) const = 0;
> >>   virtual void printValue(raw_ostream &out, CfgValueRef value) const = 0;
> >>   virtual void printInstruction(raw_ostream &out,
> >>                                 CfgInstructionRef instruction) const = 0;
> >>
> >>   Printable printableBlockName(CfgBlockRef block) const {
> >>     return Printable(
> >>         [this, block](raw_ostream &out) { printBlockName(out, block); });
> >>   }
> >>   Printable printableValue(CfgValueRef value) const {
> >>     return Printable(
> >>         [this, value](raw_ostream &out) { printValue(out, value); });
> >>   }
> >>   Printable printableInstruction(CfgInstructionRef instruction) const {
> >>     return Printable([this, instruction](raw_ostream &out) {
> >>       printInstruction(out, instruction);
> >>     });
> >>   }
> >> };
> >>
> >> Iterating over this design in-tree seems perfectly viable to me.
> >>
> >>
> >> >> The
> >> >> comparison with containers is slightly lacking because the potential
> >> >> surface area is so much larger. But perhaps we can iterate on the
> >> >> design and find a better way.
> >> >
> >> >
> >> > The larger surface area somewhat concerns me, and is why I'd like to see more example usage to get a better sense of what this is expected to abstract over now and in the future.
> >>
> >> See the example GenericUniformAnalysis I linked to above.
> >
> >
> > I think some smaller examples of the different API options would be really helpful for me to understand the tradeoffs here.
> >
> >>
> >>
> >>
> >> >> This is kind of a restatement of what I wrote above: If we can get
> >> >> general agreement in the community that there is a desire that the
> >> >> different IRs in LLVM follow common concepts "directly", then we can
> >> >> iterate towards that over time. I'm personally convinced that unifying
> >> >> the various IRs is a worthy goal (the introduction of MLIR really was
> >> >> an unfortunate step backwards as far as that is concerned), it just
> >> >> feels like it'd be a thankless project because it would go through
> >> >> _many_ reviews that feel like the one that triggered this thread.
> >> >
> >> >
> >> > Perhaps - though getting design review/directional buy-in first can go a long way to making incremental reviews much less contentious/easier to commit. And a design that can be implemented via incremental improvement can also mean smaller patches that are easier to review/commit.
> >>
> >> If people were paying attention, sure :) Unfortunately, that's not how
> >> things seem to work in practice...
> >>
> >> I'm all for smaller patches as well, which is part of why I think
> >> in-tree iteration is better.
> >
> >
> > I meant for a different direction - if we got buy-in for making LLVM IR and MIR implement the same C++ concept (& wrote down what that concept was - which APIs, how they look textually) then I think it would be relatively easy to get the small/mechanical renaming patches reviewed. Incremental from day 1, rather than a big new abstraction first and incremental changes later.
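> >
> > By "wrote down what that concept was" I mean enough that someone could write code like this against either IR (sketch only, assuming the agreed spelling is a free successors(block) function):
> >
> >   template <typename BlockT>
> >   bool isSuccessor(const BlockT *block, const BlockT *candidate) {
> >     // The same spelling would work for llvm::BasicBlock and
> >     // MachineBasicBlock once both implement the written-down concept.
> >     for (const BlockT *succ : successors(block))
> >       if (succ == candidate)
> >         return true;
> >     return false;
> >   }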
> >
> > - Dave
> >
> >>
> >>
> >> Cheers,
> >> Nicolai
> >>
> >>
> >>
> >>
> >> >
> >> >>
> >> >> Even
> >> >> without a dysfunctional review process it'd be a long refactoring, and
> >> >> some sort of trait class is unavoidable at least for now. At a
> >> >> minimum, something is needed to abstract over the large differences
> >> >> between SSA value representations.
> >> >>
> >> >> Cheers,
> >> >> Nicolai
> >> >>
> >> >> [0] What I'm _not_ going to do is write patches based on vague guesses
> >> >> of what other people might want. I don't want even more changes
> >> >> hanging in Phabricator limbo for months. Be explicit about what it is
> >> >> that you want, and we can make progress.
> >> >>
> >>
> >>
> >>
>
>
>



-- 
Lerne, wie die Welt wirklich ist,
aber vergiss niemals, wie sie sein sollte.

