[llvm-commits] [PATCH] Invariants (and Assume Aligned) - LLVM

Hal Finkel hfinkel at anl.gov
Thu Dec 6 14:57:30 PST 2012


----- Original Message -----
> From: "Dan Gohman" <dan433584 at gmail.com>
> To: "Hal Finkel" <hfinkel at anl.gov>
> Cc: llvm-commits at cs.uiuc.edu, reviews+D150+public+94bd4758cb777256 at llvm-reviews.chandlerc.com
> Sent: Thursday, December 6, 2012 3:54:35 PM
> Subject: Re: [llvm-commits] [PATCH] Invariants (and Assume Aligned) - LLVM
> 
> On Thu, Dec 6, 2012 at 12:51 PM, Hal Finkel <hfinkel at anl.gov> wrote:
> > ----- Original Message -----
> >> From: "Dan Gohman" <dan433584 at gmail.com>
> >> To: "Hal Finkel" <hfinkel at anl.gov>
> >> Cc: llvm-commits at cs.uiuc.edu,
> >> reviews+D150+public+94bd4758cb777256 at llvm-reviews.chandlerc.com
> >> Sent: Thursday, December 6, 2012 12:56:28 PM
> >> Subject: Re: [llvm-commits] [PATCH] Invariants (and Assume
> >> Aligned) - LLVM
> >>
> >> On Wed, Dec 5, 2012 at 8:15 PM, Hal Finkel <hfinkel at anl.gov>
> >> wrote:
> >> >>
> >> >> Note that this approach doesn't automatically eliminate the problem
> >> >> of anchoring the intrinsic with respect to the surrounding control
> >> >> flow and side effects. Presumably, it would still be useful to let
> >> >> the intrinsic appear to have side effects to achieve this, as it
> >> >> does in the current patch. That will interfere with GVN, DSE, LICM,
> >> >> and many other optimizations, but these problems can be
> >> >> special-cased around.
> >> >
> >> > The objections to my first patch, which was implemented along these
> >> > lines, were that there were too many special cases (even in that
> >> > patch, and I had only touched some of them). In some sense, doing
> >> > this implements a new kind of bitcast (albeit one with which it is
> >> > less convenient to work). On the other hand, perhaps my original
> >> > patch simply had a lot of special cases without a sufficient upside
> >> > (being that it applied only to the alignment case). With invariants,
> >> > the cost/benefit analysis might be different.
> >> >
> >> > To propose an alternative, we could consider the invariant to be
> >> > solely a frontend matter, and lower as:
> >> >
> >> > (things before)
> >> > __builtin_assume(x);
> >> > (things after)
> >> >
> >> > as
> >> >
> >> > (things before)
> >> > if (x) {
> >> >   (things after)
> >> > } else
> >> >   unreachable;
> >> >
> >> > This, of course, may have its own issues ;)
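
(For concreteness, a minimal IR sketch of that lowering; the function, block, and value names here are only illustrative:)

  define void @f(i32 %x) {
  entry:
    ; (things before)
    %tobool = icmp ne i32 %x, 0
    br i1 %tobool, label %assume.true, label %assume.false

  assume.true:
    ; (things after)
    ret void

  assume.false:
    unreachable
  }
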
> >>
> >> This is, of course, not a new idea, and it does indeed have its own
> >> issues ;-).
> >>
> >> >> so they can hold early loop exits ephemerally live, which means
> >> >> they could complicate loop optimization. It's a design principle in
> >> >> LLVM that the main middle-end optimizations work to enable other
> >> >> optimizations. Introducing artificial constructs which hold values
> >> >> and branches and stores live interferes with this in a very
> >> >> philosophical way.
> >> >
> >> > You're right; and having the invariants could certainly interfere
> >> > with optimization. One way of looking at it is this: Representing
> >> > the invariant using IR will naturally keep the invariant itself from
> >> > being optimized away. This is a good thing when the invariant is
> >> > assisting with "large" optimizations: higher-level loop
> >> > transformations, autovectorization, conditional-branch
> >> > short-circuiting, etc. This is probably not a good thing after those
> >> > kinds of transformations have run and we're more concerned with
> >> > lower-level transformations. A good trade-off may be that at some
> >> > point, unless we're doing LTO, we should drop the invariants.
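
(To make the "kept live" point concrete, a sketch using a hypothetical @llvm.invariant(i1) intrinsic -- the name and signature are only illustrative, not necessarily what the patch ends up using:)

  declare void @llvm.invariant(i1)

  define i64 @g(i64 %n, i64 %m) {
  entry:
    ; %sum and %cmp have no users other than the invariant call, so the
    ; call is what keeps them alive: useful input while the high-level
    ; passes run, dead weight afterward.
    %sum = add i64 %n, %m
    %cmp = icmp sgt i64 %sum, 0
    call void @llvm.invariant(i1 %cmp)
    ret i64 %n
  }
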
> >>
> >> Another perspective is that many of these higher-level optimizations
> >> are naturally fragile, and so it's important to do as much clean-up
> >> and canonicalization optimization as possible before they run, to give
> >> the high-level passes the best opportunity possible.
> >
> > I am not sure this is generally true. For one thing, higher-level
> > optimizations generally need some form of symbolic reasoning, and
> > thus generally need to build their own internal normalized
> > representations. In our case we have SE; and my feeling is that by
> > using SE we become less sensitive to these kinds of normalization
> > issues. Furthermore, some of these low-level optimizations
> > actually end up confusing SE, because SE does not deal explicitly
> > with bitwise operations (whereas it is quite happy with add, sub,
> > mul, etc.). What optimizations did you have in mind?
> 
> ScalarEvolution really is intended to be run after Instcombine.
> Instcombine's preference for or over add in its canonical form is not
> universally best, but in exchange for living with that issue,
> ScalarEvolution gets the benefit of a large number of unambiguously
> useful expression simplifications. ScalarEvolution isn't meant to
> duplicate all of instcombine's simplification logic for itself.
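
(A small illustration of the or-over-add canonical form mentioned above; whether instcombine fires here depends on its known-bits reasoning, so treat it as a sketch:)

  define i64 @idx(i64 %i) {
  entry:
    %s = shl i64 %i, 2      ; low two bits of %s are known to be zero
    %a = add i64 %s, 1      ; instcombine prefers: %a = or i64 %s, 1
    ret i64 %a
  }
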
> 
> Also, ScalarEvolution benefits from running after LoopSimplify, and
> LoopSimplify and ScalarEvolution both benefit from being run after
> SimplifyCFG, and ScalarEvolution, LoopSimplify, and SimplifyCFG all
> benefit from being run after Instcombine.

Fair enough; this is all true. Obviously, as you've also said, we need to think carefully about the use cases. Nevertheless, we don't automatically lose Instcombine, or perhaps even most of it; exactly how much we do lose for any given class of use cases is certainly an open question.

I think that the more problematic issue, as you've pointed out, is the scope ambiguity. The solution you propose, which is in line with how I initially proposed to implement the alignment-assumptions intrinsic, resolves these issues; however, it will require adding a lot of code to different parts of the transformation and analysis framework (essentially, any place that deals explicitly with BitCast should deal with this, plus some other places that deal with intrinsics of other kinds). Perhaps there is no less-invasive way. I'm certainly open to suggestions.
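
(A sketch of the kind of form being discussed -- the intrinsic name and signature are purely illustrative: the assumption produces the pointer that the uses consume, so it is anchored by dataflow much like a bitcast, and every place that currently looks through bitcasts would need to learn to look through it as well:)

  declare i8* @llvm.assume.aligned(i8*, i32)
  declare void @use(i32*)

  define void @h(i8* %p) {
  entry:
    ; The use is reached only through %pa, so the alignment fact travels
    ; with the dataflow rather than relying on side effects or control
    ; flow for anchoring.
    %pa = call i8* @llvm.assume.aligned(i8* %p, i32 16)
    %pi = bitcast i8* %pa to i32*
    call void @use(i32* %pi)
    ret void
  }
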

Thanks again,
Hal

> 
> Dan
> 

-- 
Hal Finkel
Postdoctoral Appointee
Leadership Computing Facility
Argonne National Laboratory


