[LLVMdev] llvm.meta (was Rotated loop identification)

Chandler Carruth chandlerc at google.com
Sat Sep 7 12:38:52 PDT 2013

On Mon, Aug 19, 2013 at 10:06 PM, Andrew Trick <atrick at apple.com> wrote:

> Metadata is a better approach because it doesn’t interfere with
> optimizations that are free to ignore it.

But in practice, they aren't going to be free to ignore it.

If you have metadata which encodes an invariant (in the most general sense;
lifetime intrinsics currently encode an invariant, branch weights do not)
that an optimization pass can use to transform instructions in a way that
would not otherwise be safe or well defined, then you have a new problem:
what happens when an unrelated transform in some other pass obliquely
invalidates that invariant? This is equally true of an intrinsic which
encodes the invariant.
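
To make the hazard concrete, here is a sketch in 2013-era typed-pointer IR
syntax; the !assume.aligned metadata kind is invented for illustration, not
real:

```llvm
; Hypothetical: an alignment invariant attached directly to a load via
; a made-up !assume.aligned metadata kind.
define i32 @f(i32* %p) {
entry:
  %v = load i32* %p, !assume.aligned !0
  ret i32 %v
}
!0 = metadata !{i32 16}

; The hazard: a pass that rewrites the address, e.g. to
;   %q = getelementptr i32* %p, i64 1
;   %v = load i32* %q, !assume.aligned !0
; and naively carries the metadata along now asserts 16-byte alignment
; of %q, which no longer holds -- a silent miscompile.
```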

However, there are, generally speaking, two ways to approach this: 1) use a
construct that optimizers will treat conservatively, and explicitly handle
the construct in the places where it is safe (and worthwhile) to do so;
2) use a construct that the optimizers will ignore, and explicitly update
the construct in all of the places where it ceases to be correct. Metadata
is a design which requires #2, and intrinsics are a design which requires
#1. The question is which is better.

How my pro/con list breaks down for these approaches:
pro 1: very low probability of producing an invalid program due to missing
handling for the invariant in some pass
pro 1: can teach passes about the invariant lazily as a concrete need
arises, no need to be comprehensive
con 1: will impose a tax on the use of an invariant due to added entries in
the Use graph.

pro 2: zero-overhead on the Use graph or optimizers which don't care
con 2: requires comprehensive audit of transformations for cases where they
invalidate the invariant, *for each invariant*
con 2: when we fail to catch an invalidation, we miscompile the program
rather than failing to optimize the program

I'm strongly in favor of the first strategy for these reasons. Perhaps
there are other reasons I've missed?
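
Here is the contrast as a sketch, again in 2013-era syntax; the intrinsic
name follows the proposal quoted below, and the !aligned metadata kind is
invented for illustration:

```llvm
; Strategy #1: the intrinsic call is a real use of %ptr in the Use graph.
; A pass that knows nothing about it must treat %ptr conservatively, so a
; missed update costs optimization, not correctness.
declare void @llvm.assume.aligned(i8*, i32)

define void @g(i8* %ptr) {
entry:
  call void @llvm.assume.aligned(i8* %ptr, i32 16)
  ret void
}

; Strategy #2: the metadata hangs off the instruction and creates no use.
; A pass that clones or rewrites the load can silently carry a now-invalid
; assertion along with it.
define i8 @h(i8* %ptr) {
entry:
  %v = load i8* %ptr, !aligned !0
  ret i8 %v
}
!0 = metadata !{i32 16}
```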

> Yes, if the invariant expression requires IR values and real uses, then
> that defeats the purpose. I prefer a meta-data language to the alternative
> of ephemeral expressions. I could be convinced either way. I just think it
> will be confusing to mix ephemeral expressions with program logic and
> impractical to make it transparent to optimizations. Not the end of the
> world though, and I’ll listen to any strong arguments to the contrary.

I've given my argument above. I will add that there is one important
assumption underlying my argument that affects the scaling factors: I'm
assuming that there will be relatively few values in the IR which are the
subject of invariants of any form. I also assume that there are likely to
be a reasonable number of different invariants we want to provide. Were we
to have only 1 or 2 invariant forms, and to apply them to a large
percentage of the values, perhaps strategy #2 would be better, although it
would still seem questionable to me. Debugging a miscompile is vastly
harder than debugging *this* kind of missed optimization (i.e., a
simplification that we can see wasn't performed; no micro-architectural
details here).

That said, I'm not sure email is conveying my thoughts here effectively.
Let me know if this would be a good thing for us to talk about in person at
some point. (Hopefully it doesn't need to wait for the dev mtg...)

> On the other hand, when I started down this road I was motivated by a
> desire to implement support for __builtin_assume_aligned. I would still
> like to do this, or something like this to get equivalent functionality.
> The idea of metadata references seems like it might work well for alignment
> assumptions, as a restricted special case. Maybe something like this:
> tail call void @llvm.assume.aligned(metadata !{i32* %ptr}, i32 16)
> I think this concept is sound except for the problem of enforcing SSA
> properties. Do meta-data operands need to dominate? If so, we need to fix
> SSA updater to deal with this. Preserving the meta-data after code
> duplication would effectively require a meta-phi.

Here is a great example of what I described above. I think that the cost of
updating the passes to correctly model invariants will be significantly
higher than updating them to disregard invariant uses of the values.
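
For instance (a sketch only; "metaphi" is not real IR), once a block
containing the quoted call is duplicated, two copies of %ptr can reach a
merge point:

```llvm
; After duplication, %ptr.a and %ptr.b both reach the merge block.
; Keeping the metadata reference well formed would need something like:
;
;   merge:
;     %ptr.m = metaphi i32* [ %ptr.a, %dup1 ], [ %ptr.b, %dup2 ]
;     tail call void @llvm.assume.aligned(metadata !{i32* %ptr.m}, i32 16)
;
; i.e., an SSA construct that exists only for metadata operands -- new
; machinery in the SSA updater, versus teaching passes to ignore an
; ordinary use.
```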

> I’m a bit less enthusiastic about meta-intrinsics after realizing that the
> only current example, dbg.value, is a poor use of the concept. It should
> have been meta-data attached directly to the value with source line info
> instead.
> If we could demonstrate the advantage of converting lifetime intrinsics to
> meta-instrinsics (not real uses), then I’d be all for it.

Having worked closely with the lifetime intrinsics, I've found the cost of
handling them in their current form to be *really* low. I suspect meta-SSA
markers would be significantly more complex. Happy to be shown wrong here
as well, but currently, lifetime markers seem really easy to handle.
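
For reference, the lifetime markers in their existing (circa-2013) form
take the pointer as an ordinary i8* operand, which is what keeps them
cheap: every pass already knows how to be conservative around an unknown
use.

```llvm
; The 2013-era lifetime intrinsics: real i8* uses of the alloca.
declare void @llvm.lifetime.start(i64, i8* nocapture)
declare void @llvm.lifetime.end(i64, i8* nocapture)

define void @f() {
entry:
  %buf = alloca [16 x i8]
  %p = bitcast [16 x i8]* %buf to i8*
  call void @llvm.lifetime.start(i64 16, i8* %p)
  ; ... use %buf here ...
  call void @llvm.lifetime.end(i64 16, i8* %p)
  ret void
}
```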
