[LLVMdev] llvm.meta (was Rotated loop identification)

Andrew Trick atrick at apple.com
Mon Sep 9 14:05:17 PDT 2013


On Sep 7, 2013, at 12:38 PM, Chandler Carruth <chandlerc at google.com> wrote:

> 
> On Mon, Aug 19, 2013 at 10:06 PM, Andrew Trick <atrick at apple.com> wrote:
> Metadata is a better approach because it doesn’t interfere with optimizations that are free to ignore it.
> 
> But in practice, they aren't going to be free to ignore it.
> 
> If you have metadata which encodes an invariant (in the most general sense; lifetime intrinsics currently encode an invariant, branch weights do not) that an optimization pass can use to transform instructions in a way that would not otherwise be safe or well defined, then you have a new problem: what happens when an optimization pass does some unrelated transform that obliquely invalidates the invariant? This is equally true of an intrinsic which encodes the invariant.
> 
> However, generally speaking there are two ways to approach this: 1) use a construct that optimizers will treat conservatively, and explicitly handle it in the places where doing so is safe (and worthwhile); 2) use a construct that optimizers will ignore, and explicitly update it in all of the places where it ceases to be correct. Metadata is a design which requires #2, and intrinsics are a design which requires #1. The question is which is better.
> 
> How my pro/con list breaks down for these approaches:
> pro 1: very low probability of producing an invalid program due to missing handling for the invariant in some pass
> pro 1: can teach passes about the invariant lazily as a concrete need arises, no need to be comprehensive
> con 1: will impose a tax on the use of an invariant due to added entries in the Use graph.
> 
> pro 2: zero overhead on the Use graph and on optimizers which don't care
> con 2: requires comprehensive audit of transformations for cases where they invalidate the invariant, *for each invariant*
> con 2: when we fail to catch an invalidation, we miscompile the program rather than failing to optimize the program
> 
> I'm strongly in favor of the first strategy for these reasons. Perhaps there are other reasons I've missed?

I see your point with regard to lifetime markers. I can imagine an IR pass merging allocas and invalidating the markers. Do you know of any existing passes that can invalidate them?
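
For concreteness, here is a minimal sketch of the markers as they exist today, i.e. real uses of the pointer (the function, types, and sizes are illustrative):

  define void @f() {
  entry:
    %a = alloca [16 x i8]
    %p = getelementptr inbounds [16 x i8]* %a, i32 0, i32 0
    call void @llvm.lifetime.start(i64 16, i8* %p)
    ; ... stores/loads through %a ...
    call void @llvm.lifetime.end(i64 16, i8* %p)
    ret void
  }

  declare void @llvm.lifetime.start(i64, i8* nocapture)
  declare void @llvm.lifetime.end(i64, i8* nocapture)

A pass that merged %a with another alloca would have to update or drop these calls; otherwise the end marker could declare the merged storage dead while the other value is still live.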

A value invariant is quite different, because we're *only* constraining the value itself at that point in the CFG, not the semantics of the data pointed to at other places in the CFG. The question is: can RAUW change the value? I've been assuming the answer is "no". It seems like a terrible thing to do, but there's no rule against it and no way to verify it.

OTOH, and more importantly, we should avoid supporting new IR constructs for marginal features. So, if we need to handle transparent lifetime-marker uses, we should use the same mechanism for value invariants.

>  
> Yes, if the invariant expression requires IR values and real uses, then that defeats the purpose. I prefer a meta-data language to the alternative of ephemeral expressions, though I could be convinced either way. I just think it will be confusing to mix ephemeral expressions with program logic and impractical to make them transparent to optimizations. Not the end of the world though, and I’ll listen to any strong arguments to the contrary.
> 
> I've given my argument above. I will add that there is one important assumption in my argument that impacts the scaling factors: I'm assuming that there will be relatively few values in the IR which are the subject of invariants of any form. I also assume that there are likely to be a reasonable number of different invariants we want to provide. Were we to have only 1 or 2 invariant forms, and to apply them to a large percentage of the values, perhaps strategy #2 would be better, although it would still seem questionable to me. Debugging a miscompile is far harder than debugging *this* kind of missed optimization (i.e., a simplification that we can see wasn't performed; no micro-arch details here).

I can imagine a domain that uses a few kinds of invariants, like type checks, liberally. But I don't know of any plans to make use of that, and it could be argued that LLVM IR is not the right level for this sort of invariant anyway.

> That said, I'm not sure if email is conveying my thoughts here effectively. Let me know if this would be a good thing for us to talk about in person at some point? (Hopefully it doesn't need to wait for the dev mtg...)
>  
> 
>> On the other hand, when I started down this road I was motivated by a desire to implement support for __builtin_assume_aligned. I would still like to do this, or something like this to get equivalent functionality. The idea of metadata references seems like it might work well for alignment assumptions, as a restricted special case. Maybe something like this:
>> 
>> tail call void @llvm.assume.aligned(metadata !{i32* %ptr}, i32 16)
> 
> I think this concept is sound except for the problem of enforcing SSA properties. Do meta-data operands need to dominate their uses? If so, we need to fix the SSA updater to deal with this. Preserving the meta-data after code duplication would effectively require a meta-phi.
> 
> Here is a great example of what I described above. I think that the cost of updating the passes to correctly model invariants will be significantly higher than updating them to disregard invariant uses of the values.

This problem arises from meta-data uses in general; dbg.value already has it. Treating them as strict SSA def-use edges is probably a bad idea. It would be interesting to see what would happen if dbg.value were handled the way the lifetime intrinsics are, in terms of both overhead and defeated optimizations.
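
For reference, a sketch of dbg.value's current form: the value operand is wrapped in metadata, so it is not a real use and does not appear in %x's use list (the variable descriptor !10 is illustrative):

  call void @llvm.dbg.value(metadata !{i32 %x}, i64 0, metadata !10)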

I think we’re getting very lucky with lifetimes because there’s not much we can do anyway to optimize an alloca that didn’t get SROA’d.

> I’m a bit less enthusiastic about meta-intrinsics after realizing that the only current example, dbg.value, is a poor use of the concept. It should have been meta-data attached directly to the value with source line info instead.
> 
> If we could demonstrate the advantage of converting lifetime intrinsics to meta-intrinsics (not real uses), then I’d be all for it.
> 
> Having worked closely with the lifetime intrinsics, the cost of handling them in their current form is *really* low. I suspect meta-SSA markers would be significantly more complex. Happy to be shown wrong here as well; currently, lifetime markers seem to be really easy to handle.


Until there is a known, serious problem with either of the following approaches, we should probably restrict meta-data to:

(1) MD nodes attached to concrete values.
(2) Real uses feeding "meta" intrinsics.
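
In IR terms, a sketch of the two forms, using !range metadata and a lifetime marker as stand-ins:

  ; (1) an MD node attached to a concrete value
  %v = load i32* %p, !range !0

  ; (2) a real use of %q feeding a "meta" intrinsic
  call void @llvm.lifetime.start(i64 4, i8* %q)

  !0 = metadata !{i32 0, i32 100}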

-Andy