<div dir="ltr"><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Aug 19, 2013 at 10:06 PM, Andrew Trick <span dir="ltr"><<a href="mailto:atrick@apple.com" target="_blank" class="cremed">atrick@apple.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div>Metadata is a better approach because it doesn’t interfere with optimizations that are free to ignore it.</div></blockquote>

But in practice, they aren't going to be free to ignore it.

If you have metadata which encodes an invariant (in the most general sense: lifetime intrinsics currently encode an invariant; branch weights do not) that lets an optimization pass transform instructions in a way that would not otherwise be safe or well defined, then you have a new problem: what happens when an optimization pass does some unrelated transform that obliquely invalidates that invariant? This is equally true of an intrinsic which encodes the invariant.
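
To make that failure mode concrete, here is a contrived sketch. The !aligned metadata kind is invented purely for illustration (nothing like it exists today), and an intrinsic-based encoding can be broken in exactly the same way:

  if.then:                          ; alignment was proven on this path only
    %a = load i32* %p, !aligned !0
    br label %end
  if.else:                          ; nothing is known about %p here
    %b = load i32* %p
    br label %end
  end:
    %v = phi i32 [ %a, %if.then ], [ %b, %if.else ]

  !0 = metadata !{i32 16}

A pass that merges the two loads into a single load in %end and keeps the metadata has silently asserted the invariant on a path where it never held, and nothing about load merging even looks related to alignment.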

However, generally speaking there are two ways to approach this:

1) Use a construct that optimizers must treat conservatively, and explicitly teach the passes where it is safe (and worthwhile) to handle the construct.

2) Use a construct that optimizers will ignore, and explicitly update the construct in every place where it ceases to be correct.

Metadata is a design which requires #2, and intrinsics are a design which requires #1. The question is which is better.
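
Spelled out in IR, the same alignment fact under the two strategies might look something like this (both spellings are hypothetical; the intrinsic name and metadata kind are made up):

  ; Strategy #1: an intrinsic with a real use of %ptr. Passes that don't
  ; know about it must be conservative around the call.
  call void @llvm.invariant.aligned(i8* %ptr, i32 16)

  ; Strategy #2: metadata hanging off an instruction. Passes are free to
  ; ignore it -- and equally free to invalidate it without noticing.
  %v = load i32* %ptr, !aligned !0
  !0 = metadata !{i32 16}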

How my pro/con list breaks down for these approaches:

pro 1: very low probability of producing an invalid program due to missing handling for the invariant in some pass
pro 1: we can teach passes about the invariant lazily, as a concrete need arises; no need to be comprehensive
con 1: imposes a tax on every use of the invariant, due to the added entries in the Use graph

pro 2: zero overhead on the Use graph and on optimizers which don't care
con 2: requires a comprehensive audit of transformations for the cases where they invalidate the invariant, *for each invariant*
con 2: when we fail to catch an invalidation, we miscompile the program rather than merely failing to optimize it

I'm strongly in favor of the first strategy for these reasons. Perhaps there are other reasons I've missed?

> Yes, if the invariant expression requires IR values and real uses, then
> that defeats the purpose. I prefer a meta-data language to the alternative
> of ephemeral expressions. I could be convinced either way. I just think it
> will be confusing to mix ephemeral expressions with program logic and
> impractical to make it transparent to optimizations. Not the end of the
> world though, and I’ll listen to any strong arguments to the contrary.

I've given my argument above. I will add that there is one important assumption in my argument that impacts the scaling factors: I'm assuming that relatively few values in the IR will be the subject of invariants of any form, while there are likely to be a fair number of different invariants we want to provide. Were we to have only one or two invariant forms and apply them to a large percentage of the values, perhaps strategy #2 would be better, although it would still seem questionable to me. Debugging a miscompile is vastly harder than debugging this kind of missed optimization (i.e., a simplification that we can see wasn't performed; no micro-architectural details involved).

That said, I'm not sure email is conveying my thoughts effectively here. Let me know if this would be a good thing for us to talk about in person at some point. (Hopefully it doesn't need to wait for the dev mtg...)

>> On the other hand, when I started down this road I was motivated by a
>> desire to implement support for __builtin_assume_aligned. I would still
>> like to do this, or something like this, to get equivalent functionality.
>> The idea of metadata references seems like it might work well for
>> alignment assumptions, as a restricted special case. Maybe something
>> like this:
>>
>> tail call void @llvm.assume.aligned(metadata !{i32* %ptr}, i32 16)
>
> I think this concept is sound except for the problem of enforcing SSA
> properties. Do meta-data operands need to dominate? If so, we need to fix
> the SSA updater to deal with this. Preserving the meta-data after code
> duplication would effectively require a meta-phi.

Here is a great example of what I described above. I think the cost of updating the passes to correctly model invariants will be significantly higher than the cost of updating them to disregard invariant uses of the values.
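
To make the meta-phi problem concrete with the proposed intrinsic: suppose a pass duplicates the block defining %ptr, so the definition splits into %ptr.l and %ptr.r. This is only a sketch of the state the IR would have to reach:

  merge:
    ; A real operand would get this phi from the SSA updater:
    %ptr.m = phi i32* [ %ptr.l, %left ], [ %ptr.r, %right ]
    tail call void @llvm.assume.aligned(metadata !{i32* %ptr.m}, i32 16)
    ; ...but nothing today inserts phis for metadata operands, so the
    ; metadata reference would be left pointing at a value that no longer
    ; dominates it -- hence the need for a "meta-phi".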

> I’m a bit less enthusiastic about meta-intrinsics after realizing that
> the only current example, dbg.value, is a poor use of the concept. It
> should have been meta-data attached directly to the value with source
> line info instead.
>
> If we could demonstrate the advantage of converting lifetime intrinsics
> to meta-intrinsics (not real uses), then I’d be all for it.

Having worked closely with the lifetime intrinsics, I can say that the cost of handling them in their current form is *really* low; they have proven easy to deal with. I suspect that meta-SSA markers would end up significantly more complex. Happy to be shown wrong here as well.
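
For reference, the markers today are ordinary intrinsic calls with a real use of the pointer, roughly:

  %buf = alloca [32 x i8]
  %p = bitcast [32 x i8]* %buf to i8*
  call void @llvm.lifetime.start(i64 32, i8* %p)   ; %buf becomes live
  ; ... uses of %buf ...
  call void @llvm.lifetime.end(i64 32, i8* %p)     ; %buf becomes dead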