<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">On Wed, Apr 22, 2015 at 11:44 PM, Andrew Trick <span dir="ltr"><<a href="mailto:atrick@apple.com" target="_blank">atrick@apple.com</a>></span> wrote:<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">This feature will keep being requested. I agree LLVM should support it, and am happy to see it being done right.</blockquote><div><br></div><div>+1</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">
> I plan to break the design into two parts, roughly following the<br>
> statepoint philosophy:<br>
><br>
> # invokable @llvm.(load|store)_with_trap intrinsics<br>
><br>
> We introduce two new intrinsic families<br>
><br>
> T @llvm.load_with_trap(T*) [modulo name mangling]<br>
> void @llvm.store_with_trap(T, T*) [modulo name mangling]<br>
><br>
> They cannot be `call`ed, they can only be `invoke`d.<br></span></blockquote><div><br></div><div>Why not allow non-nounwind calls, in other words, an intrinsic call that may throw?</div><div><br></div><div>In most languages with implicit null checks, there are far more functions that do field accesses and method calls than there are functions that catch exceptions. The common case is that the frame with the load will have nothing to do other than propagate the exception to the parent frame, and we should allow the runtime to handle that efficiently.</div><div><br></div><div>Essentially, in this model, the signal handler is responsible for identifying the signal as a null pointer exception (i.e. SIGSEGVs on a small pointer value with a PC in code known to use this EH personality) and transitioning to the exception handling machinery in the language runtime.</div><div><br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">
> Semantically, they try to load from or store to the pointer passed to<br>
> them as normal load / store instructions do. @llvm.load_with_trap<br>
> returns the loaded value on the normal return path. If the load or<br>
> store traps then they dispatch to their unwind destination. The<br>
> landingpad for the unwind destination can only be a cleanup<br>
> landingpad, and the result of the landingpad instruction itself is<br>
> always undef. The personality function in the landingpad instruction<br>
> is ignored.<br></span></blockquote><div><br></div><div>The landingpad personality normally controls what kind of EH tables are emitted, so if you want something other than the __gxx_personality_v0 LSDA table, you could invent your own personality and use that to control what gets emitted. This might be useful for interoperating with existing language runtimes.</div><div><br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">
> These intrinsics require support from the language runtime to work.<br>
> During code generation, the invokes are lowered into normal load or<br>
> store instructions, followed by a branch (explicit `jmp` or just<br>
> fall-through) to the normal destination. The PC for the unwind<br>
> destination is recorded in a side-table along with the PC for the load<br>
> or store. When a load or store traps or segfaults at runtime, the<br>
> runtime searches this table to see if the trap is from a PC recorded<br>
> in the side-table. If so, the runtime jumps to the unwind<br>
> destination, otherwise it aborts.<br>
<br>
</span>The intrinsics need to be lowered to a pseudo instruction just like patchpoint (so that a stackmap can be emitted). In my mind, the real issue here is how to teach this pseudo instruction to emit the proper load/store for the target.</blockquote><div><br></div><div>Does it really have to be a per-target pseudo? The way I see it, we can handle this all in selection dag. All we need to do is emit the before label, the load/store operation, and the end label, and establish control dependence between them all to prevent folding. Does that seem reasonable, or is this an overly simplistic perspective? :-)</div><div><br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">
> Note that the signal handler / runtime do not themselves raise<br>
> exceptions at the ABI level (though they do so conceptually), but the<br>
> landing pad block can raise one if it wants to.<br>
><br>
> The table mapping load/store PCs to unwind PCs can be reported to the<br>
> language runtime via an __llvm_stackmaps-like section. I am strongly<br>
> in favor of making this section as easy to parse as possible.<br>
<br>
</span>Let’s just be clear that it is not recommended for the frontend to produce these intrinsics. They are a compiler backend convenience. (I don’t want InstCombine or any other standard pass to start trafficking in these.)</blockquote><div><br></div><div>Would you be OK with simply documenting that these intrinsics are optimization-hostile, in the same way that early safepoint insertion is? There are some language constructs (__try / __except) that allow catching memory faults like this. Such constructs are rare and don't really need to be optimized. I just want to make sure that mid-level optimizations don't actively break these.</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">
> Running this pass sufficiently late in the optimization pipeline will<br>
> allow for all the usual memory related optimization passes to work as<br>
> is -- they won't have to learn about the special semantics for the new<br>
> (load|store)_with_trap intrinsics to be effective.<br>
<br>
</span>Good. This is a codegen feature. We can’t say that enough. If you really cared about the best codegen, this would be done in machine IR after scheduling and target LoadStore opts.</blockquote><div><br></div><div>I agree, with the caveat above. Mid-level passes shouldn't actively break these intrinsics.</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">
> This pass will have to be a profile-guided optimization pass for<br>
> fundamental reasons: implicit null checks are a pessimization if even<br>
> a small fraction of them fail. Typically, language<br>
> runtimes that use page-fault based null checks recompile methods with<br>
> failed implicit null checks to use an explicit null check instead (e.g. [2]).<br>
<br>
</span>I don’t think making it profile-guided is important. Program behavior can change after compilation and you’re back to the same problem. I think recovering from repeated traps is important. That’s why you need to combine this feature with either code invalidation points or patching implemented via llvm.stackmap, patchpoint, (or statepoint) — they’re all the same thing.<br>
<span class=""><br>
> What do you think? Does this make sense?<br>
<br>
</span>Well, you need the features that patchpoint gives you (stackmap entries) and you’ll need to use patchpoints or stackmaps anyway for invalidation or patching. So why are you bothering with a totally new, independent intrinsic? Why not just extend the existing intrinsics? We could have a variant that<br>
<br>
- emits a load instead of a call<br>
<br>
- looks at the landing pad to generate a special stackmap entry in addition to the normal exception table (I don’t even see why you need this, except that the runtime doesn’t know how to parse an exception table.)</blockquote></div></div></div>