On Oct 23, 2013, at 1:17 AM, Gaël Thomas <gael.thomas@lip6.fr> wrote:

> Hi all,
>
> I don't know if I understand everything, but it seems really interesting
> for a runtime developer; stackmap and patchpoint look perfect for a lot
> of optimizations :) I just have a few questions to verify that I
> understand what these stackmaps and patchpoints are, and I discuss the
> GC after.
>
> * I have a first very simple scenario (useful in vmkit). Let's imagine
> that we want to lazily build the layout of an object at runtime, i.e.,
> we don't know the layout of the object when we are emitting the code,
> and we want to access a field of this object identified by a symbol.
> If I understand correctly, we can use your stackmap to define the
> offset of this field and then patch the code that uses this offset?
> The machine code will look like mov offset(%rax), ..., and the stackmap
> will generate a map that contains the location of "offset" in the
> code? If that's the case, it's perfect.

I agree with Filip's response. I'm not sure exactly what you envision,
though.

Ignoring calling conventions for the moment, with the current
implementation you'll get something like this for
llvm.patchpoint(ID, 12, %resolveGetField, %obj):

mov (%rax), %rcx ; load object pointer
--- patchpoint ---
movabsq $resolveGetField, %r11
callq *%r11
--- end patchpoint ---
ret ; field value in %rax

The stack map will indicate that %rcx holds the object pointer.

You can then patch it with:

mov (%rax), %rcx ; load object pointer
--- patchpoint ---
mov offset(%rcx), %rax ; load field at offset
nop
nop...
--- end patchpoint ---
ret ; field value in %rax
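Spelled out with the llvm.experimental.* intrinsic names, a rough IR
sketch of that shorthand might look like the following. This is only an
illustration, not the proposal's exact syntax: the ID and function names
are placeholders, and %obj is recorded here as a live stack map operand
rather than passed as a call argument, so the resolver can locate it as
described above.

declare i64 @llvm.experimental.patchpoint.i64(i64, i32, i8*, i32, ...)
declare i64 @resolveGetField()

define i64 @getField(i8* %obj) {
entry:
  ; ID 1, 12 reserved bytes, zero calling-convention arguments; %obj is
  ; listed after the i32 0, so its location (e.g. %rcx above) is
  ; recorded in the stack map for the resolver and for the patched
  ; field load to use. The reserved bytes initially hold the call to
  ; the resolver and are later overwritten with the direct load.
  %target = bitcast i64 ()* @resolveGetField to i8*
  %val = call i64 (i64, i32, i8*, i32, ...)
             @llvm.experimental.patchpoint.i64(i64 1, i32 12, i8* %target,
                                               i32 0, i8* %obj)
  ret i64 %val
}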
> * Now, let's imagine that I want to lazily call a virtual method (aka a
> single-dispatch call in Java). I have two problems. First, I have to
> know the offset of the method in the virtual table (just like for
> virtual calls in C++). Second, the entry in the table should contain a
> stub able to link/compile the target method at runtime. And the stub
> has to know which object has received the call (which drives the
> method resolution and the update of the virtual table). With stackmaps
> and patchpoints, I can imagine something like this (in pseudo-llvm,
> without typing):
>
> %r0 = load %obj, 0 ; the virtual table is at offset 0
> %r1 = 0
> stackmap %r1, ID_OFFSET ; contains the offset of the target method in
>                         ; the virtual table
> %r2 = add %r1, %r0
> %r3 = load %r2
> patchpoint ID_CALL %r3, %obj, other parameters ; to find %obj in the stub
>
> I should be able to:
> - patch ID_OFFSET when I load the description of obj (before the call,
> when the object is allocated)
> - use ID_CALL to know which object is the target of the call in order
> to find the appropriate method. If that's the case, your patchpoint and
> safepoint are very interesting for vmkit. We just need a function to
> retrieve, in the caller, which patchpoint we are coming from (probably
> by inspecting the stack, we can find the program counter of the call
> site and then find the patchpoint descriptor?)

I think you're envisioning patching instructions that are emitted by
LLVM outside of a patchpoint's reserved space. While that is possible to
do using llvm.stackmap, it is not a good idea to assume anything about
LLVM instruction selection or scheduling.

As Filip said, just use a single patchpoint for the vtable call sequence
and only patch within the reserved bytes.

llvm.patchpoint(ID, nbytes, %resolveVCall, %obj):

Say the stack map locates %obj in %r1 and the return value in %r3; you
would simply patch 'nbytes' of code as follows:

--- patchpoint ---
mov (%r1), %r2 ; load vtable ptr
mov vtableofs(%r2), %r3
--- end patchpoint ---

You could potentially expose the vtable ptr load to LLVM IR.

Note that patching at llvm.stackmap (not llvm.patchpoint) would only be
done if you don't want to resume execution within the same compiled
function after reaching the stack map.
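A rough IR sketch of such a single-patchpoint dispatch follows. Again,
the ID, the 24-byte reservation, and the resolveVCall stub are
placeholder assumptions for illustration; %obj is recorded as a live
stack map operand so the stub can locate the receiver.

declare i64 @llvm.experimental.patchpoint.i64(i64, i32, i8*, i32, ...)
declare i64 @resolveVCall()

define i64 @invokeVirtual(i8* %obj) {
entry:
  ; One patchpoint covers the whole dispatch sequence. It initially
  ; calls the resolver stub; %obj appears after the i32 0 argument
  ; count, so its location is recorded in the stack map and the stub
  ; can find the receiver there. The runtime later rewrites the
  ; reserved bytes in place with the vtable-based sequence shown above.
  %stub = bitcast i64 ()* @resolveVCall to i8*
  %result = call i64 (i64, i32, i8*, i32, ...)
                @llvm.experimental.patchpoint.i64(i64 2, i32 24, i8* %stub,
                                                  i32 0, i8* %obj)
  ret i64 %result
}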
> * Now, for the GC: if I understand correctly, instead of declaring a
> variable as a root, you can declare the safepoints explicitly by using
> patchpoints with something like
>
> patchpoint ID_safepoint_17, suspendTheThreadForCollection, list of the
> allocas (or registers) that contain objects
>
> Then in suspendTheThreadForCollection, we can see that we are coming
> from safepoint_17 and then find the locations of the objects? If a
> patchpoint can work like this, it's probably a good building block for
> the GC.
>
> Currently, we have to declare the root objects with the root intrinsic,
> then add the appropriate safepoints (it's just a call to
> GCFunctionInfo.addSafePoint). As root objects are marked as roots,
> modifying GCFunctionInfo.addSafePoint to generate a patchpoint with all
> the GC roots as arguments (instead of using the current infrastructure)
> should not be difficult. And it probably means that the current GC
> infrastructure could use patchpoint as a backend.

Yes, but I think it's more robust to call addSafePoint during MC
lowering, when we know that register assignments can't change. I see two
options:

- Call addSafePoint() within the AsmPrinter, where we currently call
StackMaps::recordStackMap.

- Run addGCPasses() after addPreEmitPass() in Passes.cpp. I'm not sure
if there's a good reason we currently run it before block layout and
target-specific passes.

> The only problem that I see is that all the objects will be transmitted
> as arguments to suspendTheThreadForCollection, which is maybe not the
> best way to do that. Probably something like:
>
> safepoint ID_safepoint_17, list of allocas that contain objects
> patchpoint ID_safepoint_17, suspendTheThreadForCollection
>
> would be better, to avoid useless arguments?

No, just use patchpoint, passing '0' as the number of calling convention
arguments. All patchpoint operands after the arguments are identical to
stackmap operands:

patchpoint(ID_safepoint_17, suspendThread, 0, list of allocas)
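In IR, that could be sketched roughly as follows. The byte count is a
placeholder, and suspendThread and the two root allocas are stand-ins
from the discussion above; everything after the 'i32 0' is recorded in
the stack map but not passed to the callee.

declare void @llvm.experimental.patchpoint.void(i64, i32, i8*, i32, ...)
declare void @suspendThread()

define void @safepointSketch() {
entry:
  %root1 = alloca i8*
  %root2 = alloca i8*
  ; ... code that stores heap pointers into %root1 and %root2 ...
  %fn = bitcast void ()* @suspendThread to i8*
  ; ID 17, placeholder byte count, 0 calling-convention arguments; the
  ; trailing allocas are live (stack map) operands only, so their stack
  ; locations are recorded for the collector without being passed to
  ; suspendThread.
  call void (i64, i32, i8*, i32, ...)
      @llvm.experimental.patchpoint.void(i64 17, i32 16, i8* %fn, i32 0,
                                         i8** %root1, i8** %root2)
  ret void
}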
> See you,
> Gaël
>
> PS: just tell me if the code is already in trunk, because I would like
> to see if these intrinsics can work for vmkit :)

The implementation is not final, but a working patch set is up for
review. See http://llvm-reviews.chandlerc.com/D1996 and its dependent
patches.

-Andy

> 2013/10/23 Andrew Trick <atrick@apple.com>:
>> I'm moving this to a different thread. I think the newly proposed
>> intrinsic definitions and their current implementation are valuable
>> regardless of how this gets tied into GC...
>>
>> On Oct 22, 2013, at 6:24 PM, Philip R <listmail@philipreames.com> wrote:
>>
>> Adding Gael as someone who has previously discussed vmkit topics on
>> the list. Since I'm assuming this is where the GC support came from, I
>> wanted to draw this conversation to the attention of someone more
>> familiar with the LLVM implementation than myself.
>>
>> On 10/22/13 4:18 PM, Andrew Trick wrote:
>>
>> On Oct 22, 2013, at 3:08 PM, Filip Pizlo <fpizlo@apple.com> wrote:
>>
>> On Oct 22, 2013, at 1:48 PM, Philip R <listmail@philipreames.com> wrote:
>>
>> On 10/22/13 10:34 AM, Filip Pizlo wrote:
>>
>> On Oct 22, 2013, at 9:53 AM, Philip R <listmail@philipreames.com> wrote:
>>
>> On 10/17/13 10:39 PM, Andrew Trick wrote:
>>
>> This is a proposal for adding Stackmaps and Patchpoints to LLVM. The
>> first client of these features is the JavaScript compiler within the
>> open source WebKit project.
>>
>> I have a couple of comments on your proposal. None of these are major
>> enough to prevent submission.
>>
>> - As others have said, I'd prefer an experimental namespace rather
>> than a webkit namespace. (minor)
>> - Unless I am misreading your proposal, your proposed StackMap
>> intrinsic duplicates existing functionality already in LLVM. In
>> particular, much of the StackMap construction seems similar to the
>> safepoint mechanism used by the in-tree GC support (see
>> CodeGen/GCStrategy.cpp and CodeGen/GCMetadata.cpp). Have you examined
>> these mechanisms to see if you can share implementations?
>> - To my knowledge, there is nothing that prevents an LLVM optimization
>> pass from manufacturing new pointers which point inside an existing
>> data structure (e.g. an interior pointer to an array when blocking a
>> loop). Does your StackMap mechanism need to be able to inspect/modify
>> these manufactured temporaries? If so, I don't see how you could
>> generate an intrinsic which would include this manufactured pointer in
>> the live variable list. Is there something I'm missing here?
>>
>> These stackmaps have nothing to do with GC. Interior pointers are a
>> problem unique to precise copying collectors.
>>
>> I would argue that while the use of the stack maps might be different,
>> the mechanism is fairly similar.
>>
>> It's not at all similar. These stackmaps are only useful for
>> deoptimization, since the only way to make use of the live state
>> information is to patch the stackmap with a jump to a deoptimization
>> off-ramp. You won't use these for a GC.
>>
>> In general, if the expected semantics are the same, a shared
>> implementation would be desirable. This is more a suggestion for
>> future refactoring than anything else.
>>
>> I think that these stackmaps and GC stackmaps are fairly different
>> beasts. While it's possible to unify the two, this isn't the intent
>> here. In particular, you can use these stackmaps for deoptimization
>> without having to unwind the stack.
>>
>> I think Philip R is asking a good question. To paraphrase: if we
>> introduce a generically named feature, shouldn't it be generically
>> useful? Stack maps are used in other ways, and there are other kinds
>> of patching. I agree, and I think these are intended to be generically
>> useful features, but not necessarily sufficient for every use.
>>
>> Thank you for the restatement. You summarized my view well.
>>
>> The proposed stack maps are very different from LLVM's gcroot because
>> gcroot does not provide stack maps!
>> llvm.gcroot effectively designates a stack location for each root for
>> the duration of the current function, and forces the root to be
>> spilled to the stack at all call sites (the client needs to disable
>> StackColoring). This is really the opposite of a stack map, and I'm
>> not aware of any functionality that can be shared. It also requires a
>> C++ plugin to process the roots. llvm.stackmap generates data in a
>> section that MCJIT clients can parse.
>>
>> Er, I think we're talking past each other again. Let me lay out my
>> current understanding of the terminology and existing infrastructure
>> in LLVM. Please correct me where I go wrong.
>>
>> stack map - A mapping from "values" to storage locations. Storage
>> locations primarily take the form of registers or stack offsets, but
>> could in principle refer to other well-known locations (i.e. offsets
>> into thread-local state). A stack map is specific to a particular PC
>> and describes the state at that instruction only.
>>
>> In a precise garbage collector, stack maps are used to ensure that the
>> stack can be understood by the collector. When a stop-the-world
>> safepoint is reached, the collector needs to be able to identify any
>> pointers to heap objects which may exist on the stack. This explicitly
>> includes both the frame which actually contains the safepoint and any
>> caller frames back to the root of the thread. To accomplish this, a
>> stack map is generated at any call site and a stack map is generated
>> for the safepoint itself.
>>
>> In LLVM currently, the GCStrategy records "safepoints", which are
>> really points at which stack maps need to be remembered (i.e. calls
>> and actual stop-the-world safepoints). The GCMetadata mechanism gives
>> a generic way to emit the binary encoding of a stack map in a
>> collector-specific way. The stack maps currently supported by this
>> mechanism only allow abstract locations on the stack, which forces all
>> registers to be spilled around "safepoints" (i.e. calls and
>> stop-the-world safepoints). Also, the set of roots (which are recorded
>> in the stack map) must be provided separately using the gcroot
>> intrinsic.
>>
>> In code:
>> - GCPoint in llvm/include/llvm/CodeGen/GCMetadata.h describes a
>> request for a location with a stack map. The SafePoints structure in
>> GCFunctionInfo contains a list of these locations.
>> - The Ocaml GC is probably the best example of usage. See
>> llvm/lib/CodeGen/AsmPrinter/OcamlGCPrinter.cpp.
>>
>> Note: the summary of existing LLVM details above is based on reading
>> the code. I haven't actually implemented anything which uses this
>> mechanism yet. As such, take it with a grain of salt.
>>
>> That's an excellent description of stack maps, GCStrategy, and
>> safepoints. Now let me explain how I see it.
>>
>> GCStrategy provides layers of abstraction that allow plugins to
>> specialize GC metadata. Conceptually, a plugin can generate what looks
>> like stack map data to the collector. But there isn't any direct
>> support in LLVM IR for the kind of stack maps that we need.
>>
>> When I talk about adding stack map support, I'm really talking about
>> support for mapping values to registers, where the set of values and
>> their locations are specific to the "safepoint".
>>
>> We're adding an underlying implementation of per-safepoint live
>> values. There isn't a lot of abstraction built up around it. Just a
>> couple of intrinsics that directly expose the functionality.
>>
>> We're also approaching the interface very differently.
>> We're enabling an MCJIT client. The interface to the client is the
>> stack map format.
>>
>> In your change, you are adding a mechanism which is intended to enable
>> runtime calls and inline cache patching. (Right?) Your stack maps seem
>> to match the definition of a stack map I gave above and (I believe)
>> the implementation currently in LLVM. The only difference might be
>> that your stack maps are partial (i.e. might not contain all "values"
>> which are live at a particular PC) and your implementation includes
>> register locations, which the current implementation in LLVM does not.
>> One other possible difference: are you intending to include "values"
>> which aren't of pointer type?
>>
>> Yes, the values will be of various types (although only 32/64-bit
>> types are currently allowed because of DWARF register number
>> weirdness). More importantly, our stack maps record the locations of a
>> specific set of values, which may be in registers, at a specific
>> location. In fact, that, along with reserving space for code patching,
>> is *all* we're doing. GCRoot doesn't do this at all. So there is
>> effectively no overlap in implementation.
>>
>> Before moving on, am I interpreting your proposal and changes
>> correctly?
>>
>> Yes, except I don't see a direct connection between the functionality
>> we're adding and "the implementation currently in LLVM".
>>
>> Assuming I'm still correct so far, how might we combine these
>> implementations? It looks like your implementation is much more mature
>> than what exists in tree at the moment. One possibility would be to
>> express the needed GC stack maps in terms of your new infrastructure
>> (i.e. convert a GCStrategy request for a safepoint into a StackMap, as
>> you've implemented it, with the list of explicit GC roots as its
>> arguments). What would you think of this?
>>
>> I can imagine someone wanting to leverage some of the new
>> implementation without using it end-to-end as-is, although I'm not
>> entirely sure what the motivation would be. For example:
>>
>> - A CodeGenPrepare pass could insert llvm.safepoint or llvm.patchpoint
>> calls at custom safepoints after determining GC root liveness at those
>> points.
>>
>> - Something like a GCStrategy could intercept our implementation of
>> stack map generation and emit a custom format. Keep in mind, though,
>> that the format that LLVM emits does not need to be the format read by
>> the collector. The JIT/runtime can parse LLVM's stack map data and
>> encode it using its own data structures. That way, the JIT/runtime can
>> change without customizing LLVM.
>>
>> As far as hooking the new stack map support into the GCMetadata
>> abstraction, I'm not sure how that would work. GCMachineCodeAnalysis
>> is currently a standalone MI pass. We can't generate our stack maps
>> there. Technically, a preEmitPass can come along later and reassign
>> registers, invalidating the stack map. That's why we generate the maps
>> during MC lowering.
>>
>> So, currently, the new intrinsics are serving a different purpose than
>> GCMetadata. I think someone working on GC support needs to be
>> convinced that they really need the new stack map features.
>> Then we can build something on top of the underlying functionality
>> that works for them.
>>
>> -Andy
>
> --
> -------------------------------------------------------------------
> Gaël Thomas, Associate Professor, UPMC
> http://pagesperso-systeme.lip6.fr/Gael.Thomas/
> -------------------------------------------------------------------