[PATCH] D45633: [XRay] Typed event logging intrinsic

Dean Michael Berris via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Sat Apr 14 22:55:38 PDT 2018


dberris accepted this revision.
dberris added a comment.
This revision is now accepted and ready to land.

LGTM



================
Comment at: include/llvm/IR/Intrinsics.td:900
+def int_xray_typedevent : Intrinsic<[], [llvm_i16_ty, llvm_ptr_ty, llvm_i32_ty],
+                                        [NoCapture<1>, ReadOnly<1>, IntrWriteMem]>;
 //===----------------------------------------------------------------------===//
----------------
kpw wrote:
> I confess I copied IntrWriteMem from the custom event intrinsic without understanding why it is the right choice.
This is so that the optimisation passes cannot assume the call to the intrinsic leaves global memory untouched (which would otherwise let them delete or reorder the call).
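To make the effect concrete: the TableGen definition above produces `declare void @llvm.xray.typedevent(i16, i8*, i32)`, a void call whose only observable behaviour is its (possible) memory write. A minimal sketch, assuming a simplified model of how a dead-code pass decides call liveness (the struct and field names here are illustrative, not LLVM's actual API):

```cpp
#include <cassert>

// Illustrative model of what a dead-code pass knows about a call site.
struct CallSiteInfo {
  bool MayWriteMem; // set when the callee carries IntrWriteMem
  bool ResultUsed;  // always false for a void call like llvm.xray.typedevent
};

// A pass may delete a call only if it can prove the call is unobservable:
// no possible memory writes and no used result.
bool isTriviallyDead(const CallSiteInfo &CS) {
  return !CS.MayWriteMem && !CS.ResultUsed;
}
```

With IntrWriteMem, `isTriviallyDead` stays false for the typed-event call even though its result is unused; a `readnone` intrinsic in the same position would be eligible for deletion.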


================
Comment at: lib/Target/X86/X86MCInstLower.cpp:1098-1100
+      // FIXME: Add reordering to the stashing so we never clobber a register
+      // before we've stashed it. e.g. RDI may be the third argument of the
+      // caller.
----------------
kpw wrote:
> The reason I didn't do this now is that I wasn't sure whether it should be done with a sled version change.
> 
> While it is obviously a change, it doesn't require a change in the patching/unpatching logic.
I think we can change the lowering now, as long as the number of bytes doesn't change. We also don't need to bump the sled version just for the re-ordering.
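The stash re-ordering described in the FIXME amounts to sequencing a parallel set of register copies so no source (e.g. RDI holding a caller argument) is clobbered before it has been read, with cycles broken through a scratch register. A minimal sketch of that ordering, assuming string register names and a hypothetical `orderStashes` helper (this is not the X86MCInstLower code):

```cpp
#include <cassert>
#include <map>
#include <string>
#include <utility>
#include <vector>

using Move = std::pair<std::string, std::string>; // {destination, source}

// Sequences a parallel copy (dst <- src per map entry) so that no register is
// overwritten before its old value has been read. Pure cycles such as
// RDI <-> RSI are broken by parking one value in a scratch register.
std::vector<Move> orderStashes(std::map<std::string, std::string> DstToSrc,
                               const std::string &Scratch) {
  std::vector<Move> Out;
  while (!DstToSrc.empty()) {
    bool Progress = false;
    for (auto It = DstToSrc.begin(); It != DstToSrc.end();) {
      // A move is safe to emit once no pending move still reads its dst.
      bool DstStillNeeded = false;
      for (const auto &P : DstToSrc)
        if (P.second == It->first)
          DstStillNeeded = true;
      if (!DstStillNeeded) {
        Out.push_back({It->first, It->second});
        It = DstToSrc.erase(It);
        Progress = true;
      } else {
        ++It;
      }
    }
    if (!Progress) {
      // Every remaining move is part of a cycle; break one link via Scratch.
      std::string Parked = DstToSrc.begin()->second;
      Out.push_back({Scratch, Parked});
      for (auto &P : DstToSrc)
        if (P.second == Parked)
          P.second = Scratch;
    }
  }
  return Out;
}
```

The byte-count concern in the review maps onto the cycle case: the acyclic orderings emit the same number of moves in a different order, while a cycle needs one extra move through the scratch register.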


================
Comment at: lib/Target/X86/X86MCInstLower.cpp:1188
+    if (auto Op = MCIL.LowerMachineOperand(&MI, MI.getOperand(I))) {
+      // TODO: Is register only support adequate?
+      assert(Op->isReg() && "Only supports arguments in registers");
----------------
Yes, mostly because we don't want special handling for stack-based and/or constant global values. It would be possible to support those too, but it's not as simple (and it's unclear whether we could do it within the same number of bytes in the sled).


================
Comment at: test/CodeGen/X86/xray-typed-event-log.ll:13
+    %val = load i32, i32* %eventsize
+    call void @llvm.xray.typedevent(i16 %type, i8* %eventptr, i32 %val)
+    ; CHECK-LABEL: Lxray_typed_event_sled_0:
----------------
kpw wrote:
> I would have liked to have an additional test for when the registers match up and the sled uses noops, but I couldn't figure out how to force that in IR.
> 
> I tried a variant with "call cc 78 void  ..." to try to force SystemV at the IR level, but it didn't have the desired effect. Any ideas?
There's a lot of variability in which registers the register allocators decide to use, and unfortunately I don't think we can force a particular assignment at the IR level. Writing the test at the MIR level is an option, but I don't know enough about doing that yet either. Maybe look around at the existing MIR tests to see whether that's something you can try?


Repository:
  rL LLVM

https://reviews.llvm.org/D45633




