<div dir="ltr">I agree it's not ideal. At the very least, the name of the version we use for fencing could be improved. The regular LOCK_OR32mi8 would normally have a MemOperand specifying the atomicity, but for this fence case there is no MemOperand. I'm not sure whether that makes the target-independent backend code as conservative as it would be with the atomic MemOperand. Even then, the fact that an atomic RMW to the top of the stack is as good as a fence seems like an X86-specific detail that mayLoad/mayStore wouldn't be strong enough to convey to target-independent code.<div><br></div><div><br clear="all"><div><div dir="ltr" class="gmail_signature" data-smartmail="gmail_signature">~Craig</div></div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, May 15, 2019 at 7:41 PM Philip Reames via llvm-commits <<a href="mailto:llvm-commits@lists.llvm.org">llvm-commits@lists.llvm.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Craig,<br>
<br>
I'm not objecting to this patch per se, but this really seems like a<br>
serious code smell.  Two different spellings of the same instruction<br>
with different side effects?  That seems distinctly less than ideal from<br>
a code-clarity perspective.<br>
<br>
Philip<br>
<br>
On 5/14/19 9:15 PM, Craig Topper via llvm-commits wrote:<br>
> Author: ctopper<br>
> Date: Tue May 14 21:15:46 2019<br>
> New Revision: 360747<br>
><br>
> URL: <a href="http://llvm.org/viewvc/llvm-project?rev=360747&view=rev" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-project?rev=360747&view=rev</a><br>
> Log:<br>
> [X86] Use OR32mi8Locked instead of LOCK_OR32mi8 in emitLockedStackOp.<br>
><br>
> They encode the same way, but OR32mi8Locked has hasUnmodeledSideEffects set,<br>
> which should be stronger than the mayLoad/mayStore on LOCK_OR32mi8. I think<br>
> this makes sense since we are using it as a fence.<br>
><br>
> This also seems to hide the operation from the speculative load hardening pass<br>
> so I've reverted r360511.<br>
><br>
> Modified:<br>
>     llvm/trunk/lib/Target/X86/X86ISelLowering.cpp<br>
>     llvm/trunk/lib/Target/X86/X86SpeculativeLoadHardening.cpp<br>
>     llvm/trunk/test/CodeGen/X86/speculative-load-hardening.ll<br>
><br>
> Modified: llvm/trunk/lib/Target/X86/X86ISelLowering.cpp<br>
> URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/X86/X86ISelLowering.cpp?rev=360747&r1=360746&r2=360747&view=diff" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/X86/X86ISelLowering.cpp?rev=360747&r1=360746&r2=360747&view=diff</a><br>
> ==============================================================================<br>
> --- llvm/trunk/lib/Target/X86/X86ISelLowering.cpp (original)<br>
> +++ llvm/trunk/lib/Target/X86/X86ISelLowering.cpp Tue May 14 21:15:46 2019<br>
> @@ -25872,7 +25872,7 @@ static SDValue emitLockedStackOp(Selecti<br>
>        DAG.getRegister(0, MVT::i16),                         // Segment.<br>
>        Zero,<br>
>        Chain};<br>
> -    SDNode *Res = DAG.getMachineNode(X86::LOCK_OR32mi8, DL, MVT::i32,<br>
> +    SDNode *Res = DAG.getMachineNode(X86::OR32mi8Locked, DL, MVT::i32,<br>
>                                       MVT::Other, Ops);<br>
>      return SDValue(Res, 1);<br>
>    }<br>
> @@ -25887,7 +25887,7 @@ static SDValue emitLockedStackOp(Selecti<br>
>      Zero,<br>
>      Chain<br>
>    };<br>
> -  SDNode *Res = DAG.getMachineNode(X86::LOCK_OR32mi8, DL, MVT::i32,<br>
> +  SDNode *Res = DAG.getMachineNode(X86::OR32mi8Locked, DL, MVT::i32,<br>
>                                     MVT::Other, Ops);<br>
>    return SDValue(Res, 1);<br>
>  }<br>
><br>
> Modified: llvm/trunk/lib/Target/X86/X86SpeculativeLoadHardening.cpp<br>
> URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/X86/X86SpeculativeLoadHardening.cpp?rev=360747&r1=360746&r2=360747&view=diff" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/X86/X86SpeculativeLoadHardening.cpp?rev=360747&r1=360746&r2=360747&view=diff</a><br>
> ==============================================================================<br>
> --- llvm/trunk/lib/Target/X86/X86SpeculativeLoadHardening.cpp (original)<br>
> +++ llvm/trunk/lib/Target/X86/X86SpeculativeLoadHardening.cpp Tue May 14 21:15:46 2019<br>
> @@ -1719,11 +1719,9 @@ void X86SpeculativeLoadHardeningPass::tr<br>
>  <br>
>          // If we have at least one (non-frame-index, non-RIP) register operand,<br>
>          // and neither operand is load-dependent, we need to check the load.<br>
> -        // Also handle explicit references to RSP as used by idempotent atomic<br>
> -        // or with 0.<br>
>          unsigned BaseReg = 0, IndexReg = 0;<br>
>          if (!BaseMO.isFI() && BaseMO.getReg() != X86::RIP &&<br>
> -            BaseMO.getReg() != X86::RSP && BaseMO.getReg() != X86::NoRegister)<br>
> +            BaseMO.getReg() != X86::NoRegister)<br>
>            BaseReg = BaseMO.getReg();<br>
>          if (IndexMO.getReg() != X86::NoRegister)<br>
>            IndexReg = IndexMO.getReg();<br>
><br>
> Modified: llvm/trunk/test/CodeGen/X86/speculative-load-hardening.ll<br>
> URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/speculative-load-hardening.ll?rev=360747&r1=360746&r2=360747&view=diff" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/speculative-load-hardening.ll?rev=360747&r1=360746&r2=360747&view=diff</a><br>
> ==============================================================================<br>
> --- llvm/trunk/test/CodeGen/X86/speculative-load-hardening.ll (original)<br>
> +++ llvm/trunk/test/CodeGen/X86/speculative-load-hardening.ll Tue May 14 21:15:46 2019<br>
> @@ -1148,17 +1148,12 @@ entry:<br>
>  define void @idempotent_atomic(i32* %x) speculative_load_hardening {<br>
>  ; X64-LABEL: idempotent_atomic:<br>
>  ; X64:       # %bb.0:<br>
> -; X64-NEXT:    movq %rsp, %rax<br>
> -; X64-NEXT:    movq $-1, %rcx<br>
> -; X64-NEXT:    sarq $63, %rax<br>
> -; X64-NEXT:    lock orl $0, -64(%rsp)<br>
> -; X64-NEXT:    shlq $47, %rax<br>
> -; X64-NEXT:    orq %rax, %rsp<br>
> +; X64-NEXT:    lock orl $0, -{{[0-9]+}}(%rsp)<br>
>  ; X64-NEXT:    retq<br>
>  ;<br>
>  ; X64-LFENCE-LABEL: idempotent_atomic:<br>
>  ; X64-LFENCE:       # %bb.0:<br>
> -; X64-LFENCE-NEXT:    lock orl $0, -64(%rsp)<br>
> +; X64-LFENCE-NEXT:    lock orl $0, -{{[0-9]+}}(%rsp)<br>
>  ; X64-LFENCE-NEXT:    retq<br>
>    %tmp = atomicrmw or i32* %x, i32 0 seq_cst<br>
>    ret void<br>
><br>
><br>
> _______________________________________________<br>
> llvm-commits mailing list<br>
> <a href="mailto:llvm-commits@lists.llvm.org" target="_blank">llvm-commits@lists.llvm.org</a><br>
> <a href="https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-commits" rel="noreferrer" target="_blank">https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-commits</a><br>
</blockquote></div>