[llvm] r337673 - [x86/SLH] Fix a bug where we would harden tail calls twice -- once as a call, and then again as a return
Chandler Carruth via llvm-commits
llvm-commits at lists.llvm.org
Mon Jul 23 00:56:15 PDT 2018
Author: chandlerc
Date: Mon Jul 23 00:56:15 2018
New Revision: 337673
URL: http://llvm.org/viewvc/llvm-project?rev=337673&view=rev
Log:
[x86/SLH] Fix a bug where we would harden tail calls twice -- once as
a call, and then again as a return.
Also added a comment to better explain why we harden the (non-call)
returns the way we do.
Modified:
llvm/trunk/lib/Target/X86/X86SpeculativeLoadHardening.cpp
llvm/trunk/test/CodeGen/X86/speculative-load-hardening-indirect.ll
Modified: llvm/trunk/lib/Target/X86/X86SpeculativeLoadHardening.cpp
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/X86/X86SpeculativeLoadHardening.cpp?rev=337673&r1=337672&r2=337673&view=diff
==============================================================================
--- llvm/trunk/lib/Target/X86/X86SpeculativeLoadHardening.cpp (original)
+++ llvm/trunk/lib/Target/X86/X86SpeculativeLoadHardening.cpp Mon Jul 23 00:56:15 2018
@@ -525,7 +525,11 @@ bool X86SpeculativeLoadHardeningPass::ru
continue;
MachineInstr &MI = MBB.back();
- if (!MI.isReturn())
+
+ // We only care about returns that are not also calls. For calls that
+ // happen to also be returns (tail calls), we will have already handled
+ // them as calls.
+ if (!MI.isReturn() || MI.isCall())
continue;
hardenReturnInstr(MI);
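
As an aside on the logic here: a tail call is lowered as a terminator for
which both isReturn() and isCall() report true, so the return-hardening
loop must skip anything that is also a call, or it would harden the tail
call a second time. Below is a minimal standalone sketch of that
predicate, using a hypothetical MachineInstrStub in place of the real
llvm::MachineInstr (which queries its MCInstrDesc flags for the same
information):

    #include <cstdio>

    // Hypothetical stand-in for llvm::MachineInstr, just enough to show
    // the predicate.
    struct MachineInstrStub {
      bool IsReturn;
      bool IsCall;
      bool isReturn() const { return IsReturn; }
      bool isCall() const { return IsCall; }
    };

    // Mirrors the fixed guard above: harden as a plain return only when
    // the terminator is a return and *not* also a call. Tail calls (both
    // flags set) were already hardened on the call-hardening path.
    bool shouldHardenAsReturn(const MachineInstrStub &MI) {
      return MI.isReturn() && !MI.isCall();
    }

    int main() {
      MachineInstrStub Ret{/*IsReturn=*/true, /*IsCall=*/false};
      MachineInstrStub TailCall{/*IsReturn=*/true, /*IsCall=*/true};
      // Prints "plain ret: 1, tail call: 0".
      std::printf("plain ret: %d, tail call: %d\n",
                  shouldHardenAsReturn(Ret),
                  shouldHardenAsReturn(TailCall));
    }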
Modified: llvm/trunk/test/CodeGen/X86/speculative-load-hardening-indirect.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/speculative-load-hardening-indirect.ll?rev=337673&r1=337672&r2=337673&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/X86/speculative-load-hardening-indirect.ll (original)
+++ llvm/trunk/test/CodeGen/X86/speculative-load-hardening-indirect.ll Mon Jul 23 00:56:15 2018
@@ -37,9 +37,6 @@ define i32 @test_indirect_tail_call(i32
; X64-NEXT: movq %rsp, %rax
; X64-NEXT: movq $-1, %rcx
; X64-NEXT: sarq $63, %rax
-; X64-NEXT: movq %rax, %rcx
-; X64-NEXT: shlq $47, %rcx
-; X64-NEXT: orq %rcx, %rsp
; X64-NEXT: shlq $47, %rax
; X64-NEXT: orq %rax, %rsp
; X64-NEXT: jmpq *(%rdi) # TAILCALL
@@ -77,9 +74,6 @@ define i32 @test_indirect_tail_call_glob
; X64-NEXT: movq %rsp, %rax
; X64-NEXT: movq $-1, %rcx
; X64-NEXT: sarq $63, %rax
-; X64-NEXT: movq %rax, %rcx
-; X64-NEXT: shlq $47, %rcx
-; X64-NEXT: orq %rcx, %rsp
; X64-NEXT: shlq $47, %rax
; X64-NEXT: orq %rax, %rsp
; X64-NEXT: jmpq *{{.*}}(%rip) # TAILCALL
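
In both hunks, the three removed instructions were an exact duplicate of
the poisoning sequence that immediately follows them. Here is a minimal
sketch of what that sequence computes, assuming the SLH convention that
the misspeculation predicate state is carried in the high bit of %rsp --
an illustration of the effect, not the pass's actual code:

    #include <cstdint>
    #include <cstdio>

    // movq %rsp, %rax ; sarq $63, %rax -- broadcast %rsp's sign bit,
    // which encodes the predicate state: all-ones when misspeculating,
    // all-zeros otherwise.
    // shlq $47, %rax ; orq %rax, %rsp -- set the high (non-canonical)
    // address bits of %rsp on the misspeculated path.
    uint64_t hardenRsp(uint64_t Rsp) {
      uint64_t State = static_cast<uint64_t>(static_cast<int64_t>(Rsp) >> 63);
      return Rsp | (State << 47);
    }

    int main() {
      uint64_t Good = 0x00007ffdeadbeef0ULL; // sign bit clear: unchanged
      uint64_t Bad  = 0x80007ffdeadbeef0ULL; // sign bit set: poisoned
      std::printf("%016llx -> %016llx\n", (unsigned long long)Good,
                  (unsigned long long)hardenRsp(Good));
      std::printf("%016llx -> %016llx\n", (unsigned long long)Bad,
                  (unsigned long long)hardenRsp(Bad));
      // The transform is idempotent: applying it twice yields the same
      // value, so the duplicated copy removed above was redundant work
      // rather than a miscompile.
    }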