[PATCH] D20529: [x86, AVX] allow explicit calls to VZERO* to modify state in VZeroUpperInserter pass
Sanjay Patel via llvm-commits
llvm-commits at lists.llvm.org
Wed May 25 09:46:09 PDT 2016
This revision was automatically updated to reflect the committed changes.
Closed by commit rL270718: [x86, AVX] allow explicit calls to VZERO* to modify state in… (authored by spatel).
Changed prior to commit:
http://reviews.llvm.org/D20529?vs=58111&id=58444#toc
Repository:
rL LLVM
http://reviews.llvm.org/D20529
Files:
llvm/trunk/lib/Target/X86/X86VZeroUpper.cpp
llvm/trunk/test/CodeGen/X86/vzero-excess.ll
Index: llvm/trunk/test/CodeGen/X86/vzero-excess.ll
===================================================================
--- llvm/trunk/test/CodeGen/X86/vzero-excess.ll
+++ llvm/trunk/test/CodeGen/X86/vzero-excess.ll
@@ -1,7 +1,8 @@
; NOTE: Assertions have been autogenerated by utils/update_test_checks.py
; RUN: llc < %s -mtriple=x86_64-unknown-unknown -mattr=+avx | FileCheck %s
-; FIXME: The vzeroupper added by the VZeroUpperInserter pass is unnecessary in these tests.
+; In the following 4 tests, the existing call to VZU/VZA ensures clean state before
+; the call to the unknown, so we don't need to insert a second VZU at that point.
define <4 x float> @zeroupper_v4f32(<8 x float> *%x, <8 x float> %y) nounwind {
; CHECK-LABEL: zeroupper_v4f32:
@@ -11,7 +12,6 @@
; CHECK-NEXT: vmovups %ymm0, (%rsp) # 32-byte Spill
; CHECK-NEXT: movq %rdi, %rbx
; CHECK-NEXT: vzeroupper
-; CHECK-NEXT: vzeroupper
; CHECK-NEXT: callq the_unknown
; CHECK-NEXT: vmovups (%rsp), %ymm0 # 32-byte Reload
; CHECK-NEXT: vaddps (%rbx), %ymm0, %ymm0
@@ -37,7 +37,6 @@
; CHECK-NEXT: subq $56, %rsp
; CHECK-NEXT: vmovups %ymm0, (%rsp) # 32-byte Spill
; CHECK-NEXT: vzeroupper
-; CHECK-NEXT: vzeroupper
; CHECK-NEXT: callq the_unknown
; CHECK-NEXT: vmovups (%rsp), %ymm0 # 32-byte Reload
; CHECK-NEXT: addq $56, %rsp
@@ -55,7 +54,6 @@
; CHECK-NEXT: vmovups %ymm0, (%rsp) # 32-byte Spill
; CHECK-NEXT: movq %rdi, %rbx
; CHECK-NEXT: vzeroall
-; CHECK-NEXT: vzeroupper
; CHECK-NEXT: callq the_unknown
; CHECK-NEXT: vmovups (%rsp), %ymm0 # 32-byte Reload
; CHECK-NEXT: vaddps (%rbx), %ymm0, %ymm0
@@ -81,7 +79,6 @@
; CHECK-NEXT: subq $56, %rsp
; CHECK-NEXT: vmovups %ymm0, (%rsp) # 32-byte Spill
; CHECK-NEXT: vzeroall
-; CHECK-NEXT: vzeroupper
; CHECK-NEXT: callq the_unknown
; CHECK-NEXT: vmovups (%rsp), %ymm0 # 32-byte Reload
; CHECK-NEXT: addq $56, %rsp
Index: llvm/trunk/lib/Target/X86/X86VZeroUpper.cpp
===================================================================
--- llvm/trunk/lib/Target/X86/X86VZeroUpper.cpp
+++ llvm/trunk/lib/Target/X86/X86VZeroUpper.cpp
@@ -188,14 +188,15 @@
bool IsReturnFromX86INTR = IsX86INTR && MI->isReturn();
bool IsControlFlow = MI->isCall() || MI->isReturn();
- // Shortcut: don't need to check regular instructions in dirty state.
- if ((!IsControlFlow || IsReturnFromX86INTR) && CurState == EXITS_DIRTY)
+ // An existing VZERO* instruction resets the state.
+ if (MI->getOpcode() == X86::VZEROALL ||
+ MI->getOpcode() == X86::VZEROUPPER) {
+ CurState = EXITS_CLEAN;
continue;
+ }
- // Ignore existing VZERO* instructions.
- // FIXME: The existence of these instructions should be used to modify the
- // current state and/or used when deciding whether we need to create a VZU.
- if (MI->getOpcode() == X86::VZEROALL || MI->getOpcode() == X86::VZEROUPPER)
+ // Shortcut: don't need to check regular instructions in dirty state.
+ if ((!IsControlFlow || IsReturnFromX86INTR) && CurState == EXITS_DIRTY)
continue;
if (hasYmmReg(MI)) {
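For reference, a rough sketch of the IR pattern these tests exercise (the exact bodies live in vzero-excess.ll; the function body and the declaration of @the_unknown below are illustrative, not copied from the test): an explicit call to the vzeroupper intrinsic already leaves the upper YMM state clean, so after this change the pass no longer inserts a second vzeroupper before the call to the unknown function.

declare void @llvm.x86.avx.vzeroupper() nounwind
declare void @the_unknown() nounwind

define <4 x float> @zeroupper_v4f32(<8 x float> *%x, <8 x float> %y) nounwind {
  ; Explicit state reset: the upper halves of all YMM registers are zero after this.
  call void @llvm.x86.avx.vzeroupper()
  ; Call to an unknown function; previously, VZeroUpperInserter added a redundant
  ; vzeroupper here even though the state was already clean.
  call void @the_unknown()
  ; Use 256-bit values after the call so the function is not trivially 128-bit only.
  %xval = load <8 x float>, <8 x float>* %x, align 4
  %sum = fadd <8 x float> %xval, %y
  ; Return only the low 128 bits of the 256-bit result.
  %lo = shufflevector <8 x float> %sum, <8 x float> undef, <4 x i32> <i32 0, i32 1, i32 2, i32 3>
  ret <4 x float> %lo
}

The vzeroall tests follow the same shape with @llvm.x86.avx.vzeroall instead, which likewise leaves the state clean before the call.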