[llvm] r305184 - [X86][SSE] Change memop fragment to inherit from vec128load with local alignment controls

Simon Pilgrim via llvm-commits llvm-commits at lists.llvm.org
Mon Jun 12 03:01:27 PDT 2017


Author: rksimon
Date: Mon Jun 12 05:01:27 2017
New Revision: 305184

URL: http://llvm.org/viewvc/llvm-project?rev=305184&view=rev
Log:
[X86][SSE] Change memop fragment to inherit from vec128load with local alignment controls

First possible step towards merging SSE/AVX memory folding pattern fragments.

Also allows us to remove the duplicate non-temporal load logic.

Differential Revision: https://reviews.llvm.org/D33902

Modified:
    llvm/trunk/lib/Target/X86/X86InstrFragmentsSIMD.td

Modified: llvm/trunk/lib/Target/X86/X86InstrFragmentsSIMD.td
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/X86/X86InstrFragmentsSIMD.td?rev=305184&r1=305183&r2=305184&view=diff
==============================================================================
--- llvm/trunk/lib/Target/X86/X86InstrFragmentsSIMD.td (original)
+++ llvm/trunk/lib/Target/X86/X86InstrFragmentsSIMD.td Mon Jun 12 05:01:27 2017
@@ -737,19 +737,15 @@ def alignedloadv8f64  : PatFrag<(ops nod
 def alignedloadv8i64  : PatFrag<(ops node:$ptr),
                                 (v8i64  (alignedload512 node:$ptr))>;
 
-// Like 'load', but uses special alignment checks suitable for use in
+// Like 'vec128load', but uses special alignment checks suitable for use in
 // memory operands in most SSE instructions, which are required to
 // be naturally aligned on some targets but not on others.  If the subtarget
 // allows unaligned accesses, match any load, though this may require
 // setting a feature bit in the processor (on startup, for example).
 // Opteron 10h and later implement such a feature.
-// Avoid non-temporal aligned loads on supported targets.
-def memop : PatFrag<(ops node:$ptr), (load node:$ptr), [{
-  return (Subtarget->hasSSEUnalignedMem() ||
-          cast<LoadSDNode>(N)->getAlignment() >= 16) &&
-         (!Subtarget->hasSSE41() ||
-          !(cast<LoadSDNode>(N)->getAlignment() >= 16 &&
-            cast<LoadSDNode>(N)->isNonTemporal()));
+def memop : PatFrag<(ops node:$ptr), (vec128load node:$ptr), [{
+  return Subtarget->hasSSEUnalignedMem() ||
+         cast<LoadSDNode>(N)->getAlignment() >= 16;
 }]>;
 
 // 128-bit memop pattern fragments
