[llvm-commits] [llvm] r47939 - /llvm/trunk/lib/Target/X86/README-SSE.txt

Chris Lattner sabre at nondot.org
Tue Mar 4 23:22:40 PST 2008


Author: lattner
Date: Wed Mar  5 01:22:39 2008
New Revision: 47939

URL: http://llvm.org/viewvc/llvm-project?rev=47939&view=rev
Log:
add a note

Modified:
    llvm/trunk/lib/Target/X86/README-SSE.txt

Modified: llvm/trunk/lib/Target/X86/README-SSE.txt
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/X86/README-SSE.txt?rev=47939&r1=47938&r2=47939&view=diff

==============================================================================
--- llvm/trunk/lib/Target/X86/README-SSE.txt (original)
+++ llvm/trunk/lib/Target/X86/README-SSE.txt Wed Mar  5 01:22:39 2008
@@ -748,3 +748,33 @@
 right, but we shouldn't have to custom lower anything.  This is probably related
 to <2 x i64> ops being so bad.
 
+//===---------------------------------------------------------------------===//
+
+'select' on vectors and scalars could be a whole lot better.  We currently 
+lower them to conditional branches.  On x86-64 for example, we compile this:
+
+double test(double a, double b, double c, double d) { return a<b ? c : d; }
+
+to:
+
+_test:
+	ucomisd	%xmm0, %xmm1
+	ja	LBB1_2	# entry
+LBB1_1:	# entry
+	movapd	%xmm3, %xmm2
+LBB1_2:	# entry
+	movapd	%xmm2, %xmm0
+	ret
+
+instead of:
+
+_test:
+	cmpltsd	%xmm1, %xmm0
+	andpd	%xmm0, %xmm2
+	andnpd	%xmm3, %xmm0
+	orpd	%xmm2, %xmm0
+	ret
+
+For unpredictable branches, the latter is much more efficient.  This should
+just be a matter of having scalar SSE map to SELECT_CC and custom expanding
+or isel'ing it.
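
[Editorial note, not part of the commit: a minimal sketch of the branchless
select pattern the desired code above implements, written with SSE2
intrinsics.  The function name select_branchless and the exact intrinsic
sequence are assumptions for illustration; they are not taken from the
commit or from LLVM's code generator.]

	#include <emmintrin.h>  /* SSE2 intrinsics */

	/* mask ? c : d computed as (mask & c) | (~mask & d), with no branch */
	double select_branchless(double a, double b, double c, double d) {
	    __m128d va = _mm_set_sd(a), vb = _mm_set_sd(b);
	    __m128d vc = _mm_set_sd(c), vd = _mm_set_sd(d);
	    __m128d mask = _mm_cmplt_sd(va, vb);               /* cmpltsd: all-ones or all-zeros */
	    __m128d res  = _mm_or_pd(_mm_and_pd(mask, vc),     /* andpd */
	                             _mm_andnot_pd(mask, vd)); /* andnpd, orpd */
	    double out;
	    _mm_store_sd(&out, res);
	    return out;
	}

The compare produces an all-ones/all-zeros mask in the low element, which is
then used to blend c and d, mirroring the cmpltsd/andpd/andnpd/orpd sequence
shown above.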