[llvm-commits] CVS: llvm/lib/Target/X86/README.txt
Chris Lattner
sabre at nondot.org
Sun Dec 10 17:20:40 PST 2006
Changes in directory llvm/lib/Target/X86:
README.txt updated: 1.146 -> 1.147
---
Log message:
Update note: with the SROA change, we now produce:
_pairtest:
        movl 8(%esp), %eax
        movl 4(%esp), %ecx
        movd %eax, %xmm0
        movd %ecx, %xmm1
        addss %xmm0, %xmm1
        movl 12(%esp), %eax
        movss %xmm1, (%eax)
        ret
instead of:
_pairtest:
        subl $12, %esp
        movl 20(%esp), %eax
        movl %eax, 4(%esp)
        movl 16(%esp), %eax
        movl %eax, (%esp)
        movss (%esp), %xmm0
        addss 4(%esp), %xmm0
        movl 24(%esp), %eax
        movss %xmm0, (%eax)
        addl $12, %esp
        ret
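For context, the pairtest source is not quoted in this mail; a minimal C test
case consistent with both sequences (an assumed reconstruction: a by-value
struct of two floats plus an output pointer, with guessed names) would be:

/* Hypothetical reconstruction: the by-value pair occupies 4(%esp) and 8(%esp)
   and the output pointer arrives at 12(%esp), matching the argument loads in
   both sequences above. */
typedef struct {
  float first;
  float second;
} pair;

void pairtest(pair p, float *FP) {
  *FP = p.first + p.second;
}

The improvement is that the new code no longer spills the incoming argument
words to a scratch stack slot just to reload them with movss; it moves them
from integer registers into XMM registers with movd instead.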
---
Diffs of the changes: (+11 -10)
README.txt | 21 +++++++++++----------
1 files changed, 11 insertions(+), 10 deletions(-)
Index: llvm/lib/Target/X86/README.txt
diff -u llvm/lib/Target/X86/README.txt:1.146 llvm/lib/Target/X86/README.txt:1.147
--- llvm/lib/Target/X86/README.txt:1.146 Tue Nov 28 13:59:25 2006
+++ llvm/lib/Target/X86/README.txt Sun Dec 10 19:20:25 2006
@@ -430,16 +430,13 @@
We currently generate this code with llvmgcc4:
_pairtest:
- subl $12, %esp
- movl 20(%esp), %eax
- movl %eax, 4(%esp)
- movl 16(%esp), %eax
- movl %eax, (%esp)
- movss (%esp), %xmm0
- addss 4(%esp), %xmm0
- movl 24(%esp), %eax
- movss %xmm0, (%eax)
- addl $12, %esp
+ movl 8(%esp), %eax
+ movl 4(%esp), %ecx
+ movd %eax, %xmm0
+ movd %ecx, %xmm1
+ addss %xmm0, %xmm1
+ movl 12(%esp), %eax
+ movss %xmm1, (%eax)
ret
we should be able to generate:
@@ -455,6 +452,10 @@
a single 32-bit integer stack slot. We should handle the safe cases above much
better, while still handling the hard cases.
+While true in general, in this specific case we could do better by promoting
+load int + bitcast to float -> load float. This basically needs alignment info;
+the code is already implemented (but disabled) in dag combine.
+
//===---------------------------------------------------------------------===//
Another instruction selector deficiency:
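To illustrate the load int + bitcast pattern the new note refers to (a sketch
only, not taken from the README; the function and names are invented): the pair
fields reach pairtest as 32-bit integer stack words, so each float operand is
currently materialized by an integer load followed by an int-to-float bitcast,
which is what the movl + movd pairs above lower to. The same pattern shows up
for source like:

/* Illustrative only: a union-based pun whose scalarized form is
   "load int, then bitcast the bits to float".  If the load is known
   to be sufficiently aligned, it could be promoted to a direct float
   load, i.e. a single movss instead of movl + movd. */
float int_bits_to_float(const int *p) {
  union { int i; float f; } u;
  u.i = *p;      /* integer load                  */
  return u.f;    /* reinterpret the bits as float */
}

With that promotion, each movl + movd pair in the new pairtest code would
presumably collapse into one movss load from the incoming stack slot.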