[llvm-commits] [llvm] r123971 - in /llvm/trunk: lib/CodeGen/SelectionDAG/ScheduleDAGRRList.cpp test/CodeGen/ARM/vcgt.ll test/CodeGen/Thumb2/2010-03-15-AsmCCClobber.ll
Andrew Trick
atrick at apple.com
Thu Jan 20 22:19:05 PST 2011
Author: atrick
Date: Fri Jan 21 00:19:05 2011
New Revision: 123971
URL: http://llvm.org/viewvc/llvm-project?rev=123971&view=rev
Log:
Enable support for precise scheduling of the instruction selection
DAG; it can be disabled with "-disable-sched-cycles".
For ARM, this enables a framework for modeling the cpu pipeline and
counting stalls. It also activates several heuristics to drive
scheduling based on the model. Scheduling is inherently imprecise at
this stage, and until spilling is improved, it may defeat attempts to
schedule. However, this framework provides greater control over
tuning codegen.
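As a rough illustration of what counting stalls against a cycle model
means (a hypothetical sketch with made-up names, not the actual
ScheduleDAGRRList code), the scheduler conceptually tracks the current
cycle and records a stall whenever the chosen node is not ready yet:

  struct ToyClock {
    unsigned CurCycle = 0; // cycle currently being filled
    unsigned Stalls = 0;   // pipeline bubbles charged by the model
    // Issue a node whose operands have their latency covered at ReadyCycle.
    void issue(unsigned ReadyCycle) {
      if (ReadyCycle > CurCycle) {
        Stalls += ReadyCycle - CurCycle; // node not ready: count the stall
        CurCycle = ReadyCycle;           // advance the clock past the bubble
      }
      ++CurCycle; // one issue slot consumed
    }
  };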
Although the flag is not target-specific, it should have very little
effect on the default scheduler used by x86. The only two changes that
affect x86 are the following (a rough sketch follows the list):
- Scheduling a high-latency operation bumps the current cycle so independent
operations can have their latency covered, e.g. two independent 4-cycle
operations can produce results in 4 cycles, not 8 cycles.
- Two operations with equal register-pressure impact and no
latency-based stalls on their uses are prioritized by depth before height
(height is irrelevant if no stalls occur in the schedule below this point).
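A rough sketch of the second heuristic, using hypothetical names
(ToyCand, preferredOver) rather than the actual comparison code; the
direction of each comparison is illustrative, and the key point is only
that depth is consulted before height when neither candidate would
stall and register pressure is a wash:

  struct ToyCand {
    unsigned Depth;  // depth of the node in the scheduling DAG
    unsigned Height; // height of the node in the scheduling DAG
  };
  static bool preferredOver(const ToyCand &A, const ToyCand &B) {
    if (A.Depth != B.Depth)
      return A.Depth > B.Depth;  // depth decides first
    return A.Height > B.Height;  // height only breaks remaining ties
  }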
Modified:
llvm/trunk/lib/CodeGen/SelectionDAG/ScheduleDAGRRList.cpp
llvm/trunk/test/CodeGen/ARM/vcgt.ll
llvm/trunk/test/CodeGen/Thumb2/2010-03-15-AsmCCClobber.ll
Modified: llvm/trunk/lib/CodeGen/SelectionDAG/ScheduleDAGRRList.cpp
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/CodeGen/SelectionDAG/ScheduleDAGRRList.cpp?rev=123971&r1=123970&r2=123971&view=diff
==============================================================================
--- llvm/trunk/lib/CodeGen/SelectionDAG/ScheduleDAGRRList.cpp (original)
+++ llvm/trunk/lib/CodeGen/SelectionDAG/ScheduleDAGRRList.cpp Fri Jan 21 00:19:05 2011
@@ -67,7 +67,7 @@
createILPListDAGScheduler);
static cl::opt<bool> DisableSchedCycles(
- "disable-sched-cycles", cl::Hidden, cl::init(true),
+ "disable-sched-cycles", cl::Hidden, cl::init(false),
cl::desc("Disable cycle-level precision during preRA scheduling"));
namespace {
Modified: llvm/trunk/test/CodeGen/ARM/vcgt.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/ARM/vcgt.ll?rev=123971&r1=123970&r2=123971&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/ARM/vcgt.ll (original)
+++ llvm/trunk/test/CodeGen/ARM/vcgt.ll Fri Jan 21 00:19:05 2011
@@ -161,9 +161,9 @@
; rdar://7923010
define <4 x i32> @vcgt_zext(<4 x float>* %A, <4 x float>* %B) nounwind {
;CHECK: vcgt_zext:
+;CHECK: vmov.i32 q10, #0x1
;CHECK: vcgt.f32 q8
-;CHECK: vmov.i32 q9, #0x1
-;CHECK: vand q8, q8, q9
+;CHECK: vand q8, q8, q10
%tmp1 = load <4 x float>* %A
%tmp2 = load <4 x float>* %B
%tmp3 = fcmp ogt <4 x float> %tmp1, %tmp2
Modified: llvm/trunk/test/CodeGen/Thumb2/2010-03-15-AsmCCClobber.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/Thumb2/2010-03-15-AsmCCClobber.ll?rev=123971&r1=123970&r2=123971&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/Thumb2/2010-03-15-AsmCCClobber.ll (original)
+++ llvm/trunk/test/CodeGen/Thumb2/2010-03-15-AsmCCClobber.ll Fri Jan 21 00:19:05 2011
@@ -1,4 +1,7 @@
-; RUN: llc < %s -mtriple=thumbv7-apple-darwin -mcpu=cortex-a8 | FileCheck %s
+; RUN: llc < %s -mtriple=thumbv7-apple-darwin -mcpu=cortex-a8 \
+; RUN: -pre-RA-sched=source | FileCheck -check-prefix=SOURCE %s
+; RUN: llc < %s -mtriple=thumbv7-apple-darwin -mcpu=cortex-a8 \
+; RUN: -pre-RA-sched=list-hybrid | FileCheck -check-prefix=HYBRID %s
; Radar 7459078
target datalayout = "e-p:32:32:32-i1:8:32-i8:8:32-i16:16:32-i32:32:32-i64:32:32-f32:32:32-f64:32:32-v64:64:64-v128:128:128-a0:0:32-n32"
@@ -10,9 +13,11 @@
%s5 = type { i32 }
; Make sure the cmp is not scheduled before the InlineAsm that clobbers cc.
-; CHECK: InlineAsm End
-; CHECK: cmp
-; CHECK: beq
+; SOURCE: InlineAsm End
+; SOURCE: cmp
+; SOURCE: beq
+; HYBRID: InlineAsm End
+; HYBRID: cbz
define void @test(%s1* %this, i32 %format, i32 %w, i32 %h, i32 %levels, i32* %s, i8* %data, i32* nocapture %rowbytes, void (i8*, i8*)* %release, i8* %info) nounwind {
entry:
%tmp1 = getelementptr inbounds %s1* %this, i32 0, i32 0, i32 0, i32 1, i32 0, i32 0