[llvm-testresults] Geryon-X86-64 nightly tester results

daniel_dunbar at apple.com
Wed Mar 3 01:55:28 PST 2010


http://smooshlab.apple.com/perf/db_nt_internal/nightlytest/5919/
Name:  clattner2.apple.com
Nickname: Geryon-X86-64:5

Run: 5919, Start Time: 2010-03-02 23:00:26, End Time: 2010-03-03 01:55:17
Comparing To: 5883, Start Time: 2010-03-01 23:00:25, End Time: 2010-03-02 01:56:05

--- Changes Summary ---
New Test Passes: 0
New Test Failures: 0
Added Tests: 0
Removed Tests: 0
Significant Changes: 29

--- Tests Summary ---
Total Tests: 3879
Total Test Failures: 133

Total Test Failures By Type:
  CBE: 51
  JIT: 25
  JIT codegen: 57

--- Changes Detail ---
New Test Passes:

New Test Failures:

Added Tests:

Removed Tests:

Significant Changes in Test Results:
JIT codegen:
 MultiSource/Benchmarks/MiBench/automotive-susan/automotive-susan: 7.02% (0.4671 => 0.4999)
 MultiSource/Benchmarks/Ptrdist/yacr2/yacr2: 11.73% (0.4332 => 0.4840)
CBE:
 MultiSource/Benchmarks/MiBench/automotive-basicmath/automotive-basicmath: 10.26% (0.3900 => 0.4300)
 MultiSource/Benchmarks/Ptrdist/yacr2/yacr2: 9.88% (0.8100 => 0.8900)
 SingleSource/Benchmarks/Shootout-C++/matrix: 6.93% (3.3200 => 3.5500)
LLC-BETA:
 MultiSource/Benchmarks/MiBench/automotive-basicmath/automotive-basicmath: 12.82% (0.3900 => 0.4400)
 SingleSource/Benchmarks/Shootout-C++/hash2: 10.47% (2.7700 => 3.0600)
 SingleSource/Benchmarks/Shootout/sieve: -16.98% (7.4800 => 6.2100)
LLC compile:
 Externals/SPEC/CINT2006/401_bzip2/401_bzip2: 6.83% (1.1563 => 1.2353)
 MultiSource/Benchmarks/MiBench/automotive-susan/automotive-susan: 7.19% (0.5340 => 0.5724)
 MultiSource/Benchmarks/Ptrdist/yacr2/yacr2: 11.43% (0.4260 => 0.4747)
JIT:
 Externals/SPEC/CFP2000/177_mesa/177_mesa: -7.96% (4.0200 => 3.7000)
 Externals/SPEC/CINT2000/252_eon/252_eon: 8.66% (4.0400 => 4.3900)
 Externals/SPEC/CINT2006/401_bzip2/401_bzip2: 5.51% (3.8100 => 4.0200)
 MultiSource/Applications/viterbi/viterbi: -5.37% (10.8100 => 10.2300)
 MultiSource/Benchmarks/MiBench/automotive-basicmath/automotive-basicmath: 9.52% (0.4200 => 0.4600)
 MultiSource/Benchmarks/MiBench/automotive-susan/automotive-susan: 5.56% (0.5400 => 0.5700)
 MultiSource/Benchmarks/Ptrdist/ft/ft: 7.41% (1.0800 => 1.1600)
 MultiSource/Benchmarks/Ptrdist/yacr2/yacr2: 6.43% (1.4000 => 1.4900)
 MultiSource/Benchmarks/Trimaran/netbench-url/netbench-url: 7.34% (2.5900 => 2.7800)
 MultiSource/Benchmarks/mafft/pairlocalalign: 16.78% (27.1800 => 31.7400)
 SingleSource/Benchmarks/CoyoteBench/huffbench: 15.42% (16.0200 => 18.4900)
 SingleSource/Benchmarks/Misc/mandel-2: -7.07% (0.9900 => 0.9200)
 SingleSource/Benchmarks/Shootout/lists: 5.67% (6.1700 => 6.5200)
 SingleSource/Benchmarks/Shootout/sieve: -13.95% (7.7400 => 6.6600)
LLC:
 MultiSource/Benchmarks/MiBench/automotive-basicmath/automotive-basicmath: 10.26% (0.3900 => 0.4300)
 SingleSource/Benchmarks/Shootout/sieve: -16.98% (7.4800 => 6.2100)
LLC-BETA compile:
 MultiSource/Benchmarks/MiBench/automotive-susan/automotive-susan: 6.39% (0.5365 => 0.5708)
 MultiSource/Benchmarks/Ptrdist/yacr2/yacr2: 12.01% (0.4271 => 0.4784)
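
For readers parsing these entries: each percentage is the relative change in the reported metric (the measured time) from the comparison run 5883 to this run 5919, so a positive value means the number increased between the two runs and a negative value means it decreased. Below is a minimal sketch of that arithmetic; the function names and the 5% "significant" cutoff are assumptions for illustration only, not taken from the actual nightly-test reporting code.

    # Assumed sketch of how the percentages relate to the (old => new) pairs above.
    SIGNIFICANCE_THRESHOLD = 5.0  # percent; assumed cutoff for "Significant Changes"

    def percent_change(old: float, new: float) -> float:
        """Relative change from the baseline run to the current run, in percent."""
        return (new - old) / old * 100.0

    def is_significant(old: float, new: float) -> bool:
        """True if the change is large enough to be reported (assumed threshold)."""
        return abs(percent_change(old, new)) >= SIGNIFICANCE_THRESHOLD

    # Example: the yacr2 JIT codegen entry above.
    old, new = 0.4332, 0.4840
    print(f"{percent_change(old, new):.2f}% ({old:.4f} => {new:.4f})")
    # Prints: 11.73% (0.4332 => 0.4840), matching the report line.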


