[PATCH] D61675: [WIP] Update IRBuilder::CreateFNeg(...) to return a UnaryOperator

Cameron McInally via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Thu Jul 25 08:55:10 PDT 2019


cameron.mcinally added a comment.

OK, I think we're pretty close to having the unary FNeg optimizations in line with the binary FNeg optimizations. Is anyone aware of any obvious passes that I've missed?
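For anyone skimming the thread, the two FNeg forms in question look like this in IR; a minimal sketch (not taken from the patch itself), where the unary instruction is what `CreateFNeg` would now return as a `UnaryOperator`:

```
; dedicated unary instruction (a UnaryOperator)
%neg = fneg double %x

; legacy binary idiom (a BinaryOperator) that passes also had to recognize
%neg2 = fsub double -0.0, %x
```

Bringing the unary-FNeg optimizations "in line" means passes that previously matched only the `fsub -0.0, %x` idiom should now also handle `fneg %x`.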

Also, I'm looking for some test-suite guidance. I've benchmarked the change in this patch, and the results appear to be within the noise range (I think?):

  <scrubbed> llvm-project/test-suite-build> ../test-suite/utils/compare.py --filter-short fneg1.json fneg2.json fneg3.json vs stock1.json stock2.json stock3.json
  Warning: 'test-suite :: MicroBenchmarks/XRay/FDRMode/fdrmode-bench.test' has No metrics!
  Warning: 'test-suite :: MicroBenchmarks/XRay/ReturnReference/retref-bench.test' has No metrics!
  [the same two warnings repeat for each of the remaining runs]
  Tests: 546
  Short Running: 187 (filtered out)
  Remaining: 359
  Metric: exec_time
  
  Program                                        lhs    rhs    diff 
   test-suite...marks/CoyoteBench/lpbench.test     1.19   1.11 -6.5%
   test-suite...marks/Misc/matmul_f64_4x4.test     0.68   0.64 -5.8%
   test-suite...e/Benchmarks/McGill/chomp.test     0.67   0.63 -5.7%
   test-suite...BENCHMARK_ORDERED_DITHER/128/2    60.65  63.84  5.3%
   test-suite...HMARK_BICUBIC_INTERPOLATION/16    48.53  50.78  4.6%
   test-suite...mbolics-flt/Symbolics-flt.test     0.76   0.73 -4.0%
   test-suite...ks/Shootout/Shootout-hash.test     4.60   4.79  4.0%
   test-suite...ce/Benchmarks/Olden/bh/bh.test     0.97   0.93 -3.5%
   test-suite...oxyApps-C/miniGMG/miniGMG.test     0.65   0.63 -3.3%
   test-suite...-flt/LinearDependence-flt.test     1.35   1.40  3.3%
   test-suite...rolangs-C++/primes/primes.test     0.75   0.78  3.3%
   test-suite...ncils/fdtd-apml/fdtd-apml.test     0.86   0.83 -3.2%
   test-suite...s/Fhourstones/fhourstones.test     0.68   0.70  3.2%
   test-suite.../Benchmarks/nbench/nbench.test     1.09   1.12  3.1%
   test-suite...s/Misc/richards_benchmark.test     1.11   1.08 -3.0%
   Geomean difference                                           nan%
                   lhs            rhs        diff
  count  356.000000     359.000000     356.000000
  mean   2207.107388    2187.731742   -0.000983  
  std    17335.321906   17252.648405   0.011480  
  min    0.608100       0.602200      -0.065347  
  25%    1.948518       1.886300      -0.002897  
  50%    6.446969       5.985600      -0.000070  
  75%    111.044515     106.027453     0.001283  
  max    214183.043000  214138.878333  0.052571 

I've compared the generated assembly for the notable differences and, unless I've made a horrible mistake, there are no asm differences at all. I suspect the >5% swings are I/O noise on my shared machine. Any thoughts/insights on using the test-suite?
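On the `nan%` geomean line: note the count mismatch in the summary (356 on the lhs vs. 359 on the rhs), so a few tests have results on only one side. A minimal sketch of how a geomean over per-test ratios goes NaN when a sample is missing (hypothetical helper, not compare.py's actual pandas-based code):

```python
import math

def geomean_diff(lhs, rhs):
    """Geometric mean of per-test runtime ratios, as a percent difference.

    Returns NaN if any pair is incomplete (None), which is one way a
    'Geomean difference: nan%' line can arise when the lhs and rhs runs
    cover different test sets. Hypothetical helper for illustration only.
    """
    logs = []
    for l, r in zip(lhs, rhs):
        if l is None or r is None:
            return float("nan")
        logs.append(math.log(r / l))
    return (math.exp(sum(logs) / len(logs)) - 1.0) * 100.0

print(geomean_diff([1.0, 2.0], [1.1, 1.8]))  # ~ -0.5, i.e. net change in the noise
print(geomean_diff([1.0, None], [1.1, 1.8]))  # nan: one sample missing
```

If that is indeed the cause, filtering both sides down to the intersection of tests (or rerunning the missing benchmarks) should restore a meaningful geomean.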


Repository:
  rL LLVM

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D61675/new/

https://reviews.llvm.org/D61675
