[test-suite] r318476 - [XRay][test-suite] Upgrade Google benchmark library

Eizan Miyamoto via llvm-commits llvm-commits at lists.llvm.org
Thu Nov 16 16:16:07 PST 2017


Author: eizan
Date: Thu Nov 16 16:16:06 2017
New Revision: 318476

URL: http://llvm.org/viewvc/llvm-project?rev=318476&view=rev
Log:
[XRay][test-suite] Upgrade Google benchmark library

Summary:
This change allows us to use an updated idiom for defining microbenchmarks.

The library is upgraded from version 1.2.0 to 1.3.0.
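
For illustration only (not part of this commit): the updated idiom refers to the
range-based iteration loop that benchmark 1.3.0 supports, alongside the older
explicit KeepRunning() check. A minimal sketch, with hypothetical benchmark
names BM_ExampleOld and BM_ExampleNew:

    #include "benchmark/benchmark.h"

    // Pre-1.3.0 idiom: loop on an explicit KeepRunning() check.
    static void BM_ExampleOld(benchmark::State& state) {
      while (state.KeepRunning()) {
        benchmark::DoNotOptimize(42 * 42);  // work under measurement
      }
    }
    BENCHMARK(BM_ExampleOld);

    // 1.3.0 idiom: range-based for over the state object.
    static void BM_ExampleNew(benchmark::State& state) {
      for (auto _ : state) {
        benchmark::DoNotOptimize(42 * 42);  // same work, newer loop form
      }
    }
    BENCHMARK(BM_ExampleNew);

    BENCHMARK_MAIN();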

Reviewers: dberris

Reviewed By: dberris

Subscribers: dschuff, mgorny, javed.absar, krytarowski, llvm-commits, fedor.sergeev

Differential Revision: https://reviews.llvm.org/D40154

Added:
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/.clang-format
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/.clang-format
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/.gitignore
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/.gitignore
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/.travis-libcxx-setup.sh
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/.travis-libcxx-setup.sh
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/.travis.yml
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/.travis.yml
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/.ycm_extra_conf.py
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/.ycm_extra_conf.py
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/AUTHORS
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/AUTHORS
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/CMakeLists.txt
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/CMakeLists.txt
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/CONTRIBUTING.md
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/CONTRIBUTING.md
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/CONTRIBUTORS
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/CONTRIBUTORS
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/LICENSE
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/LICENSE
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/README.md
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/README.md
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/appveyor.yml
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/appveyor.yml
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/cmake/
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/cmake/AddCXXCompilerFlag.cmake
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/cmake/AddCXXCompilerFlag.cmake
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/cmake/CXXFeatureCheck.cmake
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/cmake/CXXFeatureCheck.cmake
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/cmake/Config.cmake.in
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/cmake/Config.cmake.in
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/cmake/GetGitVersion.cmake
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/cmake/GetGitVersion.cmake
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/cmake/gnu_posix_regex.cpp
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/cmake/gnu_posix_regex.cpp
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/cmake/posix_regex.cpp
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/cmake/posix_regex.cpp
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/cmake/std_regex.cpp
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/cmake/std_regex.cpp
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/cmake/steady_clock.cpp
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/cmake/steady_clock.cpp
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/cmake/thread_safety_attributes.cpp
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/cmake/thread_safety_attributes.cpp
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/docs/
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/docs/tools.md
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/docs/tools.md
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/include/
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/include/benchmark/
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/include/benchmark/benchmark.h
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/include/benchmark/benchmark.h
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/include/benchmark/benchmark_api.h
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/include/benchmark/benchmark_api.h
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/include/benchmark/reporter.h
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/include/benchmark/reporter.h
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/mingw.py
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/mingw.py
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/CMakeLists.txt
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/CMakeLists.txt
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/arraysize.h
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/arraysize.h
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/benchmark.cc
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/benchmark.cc
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/benchmark_api_internal.h
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/benchmark_api_internal.h
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/benchmark_register.cc
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/benchmark_register.cc
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/check.h
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/check.h
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/colorprint.cc
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/colorprint.cc
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/colorprint.h
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/colorprint.h
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/commandlineflags.cc
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/commandlineflags.cc
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/commandlineflags.h
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/commandlineflags.h
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/complexity.cc
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/complexity.cc
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/complexity.h
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/complexity.h
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/console_reporter.cc
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/console_reporter.cc
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/counter.cc
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/counter.cc
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/counter.h
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/counter.h
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/csv_reporter.cc
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/csv_reporter.cc
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/cycleclock.h
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/cycleclock.h
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/internal_macros.h
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/internal_macros.h
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/json_reporter.cc
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/json_reporter.cc
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/log.h
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/log.h
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/mutex.h
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/mutex.h
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/re.h
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/re.h
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/reporter.cc
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/reporter.cc
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/sleep.cc
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/sleep.cc
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/sleep.h
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/sleep.h
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/statistics.cc
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/statistics.h
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/string_util.cc
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/string_util.cc
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/string_util.h
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/string_util.h
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/sysinfo.cc
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/sysinfo.cc
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/sysinfo.h
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/sysinfo.h
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/timers.cc
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/timers.cc
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/timers.h
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/timers.h
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/CMakeLists.txt
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/CMakeLists.txt
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/basic_test.cc
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/basic_test.cc
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/benchmark_test.cc
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/benchmark_test.cc
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/complexity_test.cc
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/complexity_test.cc
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/cxx03_test.cc
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/cxx03_test.cc
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/diagnostics_test.cc
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/diagnostics_test.cc
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/donotoptimize_test.cc
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/donotoptimize_test.cc
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/filter_test.cc
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/filter_test.cc
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/fixture_test.cc
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/fixture_test.cc
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/map_test.cc
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/map_test.cc
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/multiple_ranges_test.cc
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/multiple_ranges_test.cc
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/options_test.cc
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/options_test.cc
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/output_test.h
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/output_test.h
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/output_test_helper.cc
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/output_test_helper.cc
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/register_benchmark_test.cc
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/register_benchmark_test.cc
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/reporter_output_test.cc
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/reporter_output_test.cc
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/skip_with_error_test.cc
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/skip_with_error_test.cc
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/templated_fixture_test.cc
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/user_counters_tabular_test.cc
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/user_counters_tabular_test.cc
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/user_counters_test.cc
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/user_counters_test.cc
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/tools/
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/tools/compare_bench.py
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/tools/compare_bench.py
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/tools/gbench/
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/tools/gbench/Inputs/
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/tools/gbench/Inputs/test1_run1.json
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/tools/gbench/Inputs/test1_run1.json
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/tools/gbench/Inputs/test1_run2.json
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/tools/gbench/Inputs/test1_run2.json
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/tools/gbench/__init__.py
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/tools/gbench/__init__.py
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/tools/gbench/report.py
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/tools/gbench/report.py
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/tools/gbench/util.py
      - copied, changed from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/tools/gbench/util.py
Removed:
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/.clang-format
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/.gitignore
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/.travis-libcxx-setup.sh
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/.travis.yml
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/.ycm_extra_conf.py
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/AUTHORS
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/CMakeLists.txt
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/CONTRIBUTING.md
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/CONTRIBUTORS
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/LICENSE
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/README.md
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/appveyor.yml
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/cmake/AddCXXCompilerFlag.cmake
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/cmake/CXXFeatureCheck.cmake
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/cmake/Config.cmake.in
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/cmake/GetGitVersion.cmake
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/cmake/gnu_posix_regex.cpp
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/cmake/posix_regex.cpp
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/cmake/std_regex.cpp
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/cmake/steady_clock.cpp
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/cmake/thread_safety_attributes.cpp
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/docs/tools.md
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/include/benchmark/benchmark.h
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/include/benchmark/benchmark_api.h
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/include/benchmark/reporter.h
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/mingw.py
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/CMakeLists.txt
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/arraysize.h
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/benchmark.cc
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/benchmark_api_internal.h
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/benchmark_register.cc
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/check.h
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/colorprint.cc
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/colorprint.h
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/commandlineflags.cc
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/commandlineflags.h
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/complexity.cc
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/complexity.h
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/console_reporter.cc
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/counter.cc
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/counter.h
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/csv_reporter.cc
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/cycleclock.h
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/internal_macros.h
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/json_reporter.cc
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/log.h
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/mutex.h
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/re.h
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/reporter.cc
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/sleep.cc
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/sleep.h
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/stat.h
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/string_util.cc
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/string_util.h
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/sysinfo.cc
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/sysinfo.h
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/timers.cc
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/timers.h
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/CMakeLists.txt
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/basic_test.cc
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/benchmark_test.cc
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/complexity_test.cc
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/cxx03_test.cc
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/diagnostics_test.cc
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/donotoptimize_test.cc
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/filter_test.cc
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/fixture_test.cc
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/map_test.cc
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/multiple_ranges_test.cc
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/options_test.cc
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/output_test.h
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/output_test_helper.cc
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/register_benchmark_test.cc
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/reporter_output_test.cc
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/skip_with_error_test.cc
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/user_counters_tabular_test.cc
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/user_counters_test.cc
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/tools/compare_bench.py
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/tools/gbench/Inputs/test1_run1.json
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/tools/gbench/Inputs/test1_run2.json
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/tools/gbench/__init__.py
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/tools/gbench/report.py
    test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/tools/gbench/util.py
Modified:
    test-suite/trunk/MicroBenchmarks/libs/CMakeLists.txt

Modified: test-suite/trunk/MicroBenchmarks/libs/CMakeLists.txt
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/CMakeLists.txt?rev=318476&r1=318475&r2=318476&view=diff
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/CMakeLists.txt (original)
+++ test-suite/trunk/MicroBenchmarks/libs/CMakeLists.txt Thu Nov 16 16:16:06 2017
@@ -1,3 +1,3 @@
-add_subdirectory(benchmark-1.2.0)
+add_subdirectory(benchmark-1.3.0)
 test_suite_add_build_dependencies(benchmark)
 test_suite_add_build_dependencies(output_test_helper)

Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/.clang-format
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/.clang-format?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/.clang-format (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/.clang-format (removed)
@@ -1,5 +0,0 @@
----
-Language:        Cpp
-BasedOnStyle:  Google
-...
-

Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/.gitignore
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/.gitignore?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/.gitignore (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/.gitignore (removed)
@@ -1,46 +0,0 @@
-*.a
-*.so
-*.so.?*
-*.dll
-*.exe
-*.dylib
-*.cmake
-!/cmake/*.cmake
-*~
-*.pyc
-__pycache__
-
-# lcov
-*.lcov
-/lcov
-
-# cmake files.
-/Testing
-CMakeCache.txt
-CMakeFiles/
-cmake_install.cmake
-
-# makefiles.
-Makefile
-
-# in-source build.
-bin/
-lib/
-/test/*_test
-
-# exuberant ctags.
-tags
-
-# YouCompleteMe configuration.
-.ycm_extra_conf.pyc
-
-# ninja generated files.
-.ninja_deps
-.ninja_log
-build.ninja
-install_manifest.txt
-rules.ninja
-
-# out-of-source build top-level folders.
-build/
-_build/

Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/.travis-libcxx-setup.sh
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/.travis-libcxx-setup.sh?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/.travis-libcxx-setup.sh (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/.travis-libcxx-setup.sh (removed)
@@ -1,28 +0,0 @@
-#!/usr/bin/env bash
-
-# Install a newer CMake version
-curl -sSL https://cmake.org/files/v3.6/cmake-3.6.1-Linux-x86_64.sh -o install-cmake.sh
-chmod +x install-cmake.sh
-sudo ./install-cmake.sh --prefix=/usr/local --skip-license
-
-# Checkout LLVM sources
-git clone --depth=1 https://github.com/llvm-mirror/llvm.git llvm-source
-git clone --depth=1 https://github.com/llvm-mirror/libcxx.git llvm-source/projects/libcxx
-git clone --depth=1 https://github.com/llvm-mirror/libcxxabi.git llvm-source/projects/libcxxabi
-
-# Setup libc++ options
-if [ -z "$BUILD_32_BITS" ]; then
-  export BUILD_32_BITS=OFF && echo disabling 32 bit build
-fi
-
-# Build and install libc++ (Use unstable ABI for better sanitizer coverage)
-mkdir llvm-build && cd llvm-build
-cmake -DCMAKE_C_COMPILER=${C_COMPILER} -DCMAKE_CXX_COMPILER=${COMPILER} \
-      -DCMAKE_BUILD_TYPE=RelWithDebInfo -DCMAKE_INSTALL_PREFIX=/usr \
-      -DLIBCXX_ABI_UNSTABLE=ON \
-      -DLLVM_USE_SANITIZER=${LIBCXX_SANITIZER} \
-      -DLLVM_BUILD_32_BITS=${BUILD_32_BITS} \
-      ../llvm-source
-make cxx -j2
-sudo make install-cxxabi install-cxx
-cd ../

Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/.travis.yml
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/.travis.yml?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/.travis.yml (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/.travis.yml (removed)
@@ -1,157 +0,0 @@
-sudo: required
-dist: trusty
-language: cpp
-
-env:
-  global:
-    - /usr/local/bin:$PATH
-
-matrix:
-  include:
-    - compiler: gcc
-      addons:
-        apt:
-          packages:
-            - lcov
-      env: COMPILER=g++ C_COMPILER=gcc BUILD_TYPE=Coverage
-    - compiler: gcc
-      env: COMPILER=g++ C_COMPILER=gcc BUILD_TYPE=Debug
-    - compiler: gcc
-      env: COMPILER=g++ C_COMPILER=gcc BUILD_TYPE=Release
-    - compiler: gcc
-      addons:
-        apt:
-          packages:
-            - g++-multilib
-      env: COMPILER=g++ C_COMPILER=gcc BUILD_TYPE=Debug BUILD_32_BITS=ON
-    - compiler: gcc
-      addons:
-        apt:
-          packages:
-            - g++-multilib
-      env: COMPILER=g++ C_COMPILER=gcc BUILD_TYPE=Release BUILD_32_BITS=ON
-    - compiler: gcc
-      addons:
-        apt:
-          sources:
-            - ubuntu-toolchain-r-test
-          packages:
-            - g++-6
-      env:
-        - COMPILER=g++-6 C_COMPILER=gcc-6  BUILD_TYPE=Debug
-        - EXTRA_FLAGS="-fno-omit-frame-pointer -g -O2 -fsanitize=undefined,address -fuse-ld=gold"
-    - compiler: clang
-      env: COMPILER=clang++ C_COMPILER=clang BUILD_TYPE=Debug
-    - compiler: clang
-      env: COMPILER=clang++ C_COMPILER=clang BUILD_TYPE=Release
-    # Clang w/ libc++
-    - compiler: clang
-      addons:
-        apt:
-          packages:
-            clang-3.8
-      env:
-        - COMPILER=clang++-3.8 C_COMPILER=clang-3.8 BUILD_TYPE=Debug
-        - LIBCXX_BUILD=1
-        - EXTRA_FLAGS="-stdlib=libc++"
-    - compiler: clang
-      addons:
-        apt:
-          packages:
-            clang-3.8
-      env:
-        - COMPILER=clang++-3.8 C_COMPILER=clang-3.8 BUILD_TYPE=Release
-        - LIBCXX_BUILD=1
-        - EXTRA_FLAGS="-stdlib=libc++"
-    # Clang w/ 32bit libc++
-    - compiler: clang
-      addons:
-        apt:
-          packages:
-            - clang-3.8
-            - g++-multilib
-      env:
-        - COMPILER=clang++-3.8 C_COMPILER=clang-3.8 BUILD_TYPE=Debug
-        - LIBCXX_BUILD=1
-        - BUILD_32_BITS=ON
-        - EXTRA_FLAGS="-stdlib=libc++ -m32"
-    # Clang w/ 32bit libc++
-    - compiler: clang
-      addons:
-        apt:
-          packages:
-            - clang-3.8
-            - g++-multilib
-      env:
-        - COMPILER=clang++-3.8 C_COMPILER=clang-3.8 BUILD_TYPE=Release
-        - LIBCXX_BUILD=1
-        - BUILD_32_BITS=ON
-        - EXTRA_FLAGS="-stdlib=libc++ -m32"
-    # Clang w/ libc++, ASAN, UBSAN
-    - compiler: clang
-      addons:
-        apt:
-          packages:
-            clang-3.8
-      env:
-        - COMPILER=clang++-3.8 C_COMPILER=clang-3.8 BUILD_TYPE=Debug
-        - LIBCXX_BUILD=1 LIBCXX_SANITIZER="Undefined;Address"
-        - EXTRA_FLAGS="-stdlib=libc++ -g -O2 -fno-omit-frame-pointer -fsanitize=undefined,address -fno-sanitize-recover=all"
-        - UBSAN_OPTIONS=print_stacktrace=1
-    # Clang w/ libc++ and MSAN
-    - compiler: clang
-      addons:
-        apt:
-          packages:
-            clang-3.8
-      env:
-        - COMPILER=clang++-3.8 C_COMPILER=clang-3.8 BUILD_TYPE=Debug
-        - LIBCXX_BUILD=1 LIBCXX_SANITIZER=MemoryWithOrigins
-        - EXTRA_FLAGS="-stdlib=libc++ -g -O2 -fno-omit-frame-pointer -fsanitize=memory -fsanitize-memory-track-origins"
-    # Clang w/ libc++ and MSAN
-    - compiler: clang
-      addons:
-        apt:
-          packages:
-            clang-3.8
-      env:
-        - COMPILER=clang++-3.8 C_COMPILER=clang-3.8 BUILD_TYPE=RelWithDebInfo
-        - LIBCXX_BUILD=1 LIBCXX_SANITIZER=Thread
-        - EXTRA_FLAGS="-stdlib=libc++ -g -O2 -fno-omit-frame-pointer -fsanitize=thread -fno-sanitize-recover=all"
-
-    - os: osx
-      osx_image: xcode8.3
-      compiler: clang
-      env:
-        - COMPILER=clang++ BUILD_TYPE=Debug
-    - os: osx
-      osx_image: xcode8.3
-      compiler: clang
-      env:
-        - COMPILER=clang++ BUILD_TYPE=Release
-
-before_script:
-  - if [ -z "$BUILD_32_BITS" ]; then
-      export BUILD_32_BITS=OFF && echo disabling 32 bit build;
-    fi
-  - if [ -n "${LIBCXX_BUILD}" ]; then
-      source .travis-libcxx-setup.sh;
-    fi
-  - mkdir build && cd build
-
-install:
-  - if [ "${BUILD_TYPE}" == "Coverage" -a "${TRAVIS_OS_NAME}" == "linux" ]; then
-      PATH=~/.local/bin:${PATH};
-      pip install --user --upgrade pip;
-      pip install --user cpp-coveralls;
-    fi
-
-script:
-  - cmake -DCMAKE_C_COMPILER=${C_COMPILER} -DCMAKE_CXX_COMPILER=${COMPILER} -DCMAKE_BUILD_TYPE=${BUILD_TYPE} -DCMAKE_CXX_FLAGS="${EXTRA_FLAGS}" -DBENCHMARK_BUILD_32_BITS=${BUILD_32_BITS} ..
-  - make
-  - ctest -C ${BUILD_TYPE} --output-on-failure
-
-after_success:
-  - if [ "${BUILD_TYPE}" == "Coverage" -a "${TRAVIS_OS_NAME}" == "linux" ]; then
-      coveralls --include src --include include --gcov-options '\-lp' --root .. --build-root .;
-    fi

Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/.ycm_extra_conf.py
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/.ycm_extra_conf.py?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/.ycm_extra_conf.py (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/.ycm_extra_conf.py (removed)
@@ -1,115 +0,0 @@
-import os
-import ycm_core
-
-# These are the compilation flags that will be used in case there's no
-# compilation database set (by default, one is not set).
-# CHANGE THIS LIST OF FLAGS. YES, THIS IS THE DROID YOU HAVE BEEN LOOKING FOR.
-flags = [
-'-Wall',
-'-Werror',
-'-pendantic-errors',
-'-std=c++0x',
-'-fno-strict-aliasing',
-'-O3',
-'-DNDEBUG',
-# ...and the same thing goes for the magic -x option which specifies the
-# language that the files to be compiled are written in. This is mostly
-# relevant for c++ headers.
-# For a C project, you would set this to 'c' instead of 'c++'.
-'-x', 'c++',
-'-I', 'include',
-'-isystem', '/usr/include',
-'-isystem', '/usr/local/include',
-]
-
-
-# Set this to the absolute path to the folder (NOT the file!) containing the
-# compile_commands.json file to use that instead of 'flags'. See here for
-# more details: http://clang.llvm.org/docs/JSONCompilationDatabase.html
-#
-# Most projects will NOT need to set this to anything; you can just change the
-# 'flags' list of compilation flags. Notice that YCM itself uses that approach.
-compilation_database_folder = ''
-
-if os.path.exists( compilation_database_folder ):
-  database = ycm_core.CompilationDatabase( compilation_database_folder )
-else:
-  database = None
-
-SOURCE_EXTENSIONS = [ '.cc' ]
-
-def DirectoryOfThisScript():
-  return os.path.dirname( os.path.abspath( __file__ ) )
-
-
-def MakeRelativePathsInFlagsAbsolute( flags, working_directory ):
-  if not working_directory:
-    return list( flags )
-  new_flags = []
-  make_next_absolute = False
-  path_flags = [ '-isystem', '-I', '-iquote', '--sysroot=' ]
-  for flag in flags:
-    new_flag = flag
-
-    if make_next_absolute:
-      make_next_absolute = False
-      if not flag.startswith( '/' ):
-        new_flag = os.path.join( working_directory, flag )
-
-    for path_flag in path_flags:
-      if flag == path_flag:
-        make_next_absolute = True
-        break
-
-      if flag.startswith( path_flag ):
-        path = flag[ len( path_flag ): ]
-        new_flag = path_flag + os.path.join( working_directory, path )
-        break
-
-    if new_flag:
-      new_flags.append( new_flag )
-  return new_flags
-
-
-def IsHeaderFile( filename ):
-  extension = os.path.splitext( filename )[ 1 ]
-  return extension in [ '.h', '.hxx', '.hpp', '.hh' ]
-
-
-def GetCompilationInfoForFile( filename ):
-  # The compilation_commands.json file generated by CMake does not have entries
-  # for header files. So we do our best by asking the db for flags for a
-  # corresponding source file, if any. If one exists, the flags for that file
-  # should be good enough.
-  if IsHeaderFile( filename ):
-    basename = os.path.splitext( filename )[ 0 ]
-    for extension in SOURCE_EXTENSIONS:
-      replacement_file = basename + extension
-      if os.path.exists( replacement_file ):
-        compilation_info = database.GetCompilationInfoForFile(
-          replacement_file )
-        if compilation_info.compiler_flags_:
-          return compilation_info
-    return None
-  return database.GetCompilationInfoForFile( filename )
-
-
-def FlagsForFile( filename, **kwargs ):
-  if database:
-    # Bear in mind that compilation_info.compiler_flags_ does NOT return a
-    # python list, but a "list-like" StringVec object
-    compilation_info = GetCompilationInfoForFile( filename )
-    if not compilation_info:
-      return None
-
-    final_flags = MakeRelativePathsInFlagsAbsolute(
-      compilation_info.compiler_flags_,
-      compilation_info.compiler_working_dir_ )
-  else:
-    relative_to = DirectoryOfThisScript()
-    final_flags = MakeRelativePathsInFlagsAbsolute( flags, relative_to )
-
-  return {
-    'flags': final_flags,
-    'do_cache': True
-  }

Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/AUTHORS
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/AUTHORS?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/AUTHORS (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/AUTHORS (removed)
@@ -1,40 +0,0 @@
-# This is the official list of benchmark authors for copyright purposes.
-# This file is distinct from the CONTRIBUTORS files.
-# See the latter for an explanation.
-#
-# Names should be added to this file as:
-#	Name or Organization <email address>
-# The email address is not required for organizations.
-#
-# Please keep the list sorted.
-
-Albert Pretorius <pretoalb at gmail.com>
-Arne Beer <arne at twobeer.de>
-Christopher Seymour <chris.j.seymour at hotmail.com>
-David Coeurjolly <david.coeurjolly at liris.cnrs.fr>
-Dominic Hamon <dma at stripysock.com>
-Eric Fiselier <eric at efcs.ca>
-Eugene Zhuk <eugene.zhuk at gmail.com>
-Evgeny Safronov <division494 at gmail.com>
-Felix Homann <linuxaudio at showlabor.de>
-Google Inc.
-International Business Machines Corporation
-Ismael Jimenez Martinez <ismael.jimenez.martinez at gmail.com>
-Jern-Kuan Leong <jernkuan at gmail.com>
-Joao Paulo Magalhaes <joaoppmagalhaes at gmail.com>
-JianXiong Zhou <zhoujianxiong2 at gmail.com>
-Jussi Knuuttila <jussi.knuuttila at gmail.com>
-Kaito Udagawa <umireon at gmail.com>
-Lei Xu <eddyxu at gmail.com>
-Matt Clarkson <mattyclarkson at gmail.com>
-Maxim Vafin <maxvafin at gmail.com>
-Nick Hutchinson <nshutchinson at gmail.com>
-Oleksandr Sochka <sasha.sochka at gmail.com>
-Paul Redmond <paul.redmond at gmail.com>
-Radoslav Yovchev <radoslav.tm at gmail.com>
-Shuo Chen <chenshuo at chenshuo.com>
-Yixuan Qiu <yixuanq at gmail.com>
-Yusuke Suzuki <utatane.tea at gmail.com>
-Dirac Research 
-Zbigniew Skowron <zbychs at gmail.com>
-Dominik Czarnota <dominik.b.czarnota at gmail.com>

Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/CMakeLists.txt
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/CMakeLists.txt?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/CMakeLists.txt (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/CMakeLists.txt (removed)
@@ -1,202 +0,0 @@
-cmake_minimum_required (VERSION 2.8.12)
-
-project (benchmark)
-
-foreach(p
-    CMP0054 # CMake 3.1
-    CMP0056 # export EXE_LINKER_FLAGS to try_run
-    )
-  if(POLICY ${p})
-    cmake_policy(SET ${p} NEW)
-  endif()
-endforeach()
-
-option(BENCHMARK_ENABLE_TESTING "Enable testing of the benchmark library." ON)
-option(BENCHMARK_ENABLE_EXCEPTIONS "Enable the use of exceptions in the benchmark library." ON)
-option(BENCHMARK_ENABLE_LTO "Enable link time optimisation of the benchmark library." OFF)
-option(BENCHMARK_USE_LIBCXX "Build and test using libc++ as the standard library." OFF)
-option(BENCHMARK_BUILD_32_BITS "Build a 32 bit version of the library" OFF)
-
-# Make sure we can import out CMake functions
-list(APPEND CMAKE_MODULE_PATH "${CMAKE_CURRENT_SOURCE_DIR}/cmake")
-
-# Read the git tags to determine the project version
-include(GetGitVersion)
-get_git_version(GIT_VERSION)
-
-# Tell the user what versions we are using
-string(REGEX MATCH "[0-9]+\\.[0-9]+\\.[0-9]+" VERSION ${GIT_VERSION})
-message("-- Version: ${VERSION}")
-
-# The version of the libraries
-set(GENERIC_LIB_VERSION ${VERSION})
-string(SUBSTRING ${VERSION} 0 1 GENERIC_LIB_SOVERSION)
-
-# Import our CMake modules
-include(CheckCXXCompilerFlag)
-include(AddCXXCompilerFlag)
-include(CXXFeatureCheck)
-
-if (BENCHMARK_BUILD_32_BITS)
-  add_required_cxx_compiler_flag(-m32)
-endif()
-
-if ("${CMAKE_CXX_COMPILER_ID}" STREQUAL "MSVC")
-  # Turn compiler warnings up to 11
-  string(REGEX REPLACE "[-/]W[1-4]" "" CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS}")
-  set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /W4")
-  add_definitions(-D_CRT_SECURE_NO_WARNINGS)
-
-  if (NOT BENCHMARK_ENABLE_EXCEPTIONS)
-    add_cxx_compiler_flag(-EHs-)
-    add_cxx_compiler_flag(-EHa-)
-  endif()
-  # Link time optimisation
-  if (BENCHMARK_ENABLE_LTO)
-    set(CMAKE_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_RELEASE} /GL")
-    set(CMAKE_STATIC_LINKER_FLAGS_RELEASE "${CMAKE_STATIC_LINKER_FLAGS_RELEASE} /LTCG")
-    set(CMAKE_SHARED_LINKER_FLAGS_RELEASE "${CMAKE_SHARED_LINKER_FLAGS_RELEASE} /LTCG")
-    set(CMAKE_EXE_LINKER_FLAGS_RELEASE "${CMAKE_EXE_LINKER_FLAGS_RELEASE} /LTCG")
-
-    set(CMAKE_CXX_FLAGS_RELWITHDEBINFO "${CMAKE_CXX_FLAGS_RELWITHDEBINFO} /GL")
-    string(REGEX REPLACE "[-/]INCREMENTAL" "/INCREMENTAL:NO" CMAKE_STATIC_LINKER_FLAGS_RELWITHDEBINFO "${CMAKE_STATIC_LINKER_FLAGS_RELWITHDEBINFO}")
-    set(CMAKE_STATIC_LINKER_FLAGS_RELWITHDEBINFO "${CMAKE_STATIC_LINKER_FLAGS_RELWITHDEBINFO} /LTCG")
-    string(REGEX REPLACE "[-/]INCREMENTAL" "/INCREMENTAL:NO" CMAKE_SHARED_LINKER_FLAGS_RELWITHDEBINFO "${CMAKE_SHARED_LINKER_FLAGS_RELWITHDEBINFO}")
-    set(CMAKE_SHARED_LINKER_FLAGS_RELWITHDEBINFO "${CMAKE_SHARED_LINKER_FLAGS_RELWITHDEBINFO} /LTCG")
-    string(REGEX REPLACE "[-/]INCREMENTAL" "/INCREMENTAL:NO" CMAKE_EXE_LINKER_FLAGS_RELWITHDEBINFO "${CMAKE_EXE_LINKER_FLAGS_RELWITHDEBINFO}")
-    set(CMAKE_EXE_LINKER_FLAGS_RELWITHDEBINFO "${CMAKE_EXE_LINKER_FLAGS_RELWITHDEBINFO} /LTCG")
-
-    set(CMAKE_CXX_FLAGS_MINSIZEREL "${CMAKE_CXX_FLAGS_MINSIZEREL} /GL")
-    set(CMAKE_STATIC_LINKER_FLAGS_MINSIZEREL "${CMAKE_STATIC_LINKER_FLAGS_MINSIZEREL} /LTCG")
-    set(CMAKE_SHARED_LINKER_FLAGS_MINSIZEREL "${CMAKE_SHARED_LINKER_FLAGS_MINSIZEREL} /LTCG")
-    set(CMAKE_EXE_LINKER_FLAGS_MINSIZEREL "${CMAKE_EXE_LINKER_FLAGS_MINSIZEREL} /LTCG")
-  endif()
-else()
-  # Try and enable C++11. Don't use C++14 because it doesn't work in some
-  # configurations.
-  add_cxx_compiler_flag(-std=c++11)
-  if (NOT HAVE_CXX_FLAG_STD_CXX11)
-    add_cxx_compiler_flag(-std=c++0x)
-  endif()
-
-  # Turn compiler warnings up to 11
-  add_cxx_compiler_flag(-Wall)
-
-  add_cxx_compiler_flag(-Wextra)
-  add_cxx_compiler_flag(-Wshadow)
-  add_cxx_compiler_flag(-Werror RELEASE)
-  add_cxx_compiler_flag(-Werror RELWITHDEBINFO)
-  add_cxx_compiler_flag(-Werror MINSIZEREL)
-  add_cxx_compiler_flag(-pedantic)
-  add_cxx_compiler_flag(-pedantic-errors)
-  add_cxx_compiler_flag(-Wshorten-64-to-32)
-  add_cxx_compiler_flag(-Wfloat-equal)
-  add_cxx_compiler_flag(-fstrict-aliasing)
-  if (NOT BENCHMARK_ENABLE_EXCEPTIONS)
-    add_cxx_compiler_flag(-fno-exceptions)
-  endif()
-  if (NOT BENCHMARK_USE_LIBCXX)
-    add_cxx_compiler_flag(-Wzero-as-null-pointer-constant)
-  endif()
-  if (HAVE_CXX_FLAG_FSTRICT_ALIASING)
-    if (NOT CMAKE_CXX_COMPILER_ID STREQUAL "Intel") #ICC17u2: Many false positives for Wstrict-aliasing
-      add_cxx_compiler_flag(-Wstrict-aliasing)
-    endif()
-  endif()
-  # ICC17u2: overloaded virtual function "benchmark::Fixture::SetUp" is only partially overridden
-  # (because of deprecated overload)
-  add_cxx_compiler_flag(-wd654)  
-  add_cxx_compiler_flag(-Wthread-safety)
-  if (HAVE_CXX_FLAG_WTHREAD_SAFETY)
-    cxx_feature_check(THREAD_SAFETY_ATTRIBUTES)
-  endif()
-
-  # On most UNIX like platforms g++ and clang++ define _GNU_SOURCE as a
-  # predefined macro, which turns on all of the wonderful libc extensions.
-  # However g++ doesn't do this in Cygwin so we have to define it ourselfs
-  # since we depend on GNU/POSIX/BSD extensions.
-  if (CYGWIN)
-    add_definitions(-D_GNU_SOURCE=1)
-  endif()
-
-  # Link time optimisation
-  if (BENCHMARK_ENABLE_LTO)
-    add_cxx_compiler_flag(-flto)
-    if ("${CMAKE_C_COMPILER_ID}" STREQUAL "GNU")
-      find_program(GCC_AR gcc-ar)
-      if (GCC_AR)
-        set(CMAKE_AR ${GCC_AR})
-      endif()
-      find_program(GCC_RANLIB gcc-ranlib)
-      if (GCC_RANLIB)
-        set(CMAKE_RANLIB ${GCC_RANLIB})
-      endif()
-    endif()
-  endif()
-
-  # Coverage build type
-  set(CMAKE_CXX_FLAGS_COVERAGE "${CMAKE_CXX_FLAGS_DEBUG}" CACHE STRING
-    "Flags used by the C++ compiler during coverage builds."
-    FORCE)
-  set(CMAKE_EXE_LINKER_FLAGS_COVERAGE
-    "${CMAKE_EXE_LINKER_FLAGS_DEBUG}" CACHE STRING
-    "Flags used for linking binaries during coverage builds."
-    FORCE)
-  set(CMAKE_SHARED_LINKER_FLAGS_COVERAGE
-    "${CMAKE_SHARED_LINKER_FLAGS_DEBUG}" CACHE STRING
-    "Flags used by the shared libraries linker during coverage builds."
-    FORCE)
-  mark_as_advanced(
-    CMAKE_CXX_FLAGS_COVERAGE
-    CMAKE_EXE_LINKER_FLAGS_COVERAGE
-    CMAKE_SHARED_LINKER_FLAGS_COVERAGE)
-  set(CMAKE_BUILD_TYPE "${CMAKE_BUILD_TYPE}" CACHE STRING
-    "Choose the type of build, options are: None Debug Release RelWithDebInfo MinSizeRel Coverage."
-    FORCE)
-  add_cxx_compiler_flag(--coverage COVERAGE)
-endif()
-
-if (BENCHMARK_USE_LIBCXX)
-  if ("${CMAKE_CXX_COMPILER_ID}" STREQUAL "Clang")
-    add_cxx_compiler_flag(-stdlib=libc++)
-  elseif ("${CMAKE_CXX_COMPILER_ID}" STREQUAL "GNU" OR
-          "${CMAKE_CXX_COMPILER_ID}" STREQUAL "Intel")
-    add_cxx_compiler_flag(-nostdinc++)
-    message("libc++ header path must be manually specified using CMAKE_CXX_FLAGS")
-    # Adding -nodefaultlibs directly to CMAKE_<TYPE>_LINKER_FLAGS will break
-    # configuration checks such as 'find_package(Threads)'
-    list(APPEND BENCHMARK_CXX_LINKER_FLAGS -nodefaultlibs)
-    # -lc++ cannot be added directly to CMAKE_<TYPE>_LINKER_FLAGS because
-    # linker flags appear before all linker inputs and -lc++ must appear after.
-    list(APPEND BENCHMARK_CXX_LIBRARIES c++)
-  else()
-    message(FATAL "-DBENCHMARK_USE_LIBCXX:BOOL=ON is not supported for compiler")
-  endif()
-endif(BENCHMARK_USE_LIBCXX)
-
-# C++ feature checks
-# Determine the correct regular expression engine to use
-cxx_feature_check(STD_REGEX)
-cxx_feature_check(GNU_POSIX_REGEX)
-cxx_feature_check(POSIX_REGEX)
-if(NOT HAVE_STD_REGEX AND NOT HAVE_GNU_POSIX_REGEX AND NOT HAVE_POSIX_REGEX)
-  message(FATAL_ERROR "Failed to determine the source files for the regular expression backend")
-endif()
-if (NOT BENCHMARK_ENABLE_EXCEPTIONS AND HAVE_STD_REGEX
-        AND NOT HAVE_GNU_POSIX_REGEX AND NOT HAVE_POSIX_REGEX)
-  message(WARNING "Using std::regex with exceptions disabled is not fully supported")
-endif()
-cxx_feature_check(STEADY_CLOCK)
-# Ensure we have pthreads
-find_package(Threads REQUIRED)
-
-# Set up directories
-include_directories(${PROJECT_SOURCE_DIR}/include)
-
-# Build the targets
-add_subdirectory(src)
-
-if (BENCHMARK_ENABLE_TESTING)
-  enable_testing()
-  add_subdirectory(test)
-endif()

Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/CONTRIBUTING.md
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/CONTRIBUTING.md?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/CONTRIBUTING.md (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/CONTRIBUTING.md (removed)
@@ -1,58 +0,0 @@
-# How to contribute #
-
-We'd love to accept your patches and contributions to this project.  There are
-a just a few small guidelines you need to follow.
-
-
-## Contributor License Agreement ##
-
-Contributions to any Google project must be accompanied by a Contributor
-License Agreement.  This is not a copyright **assignment**, it simply gives
-Google permission to use and redistribute your contributions as part of the
-project.
-
-  * If you are an individual writing original source code and you're sure you
-    own the intellectual property, then you'll need to sign an [individual
-    CLA][].
-
-  * If you work for a company that wants to allow you to contribute your work,
-    then you'll need to sign a [corporate CLA][].
-
-You generally only need to submit a CLA once, so if you've already submitted
-one (even if it was for a different project), you probably don't need to do it
-again.
-
-[individual CLA]: https://developers.google.com/open-source/cla/individual
-[corporate CLA]: https://developers.google.com/open-source/cla/corporate
-
-Once your CLA is submitted (or if you already submitted one for
-another Google project), make a commit adding yourself to the
-[AUTHORS][] and [CONTRIBUTORS][] files. This commit can be part
-of your first [pull request][].
-
-[AUTHORS]: AUTHORS
-[CONTRIBUTORS]: CONTRIBUTORS
-
-
-## Submitting a patch ##
-
-  1. It's generally best to start by opening a new issue describing the bug or
-     feature you're intending to fix.  Even if you think it's relatively minor,
-     it's helpful to know what people are working on.  Mention in the initial
-     issue that you are planning to work on that bug or feature so that it can
-     be assigned to you.
-
-  1. Follow the normal process of [forking][] the project, and setup a new
-     branch to work in.  It's important that each group of changes be done in
-     separate branches in order to ensure that a pull request only includes the
-     commits related to that bug or feature.
-
-  1. Do your best to have [well-formed commit messages][] for each change.
-     This provides consistency throughout the project, and ensures that commit
-     messages are able to be formatted properly by various git tools.
-
-  1. Finally, push the commits to your fork and submit a [pull request][].
-
-[forking]: https://help.github.com/articles/fork-a-repo
-[well-formed commit messages]: http://tbaggery.com/2008/04/19/a-note-about-git-commit-messages.html
-[pull request]: https://help.github.com/articles/creating-a-pull-request

Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/CONTRIBUTORS
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/CONTRIBUTORS?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/CONTRIBUTORS (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/CONTRIBUTORS (removed)
@@ -1,59 +0,0 @@
-# People who have agreed to one of the CLAs and can contribute patches.
-# The AUTHORS file lists the copyright holders; this file
-# lists people.  For example, Google employees are listed here
-# but not in AUTHORS, because Google holds the copyright.
-#
-# Names should be added to this file only after verifying that
-# the individual or the individual's organization has agreed to
-# the appropriate Contributor License Agreement, found here:
-#
-# https://developers.google.com/open-source/cla/individual
-# https://developers.google.com/open-source/cla/corporate
-#
-# The agreement for individuals can be filled out on the web.
-#
-# When adding J Random Contributor's name to this file,
-# either J's name or J's organization's name should be
-# added to the AUTHORS file, depending on whether the
-# individual or corporate CLA was used.
-#
-# Names should be added to this file as:
-#     Name <email address>
-#
-# Please keep the list sorted.
-
-Albert Pretorius <pretoalb at gmail.com>
-Arne Beer <arne at twobeer.de>
-Billy Robert O'Neal III <billy.oneal at gmail.com> <bion at microsoft.com>
-Chris Kennelly <ckennelly at google.com> <ckennelly at ckennelly.com>
-Christopher Seymour <chris.j.seymour at hotmail.com>
-David Coeurjolly <david.coeurjolly at liris.cnrs.fr>
-Dominic Hamon <dma at stripysock.com>
-Eric Fiselier <eric at efcs.ca>
-Eugene Zhuk <eugene.zhuk at gmail.com>
-Evgeny Safronov <division494 at gmail.com>
-Felix Homann <linuxaudio at showlabor.de>
-Ismael Jimenez Martinez <ismael.jimenez.martinez at gmail.com>
-Jern-Kuan Leong <jernkuan at gmail.com>
-Joao Paulo Magalhaes <joaoppmagalhaes at gmail.com>
-JianXiong Zhou <zhoujianxiong2 at gmail.com>
-Jussi Knuuttila <jussi.knuuttila at gmail.com>
-Kaito Udagawa <umireon at gmail.com>
-Kai Wolf <kai.wolf at gmail.com>
-Lei Xu <eddyxu at gmail.com>
-Matt Clarkson <mattyclarkson at gmail.com>
-Maxim Vafin <maxvafin at gmail.com>
-Nick Hutchinson <nshutchinson at gmail.com>
-Oleksandr Sochka <sasha.sochka at gmail.com>
-Pascal Leroy <phl at google.com>
-Paul Redmond <paul.redmond at gmail.com>
-Pierre Phaneuf <pphaneuf at google.com>
-Radoslav Yovchev <radoslav.tm at gmail.com>
-Ray Glover <ray.glover at uk.ibm.com>
-Shuo Chen <chenshuo at chenshuo.com>
-Tom Madams <tom.ej.madams at gmail.com> <tmadams at google.com>
-Yixuan Qiu <yixuanq at gmail.com>
-Yusuke Suzuki <utatane.tea at gmail.com>
-Tobias Ulvgård <tobias.ulvgard at dirac.se>
-Zbigniew Skowron <zbychs at gmail.com>
-Dominik Czarnota <dominik.b.czarnota at gmail.com>

Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/LICENSE
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/LICENSE?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/LICENSE (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/LICENSE (removed)
@@ -1,202 +0,0 @@
-
-                                 Apache License
-                           Version 2.0, January 2004
-                        http://www.apache.org/licenses/
-
-   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
-
-   1. Definitions.
-
-      "License" shall mean the terms and conditions for use, reproduction,
-      and distribution as defined by Sections 1 through 9 of this document.
-
-      "Licensor" shall mean the copyright owner or entity authorized by
-      the copyright owner that is granting the License.
-
-      "Legal Entity" shall mean the union of the acting entity and all
-      other entities that control, are controlled by, or are under common
-      control with that entity. For the purposes of this definition,
-      "control" means (i) the power, direct or indirect, to cause the
-      direction or management of such entity, whether by contract or
-      otherwise, or (ii) ownership of fifty percent (50%) or more of the
-      outstanding shares, or (iii) beneficial ownership of such entity.
-
-      "You" (or "Your") shall mean an individual or Legal Entity
-      exercising permissions granted by this License.
-
-      "Source" form shall mean the preferred form for making modifications,
-      including but not limited to software source code, documentation
-      source, and configuration files.
-
-      "Object" form shall mean any form resulting from mechanical
-      transformation or translation of a Source form, including but
-      not limited to compiled object code, generated documentation,
-      and conversions to other media types.
-
-      "Work" shall mean the work of authorship, whether in Source or
-      Object form, made available under the License, as indicated by a
-      copyright notice that is included in or attached to the work
-      (an example is provided in the Appendix below).
-
-      "Derivative Works" shall mean any work, whether in Source or Object
-      form, that is based on (or derived from) the Work and for which the
-      editorial revisions, annotations, elaborations, or other modifications
-      represent, as a whole, an original work of authorship. For the purposes
-      of this License, Derivative Works shall not include works that remain
-      separable from, or merely link (or bind by name) to the interfaces of,
-      the Work and Derivative Works thereof.
-
-      "Contribution" shall mean any work of authorship, including
-      the original version of the Work and any modifications or additions
-      to that Work or Derivative Works thereof, that is intentionally
-      submitted to Licensor for inclusion in the Work by the copyright owner
-      or by an individual or Legal Entity authorized to submit on behalf of
-      the copyright owner. For the purposes of this definition, "submitted"
-      means any form of electronic, verbal, or written communication sent
-      to the Licensor or its representatives, including but not limited to
-      communication on electronic mailing lists, source code control systems,
-      and issue tracking systems that are managed by, or on behalf of, the
-      Licensor for the purpose of discussing and improving the Work, but
-      excluding communication that is conspicuously marked or otherwise
-      designated in writing by the copyright owner as "Not a Contribution."
-
-      "Contributor" shall mean Licensor and any individual or Legal Entity
-      on behalf of whom a Contribution has been received by Licensor and
-      subsequently incorporated within the Work.
-
-   2. Grant of Copyright License. Subject to the terms and conditions of
-      this License, each Contributor hereby grants to You a perpetual,
-      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
-      copyright license to reproduce, prepare Derivative Works of,
-      publicly display, publicly perform, sublicense, and distribute the
-      Work and such Derivative Works in Source or Object form.
-
-   3. Grant of Patent License. Subject to the terms and conditions of
-      this License, each Contributor hereby grants to You a perpetual,
-      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
-      (except as stated in this section) patent license to make, have made,
-      use, offer to sell, sell, import, and otherwise transfer the Work,
-      where such license applies only to those patent claims licensable
-      by such Contributor that are necessarily infringed by their
-      Contribution(s) alone or by combination of their Contribution(s)
-      with the Work to which such Contribution(s) was submitted. If You
-      institute patent litigation against any entity (including a
-      cross-claim or counterclaim in a lawsuit) alleging that the Work
-      or a Contribution incorporated within the Work constitutes direct
-      or contributory patent infringement, then any patent licenses
-      granted to You under this License for that Work shall terminate
-      as of the date such litigation is filed.
-
-   4. Redistribution. You may reproduce and distribute copies of the
-      Work or Derivative Works thereof in any medium, with or without
-      modifications, and in Source or Object form, provided that You
-      meet the following conditions:
-
-      (a) You must give any other recipients of the Work or
-          Derivative Works a copy of this License; and
-
-      (b) You must cause any modified files to carry prominent notices
-          stating that You changed the files; and
-
-      (c) You must retain, in the Source form of any Derivative Works
-          that You distribute, all copyright, patent, trademark, and
-          attribution notices from the Source form of the Work,
-          excluding those notices that do not pertain to any part of
-          the Derivative Works; and
-
-      (d) If the Work includes a "NOTICE" text file as part of its
-          distribution, then any Derivative Works that You distribute must
-          include a readable copy of the attribution notices contained
-          within such NOTICE file, excluding those notices that do not
-          pertain to any part of the Derivative Works, in at least one
-          of the following places: within a NOTICE text file distributed
-          as part of the Derivative Works; within the Source form or
-          documentation, if provided along with the Derivative Works; or,
-          within a display generated by the Derivative Works, if and
-          wherever such third-party notices normally appear. The contents
-          of the NOTICE file are for informational purposes only and
-          do not modify the License. You may add Your own attribution
-          notices within Derivative Works that You distribute, alongside
-          or as an addendum to the NOTICE text from the Work, provided
-          that such additional attribution notices cannot be construed
-          as modifying the License.
-
-      You may add Your own copyright statement to Your modifications and
-      may provide additional or different license terms and conditions
-      for use, reproduction, or distribution of Your modifications, or
-      for any such Derivative Works as a whole, provided Your use,
-      reproduction, and distribution of the Work otherwise complies with
-      the conditions stated in this License.
-
-   5. Submission of Contributions. Unless You explicitly state otherwise,
-      any Contribution intentionally submitted for inclusion in the Work
-      by You to the Licensor shall be under the terms and conditions of
-      this License, without any additional terms or conditions.
-      Notwithstanding the above, nothing herein shall supersede or modify
-      the terms of any separate license agreement you may have executed
-      with Licensor regarding such Contributions.
-
-   6. Trademarks. This License does not grant permission to use the trade
-      names, trademarks, service marks, or product names of the Licensor,
-      except as required for reasonable and customary use in describing the
-      origin of the Work and reproducing the content of the NOTICE file.
-
-   7. Disclaimer of Warranty. Unless required by applicable law or
-      agreed to in writing, Licensor provides the Work (and each
-      Contributor provides its Contributions) on an "AS IS" BASIS,
-      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
-      implied, including, without limitation, any warranties or conditions
-      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
-      PARTICULAR PURPOSE. You are solely responsible for determining the
-      appropriateness of using or redistributing the Work and assume any
-      risks associated with Your exercise of permissions under this License.
-
-   8. Limitation of Liability. In no event and under no legal theory,
-      whether in tort (including negligence), contract, or otherwise,
-      unless required by applicable law (such as deliberate and grossly
-      negligent acts) or agreed to in writing, shall any Contributor be
-      liable to You for damages, including any direct, indirect, special,
-      incidental, or consequential damages of any character arising as a
-      result of this License or out of the use or inability to use the
-      Work (including but not limited to damages for loss of goodwill,
-      work stoppage, computer failure or malfunction, or any and all
-      other commercial damages or losses), even if such Contributor
-      has been advised of the possibility of such damages.
-
-   9. Accepting Warranty or Additional Liability. While redistributing
-      the Work or Derivative Works thereof, You may choose to offer,
-      and charge a fee for, acceptance of support, warranty, indemnity,
-      or other liability obligations and/or rights consistent with this
-      License. However, in accepting such obligations, You may act only
-      on Your own behalf and on Your sole responsibility, not on behalf
-      of any other Contributor, and only if You agree to indemnify,
-      defend, and hold each Contributor harmless for any liability
-      incurred by, or claims asserted against, such Contributor by reason
-      of your accepting any such warranty or additional liability.
-
-   END OF TERMS AND CONDITIONS
-
-   APPENDIX: How to apply the Apache License to your work.
-
-      To apply the Apache License to your work, attach the following
-      boilerplate notice, with the fields enclosed by brackets "[]"
-      replaced with your own identifying information. (Don't include
-      the brackets!)  The text should be enclosed in the appropriate
-      comment syntax for the file format. We also recommend that a
-      file or class name and description of purpose be included on the
-      same "printed page" as the copyright notice for easier
-      identification within third-party archives.
-
-   Copyright [yyyy] [name of copyright owner]
-
-   Licensed under the Apache License, Version 2.0 (the "License");
-   you may not use this file except in compliance with the License.
-   You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.

Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/README.md
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/README.md?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/README.md (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/README.md (removed)
@@ -1,726 +0,0 @@
-# benchmark
-[![Build Status](https://travis-ci.org/google/benchmark.svg?branch=master)](https://travis-ci.org/google/benchmark)
-[![Build status](https://ci.appveyor.com/api/projects/status/u0qsyp7t1tk7cpxs/branch/master?svg=true)](https://ci.appveyor.com/project/google/benchmark/branch/master)
-[![Coverage Status](https://coveralls.io/repos/google/benchmark/badge.svg)](https://coveralls.io/r/google/benchmark)
-
-A library to support the benchmarking of functions, similar to unit-tests.
-
-Discussion group: https://groups.google.com/d/forum/benchmark-discuss
-
-IRC channel: https://freenode.net #googlebenchmark
-
-[Known issues and common problems](#known-issues)
-
-[Additional Tooling Documentation](docs/tools.md)
-
-## Example usage
-### Basic usage
-Define a function that executes the code to be measured.
-
-```c++
-static void BM_StringCreation(benchmark::State& state) {
-  while (state.KeepRunning())
-    std::string empty_string;
-}
-// Register the function as a benchmark
-BENCHMARK(BM_StringCreation);
-
-// Define another benchmark
-static void BM_StringCopy(benchmark::State& state) {
-  std::string x = "hello";
-  while (state.KeepRunning())
-    std::string copy(x);
-}
-BENCHMARK(BM_StringCopy);
-
-BENCHMARK_MAIN();
-```
-
-### Passing arguments
-Sometimes a family of benchmarks can be implemented with just one routine that
-takes an extra argument to specify which one of the family of benchmarks to
-run. For example, the following code defines a family of benchmarks for
-measuring the speed of `memcpy()` calls of different lengths:
-
-```c++
-static void BM_memcpy(benchmark::State& state) {
-  char* src = new char[state.range(0)];
-  char* dst = new char[state.range(0)];
-  memset(src, 'x', state.range(0));
-  while (state.KeepRunning())
-    memcpy(dst, src, state.range(0));
-  state.SetBytesProcessed(int64_t(state.iterations()) *
-                          int64_t(state.range(0)));
-  delete[] src;
-  delete[] dst;
-}
-BENCHMARK(BM_memcpy)->Arg(8)->Arg(64)->Arg(512)->Arg(1<<10)->Arg(8<<10);
-```
-
-The preceding code is quite repetitive, and can be replaced with the following
-short-hand. The following invocation will pick a few appropriate arguments in
-the specified range and will generate a benchmark for each such argument.
-
-```c++
-BENCHMARK(BM_memcpy)->Range(8, 8<<10);
-```
-
-By default the arguments in the range are generated in multiples of eight and
-the command above selects [ 8, 64, 512, 4k, 8k ]. In the following code the
-range multiplier is changed to multiples of two.
-
-```c++
-BENCHMARK(BM_memcpy)->RangeMultiplier(2)->Range(8, 8<<10);
-```
-Now the generated arguments are [ 8, 16, 32, 64, 128, 256, 512, 1024, 2k, 4k, 8k ].
-
-You might have a benchmark that depends on two or more inputs. For example, the
-following code defines a family of benchmarks for measuring the speed of set
-insertion.
-
-```c++
-static void BM_SetInsert(benchmark::State& state) {
-  while (state.KeepRunning()) {
-    state.PauseTiming();
-    std::set<int> data = ConstructRandomSet(state.range(0));
-    state.ResumeTiming();
-    for (int j = 0; j < state.range(1); ++j)
-      data.insert(RandomNumber());
-  }
-}
-BENCHMARK(BM_SetInsert)
-    ->Args({1<<10, 1})
-    ->Args({1<<10, 8})
-    ->Args({1<<10, 64})
-    ->Args({1<<10, 512})
-    ->Args({8<<10, 1})
-    ->Args({8<<10, 8})
-    ->Args({8<<10, 64})
-    ->Args({8<<10, 512});
-```
-
-The preceding code is quite repetitive, and can be replaced with the following
-short-hand. The following macro will pick a few appropriate arguments in the
-product of the two specified ranges and will generate a benchmark for each such
-pair.
-
-```c++
-BENCHMARK(BM_SetInsert)->Ranges({{1<<10, 8<<10}, {1, 512}});
-```
-
-For more complex patterns of inputs, passing a custom function to `Apply` allows
-programmatic specification of an arbitrary set of arguments on which to run the
-benchmark. The following example enumerates a dense range on one parameter,
-and a sparse range on the second.
-
-```c++
-static void CustomArguments(benchmark::internal::Benchmark* b) {
-  for (int i = 0; i <= 10; ++i)
-    for (int j = 32; j <= 1024*1024; j *= 8)
-      b->Args({i, j});
-}
-BENCHMARK(BM_SetInsert)->Apply(CustomArguments);
-```
-
-### Calculate asymptotic complexity (Big O)
-Asymptotic complexity can be calculated for a family of benchmarks. The
-following code will calculate the coefficient for the high-order term in the
-running time and the normalized root-mean-square error for string comparison.
-
-```c++
-static void BM_StringCompare(benchmark::State& state) {
-  std::string s1(state.range(0), '-');
-  std::string s2(state.range(0), '-');
-  while (state.KeepRunning()) {
-    benchmark::DoNotOptimize(s1.compare(s2));
-  }
-  state.SetComplexityN(state.range(0));
-}
-BENCHMARK(BM_StringCompare)
-    ->RangeMultiplier(2)->Range(1<<10, 1<<18)->Complexity(benchmark::oN);
-```
-
-As shown in the following invocation, asymptotic complexity can also be
-calculated automatically.
-
-```c++
-BENCHMARK(BM_StringCompare)
-    ->RangeMultiplier(2)->Range(1<<10, 1<<18)->Complexity();
-```
-
-The following code specifies the asymptotic complexity with a lambda function,
-which can be used to customize the high-order term calculation.
-
-```c++
-BENCHMARK(BM_StringCompare)->RangeMultiplier(2)
-    ->Range(1<<10, 1<<18)->Complexity([](int n)->double{return n; });
-```
-
-### Templated benchmarks
-Templated benchmarks work the same way: this example produces and consumes
-messages of size `sizeof(v)` `state.range(0)` times. It also outputs throughput
-in the absence of multiprogramming.
-
-```c++
-template <class Q> void BM_Sequential(benchmark::State& state) {
-  Q q;
-  typename Q::value_type v;
-  while (state.KeepRunning()) {
-    for (int i = state.range(0); i--; )
-      q.push(v);
-    for (int e = state.range(0); e--; )
-      q.Wait(&v);
-  }
-  // actually messages, not bytes:
-  state.SetBytesProcessed(
-      static_cast<int64_t>(state.iterations())*state.range(0));
-}
-BENCHMARK_TEMPLATE(BM_Sequential, WaitQueue<int>)->Range(1<<0, 1<<10);
-```
-
-Three macros are provided for adding benchmark templates.
-
-```c++
-#if __cplusplus >= 201103L // C++11 and greater.
-#define BENCHMARK_TEMPLATE(func, ...) // Takes any number of parameters.
-#else // C++ < C++11
-#define BENCHMARK_TEMPLATE(func, arg1)
-#endif
-#define BENCHMARK_TEMPLATE1(func, arg1)
-#define BENCHMARK_TEMPLATE2(func, arg1, arg2)
-```
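-
-For example, a short sketch that registers the same `BM_Sequential` benchmark
-shown above using the non-variadic form:
-
-```c++
-// Equivalent to BENCHMARK_TEMPLATE(BM_Sequential, WaitQueue<int>) in C++11.
-BENCHMARK_TEMPLATE1(BM_Sequential, WaitQueue<int>)->Range(1<<0, 1<<10);
-```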
-
-## Passing arbitrary arguments to a benchmark
-In C++11 it is possible to define a benchmark that takes an arbitrary number
-of extra arguments. The `BENCHMARK_CAPTURE(func, test_case_name, ...args)`
-macro creates a benchmark that invokes `func` with the `benchmark::State` as
-the first argument followed by the specified `args...`.
-The `test_case_name` is appended to the name of the benchmark and
-should describe the values passed.
-
-```c++
-template <class ...ExtraArgs>
-void BM_takes_args(benchmark::State& state, ExtraArgs&&... extra_args) {
-  [...]
-}
-// Registers a benchmark named "BM_takes_args/int_string_test" that passes
-// the specified values to `extra_args`.
-BENCHMARK_CAPTURE(BM_takes_args, int_string_test, 42, std::string("abc"));
-```
-Note that elements of `...args` may refer to global variables. Users should
-avoid modifying global state inside of a benchmark.
-
-## Using RegisterBenchmark(name, fn, args...)
-
-The `RegisterBenchmark(name, func, args...)` function provides an alternative
-way to create and register benchmarks.
-`RegisterBenchmark(name, func, args...)` creates, registers, and returns a
-pointer to a new benchmark with the specified `name` that invokes
-`func(st, args...)` where `st` is a `benchmark::State` object.
-
-Unlike the `BENCHMARK` registration macros, which can only be used at global
-scope, `RegisterBenchmark` can be called anywhere. This allows benchmark
-tests to be registered programmatically.
-
-Additionally, `RegisterBenchmark` allows any callable object, including
-capturing lambdas and function objects, to be registered as a benchmark.
-
-For example:
-```c++
-auto BM_test = [](benchmark::State& st, auto Inputs) { /* ... */ };
-
-int main(int argc, char** argv) {
-  for (auto& test_input : { /* ... */ })
-      benchmark::RegisterBenchmark(test_input.name(), BM_test, test_input);
-  benchmark::Initialize(&argc, argv);
-  benchmark::RunSpecifiedBenchmarks();
-}
-```
-
-### Multithreaded benchmarks
-In a multithreaded test (benchmark invoked by multiple threads simultaneously),
-it is guaranteed that none of the threads will start until all have called
-`KeepRunning`, and all will have finished before `KeepRunning` returns false. As
-such, any global setup or teardown can be wrapped in a check against the thread
-index:
-
-```c++
-static void BM_MultiThreaded(benchmark::State& state) {
-  if (state.thread_index == 0) {
-    // Setup code here.
-  }
-  while (state.KeepRunning()) {
-    // Run the test as normal.
-  }
-  if (state.thread_index == 0) {
-    // Teardown code here.
-  }
-}
-BENCHMARK(BM_MultiThreaded)->Threads(2);
-```
-
-If the benchmarked code itself uses threads and you want to compare it to
-single-threaded code, you may want to use real-time ("wallclock") measurements
-for latency comparisons:
-
-```c++
-BENCHMARK(BM_test)->Range(8, 8<<10)->UseRealTime();
-```
-
-Without `UseRealTime`, CPU time is used by default.
-
-
-## Manual timing
-For benchmarking something for which neither CPU time nor real time is
-accurate enough, completely manual timing is supported using
-the `UseManualTime` function.
-
-When `UseManualTime` is used, the benchmarked code must call
-`SetIterationTime` once per iteration of the `KeepRunning` loop to
-report the manually measured time.
-
-An example use case for this is benchmarking GPU execution (e.g. OpenCL
-or CUDA kernels, OpenGL or Vulkan or Direct3D draw calls), which cannot
-be accurately measured using CPU time or real-time. Instead, they can be
-measured accurately using a dedicated API, and these measurement results
-can be reported back with `SetIterationTime`.
-
-```c++
-static void BM_ManualTiming(benchmark::State& state) {
-  int microseconds = state.range(0);
-  std::chrono::duration<double, std::micro> sleep_duration {
-    static_cast<double>(microseconds)
-  };
-
-  while (state.KeepRunning()) {
-    auto start = std::chrono::high_resolution_clock::now();
-    // Simulate some useful workload with a sleep
-    std::this_thread::sleep_for(sleep_duration);
-    auto end   = std::chrono::high_resolution_clock::now();
-
-    auto elapsed_seconds =
-      std::chrono::duration_cast<std::chrono::duration<double>>(
-        end - start);
-
-    state.SetIterationTime(elapsed_seconds.count());
-  }
-}
-BENCHMARK(BM_ManualTiming)->Range(1, 1<<17)->UseManualTime();
-```
-
-### Preventing optimisation
-To prevent a value or expression from being optimized away by the compiler
-the `benchmark::DoNotOptimize(...)` and `benchmark::ClobberMemory()`
-functions can be used.
-
-```c++
-static void BM_test(benchmark::State& state) {
-  while (state.KeepRunning()) {
-    int x = 0;
-    for (int i = 0; i < 64; ++i) {
-      benchmark::DoNotOptimize(x += i);
-    }
-  }
-}
-```
-
-`DoNotOptimize(<expr>)` forces the *result* of `<expr>` to be stored in either
-memory or a register. For GNU-based compilers it acts as a read/write barrier
-for global memory. More specifically, it forces the compiler to flush pending
-writes to memory and reload any other values as necessary.
-
-Note that `DoNotOptimize(<expr>)` does not prevent optimizations on `<expr>`
-in any way. `<expr>` may even be removed entirely when the result is already
-known. For example:
-
-```c++
-  /* Example 1: `<expr>` is removed entirely. */
-  int foo(int x) { return x + 42; }
-  while (...) DoNotOptimize(foo(0)); // Optimized to DoNotOptimize(42);
-
-  /*  Example 2: Result of '<expr>' is only reused */
-  int bar(int) __attribute__((const));
-  while (...) DoNotOptimize(bar(0)); // Optimized to:
-  // int __result__ = bar(0);
-  // while (...) DoNotOptimize(__result__);
-```
-
-The second tool for preventing optimizations is `ClobberMemory()`. In essence
-`ClobberMemory()` forces the compiler to perform all pending writes to global
-memory. Memory managed by block scope objects must be "escaped" using
-`DoNotOptimize(...)` before it can be clobbered. In the example below,
-`ClobberMemory()` prevents the call to `v.push_back(42)` from being optimized
-away.
-
-```c++
-static void BM_vector_push_back(benchmark::State& state) {
-  while (state.KeepRunning()) {
-    std::vector<int> v;
-    v.reserve(1);
-    benchmark::DoNotOptimize(v.data()); // Allow v.data() to be clobbered.
-    v.push_back(42);
-    benchmark::ClobberMemory(); // Force 42 to be written to memory.
-  }
-}
-```
-
-Note that `ClobberMemory()` is only available for GNU or MSVC based compilers.
-
-### Set time unit manually
-If a benchmark runs for a few milliseconds, it may be hard to visually compare
-the measured times, since the output is given in nanoseconds by default. To
-change the time unit, specify it explicitly:
-
-```c++
-BENCHMARK(BM_test)->Unit(benchmark::kMillisecond);
-```
-
-## Controlling number of iterations
-In all cases, the number of iterations for which the benchmark is run is
-governed by the amount of time the benchmark takes. Concretely, the benchmark
-is iterated at least once and at most 1e9 times, until the CPU time exceeds
-the minimum time or the wallclock time reaches 5x the minimum time. The
-minimum time is set with the `--benchmark_min_time` flag, or per benchmark by
-calling `MinTime` on the registered benchmark object.
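-
-For example, a minimal sketch of the per-benchmark form (reusing the `BM_test`
-function from the examples above):
-
-```c++
-// Run BM_test for at least 2 seconds rather than the default minimum time.
-BENCHMARK(BM_test)->MinTime(2.0);
-```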
-
-## Reporting the mean and standard deviation of repeated benchmarks
-
-By default each benchmark is run once and that single result is reported.
-However, benchmarks are often noisy, and a single result may not be
-representative of the overall behavior. For this reason it is possible to
-rerun the benchmark repeatedly.
-
-The number of runs of each benchmark is specified globally by the
-`--benchmark_repetitions` flag or on a per benchmark basis by calling
-`Repetitions` on the registered benchmark object. When a benchmark is run
-more than once the mean and standard deviation of the runs will be reported.
-
-Additionally the `--benchmark_report_aggregates_only={true|false}` flag or
-`ReportAggregatesOnly(bool)` function can be used to change how repeated tests
-are reported. By default the result of each repeated run is reported. When this
-option is `true`, only the mean and standard deviation of the runs are reported.
-Calling `ReportAggregatesOnly(bool)` on a registered benchmark object overrides
-the value of the flag for that benchmark.
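-
-For example, a minimal sketch combining both options on a single benchmark:
-
-```c++
-// Run BM_test 10 times, reporting only the mean and standard deviation.
-BENCHMARK(BM_test)->Repetitions(10)->ReportAggregatesOnly(true);
-```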
-
-## Fixtures
-Fixture tests are created by first defining a type that derives from
-`::benchmark::Fixture` and then creating/registering the tests using the
-following macros:
-
-* `BENCHMARK_F(ClassName, Method)`
-* `BENCHMARK_DEFINE_F(ClassName, Method)`
-* `BENCHMARK_REGISTER_F(ClassName, Method)`
-
-For example:
-
-```c++
-class MyFixture : public benchmark::Fixture {};
-
-BENCHMARK_F(MyFixture, FooTest)(benchmark::State& st) {
-  while (st.KeepRunning()) {
-    ...
-  }
-}
-
-BENCHMARK_DEFINE_F(MyFixture, BarTest)(benchmark::State& st) {
-  while (st.KeepRunning()) {
-    ...
-  }
-}
-/* BarTest is NOT registered */
-BENCHMARK_REGISTER_F(MyFixture, BarTest)->Threads(2);
-/* BarTest is now registered */
-```
-
-
-## User-defined counters
-
-You can add your own counters with user-defined names. The example below
-will add columns "Foo", "Bar" and "Baz" to its output:
-
-```c++
-static void UserCountersExample1(benchmark::State& state) {
-  double numFoos = 0, numBars = 0, numBazs = 0;
-  while (state.KeepRunning()) {
-    // ... count Foo,Bar,Baz events
-  }
-  state.counters["Foo"] = numFoos;
-  state.counters["Bar"] = numBars;
-  state.counters["Baz"] = numBazs;
-}
-```
-
-The `state.counters` object is a `std::map` with `std::string` keys
-and `Counter` values. The latter is a `double`-like class, via an implicit
-conversion to `double&`. Thus you can use all of the standard arithmetic
-assignment operators (`=,+=,-=,*=,/=`) to change the value of each counter.
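-
-For instance, a small sketch (using a hypothetical `UserCountersExample2`
-function) that accumulates a counter across iterations with `+=`:
-
-```c++
-static void UserCountersExample2(benchmark::State& state) {
-  state.counters["Foo"] = 0;
-  while (state.KeepRunning()) {
-    // ... count one Foo event per iteration ...
-    state.counters["Foo"] += 1;  // compound assignment works like a double
-  }
-}
-```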
-
-In multithreaded benchmarks, each counter is set on the calling thread only.
-When the benchmark finishes, the counters from each thread will be summed;
-the resulting sum is the value which will be shown for the benchmark.
-
-The `Counter` constructor accepts two parameters: the value as a `double`
-and a bit flag which allows you to show counters as rates and/or as
-per-thread averages:
-
-```c++
-  // sets a simple counter
-  state.counters["Foo"] = numFoos;
-
-  // Set the counter as a rate. It will be presented divided
-  // by the duration of the benchmark.
-  state.counters["FooRate"] = Counter(numFoos, benchmark::Counter::kIsRate);
-
-  // Set the counter as a thread-average quantity. It will
-  // be presented divided by the number of threads.
-  state.counters["FooAvg"] = Counter(numFoos, benchmark::Counter::kAvgThreads);
-
-  // There's also a combined flag:
-  state.counters["FooAvgRate"] = Counter(numFoos,benchmark::Counter::kAvgThreadsRate);
-```
-
-When you're compiling in C++11 mode or later you can use `insert()` with
-`std::initializer_list`:
-
-```c++
-  // With C++11, this can be done:
-  state.counters.insert({{"Foo", numFoos}, {"Bar", numBars}, {"Baz", numBazs}});
-  // ... instead of:
-  state.counters["Foo"] = numFoos;
-  state.counters["Bar"] = numBars;
-  state.counters["Baz"] = numBazs;
-```
-
-### Counter reporting
-
-When using the console reporter, by default, user counters are printed at
-the end after the table, the same way as ``bytes_processed`` and
-``items_processed``. This is best for cases in which there are few counters,
-or where there are only a couple of lines per benchmark. Here's an example of
-the default output:
-
-```
-------------------------------------------------------------------------------
-Benchmark                        Time           CPU Iterations UserCounters...
-------------------------------------------------------------------------------
-BM_UserCounter/threads:8      2248 ns      10277 ns      68808 Bar=16 Bat=40 Baz=24 Foo=8
-BM_UserCounter/threads:1      9797 ns       9788 ns      71523 Bar=2 Bat=5 Baz=3 Foo=1024m
-BM_UserCounter/threads:2      4924 ns       9842 ns      71036 Bar=4 Bat=10 Baz=6 Foo=2
-BM_UserCounter/threads:4      2589 ns      10284 ns      68012 Bar=8 Bat=20 Baz=12 Foo=4
-BM_UserCounter/threads:8      2212 ns      10287 ns      68040 Bar=16 Bat=40 Baz=24 Foo=8
-BM_UserCounter/threads:16     1782 ns      10278 ns      68144 Bar=32 Bat=80 Baz=48 Foo=16
-BM_UserCounter/threads:32     1291 ns      10296 ns      68256 Bar=64 Bat=160 Baz=96 Foo=32
-BM_UserCounter/threads:4      2615 ns      10307 ns      68040 Bar=8 Bat=20 Baz=12 Foo=4
-BM_Factorial                    26 ns         26 ns   26608979 40320
-BM_Factorial/real_time          26 ns         26 ns   26587936 40320
-BM_CalculatePiRange/1           16 ns         16 ns   45704255 0
-BM_CalculatePiRange/8           73 ns         73 ns    9520927 3.28374
-BM_CalculatePiRange/64         609 ns        609 ns    1140647 3.15746
-BM_CalculatePiRange/512       4900 ns       4901 ns     142696 3.14355
-```
-
-If this doesn't suit you, you can print each counter as a table column by
-passing the flag `--benchmark_counters_tabular=true` to the benchmark
-application. This is best for cases in which there are a lot of counters, or
-a lot of lines per individual benchmark. Note that this will trigger a
-reprinting of the table header any time the counter set changes between
-individual benchmarks. Here's an example of corresponding output when
-`--benchmark_counters_tabular=true` is passed:
-
-```
----------------------------------------------------------------------------------------
-Benchmark                        Time           CPU Iterations    Bar   Bat   Baz   Foo
----------------------------------------------------------------------------------------
-BM_UserCounter/threads:8      2198 ns       9953 ns      70688     16    40    24     8
-BM_UserCounter/threads:1      9504 ns       9504 ns      73787      2     5     3     1
-BM_UserCounter/threads:2      4775 ns       9550 ns      72606      4    10     6     2
-BM_UserCounter/threads:4      2508 ns       9951 ns      70332      8    20    12     4
-BM_UserCounter/threads:8      2055 ns       9933 ns      70344     16    40    24     8
-BM_UserCounter/threads:16     1610 ns       9946 ns      70720     32    80    48    16
-BM_UserCounter/threads:32     1192 ns       9948 ns      70496     64   160    96    32
-BM_UserCounter/threads:4      2506 ns       9949 ns      70332      8    20    12     4
---------------------------------------------------------------
-Benchmark                        Time           CPU Iterations
---------------------------------------------------------------
-BM_Factorial                    26 ns         26 ns   26392245 40320
-BM_Factorial/real_time          26 ns         26 ns   26494107 40320
-BM_CalculatePiRange/1           15 ns         15 ns   45571597 0
-BM_CalculatePiRange/8           74 ns         74 ns    9450212 3.28374
-BM_CalculatePiRange/64         595 ns        595 ns    1173901 3.15746
-BM_CalculatePiRange/512       4752 ns       4752 ns     147380 3.14355
-BM_CalculatePiRange/4k       37970 ns      37972 ns      18453 3.14184
-BM_CalculatePiRange/32k     303733 ns     303744 ns       2305 3.14162
-BM_CalculatePiRange/256k   2434095 ns    2434186 ns        288 3.1416
-BM_CalculatePiRange/1024k  9721140 ns    9721413 ns         71 3.14159
-BM_CalculatePi/threads:8      2255 ns       9943 ns      70936
-```
-Note above the additional header printed when the benchmark changes from
-``BM_UserCounter`` to ``BM_Factorial``. This is because ``BM_Factorial`` does
-not have the same counter set as ``BM_UserCounter``.
-
-## Exiting Benchmarks in Error
-
-When an error caused by external influences, such as file I/O or network
-communication, occurs within a benchmark, the
-`State::SkipWithError(const char* msg)` function can be used to skip that run
-of the benchmark and report the error. Note that only future iterations of the
-`KeepRunning()` loop are skipped. Users may explicitly `return` to exit the
-benchmark immediately.
-
-The `SkipWithError(...)` function may be used at any point within the benchmark,
-including before and after the `KeepRunning()` loop.
-
-For example:
-
-```c++
-static void BM_test(benchmark::State& state) {
-  auto resource = GetResource();
-  if (!resource.good()) {
-    state.SkipWithError("Resource is not good!");
-    // KeepRunning() loop will not be entered.
-  }
-  while (state.KeepRunning()) {
-    auto data = resource.read_data();
-    if (!resource.good()) {
-      state.SkipWithError("Failed to read data!");
-      break; // Needed to skip the rest of the iteration.
-    }
-    do_stuff(data);
-  }
-}
-```
-
-## Running a subset of the benchmarks
-
-The `--benchmark_filter=<regex>` option can be used to only run the benchmarks
-which match the specified `<regex>`. For example:
-
-```bash
-$ ./run_benchmarks.x --benchmark_filter=BM_memcpy/32
-Run on (1 X 2300 MHz CPU )
-2016-06-25 19:34:24
-Benchmark              Time           CPU Iterations
-----------------------------------------------------
-BM_memcpy/32          11 ns         11 ns   79545455
-BM_memcpy/32k       2181 ns       2185 ns     324074
-BM_memcpy/32          12 ns         12 ns   54687500
-BM_memcpy/32k       1834 ns       1837 ns     357143
-```
-
-
-## Output Formats
-The library supports multiple output formats. Use the
-`--benchmark_format=<console|json|csv>` flag to set the format type. `console`
-is the default format.
-
-The Console format is intended to be a human-readable format. By default
-the format generates color output. Context is output on stderr and the
-tabular data on stdout. Example tabular output looks like:
-```
-Benchmark                               Time(ns)    CPU(ns) Iterations
-----------------------------------------------------------------------
-BM_SetInsert/1024/1                        28928      29349      23853  133.097kB/s   33.2742k items/s
-BM_SetInsert/1024/8                        32065      32913      21375  949.487kB/s   237.372k items/s
-BM_SetInsert/1024/10                       33157      33648      21431  1.13369MB/s   290.225k items/s
-```
-
-The JSON format outputs human-readable JSON split into two top-level
-attributes. The `context` attribute contains information about the run in
-general, including information about the CPU and the date.
-The `benchmarks` attribute contains a list of every benchmark run. Example JSON
-output looks like:
-```json
-{
-  "context": {
-    "date": "2015/03/17-18:40:25",
-    "num_cpus": 40,
-    "mhz_per_cpu": 2801,
-    "cpu_scaling_enabled": false,
-    "build_type": "debug"
-  },
-  "benchmarks": [
-    {
-      "name": "BM_SetInsert/1024/1",
-      "iterations": 94877,
-      "real_time": 29275,
-      "cpu_time": 29836,
-      "bytes_per_second": 134066,
-      "items_per_second": 33516
-    },
-    {
-      "name": "BM_SetInsert/1024/8",
-      "iterations": 21609,
-      "real_time": 32317,
-      "cpu_time": 32429,
-      "bytes_per_second": 986770,
-      "items_per_second": 246693
-    },
-    {
-      "name": "BM_SetInsert/1024/10",
-      "iterations": 21393,
-      "real_time": 32724,
-      "cpu_time": 33355,
-      "bytes_per_second": 1199226,
-      "items_per_second": 299807
-    }
-  ]
-}
-```
-
-The CSV format outputs comma-separated values. The `context` is output on stderr
-and the CSV itself on stdout. Example CSV output looks like:
-```
-name,iterations,real_time,cpu_time,bytes_per_second,items_per_second,label
-"BM_SetInsert/1024/1",65465,17890.7,8407.45,475768,118942,
-"BM_SetInsert/1024/8",116606,18810.1,9766.64,3.27646e+06,819115,
-"BM_SetInsert/1024/10",106365,17238.4,8421.53,4.74973e+06,1.18743e+06,
-```
-
-## Output Files
-The library supports writing the output of the benchmark to a file specified
-by `--benchmark_out=<filename>`. The format of the output can be specified
-using `--benchmark_out_format={json|console|csv}`. Specifying
-`--benchmark_out` does not suppress the console output.
-
-## Debug vs Release
-By default, benchmark builds as a debug library. You will see a warning in the output when this is the case. To build it as a release library instead, use:
-
-```
-cmake -DCMAKE_BUILD_TYPE=Release
-```
-
-To enable link-time optimisation, use
-
-```
-cmake -DCMAKE_BUILD_TYPE=Release -DBENCHMARK_ENABLE_LTO=true
-```
-
-## Linking against the library
-When using gcc, it is necessary to link against pthread to avoid runtime exceptions.
-This is due to how gcc implements `std::thread`.
-See [issue #67](https://github.com/google/benchmark/issues/67) for more details.
-
-## Compiler Support
-
-Google Benchmark uses C++11 when building the library. As such we require
-a modern C++ toolchain, both compiler and standard library.
-
-The following minimum versions are strongly recommended to build the library:
-
-* GCC 4.8
-* Clang 3.4
-* Visual Studio 2013
-* Intel 2015 Update 1
-
-Anything older *may* work.
-
-Note: Using the library and its headers in C++03 is supported. C++11 is only
-required to build the library.
-
-# Known Issues
-
-### Windows
-
-* Users must manually link `shlwapi.lib`. Failure to do so may result
-in unresolved symbols.
-

Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/appveyor.yml
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/appveyor.yml?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/appveyor.yml (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/appveyor.yml (removed)
@@ -1,56 +0,0 @@
-version: '{build}'
-
-image: Visual Studio 2017
-
-configuration:
-  - Debug
-  - Release
-
-environment:
-  matrix:
-    - compiler: msvc-15-seh
-      generator: "Visual Studio 15 2017"
-
-    - compiler: msvc-15-seh
-      generator: "Visual Studio 15 2017 Win64"
-
-    - compiler: msvc-14-seh
-      generator: "Visual Studio 14 2015"
-
-    - compiler: msvc-14-seh
-      generator: "Visual Studio 14 2015 Win64"
-
-    - compiler: msvc-12-seh
-      generator: "Visual Studio 12 2013"
-
-    - compiler: msvc-12-seh
-      generator: "Visual Studio 12 2013 Win64"
-
-    - compiler: gcc-5.3.0-posix
-      generator: "MinGW Makefiles"
-      cxx_path: 'C:\mingw-w64\i686-5.3.0-posix-dwarf-rt_v4-rev0\mingw32\bin'
-      APPVEYOR_BUILD_WORKER_IMAGE: Visual Studio 2015
-
-matrix:
-  fast_finish: true
-
-install:
-  # git bash conflicts with MinGW makefiles
-  - if "%generator%"=="MinGW Makefiles" (set "PATH=%PATH:C:\Program Files\Git\usr\bin;=%")
-  - if not "%cxx_path%"=="" (set "PATH=%PATH%;%cxx_path%")
-
-build_script:
-  - md _build -Force
-  - cd _build
-  - echo %configuration%
-  - cmake -G "%generator%" "-DCMAKE_BUILD_TYPE=%configuration%" ..
-  - cmake --build . --config %configuration%
-
-test_script:
-  - ctest -c %configuration% --timeout 300 --output-on-failure
-
-artifacts:
-  - path: '_build/CMakeFiles/*.log'
-    name: logs
-  - path: '_build/Testing/**/*.xml'
-    name: test_results

Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/cmake/AddCXXCompilerFlag.cmake
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/cmake/AddCXXCompilerFlag.cmake?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/cmake/AddCXXCompilerFlag.cmake (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/cmake/AddCXXCompilerFlag.cmake (removed)
@@ -1,64 +0,0 @@
-# - Adds a compiler flag if it is supported by the compiler
-#
-# This function checks that the supplied compiler flag is supported and then
-# adds it to the corresponding compiler flags
-#
-#  add_cxx_compiler_flag(<FLAG> [<VARIANT>])
-#
-# - Example
-#
-# include(AddCXXCompilerFlag)
-# add_cxx_compiler_flag(-Wall)
-# add_cxx_compiler_flag(-no-strict-aliasing RELEASE)
-# Requires CMake 2.6+
-
-if(__add_cxx_compiler_flag)
-  return()
-endif()
-set(__add_cxx_compiler_flag INCLUDED)
-
-include(CheckCXXCompilerFlag)
-
-function(mangle_compiler_flag FLAG OUTPUT)
-  string(TOUPPER "HAVE_CXX_FLAG_${FLAG}" SANITIZED_FLAG)
-  string(REPLACE "+" "X" SANITIZED_FLAG ${SANITIZED_FLAG})
-  string(REGEX REPLACE "[^A-Za-z_0-9]" "_" SANITIZED_FLAG ${SANITIZED_FLAG})
-  string(REGEX REPLACE "_+" "_" SANITIZED_FLAG ${SANITIZED_FLAG})
-  set(${OUTPUT} "${SANITIZED_FLAG}" PARENT_SCOPE)
-endfunction(mangle_compiler_flag)
-
-function(add_cxx_compiler_flag FLAG)
-  mangle_compiler_flag("${FLAG}" MANGLED_FLAG)
-  set(OLD_CMAKE_REQUIRED_FLAGS "${CMAKE_REQUIRED_FLAGS}")
-  set(CMAKE_REQUIRED_FLAGS "${CMAKE_REQUIRED_FLAGS} ${FLAG}")
-  check_cxx_compiler_flag("${FLAG}" ${MANGLED_FLAG})
-  set(CMAKE_REQUIRED_FLAGS "${OLD_CMAKE_REQUIRED_FLAGS}")
-  if(${MANGLED_FLAG})
-    set(VARIANT ${ARGV1})
-    if(ARGV1)
-      string(TOUPPER "_${VARIANT}" VARIANT)
-    endif()
-    set(CMAKE_CXX_FLAGS${VARIANT} "${CMAKE_CXX_FLAGS${VARIANT}} ${FLAG}" PARENT_SCOPE)
-  endif()
-endfunction()
-
-function(add_required_cxx_compiler_flag FLAG)
-  mangle_compiler_flag("${FLAG}" MANGLED_FLAG)
-  set(OLD_CMAKE_REQUIRED_FLAGS "${CMAKE_REQUIRED_FLAGS}")
-  set(CMAKE_REQUIRED_FLAGS "${CMAKE_REQUIRED_FLAGS} ${FLAG}")
-  check_cxx_compiler_flag("${FLAG}" ${MANGLED_FLAG})
-  set(CMAKE_REQUIRED_FLAGS "${OLD_CMAKE_REQUIRED_FLAGS}")
-  if(${MANGLED_FLAG})
-    set(VARIANT ${ARGV1})
-    if(ARGV1)
-      string(TOUPPER "_${VARIANT}" VARIANT)
-    endif()
-    set(CMAKE_CXX_FLAGS${VARIANT} "${CMAKE_CXX_FLAGS${VARIANT}} ${FLAG}" PARENT_SCOPE)
-    set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} ${FLAG}" PARENT_SCOPE)
-    set(CMAKE_SHARED_LINKER_FLAGS "${CMAKE_SHARED_LINKER_FLAGS} ${FLAG}" PARENT_SCOPE)
-    set(CMAKE_MODULE_LINKER_FLAGS "${CMAKE_MODULE_LINKER_FLAGS} ${FLAG}" PARENT_SCOPE)
-    set(CMAKE_REQUIRED_FLAGS "${CMAKE_REQUIRED_FLAGS} ${FLAG}" PARENT_SCOPE)
-  else()
-    message(FATAL_ERROR "Required flag '${FLAG}' is not supported by the compiler")
-  endif()
-endfunction()

Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/cmake/CXXFeatureCheck.cmake
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/cmake/CXXFeatureCheck.cmake?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/cmake/CXXFeatureCheck.cmake (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/cmake/CXXFeatureCheck.cmake (removed)
@@ -1,46 +0,0 @@
-# - Compile and run code to check for C++ features
-#
-# This function compiles a source file under the `cmake` folder
-# and adds the corresponding `HAVE_[FILENAME]` flag to the CMake
-# environment
-#
-#  cxx_feature_check(<FLAG> [<VARIANT>])
-#
-# - Example
-#
-# include(CXXFeatureCheck)
-# cxx_feature_check(STD_REGEX)
-# Requires CMake 2.8.12+
-
-if(__cxx_feature_check)
-  return()
-endif()
-set(__cxx_feature_check INCLUDED)
-
-function(cxx_feature_check FILE)
-  string(TOLOWER ${FILE} FILE)
-  string(TOUPPER ${FILE} VAR)
-  string(TOUPPER "HAVE_${VAR}" FEATURE)
-  if (DEFINED HAVE_${VAR})
-    set(HAVE_${VAR} 1 CACHE INTERNAL "Feature test for ${FILE}" PARENT_SCOPE)
-    add_definitions(-DHAVE_${VAR})
-    return()
-  endif()
-  message("-- Performing Test ${FEATURE}")
-  try_run(RUN_${FEATURE} COMPILE_${FEATURE}
-          ${CMAKE_BINARY_DIR} ${CMAKE_CURRENT_SOURCE_DIR}/cmake/${FILE}.cpp
-          CMAKE_FLAGS ${BENCHMARK_CXX_LINKER_FLAGS}
-          LINK_LIBRARIES ${BENCHMARK_CXX_LIBRARIES})
-  if(RUN_${FEATURE} EQUAL 0)
-    message("-- Performing Test ${FEATURE} -- success")
-    set(HAVE_${VAR} 1 CACHE INTERNAL "Feature test for ${FILE}" PARENT_SCOPE)
-    add_definitions(-DHAVE_${VAR})
-  else()
-    if(NOT COMPILE_${FEATURE})
-      message("-- Performing Test ${FEATURE} -- failed to compile")
-    else()
-      message("-- Performing Test ${FEATURE} -- compiled but failed to run")
-    endif()
-  endif()
-endfunction()
-

Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/cmake/Config.cmake.in
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/cmake/Config.cmake.in?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/cmake/Config.cmake.in (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/cmake/Config.cmake.in (removed)
@@ -1 +0,0 @@
-include("${CMAKE_CURRENT_LIST_DIR}/@targets_export_name at .cmake")

Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/cmake/GetGitVersion.cmake
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/cmake/GetGitVersion.cmake?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/cmake/GetGitVersion.cmake (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/cmake/GetGitVersion.cmake (removed)
@@ -1,51 +0,0 @@
-# - Returns a version string from Git tags
-#
-# This function inspects the annotated git tags for the project and returns a string
-# into a CMake variable
-#
-#  get_git_version(<var>)
-#
-# - Example
-#
-# include(GetGitVersion)
-# get_git_version(GIT_VERSION)
-#
-# Requires CMake 2.8.11+
-find_package(Git)
-
-if(__get_git_version)
-  return()
-endif()
-set(__get_git_version INCLUDED)
-
-function(get_git_version var)
-  if(GIT_EXECUTABLE)
-      execute_process(COMMAND ${GIT_EXECUTABLE} describe --match "v[0-9]*.[0-9]*.[0-9]*" --abbrev=8
-          RESULT_VARIABLE status
-          OUTPUT_VARIABLE GIT_VERSION
-          ERROR_QUIET)
-      if(${status})
-          set(GIT_VERSION "v0.0.0")
-      else()
-          string(STRIP ${GIT_VERSION} GIT_VERSION)
-          string(REGEX REPLACE "-[0-9]+-g" "-" GIT_VERSION ${GIT_VERSION})
-      endif()
-
-      # Work out if the repository is dirty
-      execute_process(COMMAND ${GIT_EXECUTABLE} update-index -q --refresh
-          OUTPUT_QUIET
-          ERROR_QUIET)
-      execute_process(COMMAND ${GIT_EXECUTABLE} diff-index --name-only HEAD --
-          OUTPUT_VARIABLE GIT_DIFF_INDEX
-          ERROR_QUIET)
-      string(COMPARE NOTEQUAL "${GIT_DIFF_INDEX}" "" GIT_DIRTY)
-      if (${GIT_DIRTY})
-          set(GIT_VERSION "${GIT_VERSION}-dirty")
-      endif()
-  else()
-      set(GIT_VERSION "v0.0.0")
-  endif()
-
-  message("-- git Version: ${GIT_VERSION}")
-  set(${var} ${GIT_VERSION} PARENT_SCOPE)
-endfunction()

Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/cmake/gnu_posix_regex.cpp
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/cmake/gnu_posix_regex.cpp?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/cmake/gnu_posix_regex.cpp (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/cmake/gnu_posix_regex.cpp (removed)
@@ -1,12 +0,0 @@
-#include <gnuregex.h>
-#include <string>
-int main() {
-  std::string str = "test0159";
-  regex_t re;
-  int ec = regcomp(&re, "^[a-z]+[0-9]+$", REG_EXTENDED | REG_NOSUB);
-  if (ec != 0) {
-    return ec;
-  }
-  return regexec(&re, str.c_str(), 0, nullptr, 0) ? -1 : 0;
-}
-

Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/cmake/posix_regex.cpp
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/cmake/posix_regex.cpp?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/cmake/posix_regex.cpp (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/cmake/posix_regex.cpp (removed)
@@ -1,14 +0,0 @@
-#include <regex.h>
-#include <string>
-int main() {
-  std::string str = "test0159";
-  regex_t re;
-  int ec = regcomp(&re, "^[a-z]+[0-9]+$", REG_EXTENDED | REG_NOSUB);
-  if (ec != 0) {
-    return ec;
-  }
-  int ret = regexec(&re, str.c_str(), 0, nullptr, 0) ? -1 : 0;
-  regfree(&re);
-  return ret;
-}
-

Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/cmake/std_regex.cpp
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/cmake/std_regex.cpp?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/cmake/std_regex.cpp (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/cmake/std_regex.cpp (removed)
@@ -1,10 +0,0 @@
-#include <regex>
-#include <string>
-int main() {
-  const std::string str = "test0159";
-  std::regex re;
-  re = std::regex("^[a-z]+[0-9]+$",
-       std::regex_constants::extended | std::regex_constants::nosubs);
-  return std::regex_search(str, re) ? 0 : -1;
-}
-

Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/cmake/steady_clock.cpp
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/cmake/steady_clock.cpp?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/cmake/steady_clock.cpp (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/cmake/steady_clock.cpp (removed)
@@ -1,7 +0,0 @@
-#include <chrono>
-
-int main() {
-    typedef std::chrono::steady_clock Clock;
-    Clock::time_point tp = Clock::now();
-    ((void)tp);
-}

Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/cmake/thread_safety_attributes.cpp
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/cmake/thread_safety_attributes.cpp?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/cmake/thread_safety_attributes.cpp (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/cmake/thread_safety_attributes.cpp (removed)
@@ -1,4 +0,0 @@
-#define HAVE_THREAD_SAFETY_ATTRIBUTES
-#include "../src/mutex.h"
-
-int main() {}

Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/docs/tools.md
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/docs/tools.md?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/docs/tools.md (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/docs/tools.md (removed)
@@ -1,59 +0,0 @@
-# Benchmark Tools
-
-## compare_bench.py
-
-The `compare_bench.py` utility can be used to compare the results of two
-benchmarks. The program is invoked like:
-
-``` bash
-$ compare_bench.py <old-benchmark> <new-benchmark> [benchmark options]...
-```
-
-Where `<old-benchmark>` and `<new-benchmark>` either specify a benchmark executable file, or a JSON output file. The type of the input file is automatically detected. If a benchmark executable is specified then the benchmark is run to obtain the results. Otherwise the results are simply loaded from the output file.
-
-The sample output using the JSON test files under `Inputs/` gives:
-
-``` bash
-$ ./compare_bench.py ./gbench/Inputs/test1_run1.json ./gbench/Inputs/test1_run2.json
-Comparing ./gbench/Inputs/test1_run1.json to ./gbench/Inputs/test1_run2.json
-Benchmark                   Time           CPU
-----------------------------------------------
-BM_SameTimes               +0.00         +0.00
-BM_2xFaster                -0.50         -0.50
-BM_2xSlower                +1.00         +1.00
-BM_10PercentFaster         -0.10         -0.10
-BM_10PercentSlower         +0.10         +0.10
-```
-
-When a benchmark executable is run, the raw output from the benchmark is printed in real time to stdout. The sample output using `benchmark/basic_test` for both arguments looks like:
-
-```
-./compare_bench.py  test/basic_test test/basic_test  --benchmark_filter=BM_empty.*
-RUNNING: test/basic_test --benchmark_filter=BM_empty.*
-Run on (4 X 4228.32 MHz CPU s)
-2016-08-02 19:21:33
-Benchmark                              Time           CPU Iterations
---------------------------------------------------------------------
-BM_empty                               9 ns          9 ns   79545455
-BM_empty/threads:4                     4 ns          9 ns   75268816
-BM_empty_stop_start                    8 ns          8 ns   83333333
-BM_empty_stop_start/threads:4          3 ns          8 ns   83333332
-RUNNING: test/basic_test --benchmark_filter=BM_empty.*
-Run on (4 X 4228.32 MHz CPU s)
-2016-08-02 19:21:35
-Benchmark                              Time           CPU Iterations
---------------------------------------------------------------------
-BM_empty                               9 ns          9 ns   76086957
-BM_empty/threads:4                     4 ns          9 ns   76086956
-BM_empty_stop_start                    8 ns          8 ns   87500000
-BM_empty_stop_start/threads:4          3 ns          8 ns   88607596
-Comparing test/basic_test to test/basic_test
-Benchmark                              Time           CPU
----------------------------------------------------------
-BM_empty                              +0.00         +0.00
-BM_empty/threads:4                    +0.00         +0.00
-BM_empty_stop_start                   +0.00         +0.00
-BM_empty_stop_start/threads:4         +0.00         +0.00
-```
-
-Obviously this example doesn't give any useful output, but it's intended to show the output format when 'compare_bench.py' needs to run benchmarks.

Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/include/benchmark/benchmark.h
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/include/benchmark/benchmark.h?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/include/benchmark/benchmark.h (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/include/benchmark/benchmark.h (removed)
@@ -1,1210 +0,0 @@
-// Copyright 2015 Google Inc. All rights reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-//     http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-// Support for registering benchmarks for functions.
-
-/* Example usage:
-// Define a function that executes the code to be measured a
-// specified number of times:
-static void BM_StringCreation(benchmark::State& state) {
-  while (state.KeepRunning())
-    std::string empty_string;
-}
-
-// Register the function as a benchmark
-BENCHMARK(BM_StringCreation);
-
-// Define another benchmark
-static void BM_StringCopy(benchmark::State& state) {
-  std::string x = "hello";
-  while (state.KeepRunning())
-    std::string copy(x);
-}
-BENCHMARK(BM_StringCopy);
-
-// Augment the main() program to invoke benchmarks if specified
-// via the --benchmarks command line flag.  E.g.,
-//       my_unittest --benchmark_filter=all
-//       my_unittest --benchmark_filter=BM_StringCreation
-//       my_unittest --benchmark_filter=String
-//       my_unittest --benchmark_filter='Copy|Creation'
-int main(int argc, char** argv) {
-  benchmark::Initialize(&argc, argv);
-  benchmark::RunSpecifiedBenchmarks();
-  return 0;
-}
-
-// Sometimes a family of microbenchmarks can be implemented with
-// just one routine that takes an extra argument to specify which
-// one of the family of benchmarks to run.  For example, the following
-// code defines a family of microbenchmarks for measuring the speed
-// of memcpy() calls of different lengths:
-
-static void BM_memcpy(benchmark::State& state) {
-  char* src = new char[state.range(0)]; char* dst = new char[state.range(0)];
-  memset(src, 'x', state.range(0));
-  while (state.KeepRunning())
-    memcpy(dst, src, state.range(0));
-  state.SetBytesProcessed(int64_t(state.iterations()) *
-                          int64_t(state.range(0)));
-  delete[] src; delete[] dst;
-}
-BENCHMARK(BM_memcpy)->Arg(8)->Arg(64)->Arg(512)->Arg(1<<10)->Arg(8<<10);
-
-// The preceding code is quite repetitive, and can be replaced with the
-// following short-hand.  The following invocation will pick a few
-// appropriate arguments in the specified range and will generate a
-// microbenchmark for each such argument.
-BENCHMARK(BM_memcpy)->Range(8, 8<<10);
-
-// You might have a microbenchmark that depends on two inputs.  For
-// example, the following code defines a family of microbenchmarks for
-// measuring the speed of set insertion.
-static void BM_SetInsert(benchmark::State& state) {
-  while (state.KeepRunning()) {
-    state.PauseTiming();
-    set<int> data = ConstructRandomSet(state.range(0));
-    state.ResumeTiming();
-    for (int j = 0; j < state.range(1); ++j)
-      data.insert(RandomNumber());
-  }
-}
-BENCHMARK(BM_SetInsert)
-   ->Args({1<<10, 1})
-   ->Args({1<<10, 8})
-   ->Args({1<<10, 64})
-   ->Args({1<<10, 512})
-   ->Args({8<<10, 1})
-   ->Args({8<<10, 8})
-   ->Args({8<<10, 64})
-   ->Args({8<<10, 512});
-
-// The preceding code is quite repetitive, and can be replaced with
-// the following short-hand.  The following macro will pick a few
-// appropriate arguments in the product of the two specified ranges
-// and will generate a microbenchmark for each such pair.
-BENCHMARK(BM_SetInsert)->Ranges({{1<<10, 8<<10}, {1, 512}});
-
-// For more complex patterns of inputs, passing a custom function
-// to Apply allows programmatic specification of an
-// arbitrary set of arguments to run the microbenchmark on.
-// The following example enumerates a dense range on
-// one parameter, and a sparse range on the second.
-static void CustomArguments(benchmark::internal::Benchmark* b) {
-  for (int i = 0; i <= 10; ++i)
-    for (int j = 32; j <= 1024*1024; j *= 8)
-      b->Args({i, j});
-}
-BENCHMARK(BM_SetInsert)->Apply(CustomArguments);
-
-// Templated microbenchmarks work the same way:
-// Produce then consume 'size' messages 'iters' times
-// Measures throughput in the absence of multiprogramming.
-template <class Q> void BM_Sequential(benchmark::State& state) {
-  Q q;
-  typename Q::value_type v;
-  while (state.KeepRunning()) {
-    for (int i = state.range(0); i--; )
-      q.push(v);
-    for (int e = state.range(0); e--; )
-      q.Wait(&v);
-  }
-  // actually messages, not bytes:
-  state.SetBytesProcessed(
-      static_cast<int64_t>(state.iterations())*state.range(0));
-}
-BENCHMARK_TEMPLATE(BM_Sequential, WaitQueue<int>)->Range(1<<0, 1<<10);
-
-Use `Benchmark::MinTime(double t)` to set the minimum time used to run the
-benchmark. This option overrides the `benchmark_min_time` flag.
-
-void BM_test(benchmark::State& state) {
- ... body ...
-}
-BENCHMARK(BM_test)->MinTime(2.0); // Run for at least 2 seconds.
-
-In a multithreaded test, it is guaranteed that none of the threads will start
-until all have called KeepRunning, and all will have finished before KeepRunning
-returns false. As such, any global setup or teardown you want to do can be
-wrapped in a check against the thread index:
-
-static void BM_MultiThreaded(benchmark::State& state) {
-  if (state.thread_index == 0) {
-    // Setup code here.
-  }
-  while (state.KeepRunning()) {
-    // Run the test as normal.
-  }
-  if (state.thread_index == 0) {
-    // Teardown code here.
-  }
-}
-BENCHMARK(BM_MultiThreaded)->Threads(4);
-
-
-If a benchmark runs for only a few milliseconds it may be hard to visually
-compare the measured times, since the output data is reported in nanoseconds
-by default. To set the time unit manually, specify it at registration time:
-
-BENCHMARK(BM_test)->Unit(benchmark::kMillisecond);
-*/
-
-#ifndef BENCHMARK_BENCHMARK_H_
-#define BENCHMARK_BENCHMARK_H_
-
-
-#if __cplusplus >= 201103L
-#define BENCHMARK_HAS_CXX11
-#endif
-
-#include <stdint.h>
-
-#include <cassert>
-#include <cstddef>
-#include <iosfwd>
-#include <string>
-#include <vector>
-#include <map>
-#include <set>
-
-#if defined(BENCHMARK_HAS_CXX11)
-#include <type_traits>
-#include <initializer_list>
-#include <utility>
-#endif
-
-#if defined(_MSC_VER)
-#include <intrin.h> // for _ReadWriteBarrier
-#endif
-
-#ifndef BENCHMARK_HAS_CXX11
-#define BENCHMARK_DISALLOW_COPY_AND_ASSIGN(TypeName) \
-  TypeName(const TypeName&);                         \
-  TypeName& operator=(const TypeName&)
-#else
-#define BENCHMARK_DISALLOW_COPY_AND_ASSIGN(TypeName) \
-  TypeName(const TypeName&) = delete;                \
-  TypeName& operator=(const TypeName&) = delete
-#endif
-
-#if defined(__GNUC__)
-#define BENCHMARK_UNUSED __attribute__((unused))
-#define BENCHMARK_ALWAYS_INLINE __attribute__((always_inline))
-#define BENCHMARK_NOEXCEPT noexcept
-#define BENCHMARK_NOEXCEPT_OP(x) noexcept(x)
-#elif defined(_MSC_VER) && !defined(__clang__)
-#define BENCHMARK_UNUSED
-#define BENCHMARK_ALWAYS_INLINE __forceinline
-#if _MSC_VER >= 1900
-#define BENCHMARK_NOEXCEPT noexcept
-#define BENCHMARK_NOEXCEPT_OP(x) noexcept(x)
-#else
-#define BENCHMARK_NOEXCEPT
-#define BENCHMARK_NOEXCEPT_OP(x)
-#endif
-#define __func__ __FUNCTION__
-#else
-#define BENCHMARK_UNUSED
-#define BENCHMARK_ALWAYS_INLINE
-#define BENCHMARK_NOEXCEPT
-#define BENCHMARK_NOEXCEPT_OP(x)
-#endif
-
-#define BENCHMARK_INTERNAL_TOSTRING2(x) #x
-#define BENCHMARK_INTERNAL_TOSTRING(x) BENCHMARK_INTERNAL_TOSTRING2(x)
-
-#if defined(__GNUC__)
-#define BENCHMARK_BUILTIN_EXPECT(x, y) __builtin_expect(x, y)
-#define BENCHMARK_DEPRECATED_MSG(msg) __attribute__((deprecated(msg)))
-#else
-#define BENCHMARK_BUILTIN_EXPECT(x, y) x
-#define BENCHMARK_DEPRECATED_MSG(msg)
-#define BENCHMARK_WARNING_MSG(msg) __pragma(message(__FILE__ "(" BENCHMARK_INTERNAL_TOSTRING(__LINE__) ") : warning note: " msg))
-#endif
-
-#if defined(__GNUC__) && !defined(__clang__)
-#define BENCHMARK_GCC_VERSION (__GNUC__ * 100 + __GNUC_MINOR__)
-#endif
-
-
-namespace benchmark {
-class BenchmarkReporter;
-
-void Initialize(int* argc, char** argv);
-
-// Report to stdout all arguments in 'argv' as unrecognized except the first.
-// Returns true if there is at least one unrecognized argument (i.e. 'argc' > 1).
-bool ReportUnrecognizedArguments(int argc, char** argv);
-
-// Generate a list of benchmarks matching the specified --benchmark_filter flag
-// and if --benchmark_list_tests is specified return after printing the name
-// of each matching benchmark. Otherwise run each matching benchmark and
-// report the results.
-//
-// The second and third overloads use the specified 'console_reporter' and
-// 'file_reporter' respectively. 'file_reporter' will write to the file
-// specified by '--benchmark_output'. If '--benchmark_output' is not given,
-// the 'file_reporter' is ignored.
-//
-// RETURNS: The number of matching benchmarks.
-size_t RunSpecifiedBenchmarks();
-size_t RunSpecifiedBenchmarks(BenchmarkReporter* console_reporter);
-size_t RunSpecifiedBenchmarks(BenchmarkReporter* console_reporter,
-                              BenchmarkReporter* file_reporter);
-
-// If this routine is called, peak memory allocation past this point in the
-// benchmark is reported at the end of the benchmark report line. (It is
-// computed by running the benchmark once with a single iteration and a memory
-// tracer.)
-// TODO(dominic)
-// void MemoryUsage();
-
-namespace internal {
-class Benchmark;
-class BenchmarkImp;
-class BenchmarkFamilies;
-
-void UseCharPointer(char const volatile*);
-
-// Take ownership of the pointer and register the benchmark. Return the
-// registered benchmark.
-Benchmark* RegisterBenchmarkInternal(Benchmark*);
-
-// Ensure that the standard streams are properly initialized in every TU.
-int InitializeStreams();
-BENCHMARK_UNUSED static int stream_init_anchor = InitializeStreams();
-
-}  // namespace internal
-
-
-#if !defined(__GNUC__) || defined(__pnacl__) || defined(EMSCRIPTEN)
-# define BENCHMARK_HAS_NO_INLINE_ASSEMBLY
-#endif
-
-// The DoNotOptimize(...) function can be used to prevent a value or
-// expression from being optimized away by the compiler. This function is
-// intended to add little to no overhead.
-// See: https://youtu.be/nXaxk27zwlk?t=2441
-#ifndef BENCHMARK_HAS_NO_INLINE_ASSEMBLY
-template <class Tp>
-inline BENCHMARK_ALWAYS_INLINE void DoNotOptimize(Tp const& value) {
-  // Clang doesn't like the 'X' constraint on `value` and certain GCC versions
-  // don't like the 'g' constraint. Attempt to placate them both.
-#if defined(__clang__)
-  asm volatile("" : : "g"(value) : "memory");
-#else
-  asm volatile("" : : "i,r,m"(value) : "memory");
-#endif
-}
-// Force the compiler to flush pending writes to global memory. Acts as an
-// effective read/write barrier
-inline BENCHMARK_ALWAYS_INLINE void ClobberMemory() {
-  asm volatile("" : : : "memory");
-}
-#elif defined(_MSC_VER)
-template <class Tp>
-inline BENCHMARK_ALWAYS_INLINE void DoNotOptimize(Tp const& value) {
-  internal::UseCharPointer(&reinterpret_cast<char const volatile&>(value));
-  _ReadWriteBarrier();
-}
-
-inline BENCHMARK_ALWAYS_INLINE void ClobberMemory() {
-  _ReadWriteBarrier();
-}
-#else
-template <class Tp>
-inline BENCHMARK_ALWAYS_INLINE void DoNotOptimize(Tp const& value) {
-  internal::UseCharPointer(&reinterpret_cast<char const volatile&>(value));
-}
-// FIXME Add ClobberMemory() for non-gnu and non-msvc compilers
-#endif
-
-
-
-// This class is used for user-defined counters.
-class Counter {
-public:
-
-  enum Flags {
-    kDefaults   = 0,
-    // Mark the counter as a rate. It will be presented divided
-    // by the duration of the benchmark.
-    kIsRate     = 1,
-    // Mark the counter as a thread-average quantity. It will be
-    // presented divided by the number of threads.
-    kAvgThreads = 2,
-    // Mark the counter as a thread-average rate. See above.
-    kAvgThreadsRate = kIsRate|kAvgThreads
-  };
-
-  double value;
-  Flags  flags;
-
-  BENCHMARK_ALWAYS_INLINE
-  Counter(double v = 0., Flags f = kDefaults) : value(v), flags(f) {}
-
-  BENCHMARK_ALWAYS_INLINE operator double const& () const { return value; }
-  BENCHMARK_ALWAYS_INLINE operator double      & ()       { return value; }
-
-};
-
-// This is the container for the user-defined counters.
-typedef std::map<std::string, Counter> UserCounters;
-
-
-// TimeUnit is passed to a benchmark in order to specify the order of magnitude
-// for the measured time.
-enum TimeUnit { kNanosecond, kMicrosecond, kMillisecond };
-
-// BigO is passed to a benchmark in order to specify the asymptotic
-// computational complexity for the benchmark. In case oAuto is selected,
-// the complexity will be calculated automatically as the best fit.
-enum BigO { oNone, o1, oN, oNSquared, oNCubed, oLogN, oNLogN, oAuto, oLambda };
-
-// BigOFunc is passed to a benchmark in order to specify the asymptotic
-// computational complexity for the benchmark.
-typedef double(BigOFunc)(int);
-
-namespace internal {
-class ThreadTimer;
-class ThreadManager;
-
-enum ReportMode
-#if defined(BENCHMARK_HAS_CXX11)
-  : unsigned
-#else
-#endif
-  {
-  RM_Unspecified,  // The mode has not been manually specified
-  RM_Default,      // The mode is user-specified as default.
-  RM_ReportAggregatesOnly
-};
-}  // namespace internal
-
-// State is passed to a running Benchmark and contains state for the
-// benchmark to use.
-class State {
- public:
-  // Returns true if the benchmark should continue through another iteration.
-  // NOTE: A benchmark may not return from the test until KeepRunning() has
-  // returned false.
-  bool KeepRunning() {
-    if (BENCHMARK_BUILTIN_EXPECT(!started_, false)) {
-      StartKeepRunning();
-    }
-    bool const res = total_iterations_++ < max_iterations;
-    if (BENCHMARK_BUILTIN_EXPECT(!res, false)) {
-      FinishKeepRunning();
-    }
-    return res;
-  }
-
-  // REQUIRES: timer is running and 'SkipWithError(...)' has not been called
-  //           by the current thread.
-  // Stop the benchmark timer.  If not called, the timer will be
-  // automatically stopped after KeepRunning() returns false for the first time.
-  //
-  // For threaded benchmarks the PauseTiming() function only pauses the timing
-  // for the current thread.
-  //
-  // NOTE: The "real time" measurement is per-thread. If different threads
-  // report different measurements the largest one is reported.
-  //
-  // NOTE: PauseTiming()/ResumeTiming() are relatively
-  // heavyweight, and so their use should generally be avoided
-  // within each benchmark iteration, if possible.
-  void PauseTiming();
-
-  // REQUIRES: timer is not running and 'SkipWithError(...)' has not been called
-  //           by the current thread.
-  // Start the benchmark timer.  The timer is NOT running on entrance to the
-  // benchmark function. It begins running after the first call to KeepRunning()
-  //
-  // NOTE: PauseTiming()/ResumeTiming() are relatively
-  // heavyweight, and so their use should generally be avoided
-  // within each benchmark iteration, if possible.
-  void ResumeTiming();
-
-  // REQUIRES: 'SkipWithError(...)' has not been called previously by the
-  //            current thread.
-  // Skip any future iterations of the 'KeepRunning()' loop in the current
-  // thread and report an error with the specified 'msg'. After this call
-  // the user may explicitly 'return' from the benchmark.
-  //
-  // For threaded benchmarks only the current thread stops executing and future
-  // calls to `KeepRunning()` will block until all threads have completed
-  // the `KeepRunning()` loop. If multiple threads report an error only the
-  // first error message is used.
-  //
-  // NOTE: Calling 'SkipWithError(...)' does not cause the benchmark to exit
-  // the current scope immediately. If the function is called from within
-  // the 'KeepRunning()' loop, the current iteration will finish. It is the
-  // user's responsibility to exit the scope as needed.
-  void SkipWithError(const char* msg);
-
-  // REQUIRES: called exactly once per iteration of the KeepRunning loop.
-  // Set the manually measured time for this benchmark iteration, which
-  // is used instead of automatically measured time if UseManualTime() was
-  // specified.
-  //
-  // For threaded benchmarks the final value will be set to the largest
-  // reported value.
-  void SetIterationTime(double seconds);
-
-  // Set the number of bytes processed by the current benchmark
-  // execution.  This routine is typically called once at the end of a
-  // throughput oriented benchmark.  If this routine is called with a
-  // value > 0, the report is printed in MB/sec instead of nanoseconds
-  // per iteration.
-  //
-  // REQUIRES: a benchmark has exited its KeepRunning loop.
-  BENCHMARK_ALWAYS_INLINE
-  void SetBytesProcessed(size_t bytes) { bytes_processed_ = bytes; }
-
-  BENCHMARK_ALWAYS_INLINE
-  size_t bytes_processed() const { return bytes_processed_; }
-
-  // If this routine is called with complexity_n > 0 and a complexity report
-  // is requested for the benchmark family, then the current benchmark will be
-  // part of the computation and complexity_n will represent the length of N.
-  BENCHMARK_ALWAYS_INLINE
-  void SetComplexityN(int complexity_n) { complexity_n_ = complexity_n; }
-
-  BENCHMARK_ALWAYS_INLINE
-  int complexity_length_n() { return complexity_n_; }
-
-  // If this routine is called with items > 0, then an items/s
-  // label is printed on the benchmark report line for the currently
-  // executing benchmark. It is typically called at the end of a processing
-  // benchmark where a processing items/second output is desired.
-  //
-  // REQUIRES: a benchmark has exited its KeepRunning loop.
-  BENCHMARK_ALWAYS_INLINE
-  void SetItemsProcessed(size_t items) { items_processed_ = items; }
-
-  BENCHMARK_ALWAYS_INLINE
-  size_t items_processed() const { return items_processed_; }
-
-  // If this routine is called, the specified label is printed at the
-  // end of the benchmark report line for the currently executing
-  // benchmark.  Example:
-  //  static void BM_Compress(benchmark::State& state) {
-  //    ...
-  //    double compression = input_size / output_size;
-  //    state.SetLabel(StringPrintf("compress:%.1f%%", 100.0*compression));
-  //  }
-  // Produces output that looks like:
-  //  BM_Compress   50         50   14115038  compress:27.3%
-  //
-  // REQUIRES: a benchmark has exited its KeepRunning loop.
-  void SetLabel(const char* label);
-
-  void BENCHMARK_ALWAYS_INLINE SetLabel(const std::string& str) {
-    this->SetLabel(str.c_str());
-  }
-
-  // Range arguments for this run. CHECKs if the argument has been set.
-  BENCHMARK_ALWAYS_INLINE
-  int range(std::size_t pos = 0) const {
-    assert(range_.size() > pos);
-    return range_[pos];
-  }
-
-  BENCHMARK_DEPRECATED_MSG("use 'range(0)' instead")
-  int range_x() const { return range(0); }
-
-  BENCHMARK_DEPRECATED_MSG("use 'range(1)' instead")
-  int range_y() const { return range(1); }
-
-  BENCHMARK_ALWAYS_INLINE
-  size_t iterations() const { return total_iterations_; }
-
- private:
-  bool started_;
-  bool finished_;
-  size_t total_iterations_;
-
-  std::vector<int> range_;
-
-  size_t bytes_processed_;
-  size_t items_processed_;
-
-  int complexity_n_;
-
-  bool error_occurred_;
-
- public:
-  // Container for user-defined counters.
-  UserCounters counters;
-  // Index of the executing thread. Values from [0, threads).
-  const int thread_index;
-  // Number of threads concurrently executing the benchmark.
-  const int threads;
-  const size_t max_iterations;
-
-  // TODO(EricWF) make me private
-  State(size_t max_iters, const std::vector<int>& ranges, int thread_i,
-        int n_threads, internal::ThreadTimer* timer,
-        internal::ThreadManager* manager);
-
- private:
-  void StartKeepRunning();
-  void FinishKeepRunning();
-  internal::ThreadTimer* timer_;
-  internal::ThreadManager* manager_;
-  BENCHMARK_DISALLOW_COPY_AND_ASSIGN(State);
-};
-
-namespace internal {
-
-typedef void(Function)(State&);
-
-// ------------------------------------------------------
-// Benchmark registration object.  The BENCHMARK() macro expands
-// into an internal::Benchmark* object.  Various methods can
-// be called on this object to change the properties of the benchmark.
-// Each method returns "this" so that multiple method calls can
-// be chained into one expression.
-class Benchmark {
- public:
-  virtual ~Benchmark();
-
-  // Note: the following methods all return "this" so that multiple
-  // method calls can be chained together in one expression.
-
-  // Run this benchmark once with "x" as the extra argument passed
-  // to the function.
-  // REQUIRES: The function passed to the constructor must accept an arg1.
-  Benchmark* Arg(int x);
-
-  // Run this benchmark with the given time unit for the generated output report
-  Benchmark* Unit(TimeUnit unit);
-
-  // Run this benchmark once for a number of values picked from the
-  // range [start..limit].  (start and limit are always picked.)
-  // REQUIRES: The function passed to the constructor must accept an arg1.
-  Benchmark* Range(int start, int limit);
-
-  // Run this benchmark once for all values in the range [start..limit] with
-  // the specified step.
-  // REQUIRES: The function passed to the constructor must accept an arg1.
-  Benchmark* DenseRange(int start, int limit, int step = 1);
-
-  // Run this benchmark once with "args" as the extra arguments passed
-  // to the function.
-  // REQUIRES: The function passed to the constructor must accept arg1, arg2 ...
-  Benchmark* Args(const std::vector<int>& args);
-
-  // Equivalent to Args({x, y})
-  // NOTE: This is a legacy C++03 interface provided for compatibility only.
-  //   New code should use 'Args'.
-  Benchmark* ArgPair(int x, int y) {
-    std::vector<int> args;
-    args.push_back(x);
-    args.push_back(y);
-    return Args(args);
-  }
-
-  // Run this benchmark once for a number of values picked from the
-  // ranges [start..limit].  (starts and limits are always picked.)
-  // REQUIRES: The function passed to the constructor must accept arg1, arg2 ...
-  Benchmark* Ranges(const std::vector<std::pair<int, int> >& ranges);
-
-  // Equivalent to ArgNames({name})
-  Benchmark* ArgName(const std::string& name);
-
-  // Set the argument names to display in the benchmark name. If not called,
-  // only argument values will be shown.
-  Benchmark* ArgNames(const std::vector<std::string>& names);
-
-  // Equivalent to Ranges({{lo1, hi1}, {lo2, hi2}}).
-  // NOTE: This is a legacy C++03 interface provided for compatibility only.
-  //   New code should use 'Ranges'.
-  Benchmark* RangePair(int lo1, int hi1, int lo2, int hi2) {
-    std::vector<std::pair<int, int> > ranges;
-    ranges.push_back(std::make_pair(lo1, hi1));
-    ranges.push_back(std::make_pair(lo2, hi2));
-    return Ranges(ranges);
-  }
-
-  // Pass this benchmark object to *func, which can customize
-  // the benchmark by calling various methods like Arg, Args,
-  // Threads, etc.
-  Benchmark* Apply(void (*func)(Benchmark* benchmark));
-
-  // Set the range multiplier for non-dense range. If not called, the range
-  // multiplier kRangeMultiplier will be used.
-  Benchmark* RangeMultiplier(int multiplier);
-
-  // Set the minimum amount of time to use when running this benchmark. This
-  // option overrides the `benchmark_min_time` flag.
-  // REQUIRES: `t > 0` and `Iterations` has not been called on this benchmark.
-  Benchmark* MinTime(double t);
-
-  // Specify the number of iterations that should be run by this benchmark.
-  // REQUIRES: 'n > 0' and `MinTime` has not been called on this benchmark.
-  //
-  // NOTE: This function should only be used when *exact* iteration control is
-  //   needed and never to control or limit how long a benchmark runs, where
-  // `--benchmark_min_time=N` or `MinTime(...)` should be used instead.
-  Benchmark* Iterations(size_t n);
-
-  // Specify the number of times to repeat this benchmark. This option overrides
-  // the `benchmark_repetitions` flag.
-  // REQUIRES: `n > 0`
-  Benchmark* Repetitions(int n);
-
-  // Specify if each repetition of the benchmark should be reported separately
-  // or if only the final statistics should be reported. If the benchmark
-  // is not repeated then the single result is always reported.
-  Benchmark* ReportAggregatesOnly(bool value = true);
-
-  // If a particular benchmark is I/O bound, runs multiple threads internally,
-  // or if for some reason CPU timings are not representative, call this
-  // method. If called, the elapsed time will be used to control how many
-  // iterations are run, and in the printing of items/second or MB/second
-  // values.  If not called, the CPU time used by the benchmark will be used.
-  Benchmark* UseRealTime();
-
-  // If a benchmark must measure time manually (e.g. if GPU execution time is
-  // being measured), call this method. If called, each benchmark iteration
-  // should call SetIterationTime(seconds) to report the measured time, which
-  // will be used to control how many iterations are run, and in the printing
-  // of items/second or MB/second values.
-  Benchmark* UseManualTime();
-
-  // Set the asymptotic computational complexity for the benchmark. If called
-  // the asymptotic computational complexity will be shown on the output.
-  Benchmark* Complexity(BigO complexity = benchmark::oAuto);
-
-  // Set the asymptotic computational complexity for the benchmark. If called
-  // the asymptotic computational complexity will be shown on the output.
-  Benchmark* Complexity(BigOFunc* complexity);
-
-  // Support for running multiple copies of the same benchmark concurrently
-  // in multiple threads.  This may be useful when measuring the scaling
-  // of some piece of code.
-
-  // Run one instance of this benchmark concurrently in t threads.
-  Benchmark* Threads(int t);
-
-  // Pick a set of values T from [min_threads,max_threads].
-  // min_threads and max_threads are always included in T.  Run this
-  // benchmark once for each value in T.  The benchmark run for a
-  // particular value t consists of t threads running the benchmark
-  // function concurrently.  For example, consider:
-  //    BENCHMARK(Foo)->ThreadRange(1,16);
-  // This will run the following benchmarks:
-  //    Foo in 1 thread
-  //    Foo in 2 threads
-  //    Foo in 4 threads
-  //    Foo in 8 threads
-  //    Foo in 16 threads
-  Benchmark* ThreadRange(int min_threads, int max_threads);
-
-  // For each value n in the range, run this benchmark once using n threads.
-  // min_threads and max_threads are always included in the range.
-  // stride specifies the increment. E.g. DenseThreadRange(1, 8, 3) starts
-  // a benchmark with 1, 4, 7 and 8 threads.
-  Benchmark* DenseThreadRange(int min_threads, int max_threads, int stride = 1);
-
-  // Equivalent to ThreadRange(NumCPUs(), NumCPUs())
-  Benchmark* ThreadPerCpu();
-
-  virtual void Run(State& state) = 0;
-
-  // Used inside the benchmark implementation
-  struct Instance;
-
- protected:
-  explicit Benchmark(const char* name);
-  Benchmark(Benchmark const&);
-  void SetName(const char* name);
-
-  int ArgsCnt() const;
-
-  static void AddRange(std::vector<int>* dst, int lo, int hi, int mult);
-
- private:
-  friend class BenchmarkFamilies;
-
-  std::string name_;
-  ReportMode report_mode_;
-  std::vector<std::string> arg_names_;   // Args for all benchmark runs
-  std::vector<std::vector<int> > args_;  // Args for all benchmark runs
-  TimeUnit time_unit_;
-  int range_multiplier_;
-  double min_time_;
-  size_t iterations_;
-  int repetitions_;
-  bool use_real_time_;
-  bool use_manual_time_;
-  BigO complexity_;
-  BigOFunc* complexity_lambda_;
-  std::vector<int> thread_counts_;
-
-  Benchmark& operator=(Benchmark const&);
-};
-
-}  // namespace internal
-
-// Create and register a benchmark with the specified 'name' that invokes
-// the specified functor 'fn'.
-//
-// RETURNS: A pointer to the registered benchmark.
-internal::Benchmark* RegisterBenchmark(const char* name,
-                                       internal::Function* fn);
-
-#if defined(BENCHMARK_HAS_CXX11)
-template <class Lambda>
-internal::Benchmark* RegisterBenchmark(const char* name, Lambda&& fn);
-#endif
-
-// Remove all registered benchmarks. All pointers to previously registered
-// benchmarks are invalidated.
-void ClearRegisteredBenchmarks();
-
-namespace internal {
-// The class used to hold all Benchmarks created from static functions
-// (i.e. those created using the BENCHMARK(...) macros).
-class FunctionBenchmark : public Benchmark {
- public:
-  FunctionBenchmark(const char* name, Function* func)
-      : Benchmark(name), func_(func) {}
-
-  virtual void Run(State& st);
-
- private:
-  Function* func_;
-};
-
-#ifdef BENCHMARK_HAS_CXX11
-template <class Lambda>
-class LambdaBenchmark : public Benchmark {
- public:
-  virtual void Run(State& st) { lambda_(st); }
-
- private:
-  template <class OLambda>
-  LambdaBenchmark(const char* name, OLambda&& lam)
-      : Benchmark(name), lambda_(std::forward<OLambda>(lam)) {}
-
-  LambdaBenchmark(LambdaBenchmark const&) = delete;
-
- private:
-  template <class Lam>
-  friend Benchmark* ::benchmark::RegisterBenchmark(const char*, Lam&&);
-
-  Lambda lambda_;
-};
-#endif
-
-}  // namespace internal
-
-inline internal::Benchmark* RegisterBenchmark(const char* name,
-                                              internal::Function* fn) {
-  return internal::RegisterBenchmarkInternal(
-      ::new internal::FunctionBenchmark(name, fn));
-}
-
-#ifdef BENCHMARK_HAS_CXX11
-template <class Lambda>
-internal::Benchmark* RegisterBenchmark(const char* name, Lambda&& fn) {
-  using BenchType =
-      internal::LambdaBenchmark<typename std::decay<Lambda>::type>;
-  return internal::RegisterBenchmarkInternal(
-      ::new BenchType(name, std::forward<Lambda>(fn)));
-}
-#endif
-
-#if defined(BENCHMARK_HAS_CXX11) && \
-    (!defined(BENCHMARK_GCC_VERSION) || BENCHMARK_GCC_VERSION >= 409)
-template <class Lambda, class... Args>
-internal::Benchmark* RegisterBenchmark(const char* name, Lambda&& fn,
-                                       Args&&... args) {
-  return benchmark::RegisterBenchmark(
-      name, [=](benchmark::State& st) { fn(st, args...); });
-}
-#else
-#define BENCHMARK_HAS_NO_VARIADIC_REGISTER_BENCHMARK
-#endif
-
-// The base class for all fixture tests.
-class Fixture : public internal::Benchmark {
- public:
-  Fixture() : internal::Benchmark("") {}
-
-  virtual void Run(State& st) {
-    this->SetUp(st);
-    this->BenchmarkCase(st);
-    this->TearDown(st);
-  }
-
-  // These will be deprecated ...
-  virtual void SetUp(const State&) {}
-  virtual void TearDown(const State&) {}
-  // ... In favor of these.
-  virtual void SetUp(State& st) { SetUp(const_cast<const State&>(st)); }
-  virtual void TearDown(State& st) { TearDown(const_cast<const State&>(st)); }
-
- protected:
-  virtual void BenchmarkCase(State&) = 0;
-};
-
-}  // namespace benchmark
-
-// ------------------------------------------------------
-// Macro to register benchmarks
-
-// Check that __COUNTER__ is defined and that __COUNTER__ increases by 1
-// every time it is expanded. X + 1 == X + 0 is used in case X is defined to be
-// empty. If X is empty the expression becomes (+1 == +0).
-#if defined(__COUNTER__) && (__COUNTER__ + 1 == __COUNTER__ + 0)
-#define BENCHMARK_PRIVATE_UNIQUE_ID __COUNTER__
-#else
-#define BENCHMARK_PRIVATE_UNIQUE_ID __LINE__
-#endif
-
-// Helpers for generating unique variable names
-#define BENCHMARK_PRIVATE_NAME(n) \
-  BENCHMARK_PRIVATE_CONCAT(_benchmark_, BENCHMARK_PRIVATE_UNIQUE_ID, n)
-#define BENCHMARK_PRIVATE_CONCAT(a, b, c) BENCHMARK_PRIVATE_CONCAT2(a, b, c)
-#define BENCHMARK_PRIVATE_CONCAT2(a, b, c) a##b##c
-
-#define BENCHMARK_PRIVATE_DECLARE(n)                                 \
-  static ::benchmark::internal::Benchmark* BENCHMARK_PRIVATE_NAME(n) \
-      BENCHMARK_UNUSED
-
-#define BENCHMARK(n)                                     \
-  BENCHMARK_PRIVATE_DECLARE(n) =                         \
-      (::benchmark::internal::RegisterBenchmarkInternal( \
-          new ::benchmark::internal::FunctionBenchmark(#n, n)))
-
-// Old-style macros
-#define BENCHMARK_WITH_ARG(n, a) BENCHMARK(n)->Arg((a))
-#define BENCHMARK_WITH_ARG2(n, a1, a2) BENCHMARK(n)->Args({(a1), (a2)})
-#define BENCHMARK_WITH_UNIT(n, t) BENCHMARK(n)->Unit((t))
-#define BENCHMARK_RANGE(n, lo, hi) BENCHMARK(n)->Range((lo), (hi))
-#define BENCHMARK_RANGE2(n, l1, h1, l2, h2) \
-  BENCHMARK(n)->RangePair((l1), (h1), (l2), (h2))
-
-#if __cplusplus >= 201103L
-
-// Register a benchmark which invokes the function specified by `func`
-// with the additional arguments specified by `...`.
-//
-// For example:
-//
-// template <class ...ExtraArgs>
-// void BM_takes_args(benchmark::State& state, ExtraArgs&&... extra_args) {
-//  [...]
-//}
-// /* Registers a benchmark named "BM_takes_args/int_string_test" */
-// BENCHMARK_CAPTURE(BM_takes_args, int_string_test, 42, std::string("abc"));
-#define BENCHMARK_CAPTURE(func, test_case_name, ...)     \
-  BENCHMARK_PRIVATE_DECLARE(func) =                      \
-      (::benchmark::internal::RegisterBenchmarkInternal( \
-          new ::benchmark::internal::FunctionBenchmark(  \
-              #func "/" #test_case_name,                 \
-              [](::benchmark::State& st) { func(st, __VA_ARGS__); })))
-
-#endif  // __cplusplus >= 201103L
-
-// This will register a benchmark for a templatized function.  For example:
-//
-// template<int arg>
-// void BM_Foo(int iters);
-//
-// BENCHMARK_TEMPLATE(BM_Foo, 1);
-//
-// will register BM_Foo<1> as a benchmark.
-#define BENCHMARK_TEMPLATE1(n, a)                        \
-  BENCHMARK_PRIVATE_DECLARE(n) =                         \
-      (::benchmark::internal::RegisterBenchmarkInternal( \
-          new ::benchmark::internal::FunctionBenchmark(#n "<" #a ">", n<a>)))
-
-#define BENCHMARK_TEMPLATE2(n, a, b)                                         \
-  BENCHMARK_PRIVATE_DECLARE(n) =                                             \
-      (::benchmark::internal::RegisterBenchmarkInternal(                     \
-          new ::benchmark::internal::FunctionBenchmark(#n "<" #a "," #b ">", \
-                                                       n<a, b>)))
-
-#if __cplusplus >= 201103L
-#define BENCHMARK_TEMPLATE(n, ...)                       \
-  BENCHMARK_PRIVATE_DECLARE(n) =                         \
-      (::benchmark::internal::RegisterBenchmarkInternal( \
-          new ::benchmark::internal::FunctionBenchmark(  \
-              #n "<" #__VA_ARGS__ ">", n<__VA_ARGS__>)))
-#else
-#define BENCHMARK_TEMPLATE(n, a) BENCHMARK_TEMPLATE1(n, a)
-#endif
-
-#define BENCHMARK_PRIVATE_DECLARE_F(BaseClass, Method)        \
-  class BaseClass##_##Method##_Benchmark : public BaseClass { \
-   public:                                                    \
-    BaseClass##_##Method##_Benchmark() : BaseClass() {        \
-      this->SetName(#BaseClass "/" #Method);                  \
-    }                                                         \
-                                                              \
-   protected:                                                 \
-    virtual void BenchmarkCase(::benchmark::State&);          \
-  };
-
-#define BENCHMARK_DEFINE_F(BaseClass, Method)    \
-  BENCHMARK_PRIVATE_DECLARE_F(BaseClass, Method) \
-  void BaseClass##_##Method##_Benchmark::BenchmarkCase
-
-#define BENCHMARK_REGISTER_F(BaseClass, Method) \
-  BENCHMARK_PRIVATE_REGISTER_F(BaseClass##_##Method##_Benchmark)
-
-#define BENCHMARK_PRIVATE_REGISTER_F(TestName) \
-  BENCHMARK_PRIVATE_DECLARE(TestName) =        \
-      (::benchmark::internal::RegisterBenchmarkInternal(new TestName()))
-
-// This macro will define and register a benchmark within a fixture class.
-#define BENCHMARK_F(BaseClass, Method)           \
-  BENCHMARK_PRIVATE_DECLARE_F(BaseClass, Method) \
-  BENCHMARK_REGISTER_F(BaseClass, Method);       \
-  void BaseClass##_##Method##_Benchmark::BenchmarkCase
-
-// Helper macro to create a main routine in a test that runs the benchmarks
-#define BENCHMARK_MAIN()                   \
-  int main(int argc, char** argv) {        \
-    ::benchmark::Initialize(&argc, argv);  \
-    if (::benchmark::ReportUnrecognizedArguments(argc, argv)) return 1; \
-    ::benchmark::RunSpecifiedBenchmarks(); \
-  }
-
-
-// ------------------------------------------------------
-// Benchmark Reporters
-
-namespace benchmark {
-
-// Interface for custom benchmark result printers.
-// By default, benchmark reports are printed to stdout. However, an application
-// can control the destination of the reports by calling
-// RunSpecifiedBenchmarks and passing it a custom reporter object.
-// The reporter object must implement the following interface.
-class BenchmarkReporter {
- public:
-  struct Context {
-    int num_cpus;
-    double mhz_per_cpu;
-    bool cpu_scaling_enabled;
-
-    // The number of chars in the longest benchmark name.
-    size_t name_field_width;
-  };
-
-  struct Run {
-    Run()
-        : error_occurred(false),
-          iterations(1),
-          time_unit(kNanosecond),
-          real_accumulated_time(0),
-          cpu_accumulated_time(0),
-          bytes_per_second(0),
-          items_per_second(0),
-          max_heapbytes_used(0),
-          complexity(oNone),
-          complexity_lambda(),
-          complexity_n(0),
-          report_big_o(false),
-          report_rms(false),
-          counters() {}
-
-    std::string benchmark_name;
-    std::string report_label;  // Empty if not set by benchmark.
-    bool error_occurred;
-    std::string error_message;
-
-    int64_t iterations;
-    TimeUnit time_unit;
-    double real_accumulated_time;
-    double cpu_accumulated_time;
-
-    // Return a value representing the real time per iteration in the unit
-    // specified by 'time_unit'.
-    // NOTE: If 'iterations' is zero the returned value represents the
-    // accumulated time.
-    double GetAdjustedRealTime() const;
-
-    // Return a value representing the cpu time per iteration in the unit
-    // specified by 'time_unit'.
-    // NOTE: If 'iterations' is zero the returned value represents the
-    // accumulated time.
-    double GetAdjustedCPUTime() const;
-
-    // Zero if not set by benchmark.
-    double bytes_per_second;
-    double items_per_second;
-
-    // This is set to 0.0 if memory tracing is not enabled.
-    double max_heapbytes_used;
-
-    // Keep track of arguments to compute asymptotic complexity
-    BigO complexity;
-    BigOFunc* complexity_lambda;
-    int complexity_n;
-
-    // Inform print function whether the current run is a complexity report
-    bool report_big_o;
-    bool report_rms;
-
-    UserCounters counters;
-  };
-
-  // Construct a BenchmarkReporter with the output stream set to 'std::cout'
-  // and the error stream set to 'std::cerr'
-  BenchmarkReporter();
-
-  // Called once for every suite of benchmarks run.
-  // The parameter "context" contains information that the
-  // reporter may wish to use when generating its report, for example the
-  // platform under which the benchmarks are running. The benchmark run is
-  // never started if this function returns false, allowing the reporter
-  // to skip runs based on the context information.
-  virtual bool ReportContext(const Context& context) = 0;
-
-  // Called once for each group of benchmark runs; gives information about
-  // cpu-time and heap memory usage during the benchmark run. If the group
-  // of runs contained more than two entries then 'report' contains additional
-  // elements representing the mean and standard deviation of those runs.
-  // Additionally if this group of runs was the last in a family of benchmarks
-  // 'reports' contains additional entries representing the asymptotic
-  // complexity and RMS of that benchmark family.
-  virtual void ReportRuns(const std::vector<Run>& report) = 0;
-
-  // Called once and only once after every group of benchmarks is run and
-  // reported.
-  virtual void Finalize() {}
-
-  // REQUIRES: The object referenced by 'out' is valid for the lifetime
-  // of the reporter.
-  void SetOutputStream(std::ostream* out) {
-    assert(out);
-    output_stream_ = out;
-  }
-
-  // REQUIRES: The object referenced by 'err' is valid for the lifetime
-  // of the reporter.
-  void SetErrorStream(std::ostream* err) {
-    assert(err);
-    error_stream_ = err;
-  }
-
-  std::ostream& GetOutputStream() const { return *output_stream_; }
-
-  std::ostream& GetErrorStream() const { return *error_stream_; }
-
-  virtual ~BenchmarkReporter();
-
-  // Write a human readable string to 'out' representing the specified
-  // 'context'.
-  // REQUIRES: 'out' is non-null.
-  static void PrintBasicContext(std::ostream* out, Context const& context);
-
- private:
-  std::ostream* output_stream_;
-  std::ostream* error_stream_;
-};
-
-// Simple reporter that outputs benchmark data to the console. This is the
-// default reporter used by RunSpecifiedBenchmarks().
-class ConsoleReporter : public BenchmarkReporter {
-public:
-  enum OutputOptions {
-    OO_None = 0,
-    OO_Color = 1,
-    OO_Tabular = 2,
-    OO_ColorTabular = OO_Color|OO_Tabular,
-    OO_Defaults = OO_ColorTabular
-  };
-  explicit ConsoleReporter(OutputOptions opts_ = OO_Defaults)
-      : output_options_(opts_), name_field_width_(0),
-        prev_counters_(), printed_header_(false) {}
-
-  virtual bool ReportContext(const Context& context);
-  virtual void ReportRuns(const std::vector<Run>& reports);
-
- protected:
-  virtual void PrintRunData(const Run& report);
-  virtual void PrintHeader(const Run& report);
-
-  OutputOptions output_options_;
-  size_t name_field_width_;
-  UserCounters prev_counters_;
-  bool printed_header_;
-};
-
-class JSONReporter : public BenchmarkReporter {
- public:
-  JSONReporter() : first_report_(true) {}
-  virtual bool ReportContext(const Context& context);
-  virtual void ReportRuns(const std::vector<Run>& reports);
-  virtual void Finalize();
-
- private:
-  void PrintRunData(const Run& report);
-
-  bool first_report_;
-};
-
-class CSVReporter : public BenchmarkReporter {
- public:
-  CSVReporter() : printed_header_(false) {}
-  virtual bool ReportContext(const Context& context);
-  virtual void ReportRuns(const std::vector<Run>& reports);
-
- private:
-  void PrintRunData(const Run& report);
-
-  bool printed_header_;
-  std::set< std::string > user_counter_names_;
-};
-
-inline const char* GetTimeUnitString(TimeUnit unit) {
-  switch (unit) {
-    case kMillisecond:
-      return "ms";
-    case kMicrosecond:
-      return "us";
-    case kNanosecond:
-    default:
-      return "ns";
-  }
-}
-
-inline double GetTimeUnitMultiplier(TimeUnit unit) {
-  switch (unit) {
-    case kMillisecond:
-      return 1e3;
-    case kMicrosecond:
-      return 1e6;
-    case kNanosecond:
-    default:
-      return 1e9;
-  }
-}
-
-} // namespace benchmark
-
-#endif  // BENCHMARK_BENCHMARK_H_

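Taken together, the header above supports the idiom sketched below: a KeepRunning() loop, DoNotOptimize()/ClobberMemory() to defeat the optimizer, and fluent registration. This is a minimal hypothetical example, not code from the tree:

```c++
#include <benchmark/benchmark.h>
#include <vector>

// Hypothetical benchmark tying together the API declared above.
static void BM_VectorFill(benchmark::State& state) {
  while (state.KeepRunning()) {
    std::vector<int> v;
    v.reserve(state.range(0));
    benchmark::DoNotOptimize(v.data());  // keep the allocation observable
    for (int i = 0; i < state.range(0); ++i) v.push_back(i);
    benchmark::ClobberMemory();  // force the stores to be considered visible
  }
  state.SetItemsProcessed(state.iterations() * state.range(0));
}
BENCHMARK(BM_VectorFill)->Range(8, 8 << 10)->Unit(benchmark::kMicrosecond);

BENCHMARK_MAIN();
```
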
Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/include/benchmark/benchmark_api.h
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/include/benchmark/benchmark_api.h?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/include/benchmark/benchmark_api.h (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/include/benchmark/benchmark_api.h (removed)
@@ -1,27 +0,0 @@
-// Copyright 2015 Google Inc. All rights reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-//     http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-#ifndef BENCHMARK_BENCHMARK_API_H_
-#define BENCHMARK_BENCHMARK_API_H_
-
-#ifdef __DEPRECATED
-# ifndef BENCHMARK_WARNING_MSG
-#   warning the benchmark_api.h header has been deprecated and will be removed, please include benchmark.h instead
-# else
-    BENCHMARK_WARNING_MSG("the benchmark_api.h header has been deprecated and will be removed, please include benchmark.h instead")
-# endif
-#endif
-
-#include "benchmark.h"  // For forward declaration of BenchmarkReporter
-
-#endif  // BENCHMARK_BENCHMARK_API_H_

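Since benchmark_api.h had become a pure forwarding shim, the migration it warns about is a one-line change in client code:

```c++
// Before: compiles, but emits the deprecation warning above
// whenever __DEPRECATED is defined.
#include <benchmark/benchmark_api.h>

// After: include the consolidated header directly.
#include <benchmark/benchmark.h>
```
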
Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/include/benchmark/reporter.h
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/include/benchmark/reporter.h?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/include/benchmark/reporter.h (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/include/benchmark/reporter.h (removed)
@@ -1,27 +0,0 @@
-// Copyright 2015 Google Inc. All rights reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-//     http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-#ifndef BENCHMARK_REPORTER_H_
-#define BENCHMARK_REPORTER_H_
-
-#ifdef __DEPRECATED
-# ifndef BENCHMARK_WARNING_MSG
-#   warning the reporter.h header has been deprecated and will be removed, please include benchmark.h instead
-# else
-    BENCHMARK_WARNING_MSG("the reporter.h header has been deprecated and will be removed, please include benchmark.h instead")
-# endif
-#endif
-
-#include "benchmark.h"  // For forward declaration of BenchmarkReporter
-
-#endif  // BENCHMARK_REPORTER_H_

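reporter.h was the same kind of forwarding shim; the reporter interface itself lives in benchmark.h (see the BenchmarkReporter class earlier in this diff). As a sketch of how that interface is consumed, a hypothetical minimal reporter could look like this:

```c++
#include <benchmark/benchmark.h>
#include <vector>

// Hypothetical reporter built only from the BenchmarkReporter interface
// shown earlier: print the context with the stock helper, then one line
// per run with the CPU time in the run's own time unit.
class MinimalReporter : public benchmark::BenchmarkReporter {
 public:
  virtual bool ReportContext(const Context& context) {
    PrintBasicContext(&GetErrorStream(), context);
    return true;  // returning false would skip the benchmark runs
  }
  virtual void ReportRuns(const std::vector<Run>& reports) {
    for (size_t i = 0; i < reports.size(); ++i) {
      GetOutputStream() << reports[i].benchmark_name << ": "
                        << reports[i].GetAdjustedCPUTime()
                        << benchmark::GetTimeUnitString(reports[i].time_unit)
                        << "\n";
    }
  }
};
```

It would be passed to RunSpecifiedBenchmarks(&reporter) in place of the default ConsoleReporter.
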
Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/mingw.py
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/mingw.py?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/mingw.py (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/mingw.py (removed)
@@ -1,320 +0,0 @@
-#! /usr/bin/env python
-# encoding: utf-8
-
-import argparse
-import errno
-import logging
-import os
-import platform
-import re
-import sys
-import subprocess
-import tempfile
-
-try:
-    import winreg
-except ImportError:
-    import _winreg as winreg
-try:
-    import urllib.request as request
-except ImportError:
-    import urllib as request
-try:
-    import urllib.parse as parse
-except ImportError:
-    import urlparse as parse
-
-class EmptyLogger(object):
-    '''
-    Provides an implementation that performs no logging
-    '''
-    def debug(self, *k, **kw):
-        pass
-    def info(self, *k, **kw):
-        pass
-    def warn(self, *k, **kw):
-        pass
-    def error(self, *k, **kw):
-        pass
-    def critical(self, *k, **kw):
-        pass
-    def setLevel(self, *k, **kw):
-        pass
-
-urls = (
-    'http://downloads.sourceforge.net/project/mingw-w64/Toolchains%20'
-        'targetting%20Win32/Personal%20Builds/mingw-builds/installer/'
-        'repository.txt',
-    'http://downloads.sourceforge.net/project/mingwbuilds/host-windows/'
-        'repository.txt'
-)
-'''
-A list of mingw-build repositories
-'''
-
-def repository(urls = urls, log = EmptyLogger()):
-    '''
-    Downloads the mingw-builds repository files and parses them
-    '''
-    log.info('getting mingw-builds repository')
-    versions = {}
-    re_sourceforge = re.compile(r'http://sourceforge.net/projects/([^/]+)/files')
-    re_sub = r'http://downloads.sourceforge.net/project/\1'
-    for url in urls:
-        log.debug(' - requesting: %s', url)
-        socket = request.urlopen(url)
-        repo = socket.read()
-        if not isinstance(repo, str):
-            repo = repo.decode()
-        socket.close()
-        for entry in repo.split('\n')[:-1]:
-            value = entry.split('|')
-            version = tuple([int(n) for n in value[0].strip().split('.')])
-            version = versions.setdefault(version, {})
-            arch = value[1].strip()
-            if arch == 'x32':
-                arch = 'i686'
-            elif arch == 'x64':
-                arch = 'x86_64'
-            arch = version.setdefault(arch, {})
-            threading = arch.setdefault(value[2].strip(), {})
-            exceptions = threading.setdefault(value[3].strip(), {})
-            revision = exceptions.setdefault(int(value[4].strip()[3:]),
-                re_sourceforge.sub(re_sub, value[5].strip()))
-    return versions
-
-def find_in_path(file, path=None):
-    '''
-    Attempts to find an executable in the path
-    '''
-    if platform.system() == 'Windows':
-        file += '.exe'
-    if path is None:
-        path = os.environ.get('PATH', '')
-    if type(path) is type(''):
-        path = path.split(os.pathsep)
-    return list(filter(os.path.exists,
-        map(lambda dir, file=file: os.path.join(dir, file), path)))
-
-def find_7zip(log = EmptyLogger()):
-    '''
-    Attempts to find 7zip for unpacking the mingw-build archives
-    '''
-    log.info('finding 7zip')
-    path = find_in_path('7z')
-    if not path:
-        key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, r'SOFTWARE\7-Zip')
-        path, _ = winreg.QueryValueEx(key, 'Path')
-        path = [os.path.join(path, '7z.exe')]
-    log.debug('found \'%s\'', path[0])
-    return path[0]
-
-
-def unpack(archive, location, log = EmptyLogger()):
-    '''
-    Unpacks a mingw-builds archive
-    '''
-    sevenzip = find_7zip(log)
-    log.info('unpacking %s', os.path.basename(archive))
-    cmd = [sevenzip, 'x', archive, '-o' + location, '-y']
-    log.debug(' - %r', cmd)
-    with open(os.devnull, 'w') as devnull:
-        subprocess.check_call(cmd, stdout = devnull)
-
-def download(url, location, log = EmptyLogger()):
-    '''
-    Downloads and unpacks a mingw-builds archive
-    '''
-    log.info('downloading MinGW')
-    log.debug(' - url: %s', url)
-    log.debug(' - location: %s', location)
-
-    re_content = re.compile(r'attachment;[ \t]*filename=(")?([^"]*)(")?[\r\n]*')
-
-    stream = request.urlopen(url)
-    try:
-        content = stream.getheader('Content-Disposition') or ''
-    except AttributeError:
-        content = stream.headers.getheader('Content-Disposition') or ''
-    matches = re_content.match(content)
-    if matches:
-        filename = matches.group(2)
-    else:
-        parsed = parse.urlparse(stream.geturl())
-        filename = os.path.basename(parsed.path)
-
-    try:
-        os.makedirs(location)
-    except OSError as e:
-        if e.errno == errno.EEXIST and os.path.isdir(location):
-            pass
-        else:
-            raise
-
-    archive = os.path.join(location, filename)
-    with open(archive, 'wb') as out:
-        while True:
-            buf = stream.read(1024)
-            if not buf:
-                break
-            out.write(buf)
-    unpack(archive, location, log = log)
-    os.remove(archive)
-
-    possible = os.path.join(location, 'mingw64')
-    if not os.path.exists(possible):
-        possible = os.path.join(location, 'mingw32')
-        if not os.path.exists(possible):
-            raise ValueError('Failed to find unpacked MinGW: ' + possible)
-    return possible
-
-def root(location = None, arch = None, version = None, threading = None,
-        exceptions = None, revision = None, log = EmptyLogger()):
-    '''
-    Returns the root folder of a specific version of the mingw-builds variant
-    of gcc. Will download the compiler if needed
-    '''
-
-    # Get the repository if we don't have all the information
-    if not (arch and version and threading and exceptions and revision):
-        versions = repository(log = log)
-
-    # Determine some defaults
-    version = version or max(versions.keys())
-    if not arch:
-        arch = platform.machine().lower()
-        if arch == 'x86':
-            arch = 'i686'
-        elif arch == 'amd64':
-            arch = 'x86_64'
-    if not threading:
-        keys = versions[version][arch].keys()
-        if 'posix' in keys:
-            threading = 'posix'
-        elif 'win32' in keys:
-            threading = 'win32'
-        else:
-            threading = keys[0]
-    if not exceptions:
-        keys = versions[version][arch][threading].keys()
-        if 'seh' in keys:
-            exceptions = 'seh'
-        elif 'sjlj' in keys:
-            exceptions = 'sjlj'
-        else:
-            exceptions = keys[0]
-    if revision is None:
-        revision = max(versions[version][arch][threading][exceptions].keys())
-    if not location:
-        location = os.path.join(tempfile.gettempdir(), 'mingw-builds')
-
-    # Get the download url
-    url = versions[version][arch][threading][exceptions][revision]
-
-    # Tell the user whatzzup
-    log.info('finding MinGW %s', '.'.join(str(v) for v in version))
-    log.debug(' - arch: %s', arch)
-    log.debug(' - threading: %s', threading)
-    log.debug(' - exceptions: %s', exceptions)
-    log.debug(' - revision: %s', revision)
-    log.debug(' - url: %s', url)
-
-    # Store each specific revision differently
-    slug = '{version}-{arch}-{threading}-{exceptions}-rev{revision}'
-    slug = slug.format(
-        version = '.'.join(str(v) for v in version),
-        arch = arch,
-        threading = threading,
-        exceptions = exceptions,
-        revision = revision
-    )
-    if arch == 'x86_64':
-        root_dir = os.path.join(location, slug, 'mingw64')
-    elif arch == 'i686':
-        root_dir = os.path.join(location, slug, 'mingw32')
-    else:
-        raise ValueError('Unknown MinGW arch: ' + arch)
-
-    # Download if needed
-    if not os.path.exists(root_dir):
-        downloaded = download(url, os.path.join(location, slug), log = log)
-        if downloaded != root_dir:
-            raise ValueError('The location of mingw did not match\n%s\n%s'
-                % (downloaded, root_dir))
-
-    return root_dir
-
-def str2ver(string):
-    '''
-    Converts a version string into a tuple
-    '''
-    try:
-        version = tuple(int(v) for v in string.split('.'))
-        if len(version) != 3:
-            raise ValueError()
-    except ValueError:
-        raise argparse.ArgumentTypeError(
-            'please provide a three-part version string, e.g. 1.2.3')
-    return version
-
-def main():
-    '''
-    Invoked when the script is run directly by the python interpreter
-    '''
-    parser = argparse.ArgumentParser(
-        description = 'Downloads a specific version of MinGW',
-        formatter_class = argparse.ArgumentDefaultsHelpFormatter
-    )
-    parser.add_argument('--location',
-        help = 'the location to download the compiler to',
-        default = os.path.join(tempfile.gettempdir(), 'mingw-builds'))
-    parser.add_argument('--arch', required = True, choices = ['i686', 'x86_64'],
-        help = 'the target MinGW architecture string')
-    parser.add_argument('--version', type = str2ver,
-        help = 'the version of GCC to download')
-    parser.add_argument('--threading', choices = ['posix', 'win32'],
-        help = 'the threading type of the compiler')
-    parser.add_argument('--exceptions', choices = ['sjlj', 'seh', 'dwarf'],
-        help = 'the method to throw exceptions')
-    parser.add_argument('--revision', type=int,
-        help = 'the revision of the MinGW release')
-    group = parser.add_mutually_exclusive_group()
-    group.add_argument('-v', '--verbose', action='store_true',
-        help='increase the script output verbosity')
-    group.add_argument('-q', '--quiet', action='store_true',
-        help='only print errors and warnings')
-    args = parser.parse_args()
-
-    # Create the logger
-    logger = logging.getLogger('mingw')
-    handler = logging.StreamHandler()
-    formatter = logging.Formatter('%(message)s')
-    handler.setFormatter(formatter)
-    logger.addHandler(handler)
-    logger.setLevel(logging.INFO)
-    if args.quiet:
-        logger.setLevel(logging.WARN)
-    if args.verbose:
-        logger.setLevel(logging.DEBUG)
-
-    # Get MinGW
-    root_dir = root(location = args.location, arch = args.arch,
-        version = args.version, threading = args.threading,
-        exceptions = args.exceptions, revision = args.revision,
-        log = logger)
-
-    sys.stdout.write('%s\n' % os.path.join(root_dir, 'bin'))
-
-if __name__ == '__main__':
-    try:
-        main()
-    except IOError as e:
-        sys.stderr.write('IO error: %s\n' % e)
-        sys.exit(1)
-    except OSError as e:
-        sys.stderr.write('OS error: %s\n' % e)
-        sys.exit(1)
-    except KeyboardInterrupt as e:
-        sys.stderr.write('Killed\n')
-        sys.exit(1)

Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/CMakeLists.txt
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/CMakeLists.txt?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/CMakeLists.txt (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/CMakeLists.txt (removed)
@@ -1,78 +0,0 @@
-# Allow the source files to find headers in src/
-include_directories(${PROJECT_SOURCE_DIR}/src)
-
-if (DEFINED BENCHMARK_CXX_LINKER_FLAGS)
-  list(APPEND CMAKE_SHARED_LINKER_FLAGS ${BENCHMARK_CXX_LINKER_FLAGS})
-  list(APPEND CMAKE_MODULE_LINKER_FLAGS ${BENCHMARK_CXX_LINKER_FLAGS})
-endif()
-
-file(GLOB
-  SOURCE_FILES
-    *.cc
-    ${PROJECT_SOURCE_DIR}/include/benchmark/*.h
-    ${CMAKE_CURRENT_SOURCE_DIR}/*.h)
-
-add_library(benchmark ${SOURCE_FILES})
-set_target_properties(benchmark PROPERTIES
-  OUTPUT_NAME "benchmark"
-  VERSION ${GENERIC_LIB_VERSION}
-  SOVERSION ${GENERIC_LIB_SOVERSION}
-)
-target_include_directories(benchmark PUBLIC
-    $<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/../include>
-    )
-
-# Link threads.
-target_link_libraries(benchmark  ${BENCHMARK_CXX_LIBRARIES} ${CMAKE_THREAD_LIBS_INIT})
-find_library(LIBRT rt)
-if(LIBRT)
-  target_link_libraries(benchmark ${LIBRT})
-endif()
-
-# We need extra libraries on Windows
-if(${CMAKE_SYSTEM_NAME} MATCHES "Windows")
-  target_link_libraries(benchmark Shlwapi)
-endif()
-
-set(include_install_dir "include")
-set(lib_install_dir "lib/")
-set(bin_install_dir "bin/")
-set(config_install_dir "lib/cmake/${PROJECT_NAME}")
-
-set(generated_dir "${CMAKE_CURRENT_BINARY_DIR}/generated")
-
-set(version_config "${generated_dir}/${PROJECT_NAME}ConfigVersion.cmake")
-set(project_config "${generated_dir}/${PROJECT_NAME}Config.cmake")
-set(targets_export_name "${PROJECT_NAME}Targets")
-
-set(namespace "${PROJECT_NAME}::")
-
-include(CMakePackageConfigHelpers)
-write_basic_package_version_file(
-    "${version_config}" VERSION ${GIT_VERSION} COMPATIBILITY SameMajorVersion
-)
-
-configure_file("${PROJECT_SOURCE_DIR}/cmake/Config.cmake.in" "${project_config}" @ONLY)
-
-# Install target (installs the library to the location given by CMAKE_INSTALL_PREFIX)
-install(
-  TARGETS benchmark
-  EXPORT ${targets_export_name}
-  ARCHIVE DESTINATION ${lib_install_dir}
-  LIBRARY DESTINATION ${lib_install_dir}
-  RUNTIME DESTINATION ${bin_install_dir}
-  INCLUDES DESTINATION ${include_install_dir})
-
-install(
-  DIRECTORY "${PROJECT_SOURCE_DIR}/include/benchmark"
-  DESTINATION ${include_install_dir}
-  FILES_MATCHING PATTERN "*.*h")
-
-install(
-    FILES "${project_config}" "${version_config}"
-    DESTINATION "${config_install_dir}")
-
-install(
-    EXPORT "${targets_export_name}"
-    NAMESPACE "${namespace}"
-    DESTINATION "${config_install_dir}")

Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/arraysize.h
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/arraysize.h?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/arraysize.h (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/arraysize.h (removed)
@@ -1,33 +0,0 @@
-#ifndef BENCHMARK_ARRAYSIZE_H_
-#define BENCHMARK_ARRAYSIZE_H_
-
-#include "internal_macros.h"
-
-namespace benchmark {
-namespace internal {
-// The arraysize(arr) macro returns the # of elements in an array arr.
-// The expression is a compile-time constant, and therefore can be
-// used in defining new arrays, for example.  If you use arraysize on
-// a pointer by mistake, you will get a compile-time error.
-//
-
-// This template function declaration is used in defining arraysize.
-// Note that the function doesn't need an implementation, as we only
-// use its type.
-template <typename T, size_t N>
-char (&ArraySizeHelper(T (&array)[N]))[N];
-
-// That gcc wants both of these prototypes seems mysterious. VC, for
-// its part, can't decide which to use (another mystery). Matching of
-// template overloads: the final frontier.
-#ifndef COMPILER_MSVC
-template <typename T, size_t N>
-char (&ArraySizeHelper(const T (&array)[N]))[N];
-#endif
-
-#define arraysize(array) (sizeof(::benchmark::internal::ArraySizeHelper(array)))
-
-}  // end namespace internal
-}  // end namespace benchmark
-
-#endif  // BENCHMARK_ARRAYSIZE_H_
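
For illustration, a minimal standalone sketch of the trick above (hypothetical names, not part of the library): applied to an actual array the macro yields a compile-time constant, while applying it to a pointer fails to compile because no ArraySizeHelper overload accepts one.

    #include <cstddef>

    // Reproduction of the header's declaration-only helper: the function is
    // never defined; only the size of its reference-to-array return type is used.
    template <typename T, std::size_t N>
    char (&ArraySizeHelper(T (&array)[N]))[N];

    #define arraysize(array) (sizeof(ArraySizeHelper(array)))

    int samples[7];
    static_assert(arraysize(samples) == 7, "element count, not byte count");

    // int* p = samples;
    // arraysize(p);  // error: no ArraySizeHelper overload takes a pointer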

Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/benchmark.cc
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/benchmark.cc?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/benchmark.cc (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/benchmark.cc (removed)
@@ -1,715 +0,0 @@
-// Copyright 2015 Google Inc. All rights reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-//     http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-#include "benchmark/benchmark.h"
-#include "benchmark_api_internal.h"
-#include "internal_macros.h"
-
-#ifndef BENCHMARK_OS_WINDOWS
-#include <sys/resource.h>
-#include <sys/time.h>
-#include <unistd.h>
-#endif
-
-#include <algorithm>
-#include <atomic>
-#include <condition_variable>
-#include <cstdio>
-#include <cstdlib>
-#include <cstring>
-#include <fstream>
-#include <iostream>
-#include <memory>
-#include <thread>
-
-#include "check.h"
-#include "colorprint.h"
-#include "commandlineflags.h"
-#include "complexity.h"
-#include "counter.h"
-#include "log.h"
-#include "mutex.h"
-#include "re.h"
-#include "stat.h"
-#include "string_util.h"
-#include "sysinfo.h"
-#include "timers.h"
-
-DEFINE_bool(benchmark_list_tests, false,
-            "Print a list of benchmarks. This option overrides all other "
-            "options.");
-
-DEFINE_string(benchmark_filter, ".",
-              "A regular expression that specifies the set of benchmarks "
-              "to execute.  If this flag is empty, no benchmarks are run.  "
-              "If this flag is the string \"all\", all benchmarks linked "
-              "into the process are run.");
-
-DEFINE_double(benchmark_min_time, 0.5,
-              "Minimum number of seconds we should run benchmark before "
-              "results are considered significant.  For cpu-time based "
-              "tests, this is the lower bound on the total cpu time "
-              "used by all threads that make up the test.  For real-time "
-              "based tests, this is the lower bound on the elapsed time "
-              "of the benchmark execution, regardless of number of "
-              "threads.");
-
-DEFINE_int32(benchmark_repetitions, 1,
-             "The number of runs of each benchmark. If greater than 1, the "
-             "mean and standard deviation of the runs will be reported.");
-
-DEFINE_bool(benchmark_report_aggregates_only, false,
-            "Report the result of each benchmark repetitions. When 'true' is "
-            "specified only the mean, standard deviation, and other statistics "
-            "are reported for repeated benchmarks.");
-
-DEFINE_string(benchmark_format, "console",
-              "The format to use for console output. Valid values are "
-              "'console', 'json', or 'csv'.");
-
-DEFINE_string(benchmark_out_format, "json",
-              "The format to use for file output. Valid values are "
-              "'console', 'json', or 'csv'.");
-
-DEFINE_string(benchmark_out, "", "The file to write additonal output to");
-
-DEFINE_string(benchmark_color, "auto",
-              "Whether to use colors in the output.  Valid values: "
-              "'true'/'yes'/1, 'false'/'no'/0, and 'auto'. 'auto' means to use "
-              "colors if the output is being sent to a terminal and the TERM "
-              "environment variable is set to a terminal type that supports "
-              "colors.");
-
-DEFINE_bool(benchmark_counters_tabular, false,
-            "Whether to use tabular format when printing user counters to "
-            "the console.  Valid values: 'true'/'yes'/1, 'false'/'no'/0."
-            "Defaults to false.");
-
-DEFINE_int32(v, 0, "The level of verbose logging to output");
-
-namespace benchmark {
-namespace internal {
-
-void UseCharPointer(char const volatile*) {}
-
-}  // end namespace internal
-
-namespace {
-
-static const size_t kMaxIterations = 1000000000;
-
-}  // end namespace
-
-namespace internal {
-
-class ThreadManager {
- public:
-  ThreadManager(int num_threads)
-      : alive_threads_(num_threads), start_stop_barrier_(num_threads) {}
-
-  Mutex& GetBenchmarkMutex() const RETURN_CAPABILITY(benchmark_mutex_) {
-    return benchmark_mutex_;
-  }
-
-  bool StartStopBarrier() EXCLUDES(end_cond_mutex_) {
-    return start_stop_barrier_.wait();
-  }
-
-  void NotifyThreadComplete() EXCLUDES(end_cond_mutex_) {
-    start_stop_barrier_.removeThread();
-    if (--alive_threads_ == 0) {
-      MutexLock lock(end_cond_mutex_);
-      end_condition_.notify_all();
-    }
-  }
-
-  void WaitForAllThreads() EXCLUDES(end_cond_mutex_) {
-    MutexLock lock(end_cond_mutex_);
-    end_condition_.wait(lock.native_handle(),
-                        [this]() { return alive_threads_ == 0; });
-  }
-
- public:
-  struct Result {
-    double real_time_used = 0;
-    double cpu_time_used = 0;
-    double manual_time_used = 0;
-    int64_t bytes_processed = 0;
-    int64_t items_processed = 0;
-    int complexity_n = 0;
-    std::string report_label_;
-    std::string error_message_;
-    bool has_error_ = false;
-    UserCounters counters;
-  };
-  GUARDED_BY(GetBenchmarkMutex()) Result results;
-
- private:
-  mutable Mutex benchmark_mutex_;
-  std::atomic<int> alive_threads_;
-  Barrier start_stop_barrier_;
-  Mutex end_cond_mutex_;
-  Condition end_condition_;
-};
-
-// Timer management class
-class ThreadTimer {
- public:
-  ThreadTimer() = default;
-
-  // Called by each thread
-  void StartTimer() {
-    running_ = true;
-    start_real_time_ = ChronoClockNow();
-    start_cpu_time_ = ThreadCPUUsage();
-  }
-
-  // Called by each thread
-  void StopTimer() {
-    CHECK(running_);
-    running_ = false;
-    real_time_used_ += ChronoClockNow() - start_real_time_;
-    cpu_time_used_ += ThreadCPUUsage() - start_cpu_time_;
-  }
-
-  // Called by each thread
-  void SetIterationTime(double seconds) { manual_time_used_ += seconds; }
-
-  bool running() const { return running_; }
-
-  // REQUIRES: timer is not running
-  double real_time_used() {
-    CHECK(!running_);
-    return real_time_used_;
-  }
-
-  // REQUIRES: timer is not running
-  double cpu_time_used() {
-    CHECK(!running_);
-    return cpu_time_used_;
-  }
-
-  // REQUIRES: timer is not running
-  double manual_time_used() {
-    CHECK(!running_);
-    return manual_time_used_;
-  }
-
- private:
-  bool running_ = false;        // Is the timer running
-  double start_real_time_ = 0;  // If running_
-  double start_cpu_time_ = 0;   // If running_
-
-  // Accumulated time so far (does not contain current slice if running_)
-  double real_time_used_ = 0;
-  double cpu_time_used_ = 0;
-  // Manually set iteration time. User sets this with SetIterationTime(seconds).
-  double manual_time_used_ = 0;
-};
-
-namespace {
-
-BenchmarkReporter::Run CreateRunReport(
-    const benchmark::internal::Benchmark::Instance& b,
-    const internal::ThreadManager::Result& results, size_t iters,
-    double seconds) {
-  // Create report about this benchmark run.
-  BenchmarkReporter::Run report;
-
-  report.benchmark_name = b.name;
-  report.error_occurred = results.has_error_;
-  report.error_message = results.error_message_;
-  report.report_label = results.report_label_;
-  // Report the total iterations across all threads.
-  report.iterations = static_cast<int64_t>(iters) * b.threads;
-  report.time_unit = b.time_unit;
-
-  if (!report.error_occurred) {
-    double bytes_per_second = 0;
-    if (results.bytes_processed > 0 && seconds > 0.0) {
-      bytes_per_second = (results.bytes_processed / seconds);
-    }
-    double items_per_second = 0;
-    if (results.items_processed > 0 && seconds > 0.0) {
-      items_per_second = (results.items_processed / seconds);
-    }
-
-    if (b.use_manual_time) {
-      report.real_accumulated_time = results.manual_time_used;
-    } else {
-      report.real_accumulated_time = results.real_time_used;
-    }
-    report.cpu_accumulated_time = results.cpu_time_used;
-    report.bytes_per_second = bytes_per_second;
-    report.items_per_second = items_per_second;
-    report.complexity_n = results.complexity_n;
-    report.complexity = b.complexity;
-    report.complexity_lambda = b.complexity_lambda;
-    report.counters = results.counters;
-    internal::Finish(&report.counters, seconds, b.threads);
-  }
-  return report;
-}
-
-// Execute one thread of benchmark b for the specified number of iterations.
-// Adds the stats collected for the thread into *total.
-void RunInThread(const benchmark::internal::Benchmark::Instance* b,
-                 size_t iters, int thread_id,
-                 internal::ThreadManager* manager) {
-  internal::ThreadTimer timer;
-  State st(iters, b->arg, thread_id, b->threads, &timer, manager);
-  b->benchmark->Run(st);
-  CHECK(st.iterations() == st.max_iterations)
-      << "Benchmark returned before State::KeepRunning() returned false!";
-  {
-    MutexLock l(manager->GetBenchmarkMutex());
-    internal::ThreadManager::Result& results = manager->results;
-    results.cpu_time_used += timer.cpu_time_used();
-    results.real_time_used += timer.real_time_used();
-    results.manual_time_used += timer.manual_time_used();
-    results.bytes_processed += st.bytes_processed();
-    results.items_processed += st.items_processed();
-    results.complexity_n += st.complexity_length_n();
-    internal::Increment(&results.counters, st.counters);
-  }
-  manager->NotifyThreadComplete();
-}
-
-std::vector<BenchmarkReporter::Run> RunBenchmark(
-    const benchmark::internal::Benchmark::Instance& b,
-    std::vector<BenchmarkReporter::Run>* complexity_reports) {
-  std::vector<BenchmarkReporter::Run> reports;  // return value
-
-  const bool has_explicit_iteration_count = b.iterations != 0;
-  size_t iters = has_explicit_iteration_count ? b.iterations : 1;
-  std::unique_ptr<internal::ThreadManager> manager;
-  std::vector<std::thread> pool(b.threads - 1);
-  const int repeats =
-      b.repetitions != 0 ? b.repetitions : FLAGS_benchmark_repetitions;
-  const bool report_aggregates_only =
-      repeats != 1 &&
-      (b.report_mode == internal::RM_Unspecified
-           ? FLAGS_benchmark_report_aggregates_only
-           : b.report_mode == internal::RM_ReportAggregatesOnly);
-  for (int repetition_num = 0; repetition_num < repeats; repetition_num++) {
-    for (;;) {
-      // Try benchmark
-      VLOG(2) << "Running " << b.name << " for " << iters << "\n";
-
-      manager.reset(new internal::ThreadManager(b.threads));
-      for (std::size_t ti = 0; ti < pool.size(); ++ti) {
-        pool[ti] = std::thread(&RunInThread, &b, iters,
-                               static_cast<int>(ti + 1), manager.get());
-      }
-      RunInThread(&b, iters, 0, manager.get());
-      manager->WaitForAllThreads();
-      for (std::thread& thread : pool) thread.join();
-      internal::ThreadManager::Result results;
-      {
-        MutexLock l(manager->GetBenchmarkMutex());
-        results = manager->results;
-      }
-      manager.reset();
-      // Adjust real/manual time stats since they were reported per thread.
-      results.real_time_used /= b.threads;
-      results.manual_time_used /= b.threads;
-
-      VLOG(2) << "Ran in " << results.cpu_time_used << "/"
-              << results.real_time_used << "\n";
-
-      // Base decisions off of real time if requested by this benchmark.
-      double seconds = results.cpu_time_used;
-      if (b.use_manual_time) {
-        seconds = results.manual_time_used;
-      } else if (b.use_real_time) {
-        seconds = results.real_time_used;
-      }
-
-      const double min_time =
-          !IsZero(b.min_time) ? b.min_time : FLAGS_benchmark_min_time;
-
-      // Determine if this run should be reported; either it has
-      // run for a sufficient amount of time, or an error was reported.
-      const bool should_report =  repetition_num > 0
-        || has_explicit_iteration_count // An exact iteration count was requested
-        || results.has_error_
-        || iters >= kMaxIterations
-        || seconds >= min_time // the elapsed time is large enough
-        // CPU time is specified but the elapsed real time greatly exceeds the
-        // minimum time. Note that user-provided timers are exempt from this
-        // sanity check.
-        || ((results.real_time_used >= 5 * min_time) && !b.use_manual_time);
-
-      if (should_report) {
-        BenchmarkReporter::Run report =
-            CreateRunReport(b, results, iters, seconds);
-        if (!report.error_occurred && b.complexity != oNone)
-          complexity_reports->push_back(report);
-        reports.push_back(report);
-        break;
-      }
-
-      // See by how much the iteration count should be increased.
-      // Note: Avoid division by zero with max(seconds, 1ns).
-      double multiplier = min_time * 1.4 / std::max(seconds, 1e-9);
-      // If our last run was at least 10% of FLAGS_benchmark_min_time then we
-      // use the multiplier directly. Otherwise we use at most 10 times
-      // expansion.
-      // NOTE: When the last run was at least 10% of the min time the max
-      // expansion should be 14x.
-      bool is_significant = (seconds / min_time) > 0.1;
-      multiplier = is_significant ? multiplier : std::min(10.0, multiplier);
-      if (multiplier <= 1.0) multiplier = 2.0;
-      double next_iters = std::max(multiplier * iters, iters + 1.0);
-      if (next_iters > kMaxIterations) {
-        next_iters = kMaxIterations;
-      }
-      VLOG(3) << "Next iters: " << next_iters << ", " << multiplier << "\n";
-      iters = static_cast<int>(next_iters + 0.5);
-    }
-  }
-  // Calculate additional statistics
-  auto stat_reports = ComputeStats(reports);
-  if ((b.complexity != oNone) && b.last_benchmark_instance) {
-    auto additional_run_stats = ComputeBigO(*complexity_reports);
-    stat_reports.insert(stat_reports.end(), additional_run_stats.begin(),
-                        additional_run_stats.end());
-    complexity_reports->clear();
-  }
-
-  if (report_aggregates_only) reports.clear();
-  reports.insert(reports.end(), stat_reports.begin(), stat_reports.end());
-  return reports;
-}
-
-}  // namespace
-}  // namespace internal
-
-State::State(size_t max_iters, const std::vector<int>& ranges, int thread_i,
-             int n_threads, internal::ThreadTimer* timer,
-             internal::ThreadManager* manager)
-    : started_(false),
-      finished_(false),
-      total_iterations_(0),
-      range_(ranges),
-      bytes_processed_(0),
-      items_processed_(0),
-      complexity_n_(0),
-      error_occurred_(false),
-      counters(),
-      thread_index(thread_i),
-      threads(n_threads),
-      max_iterations(max_iters),
-      timer_(timer),
-      manager_(manager) {
-  CHECK(max_iterations != 0) << "At least one iteration must be run";
-  CHECK_LT(thread_index, threads) << "thread_index must be less than threads";
-}
-
-void State::PauseTiming() {
-  // Add in time accumulated so far
-  CHECK(started_ && !finished_ && !error_occurred_);
-  timer_->StopTimer();
-}
-
-void State::ResumeTiming() {
-  CHECK(started_ && !finished_ && !error_occurred_);
-  timer_->StartTimer();
-}
-
-void State::SkipWithError(const char* msg) {
-  CHECK(msg);
-  error_occurred_ = true;
-  {
-    MutexLock l(manager_->GetBenchmarkMutex());
-    if (manager_->results.has_error_ == false) {
-      manager_->results.error_message_ = msg;
-      manager_->results.has_error_ = true;
-    }
-  }
-  total_iterations_ = max_iterations;
-  if (timer_->running()) timer_->StopTimer();
-}
-
-void State::SetIterationTime(double seconds) {
-  timer_->SetIterationTime(seconds);
-}
-
-void State::SetLabel(const char* label) {
-  MutexLock l(manager_->GetBenchmarkMutex());
-  manager_->results.report_label_ = label;
-}
-
-void State::StartKeepRunning() {
-  CHECK(!started_ && !finished_);
-  started_ = true;
-  manager_->StartStopBarrier();
-  if (!error_occurred_) ResumeTiming();
-}
-
-void State::FinishKeepRunning() {
-  CHECK(started_ && (!finished_ || error_occurred_));
-  if (!error_occurred_) {
-    PauseTiming();
-  }
-  // Total iterations is now one greater than max iterations. Fix this.
-  total_iterations_ = max_iterations;
-  finished_ = true;
-  manager_->StartStopBarrier();
-}
-
-namespace internal {
-namespace {
-
-void RunBenchmarks(const std::vector<Benchmark::Instance>& benchmarks,
-                           BenchmarkReporter* console_reporter,
-                           BenchmarkReporter* file_reporter) {
-  // Note the file_reporter can be null.
-  CHECK(console_reporter != nullptr);
-
-  // Determine the width of the name field using a minimum width of 10.
-  bool has_repetitions = FLAGS_benchmark_repetitions > 1;
-  size_t name_field_width = 10;
-  for (const Benchmark::Instance& benchmark : benchmarks) {
-    name_field_width =
-        std::max<size_t>(name_field_width, benchmark.name.size());
-    has_repetitions |= benchmark.repetitions > 1;
-  }
-  if (has_repetitions) name_field_width += std::strlen("_stddev");
-
-  // Print header here
-  BenchmarkReporter::Context context;
-  context.num_cpus = NumCPUs();
-  context.mhz_per_cpu = CyclesPerSecond() / 1000000.0f;
-
-  context.cpu_scaling_enabled = CpuScalingEnabled();
-  context.name_field_width = name_field_width;
-
-  // Keep track of running times of all instances of current benchmark
-  std::vector<BenchmarkReporter::Run> complexity_reports;
-
-  // We flush streams after invoking reporter methods that write to them. This
-  // ensures users get timely updates even when streams are not line-buffered.
-  auto flushStreams = [](BenchmarkReporter* reporter) {
-    if (!reporter) return;
-    std::flush(reporter->GetOutputStream());
-    std::flush(reporter->GetErrorStream());
-  };
-
-  if (console_reporter->ReportContext(context) &&
-      (!file_reporter || file_reporter->ReportContext(context))) {
-    flushStreams(console_reporter);
-    flushStreams(file_reporter);
-    for (const auto& benchmark : benchmarks) {
-      std::vector<BenchmarkReporter::Run> reports =
-          RunBenchmark(benchmark, &complexity_reports);
-      console_reporter->ReportRuns(reports);
-      if (file_reporter) file_reporter->ReportRuns(reports);
-      flushStreams(console_reporter);
-      flushStreams(file_reporter);
-    }
-  }
-  console_reporter->Finalize();
-  if (file_reporter) file_reporter->Finalize();
-  flushStreams(console_reporter);
-  flushStreams(file_reporter);
-}
-
-std::unique_ptr<BenchmarkReporter> CreateReporter(
-    std::string const& name, ConsoleReporter::OutputOptions output_opts) {
-  typedef std::unique_ptr<BenchmarkReporter> PtrType;
-  if (name == "console") {
-    return PtrType(new ConsoleReporter(output_opts));
-  } else if (name == "json") {
-    return PtrType(new JSONReporter);
-  } else if (name == "csv") {
-    return PtrType(new CSVReporter);
-  } else {
-    std::cerr << "Unexpected format: '" << name << "'\n";
-    std::exit(1);
-  }
-}
-
-}  // end namespace
-
-bool IsZero(double n) {
-  return std::abs(n) < std::numeric_limits<double>::epsilon();
-}
-
-ConsoleReporter::OutputOptions GetOutputOptions(bool force_no_color) {
-  int output_opts = ConsoleReporter::OO_Defaults;
-  if ((FLAGS_benchmark_color == "auto" && IsColorTerminal()) ||
-      IsTruthyFlagValue(FLAGS_benchmark_color)) {
-    output_opts |= ConsoleReporter::OO_Color;
-  } else {
-    output_opts &= ~ConsoleReporter::OO_Color;
-  }
-  if(force_no_color) {
-    output_opts &= ~ConsoleReporter::OO_Color;
-  }
-  if(FLAGS_benchmark_counters_tabular) {
-    output_opts |= ConsoleReporter::OO_Tabular;
-  } else {
-    output_opts &= ~ConsoleReporter::OO_Tabular;
-  }
-  return static_cast< ConsoleReporter::OutputOptions >(output_opts);
-}
-
-}  // end namespace internal
-
-size_t RunSpecifiedBenchmarks() {
-  return RunSpecifiedBenchmarks(nullptr, nullptr);
-}
-
-size_t RunSpecifiedBenchmarks(BenchmarkReporter* console_reporter) {
-  return RunSpecifiedBenchmarks(console_reporter, nullptr);
-}
-
-size_t RunSpecifiedBenchmarks(BenchmarkReporter* console_reporter,
-                              BenchmarkReporter* file_reporter) {
-  std::string spec = FLAGS_benchmark_filter;
-  if (spec.empty() || spec == "all")
-    spec = ".";  // Regexp that matches all benchmarks
-
-  // Setup the reporters
-  std::ofstream output_file;
-  std::unique_ptr<BenchmarkReporter> default_console_reporter;
-  std::unique_ptr<BenchmarkReporter> default_file_reporter;
-  if (!console_reporter) {
-    default_console_reporter = internal::CreateReporter(
-          FLAGS_benchmark_format, internal::GetOutputOptions());
-    console_reporter = default_console_reporter.get();
-  }
-  auto& Out = console_reporter->GetOutputStream();
-  auto& Err = console_reporter->GetErrorStream();
-
-  std::string const& fname = FLAGS_benchmark_out;
-  if (fname.empty() && file_reporter) {
-    Err << "A custom file reporter was provided but "
-           "--benchmark_out=<file> was not specified."
-        << std::endl;
-    std::exit(1);
-  }
-  if (!fname.empty()) {
-    output_file.open(fname);
-    if (!output_file.is_open()) {
-      Err << "invalid file name: '" << fname << std::endl;
-      std::exit(1);
-    }
-    if (!file_reporter) {
-      default_file_reporter = internal::CreateReporter(
-          FLAGS_benchmark_out_format, ConsoleReporter::OO_None);
-      file_reporter = default_file_reporter.get();
-    }
-    file_reporter->SetOutputStream(&output_file);
-    file_reporter->SetErrorStream(&output_file);
-  }
-
-  std::vector<internal::Benchmark::Instance> benchmarks;
-  if (!FindBenchmarksInternal(spec, &benchmarks, &Err)) return 0;
-
-  if (benchmarks.empty()) {
-    Err << "Failed to match any benchmarks against regex: " << spec << "\n";
-    return 0;
-  }
-
-  if (FLAGS_benchmark_list_tests) {
-    for (auto const& benchmark : benchmarks) Out << benchmark.name << "\n";
-  } else {
-    internal::RunBenchmarks(benchmarks, console_reporter, file_reporter);
-  }
-
-  return benchmarks.size();
-}
-
-namespace internal {
-
-void PrintUsageAndExit() {
-  fprintf(stdout,
-          "benchmark"
-          " [--benchmark_list_tests={true|false}]\n"
-          "          [--benchmark_filter=<regex>]\n"
-          "          [--benchmark_min_time=<min_time>]\n"
-          "          [--benchmark_repetitions=<num_repetitions>]\n"
-          "          [--benchmark_report_aggregates_only={true|false}\n"
-          "          [--benchmark_format=<console|json|csv>]\n"
-          "          [--benchmark_out=<filename>]\n"
-          "          [--benchmark_out_format=<json|console|csv>]\n"
-          "          [--benchmark_color={auto|true|false}]\n"
-          "          [--benchmark_counters_tabular={true|false}]\n"
-          "          [--v=<verbosity>]\n");
-  exit(0);
-}
-
-void ParseCommandLineFlags(int* argc, char** argv) {
-  using namespace benchmark;
-  for (int i = 1; i < *argc; ++i) {
-    if (ParseBoolFlag(argv[i], "benchmark_list_tests",
-                      &FLAGS_benchmark_list_tests) ||
-        ParseStringFlag(argv[i], "benchmark_filter", &FLAGS_benchmark_filter) ||
-        ParseDoubleFlag(argv[i], "benchmark_min_time",
-                        &FLAGS_benchmark_min_time) ||
-        ParseInt32Flag(argv[i], "benchmark_repetitions",
-                       &FLAGS_benchmark_repetitions) ||
-        ParseBoolFlag(argv[i], "benchmark_report_aggregates_only",
-                      &FLAGS_benchmark_report_aggregates_only) ||
-        ParseStringFlag(argv[i], "benchmark_format", &FLAGS_benchmark_format) ||
-        ParseStringFlag(argv[i], "benchmark_out", &FLAGS_benchmark_out) ||
-        ParseStringFlag(argv[i], "benchmark_out_format",
-                        &FLAGS_benchmark_out_format) ||
-        ParseStringFlag(argv[i], "benchmark_color", &FLAGS_benchmark_color) ||
-        // "color_print" is the deprecated name for "benchmark_color".
-        // TODO: Remove this.
-        ParseStringFlag(argv[i], "color_print", &FLAGS_benchmark_color) ||
-        ParseBoolFlag(argv[i], "benchmark_counters_tabular",
-                        &FLAGS_benchmark_counters_tabular) ||
-        ParseInt32Flag(argv[i], "v", &FLAGS_v)) {
-      for (int j = i; j != *argc - 1; ++j) argv[j] = argv[j + 1];
-
-      --(*argc);
-      --i;
-    } else if (IsFlag(argv[i], "help")) {
-      PrintUsageAndExit();
-    }
-  }
-  for (auto const* flag :
-       {&FLAGS_benchmark_format, &FLAGS_benchmark_out_format})
-    if (*flag != "console" && *flag != "json" && *flag != "csv") {
-      PrintUsageAndExit();
-    }
-  if (FLAGS_benchmark_color.empty()) {
-    PrintUsageAndExit();
-  }
-}
-
-int InitializeStreams() {
-  static std::ios_base::Init init;
-  return 0;
-}
-
-}  // end namespace internal
-
-void Initialize(int* argc, char** argv) {
-  internal::ParseCommandLineFlags(argc, argv);
-  internal::LogLevel() = FLAGS_v;
-}
-
-bool ReportUnrecognizedArguments(int argc, char** argv) {
-  for (int i = 1; i < argc; ++i) {
-    fprintf(stderr, "%s: error: unrecognized command-line flag: %s\n", argv[0], argv[i]);
-  }
-  return argc > 1;
-}
-
-}  // end namespace benchmark
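
For context, the driver above serves the 1.2.0 usage idiom; a minimal benchmark binary against this version would look roughly like the sketch below (BM_StringCopy is an invented example name; Initialize, ReportUnrecognizedArguments, and RunSpecifiedBenchmarks are the entry points defined in this file). The 1.3.0 sources added alongside this removal are what enable the updated idiom mentioned in the commit summary.

    #include <string>
    #include "benchmark/benchmark.h"

    // Measures std::string copy construction. RunBenchmark() above grows the
    // iteration count until --benchmark_min_time worth of work is observed.
    static void BM_StringCopy(benchmark::State& state) {
      std::string src(64, 'x');
      while (state.KeepRunning()) {
        std::string copy(src);
        benchmark::DoNotOptimize(copy);
      }
    }
    BENCHMARK(BM_StringCopy);

    int main(int argc, char** argv) {
      benchmark::Initialize(&argc, argv);
      if (benchmark::ReportUnrecognizedArguments(argc, argv)) return 1;
      benchmark::RunSpecifiedBenchmarks();
      return 0;
    }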

Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/benchmark_api_internal.h
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/benchmark_api_internal.h?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/benchmark_api_internal.h (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/benchmark_api_internal.h (removed)
@@ -1,46 +0,0 @@
-#ifndef BENCHMARK_API_INTERNAL_H
-#define BENCHMARK_API_INTERNAL_H
-
-#include "benchmark/benchmark.h"
-
-#include <cmath>
-#include <iosfwd>
-#include <limits>
-#include <string>
-#include <vector>
-
-namespace benchmark {
-namespace internal {
-
-// Information kept per benchmark we may want to run
-struct Benchmark::Instance {
-  std::string name;
-  Benchmark* benchmark;
-  ReportMode report_mode;
-  std::vector<int> arg;
-  TimeUnit time_unit;
-  int range_multiplier;
-  bool use_real_time;
-  bool use_manual_time;
-  BigO complexity;
-  BigOFunc* complexity_lambda;
-  UserCounters counters;
-  bool last_benchmark_instance;
-  int repetitions;
-  double min_time;
-  size_t iterations;
-  int threads;  // Number of concurrent threads to use
-};
-
-bool FindBenchmarksInternal(const std::string& re,
-                            std::vector<Benchmark::Instance>* benchmarks,
-                            std::ostream* Err);
-
-bool IsZero(double n);
-
-ConsoleReporter::OutputOptions GetOutputOptions(bool force_no_color = false);
-
-}  // end namespace internal
-}  // end namespace benchmark
-
-#endif  // BENCHMARK_API_INTERNAL_H

Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/benchmark_register.cc
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/benchmark_register.cc?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/benchmark_register.cc (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/benchmark_register.cc (removed)
@@ -1,467 +0,0 @@
-// Copyright 2015 Google Inc. All rights reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-//     http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-#include "benchmark/benchmark.h"
-#include "benchmark_api_internal.h"
-#include "internal_macros.h"
-
-#ifndef BENCHMARK_OS_WINDOWS
-#include <sys/resource.h>
-#include <sys/time.h>
-#include <unistd.h>
-#endif
-
-#include <algorithm>
-#include <atomic>
-#include <condition_variable>
-#include <cstdio>
-#include <cstdlib>
-#include <cstring>
-#include <fstream>
-#include <iostream>
-#include <memory>
-#include <sstream>
-#include <thread>
-
-#include "check.h"
-#include "commandlineflags.h"
-#include "complexity.h"
-#include "log.h"
-#include "mutex.h"
-#include "re.h"
-#include "stat.h"
-#include "string_util.h"
-#include "sysinfo.h"
-#include "timers.h"
-
-namespace benchmark {
-
-namespace {
-// For non-dense Range, intermediate values are powers of kRangeMultiplier.
-static const int kRangeMultiplier = 8;
-// The size of a benchmark family is the number of inputs on which the
-// benchmark is repeated. If this is "large" then warn the user during configuration.
-static const size_t kMaxFamilySize = 100;
-}  // end namespace
-
-namespace internal {
-
-//=============================================================================//
-//                         BenchmarkFamilies
-//=============================================================================//
-
-// Class for managing registered benchmarks.  Note that each registered
-// benchmark identifies a family of related benchmarks to run.
-class BenchmarkFamilies {
- public:
-  static BenchmarkFamilies* GetInstance();
-
-  // Registers a benchmark family and returns the index assigned to it.
-  size_t AddBenchmark(std::unique_ptr<Benchmark> family);
-
-  // Clear all registered benchmark families.
-  void ClearBenchmarks();
-
-  // Extract the list of benchmark instances that match the specified
-  // regular expression.
-  bool FindBenchmarks(const std::string& re,
-                      std::vector<Benchmark::Instance>* benchmarks,
-                      std::ostream* Err);
-
- private:
-  BenchmarkFamilies() {}
-
-  std::vector<std::unique_ptr<Benchmark>> families_;
-  Mutex mutex_;
-};
-
-BenchmarkFamilies* BenchmarkFamilies::GetInstance() {
-  static BenchmarkFamilies instance;
-  return &instance;
-}
-
-size_t BenchmarkFamilies::AddBenchmark(std::unique_ptr<Benchmark> family) {
-  MutexLock l(mutex_);
-  size_t index = families_.size();
-  families_.push_back(std::move(family));
-  return index;
-}
-
-void BenchmarkFamilies::ClearBenchmarks() {
-  MutexLock l(mutex_);
-  families_.clear();
-  families_.shrink_to_fit();
-}
-
-bool BenchmarkFamilies::FindBenchmarks(
-    const std::string& spec, std::vector<Benchmark::Instance>* benchmarks,
-    std::ostream* ErrStream) {
-  CHECK(ErrStream);
-  auto& Err = *ErrStream;
-  // Make regular expression out of command-line flag
-  std::string error_msg;
-  Regex re;
-  if (!re.Init(spec, &error_msg)) {
-    Err << "Could not compile benchmark re: " << error_msg << std::endl;
-    return false;
-  }
-
-  // Special list of thread counts to use when none are specified
-  const std::vector<int> one_thread = {1};
-
-  MutexLock l(mutex_);
-  for (std::unique_ptr<Benchmark>& family : families_) {
-    // Family was deleted or benchmark doesn't match
-    if (!family) continue;
-
-    if (family->ArgsCnt() == -1) {
-      family->Args({});
-    }
-    const std::vector<int>* thread_counts =
-        (family->thread_counts_.empty()
-             ? &one_thread
-             : &static_cast<const std::vector<int>&>(family->thread_counts_));
-    const size_t family_size = family->args_.size() * thread_counts->size();
-    // The benchmark will be run on at least 'family_size' different inputs.
-    // If 'family_size' is very large warn the user.
-    if (family_size > kMaxFamilySize) {
-      Err << "The number of inputs is very large. " << family->name_
-          << " will be repeated at least " << family_size << " times.\n";
-    }
-    // Reserve in the special case of the regex ".", since we know the final
-    // family size.
-    if (spec == ".") benchmarks->reserve(family_size);
-
-    for (auto const& args : family->args_) {
-      for (int num_threads : *thread_counts) {
-        Benchmark::Instance instance;
-        instance.name = family->name_;
-        instance.benchmark = family.get();
-        instance.report_mode = family->report_mode_;
-        instance.arg = args;
-        instance.time_unit = family->time_unit_;
-        instance.range_multiplier = family->range_multiplier_;
-        instance.min_time = family->min_time_;
-        instance.iterations = family->iterations_;
-        instance.repetitions = family->repetitions_;
-        instance.use_real_time = family->use_real_time_;
-        instance.use_manual_time = family->use_manual_time_;
-        instance.complexity = family->complexity_;
-        instance.complexity_lambda = family->complexity_lambda_;
-        instance.threads = num_threads;
-
-        // Add arguments to instance name
-        size_t arg_i = 0;
-        for (auto const& arg : args) {
-          instance.name += "/";
-
-          if (arg_i < family->arg_names_.size()) {
-            const auto& arg_name = family->arg_names_[arg_i];
-            if (!arg_name.empty()) {
-              instance.name +=
-                  StringPrintF("%s:", family->arg_names_[arg_i].c_str());
-            }
-          }
-          
-          instance.name += StringPrintF("%d", arg);
-          ++arg_i;
-        }
-
-        if (!IsZero(family->min_time_))
-          instance.name += StringPrintF("/min_time:%0.3f", family->min_time_);
-        if (family->iterations_ != 0)
-          instance.name += StringPrintF("/iterations:%d", family->iterations_);
-        if (family->repetitions_ != 0)
-          instance.name += StringPrintF("/repeats:%d", family->repetitions_);
-
-        if (family->use_manual_time_) {
-          instance.name += "/manual_time";
-        } else if (family->use_real_time_) {
-          instance.name += "/real_time";
-        }
-
-        // Add the number of threads used to the name
-        if (!family->thread_counts_.empty()) {
-          instance.name += StringPrintF("/threads:%d", instance.threads);
-        }
-
-        if (re.Match(instance.name)) {
-          instance.last_benchmark_instance = (&args == &family->args_.back());
-          benchmarks->push_back(std::move(instance));
-        }
-      }
-    }
-  }
-  return true;
-}
-
-Benchmark* RegisterBenchmarkInternal(Benchmark* bench) {
-  std::unique_ptr<Benchmark> bench_ptr(bench);
-  BenchmarkFamilies* families = BenchmarkFamilies::GetInstance();
-  families->AddBenchmark(std::move(bench_ptr));
-  return bench;
-}
-
-// FIXME: This function is a hack so that benchmark.cc can access
-// `BenchmarkFamilies`
-bool FindBenchmarksInternal(const std::string& re,
-                            std::vector<Benchmark::Instance>* benchmarks,
-                            std::ostream* Err) {
-  return BenchmarkFamilies::GetInstance()->FindBenchmarks(re, benchmarks, Err);
-}
-
-//=============================================================================//
-//                               Benchmark
-//=============================================================================//
-
-Benchmark::Benchmark(const char* name)
-    : name_(name),
-      report_mode_(RM_Unspecified),
-      time_unit_(kNanosecond),
-      range_multiplier_(kRangeMultiplier),
-      min_time_(0),
-      iterations_(0),
-      repetitions_(0),
-      use_real_time_(false),
-      use_manual_time_(false),
-      complexity_(oNone),
-      complexity_lambda_(nullptr) {}
-
-Benchmark::~Benchmark() {}
-
-void Benchmark::AddRange(std::vector<int>* dst, int lo, int hi, int mult) {
-  CHECK_GE(lo, 0);
-  CHECK_GE(hi, lo);
-  CHECK_GE(mult, 2);
-
-  // Add "lo"
-  dst->push_back(lo);
-
-  static const int kint32max = std::numeric_limits<int32_t>::max();
-
-  // Now space out the benchmarks in multiples of "mult"
-  for (int32_t i = 1; i < kint32max / mult; i *= mult) {
-    if (i >= hi) break;
-    if (i > lo) {
-      dst->push_back(i);
-    }
-  }
-  // Add "hi" (if different from "lo")
-  if (hi != lo) {
-    dst->push_back(hi);
-  }
-}
-
-Benchmark* Benchmark::Arg(int x) {
-  CHECK(ArgsCnt() == -1 || ArgsCnt() == 1);
-  args_.push_back({x});
-  return this;
-}
-
-Benchmark* Benchmark::Unit(TimeUnit unit) {
-  time_unit_ = unit;
-  return this;
-}
-
-Benchmark* Benchmark::Range(int start, int limit) {
-  CHECK(ArgsCnt() == -1 || ArgsCnt() == 1);
-  std::vector<int> arglist;
-  AddRange(&arglist, start, limit, range_multiplier_);
-
-  for (int i : arglist) {
-    args_.push_back({i});
-  }
-  return this;
-}
-
-Benchmark* Benchmark::Ranges(const std::vector<std::pair<int, int>>& ranges) {
-  CHECK(ArgsCnt() == -1 || ArgsCnt() == static_cast<int>(ranges.size()));
-  std::vector<std::vector<int>> arglists(ranges.size());
-  std::size_t total = 1;
-  for (std::size_t i = 0; i < ranges.size(); i++) {
-    AddRange(&arglists[i], ranges[i].first, ranges[i].second,
-             range_multiplier_);
-    total *= arglists[i].size();
-  }
-
-  std::vector<std::size_t> ctr(arglists.size(), 0);
-
-  for (std::size_t i = 0; i < total; i++) {
-    std::vector<int> tmp;
-    tmp.reserve(arglists.size());
-
-    for (std::size_t j = 0; j < arglists.size(); j++) {
-      tmp.push_back(arglists[j].at(ctr[j]));
-    }
-
-    args_.push_back(std::move(tmp));
-
-    for (std::size_t j = 0; j < arglists.size(); j++) {
-      if (ctr[j] + 1 < arglists[j].size()) {
-        ++ctr[j];
-        break;
-      }
-      ctr[j] = 0;
-    }
-  }
-  return this;
-}
-
-Benchmark* Benchmark::ArgName(const std::string& name) {
-  CHECK(ArgsCnt() == -1 || ArgsCnt() == 1);
-  arg_names_ = {name};
-  return this;
-}
-
-Benchmark* Benchmark::ArgNames(const std::vector<std::string>& names) {
-  CHECK(ArgsCnt() == -1 || ArgsCnt() == static_cast<int>(names.size()));
-  arg_names_ = names;
-  return this;
-}
-
-Benchmark* Benchmark::DenseRange(int start, int limit, int step) {
-  CHECK(ArgsCnt() == -1 || ArgsCnt() == 1);
-  CHECK_GE(start, 0);
-  CHECK_LE(start, limit);
-  for (int arg = start; arg <= limit; arg += step) {
-    args_.push_back({arg});
-  }
-  return this;
-}
-
-Benchmark* Benchmark::Args(const std::vector<int>& args) {
-  CHECK(ArgsCnt() == -1 || ArgsCnt() == static_cast<int>(args.size()));
-  args_.push_back(args);
-  return this;
-}
-
-Benchmark* Benchmark::Apply(void (*custom_arguments)(Benchmark* benchmark)) {
-  custom_arguments(this);
-  return this;
-}
-
-Benchmark* Benchmark::RangeMultiplier(int multiplier) {
-  CHECK(multiplier > 1);
-  range_multiplier_ = multiplier;
-  return this;
-}
-
-
-Benchmark* Benchmark::MinTime(double t) {
-  CHECK(t > 0.0);
-  CHECK(iterations_ == 0);
-  min_time_ = t;
-  return this;
-}
-
-
-Benchmark* Benchmark::Iterations(size_t n) {
-  CHECK(n > 0);
-  CHECK(IsZero(min_time_));
-  iterations_ = n;
-  return this;
-}
-
-Benchmark* Benchmark::Repetitions(int n) {
-  CHECK(n > 0);
-  repetitions_ = n;
-  return this;
-}
-
-Benchmark* Benchmark::ReportAggregatesOnly(bool value) {
-  report_mode_ = value ? RM_ReportAggregatesOnly : RM_Default;
-  return this;
-}
-
-Benchmark* Benchmark::UseRealTime() {
-  CHECK(!use_manual_time_)
-      << "Cannot set UseRealTime and UseManualTime simultaneously.";
-  use_real_time_ = true;
-  return this;
-}
-
-Benchmark* Benchmark::UseManualTime() {
-  CHECK(!use_real_time_)
-      << "Cannot set UseRealTime and UseManualTime simultaneously.";
-  use_manual_time_ = true;
-  return this;
-}
-
-Benchmark* Benchmark::Complexity(BigO complexity) {
-  complexity_ = complexity;
-  return this;
-}
-
-Benchmark* Benchmark::Complexity(BigOFunc* complexity) {
-  complexity_lambda_ = complexity;
-  complexity_ = oLambda;
-  return this;
-}
-
-Benchmark* Benchmark::Threads(int t) {
-  CHECK_GT(t, 0);
-  thread_counts_.push_back(t);
-  return this;
-}
-
-Benchmark* Benchmark::ThreadRange(int min_threads, int max_threads) {
-  CHECK_GT(min_threads, 0);
-  CHECK_GE(max_threads, min_threads);
-
-  AddRange(&thread_counts_, min_threads, max_threads, 2);
-  return this;
-}
-
-Benchmark* Benchmark::DenseThreadRange(int min_threads, int max_threads,
-                                       int stride) {
-  CHECK_GT(min_threads, 0);
-  CHECK_GE(max_threads, min_threads);
-  CHECK_GE(stride, 1);
-
-  for (auto i = min_threads; i < max_threads; i += stride) {
-    thread_counts_.push_back(i);
-  }
-  thread_counts_.push_back(max_threads);
-  return this;
-}
-
-Benchmark* Benchmark::ThreadPerCpu() {
-  static int num_cpus = NumCPUs();
-  thread_counts_.push_back(num_cpus);
-  return this;
-}
-
-void Benchmark::SetName(const char* name) { name_ = name; }
-
-int Benchmark::ArgsCnt() const {
-  if (args_.empty()) {
-    if (arg_names_.empty()) return -1;
-    return static_cast<int>(arg_names_.size());
-  }
-  return static_cast<int>(args_.front().size());
-}
-
-//=============================================================================//
-//                            FunctionBenchmark
-//=============================================================================//
-
-void FunctionBenchmark::Run(State& st) { func_(st); }
-
-}  // end namespace internal
-
-void ClearRegisteredBenchmarks() {
-  internal::BenchmarkFamilies::GetInstance()->ClearBenchmarks();
-}
-
-}  // end namespace benchmark
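
To illustrate the fluent builder methods above (BM_memcpy is an invented name, shown only as a sketch): each chained call appends to args_ or thread_counts_, and FindBenchmarks() then expands the cross product into instances with names like BM_memcpy/64/threads:2.

    #include <cstdint>
    #include <cstring>
    #include <vector>
    #include "benchmark/benchmark.h"

    static void BM_memcpy(benchmark::State& state) {
      const std::size_t len = static_cast<std::size_t>(state.range(0));
      std::vector<char> src(len, 'a'), dst(len);
      while (state.KeepRunning())
        std::memcpy(dst.data(), src.data(), len);
      state.SetBytesProcessed(int64_t(state.iterations()) * int64_t(len));
    }
    // Range(8, 4096) inserts 8, then powers of the multiplier, then 4096;
    // Threads(2) makes every instance run on two concurrent threads.
    BENCHMARK(BM_memcpy)->RangeMultiplier(2)->Range(8, 4096)->Threads(2);
    BENCHMARK_MAIN();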

Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/check.h
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/check.h?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/check.h (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/check.h (removed)
@@ -1,79 +0,0 @@
-#ifndef CHECK_H_
-#define CHECK_H_
-
-#include <cstdlib>
-#include <ostream>
-#include <cmath>
-
-#include "internal_macros.h"
-#include "log.h"
-
-namespace benchmark {
-namespace internal {
-
-typedef void(AbortHandlerT)();
-
-inline AbortHandlerT*& GetAbortHandler() {
-  static AbortHandlerT* handler = &std::abort;
-  return handler;
-}
-
-BENCHMARK_NORETURN inline void CallAbortHandler() {
-  GetAbortHandler()();
-  std::abort();  // fallback to enforce noreturn
-}
-
-// CheckHandler is the class constructed by failing CHECK macros. CheckHandler
-// will log information about the failures and abort when it is destructed.
-class CheckHandler {
- public:
-  CheckHandler(const char* check, const char* file, const char* func, int line)
-      : log_(GetErrorLogInstance()) {
-    log_ << file << ":" << line << ": " << func << ": Check `" << check
-         << "' failed. ";
-  }
-
-  LogType& GetLog() { return log_; }
-
-  BENCHMARK_NORETURN ~CheckHandler() BENCHMARK_NOEXCEPT_OP(false) {
-    log_ << std::endl;
-    CallAbortHandler();
-  }
-
-  CheckHandler& operator=(const CheckHandler&) = delete;
-  CheckHandler(const CheckHandler&) = delete;
-  CheckHandler() = delete;
-
- private:
-  LogType& log_;
-};
-
-}  // end namespace internal
-}  // end namespace benchmark
-
-// The CHECK macro returns a std::ostream object that can have extra information
-// written to it.
-#ifndef NDEBUG
-#define CHECK(b)                                                             \
-  (b ? ::benchmark::internal::GetNullLogInstance()                           \
-     : ::benchmark::internal::CheckHandler(#b, __FILE__, __func__, __LINE__) \
-           .GetLog())
-#else
-#define CHECK(b) ::benchmark::internal::GetNullLogInstance()
-#endif
-
-#define CHECK_EQ(a, b) CHECK((a) == (b))
-#define CHECK_NE(a, b) CHECK((a) != (b))
-#define CHECK_GE(a, b) CHECK((a) >= (b))
-#define CHECK_LE(a, b) CHECK((a) <= (b))
-#define CHECK_GT(a, b) CHECK((a) > (b))
-#define CHECK_LT(a, b) CHECK((a) < (b))
-
-#define CHECK_FLOAT_EQ(a, b, eps) CHECK(std::fabs((a) - (b)) <  (eps))
-#define CHECK_FLOAT_NE(a, b, eps) CHECK(std::fabs((a) - (b)) >= (eps))
-#define CHECK_FLOAT_GE(a, b, eps) CHECK((a) - (b) > -(eps))
-#define CHECK_FLOAT_LE(a, b, eps) CHECK((b) - (a) > -(eps))
-#define CHECK_FLOAT_GT(a, b, eps) CHECK((a) - (b) >  (eps))
-#define CHECK_FLOAT_LT(a, b, eps) CHECK((b) - (a) >  (eps))
-
-#endif  // CHECK_H_
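
A usage sketch for the macros above (ComputeStride is a hypothetical function, for illustration): a passing check streams into the null log, while a failing one constructs a CheckHandler that logs file, line, function, and the stringized condition plus anything streamed after it, then aborts.

    #include "check.h"  // the header above

    int ComputeStride(int width, int channels) {
      CHECK_GT(width, 0) << "width must be positive, got " << width;
      CHECK_GT(channels, 0);
      return width * channels;
    }
    // Note: with NDEBUG defined, CHECK(b) expands to the null log only,
    // so neither the condition nor the streamed message is evaluated.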

Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/colorprint.cc
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/colorprint.cc?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/colorprint.cc (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/colorprint.cc (removed)
@@ -1,188 +0,0 @@
-// Copyright 2015 Google Inc. All rights reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-//     http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-#include "colorprint.h"
-
-#include <cstdarg>
-#include <cstdio>
-#include <cstdlib>
-#include <cstring>
-#include <memory>
-#include <string>
-
-#include "check.h"
-#include "internal_macros.h"
-
-#ifdef BENCHMARK_OS_WINDOWS
-#include <Windows.h>
-#include <io.h>
-#else
-#include <unistd.h>
-#endif  // BENCHMARK_OS_WINDOWS
-
-namespace benchmark {
-namespace {
-#ifdef BENCHMARK_OS_WINDOWS
-typedef WORD PlatformColorCode;
-#else
-typedef const char* PlatformColorCode;
-#endif
-
-PlatformColorCode GetPlatformColorCode(LogColor color) {
-#ifdef BENCHMARK_OS_WINDOWS
-  switch (color) {
-    case COLOR_RED:
-      return FOREGROUND_RED;
-    case COLOR_GREEN:
-      return FOREGROUND_GREEN;
-    case COLOR_YELLOW:
-      return FOREGROUND_RED | FOREGROUND_GREEN;
-    case COLOR_BLUE:
-      return FOREGROUND_BLUE;
-    case COLOR_MAGENTA:
-      return FOREGROUND_BLUE | FOREGROUND_RED;
-    case COLOR_CYAN:
-      return FOREGROUND_BLUE | FOREGROUND_GREEN;
-    case COLOR_WHITE:  // fall through to default
-    default:
-      return 0;
-  }
-#else
-  switch (color) {
-    case COLOR_RED:
-      return "1";
-    case COLOR_GREEN:
-      return "2";
-    case COLOR_YELLOW:
-      return "3";
-    case COLOR_BLUE:
-      return "4";
-    case COLOR_MAGENTA:
-      return "5";
-    case COLOR_CYAN:
-      return "6";
-    case COLOR_WHITE:
-      return "7";
-    default:
-      return nullptr;
-  };
-#endif
-}
-
-}  // end namespace
-
-std::string FormatString(const char* msg, va_list args) {
-  // we might need a second shot at this, so pre-emptively make a copy
-  va_list args_cp;
-  va_copy(args_cp, args);
-
-  std::size_t size = 256;
-  char local_buff[256];
-  auto ret = vsnprintf(local_buff, size, msg, args_cp);
-
-  va_end(args_cp);
-
-  // currently there is no error handling for failure, so this is a hack.
-  CHECK(ret >= 0);
-
-  if (ret == 0)  // handle empty expansion
-    return {};
-  else if (static_cast<size_t>(ret) < size)
-    return local_buff;
-  else {
-    // we did not provide a long enough buffer on our first attempt.
-    size = (size_t)ret + 1;  // + 1 for the null byte
-    std::unique_ptr<char[]> buff(new char[size]);
-    ret = vsnprintf(buff.get(), size, msg, args);
-    CHECK(ret > 0 && ((size_t)ret) < size);
-    return buff.get();
-  }
-}
-
-std::string FormatString(const char* msg, ...) {
-  va_list args;
-  va_start(args, msg);
-  auto tmp = FormatString(msg, args);
-  va_end(args);
-  return tmp;
-}
-
-void ColorPrintf(std::ostream& out, LogColor color, const char* fmt, ...) {
-  va_list args;
-  va_start(args, fmt);
-  ColorPrintf(out, color, fmt, args);
-  va_end(args);
-}
-
-void ColorPrintf(std::ostream& out, LogColor color, const char* fmt,
-                 va_list args) {
-#ifdef BENCHMARK_OS_WINDOWS
-  ((void)out);  // suppress unused warning
-
-  const HANDLE stdout_handle = GetStdHandle(STD_OUTPUT_HANDLE);
-
-  // Gets the current text color.
-  CONSOLE_SCREEN_BUFFER_INFO buffer_info;
-  GetConsoleScreenBufferInfo(stdout_handle, &buffer_info);
-  const WORD old_color_attrs = buffer_info.wAttributes;
-
-  // We need to flush the stream buffers into the console before each
-  // SetConsoleTextAttribute call lest it affect the text that is already
-  // printed but has not yet reached the console.
-  fflush(stdout);
-  SetConsoleTextAttribute(stdout_handle,
-                          GetPlatformColorCode(color) | FOREGROUND_INTENSITY);
-  vprintf(fmt, args);
-
-  fflush(stdout);
-  // Restores the text color.
-  SetConsoleTextAttribute(stdout_handle, old_color_attrs);
-#else
-  const char* color_code = GetPlatformColorCode(color);
-  if (color_code) out << FormatString("\033[0;3%sm", color_code);
-  out << FormatString(fmt, args) << "\033[m";
-#endif
-}
-
-bool IsColorTerminal() {
-#if BENCHMARK_OS_WINDOWS
-  // On Windows the TERM variable is usually not set, but the
-  // console there does support colors.
-  return 0 != _isatty(_fileno(stdout));
-#else
-  // On non-Windows platforms, we rely on the TERM variable. This list of
-  // supported TERM values is copied from Google Test:
-  // <https://github.com/google/googletest/blob/master/googletest/src/gtest.cc#L2925>.
-  const char* const SUPPORTED_TERM_VALUES[] = {
-      "xterm",         "xterm-color",     "xterm-256color",
-      "screen",        "screen-256color", "tmux",
-      "tmux-256color", "rxvt-unicode",    "rxvt-unicode-256color",
-      "linux",         "cygwin",
-  };
-
-  const char* const term = getenv("TERM");
-
-  bool term_supports_color = false;
-  for (const char* candidate : SUPPORTED_TERM_VALUES) {
-    if (term && 0 == strcmp(term, candidate)) {
-      term_supports_color = true;
-      break;
-    }
-  }
-
-  return 0 != isatty(fileno(stdout)) && term_supports_color;
-#endif  // BENCHMARK_OS_WINDOWS
-}
-
-}  // end namespace benchmark
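
A short sketch of how these helpers are meant to be combined (illustrative only; this is an internal API): gate color output on IsColorTerminal(), the same way GetOutputOptions() in benchmark.cc decides whether to set OO_Color.

    #include <iostream>
    #include "colorprint.h"  // declares ColorPrintf, IsColorTerminal

    int main() {
      if (benchmark::IsColorTerminal())
        benchmark::ColorPrintf(std::cout, benchmark::COLOR_GREEN,
                               "%-10s %5.1f%%\n", "passed", 98.5);
      else
        std::cout << "passed      98.5%\n";
      return 0;
    }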

Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/colorprint.h
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/colorprint.h?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/colorprint.h (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/colorprint.h (removed)
@@ -1,33 +0,0 @@
-#ifndef BENCHMARK_COLORPRINT_H_
-#define BENCHMARK_COLORPRINT_H_
-
-#include <cstdarg>
-#include <iostream>
-#include <string>
-
-namespace benchmark {
-enum LogColor {
-  COLOR_DEFAULT,
-  COLOR_RED,
-  COLOR_GREEN,
-  COLOR_YELLOW,
-  COLOR_BLUE,
-  COLOR_MAGENTA,
-  COLOR_CYAN,
-  COLOR_WHITE
-};
-
-std::string FormatString(const char* msg, va_list args);
-std::string FormatString(const char* msg, ...);
-
-void ColorPrintf(std::ostream& out, LogColor color, const char* fmt,
-                 va_list args);
-void ColorPrintf(std::ostream& out, LogColor color, const char* fmt, ...);
-
-// Returns true if stdout appears to be a terminal that supports colored
-// output, false otherwise.
-bool IsColorTerminal();
-
-}  // end namespace benchmark
-
-#endif  // BENCHMARK_COLORPRINT_H_

Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/commandlineflags.cc
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/commandlineflags.cc?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/commandlineflags.cc (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/commandlineflags.cc (removed)
@@ -1,218 +0,0 @@
-// Copyright 2015 Google Inc. All rights reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-//     http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-#include "commandlineflags.h"
-
-#include <cctype>
-#include <cstdlib>
-#include <cstring>
-#include <iostream>
-#include <limits>
-
-namespace benchmark {
-// Parses 'str' for a 32-bit signed integer.  If successful, writes
-// the result to *value and returns true; otherwise leaves *value
-// unchanged and returns false.
-bool ParseInt32(const std::string& src_text, const char* str, int32_t* value) {
-  // Parses the environment variable as a decimal integer.
-  char* end = nullptr;
-  const long long_value = strtol(str, &end, 10);  // NOLINT
-
-  // Has strtol() consumed all characters in the string?
-  if (*end != '\0') {
-    // No - an invalid character was encountered.
-    std::cerr << src_text << " is expected to be a 32-bit integer, "
-              << "but actually has value \"" << str << "\".\n";
-    return false;
-  }
-
-  // Is the parsed value in the range of an Int32?
-  const int32_t result = static_cast<int32_t>(long_value);
-  if (long_value == std::numeric_limits<long>::max() ||
-      long_value == std::numeric_limits<long>::min() ||
-      // The parsed value overflows as a long.  (strtol() returns
-      // LONG_MAX or LONG_MIN when the input overflows.)
-      result != long_value
-      // The parsed value overflows as an Int32.
-      ) {
-    std::cerr << src_text << " is expected to be a 32-bit integer, "
-              << "but actually has value \"" << str << "\", "
-              << "which overflows.\n";
-    return false;
-  }
-
-  *value = result;
-  return true;
-}
-
-// Parses 'str' for a double.  If successful, writes the result to *value and
-// returns true; otherwise leaves *value unchanged and returns false.
-bool ParseDouble(const std::string& src_text, const char* str, double* value) {
-  // Parses the string as a decimal floating-point number.
-  char* end = nullptr;
-  const double double_value = strtod(str, &end);  // NOLINT
-
-  // Has strtod() consumed all characters in the string?
-  if (*end != '\0') {
-    // No - an invalid character was encountered.
-    std::cerr << src_text << " is expected to be a double, "
-              << "but actually has value \"" << str << "\".\n";
-    return false;
-  }
-
-  *value = double_value;
-  return true;
-}
-
-// Returns the name of the environment variable corresponding to the
-// given flag.  For example, FlagToEnvVar("foo") will return
-// "BENCHMARK_FOO" in the open-source version.
-static std::string FlagToEnvVar(const char* flag) {
-  const std::string flag_str(flag);
-
-  std::string env_var;
-  for (size_t i = 0; i != flag_str.length(); ++i)
-    env_var += static_cast<char>(::toupper(flag_str.c_str()[i]));
-
-  return "BENCHMARK_" + env_var;
-}
-
-// Reads and returns the Boolean environment variable corresponding to
-// the given flag; if it's not set, returns default_value.
-//
-// The value is considered true iff it's not "0".
-bool BoolFromEnv(const char* flag, bool default_value) {
-  const std::string env_var = FlagToEnvVar(flag);
-  const char* const string_value = getenv(env_var.c_str());
-  return string_value == nullptr ? default_value
-                                 : strcmp(string_value, "0") != 0;
-}
-
-// Reads and returns a 32-bit integer stored in the environment
-// variable corresponding to the given flag; if it isn't set or
-// doesn't represent a valid 32-bit integer, returns default_value.
-int32_t Int32FromEnv(const char* flag, int32_t default_value) {
-  const std::string env_var = FlagToEnvVar(flag);
-  const char* const string_value = getenv(env_var.c_str());
-  if (string_value == nullptr) {
-    // The environment variable is not set.
-    return default_value;
-  }
-
-  int32_t result = default_value;
-  if (!ParseInt32(std::string("Environment variable ") + env_var, string_value,
-                  &result)) {
-    std::cout << "The default value " << default_value << " is used.\n";
-    return default_value;
-  }
-
-  return result;
-}
-
-// Reads and returns the string environment variable corresponding to
-// the given flag; if it's not set, returns default_value.
-const char* StringFromEnv(const char* flag, const char* default_value) {
-  const std::string env_var = FlagToEnvVar(flag);
-  const char* const value = getenv(env_var.c_str());
-  return value == nullptr ? default_value : value;
-}
-
-// Parses a string as a command line flag.  The string should have
-// the format "--flag=value".  When def_optional is true, the "=value"
-// part can be omitted.
-//
-// Returns the value of the flag, or nullptr if the parsing failed.
-const char* ParseFlagValue(const char* str, const char* flag,
-                           bool def_optional) {
-  // str and flag must not be nullptr.
-  if (str == nullptr || flag == nullptr) return nullptr;
-
-  // The flag must start with "--".
-  const std::string flag_str = std::string("--") + std::string(flag);
-  const size_t flag_len = flag_str.length();
-  if (strncmp(str, flag_str.c_str(), flag_len) != 0) return nullptr;
-
-  // Skips the flag name.
-  const char* flag_end = str + flag_len;
-
-  // When def_optional is true, it's OK to not have a "=value" part.
-  if (def_optional && (flag_end[0] == '\0')) return flag_end;
-
-  // If def_optional is true and there are more characters after the
-  // flag name, or if def_optional is false, there must be a '=' after
-  // the flag name.
-  if (flag_end[0] != '=') return nullptr;
-
-  // Returns the string after "=".
-  return flag_end + 1;
-}
-
-bool ParseBoolFlag(const char* str, const char* flag, bool* value) {
-  // Gets the value of the flag as a string.
-  const char* const value_str = ParseFlagValue(str, flag, true);
-
-  // Aborts if the parsing failed.
-  if (value_str == nullptr) return false;
-
-  // Converts the string value to a bool.
-  *value = IsTruthyFlagValue(value_str);
-  return true;
-}
-
-bool ParseInt32Flag(const char* str, const char* flag, int32_t* value) {
-  // Gets the value of the flag as a string.
-  const char* const value_str = ParseFlagValue(str, flag, false);
-
-  // Aborts if the parsing failed.
-  if (value_str == nullptr) return false;
-
-  // Sets *value to the value of the flag.
-  return ParseInt32(std::string("The value of flag --") + flag, value_str,
-                    value);
-}
-
-bool ParseDoubleFlag(const char* str, const char* flag, double* value) {
-  // Gets the value of the flag as a string.
-  const char* const value_str = ParseFlagValue(str, flag, false);
-
-  // Aborts if the parsing failed.
-  if (value_str == nullptr) return false;
-
-  // Sets *value to the value of the flag.
-  return ParseDouble(std::string("The value of flag --") + flag, value_str,
-                     value);
-}
-
-bool ParseStringFlag(const char* str, const char* flag, std::string* value) {
-  // Gets the value of the flag as a string.
-  const char* const value_str = ParseFlagValue(str, flag, false);
-
-  // Aborts if the parsing failed.
-  if (value_str == nullptr) return false;
-
-  *value = value_str;
-  return true;
-}
-
-bool IsFlag(const char* str, const char* flag) {
-  return (ParseFlagValue(str, flag, true) != nullptr);
-}
-
-bool IsTruthyFlagValue(const std::string& value) {
-  if (value.empty()) return true;
-  char ch = value[0];
-  return isalnum(ch) &&
-         !(ch == '0' || ch == 'f' || ch == 'F' || ch == 'n' || ch == 'N');
-}
-}  // end namespace benchmark

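FlagToEnvVar upper-cases the flag name and prefixes it with "BENCHMARK_", so a
flag named "color" is read from the environment variable BENCHMARK_COLOR. A
sketch of how the pieces compose in an argv loop (the flag name and default
value here are illustrative, not part of the library):

    int32_t repetitions = benchmark::Int32FromEnv("repetitions", 1);
    for (int i = 1; i < argc; ++i) {
      // Accepts "--repetitions=N"; leaves the prior value on parse failure.
      benchmark::ParseInt32Flag(argv[i], "repetitions", &repetitions);
    }
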
Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/commandlineflags.h
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/commandlineflags.h?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/commandlineflags.h (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/commandlineflags.h (removed)
@@ -1,79 +0,0 @@
-#ifndef BENCHMARK_COMMANDLINEFLAGS_H_
-#define BENCHMARK_COMMANDLINEFLAGS_H_
-
-#include <cstdint>
-#include <string>
-
-// Macro for referencing flags.
-#define FLAG(name) FLAGS_##name
-
-// Macros for declaring flags.
-#define DECLARE_bool(name) extern bool FLAG(name)
-#define DECLARE_int32(name) extern int32_t FLAG(name)
-#define DECLARE_int64(name) extern int64_t FLAG(name)
-#define DECLARE_double(name) extern double FLAG(name)
-#define DECLARE_string(name) extern std::string FLAG(name)
-
-// Macros for defining flags.
-#define DEFINE_bool(name, default_val, doc) bool FLAG(name) = (default_val)
-#define DEFINE_int32(name, default_val, doc) int32_t FLAG(name) = (default_val)
-#define DEFINE_int64(name, default_val, doc) int64_t FLAG(name) = (default_val)
-#define DEFINE_double(name, default_val, doc) double FLAG(name) = (default_val)
-#define DEFINE_string(name, default_val, doc) \
-  std::string FLAG(name) = (default_val)
-
-namespace benchmark {
-// Parses 'str' for a 32-bit signed integer.  If successful, writes the result
-// to *value and returns true; otherwise leaves *value unchanged and returns
-// false.
-bool ParseInt32(const std::string& src_text, const char* str, int32_t* value);
-
-// Parses a bool/Int32/string from the environment variable
-// corresponding to the given benchmark flag.
-bool BoolFromEnv(const char* flag, bool default_val);
-int32_t Int32FromEnv(const char* flag, int32_t default_val);
-double DoubleFromEnv(const char* flag, double default_val);
-const char* StringFromEnv(const char* flag, const char* default_val);
-
-// Parses a string for a bool flag, in the form of either
-// "--flag=value" or "--flag".
-//
-// In the former case, the value is taken as true if it passes IsTruthyFlagValue().
-//
-// In the latter case, the value is taken as true.
-//
-// On success, stores the value of the flag in *value, and returns
-// true.  On failure, returns false without changing *value.
-bool ParseBoolFlag(const char* str, const char* flag, bool* value);
-
-// Parses a string for an Int32 flag, in the form of
-// "--flag=value".
-//
-// On success, stores the value of the flag in *value, and returns
-// true.  On failure, returns false without changing *value.
-bool ParseInt32Flag(const char* str, const char* flag, int32_t* value);
-
-// Parses a string for a Double flag, in the form of
-// "--flag=value".
-//
-// On success, stores the value of the flag in *value, and returns
-// true.  On failure, returns false without changing *value.
-bool ParseDoubleFlag(const char* str, const char* flag, double* value);
-
-// Parses a string for a string flag, in the form of
-// "--flag=value".
-//
-// On success, stores the value of the flag in *value, and returns
-// true.  On failure, returns false without changing *value.
-bool ParseStringFlag(const char* str, const char* flag, std::string* value);
-
-// Returns true if the string matches the flag.
-bool IsFlag(const char* str, const char* flag);
-
-// Returns true unless value starts with one of: '0', 'f', 'F', 'n' or 'N', or
-// some non-alphanumeric character. As a special case, also returns true if
-// value is the empty string.
-bool IsTruthyFlagValue(const std::string& value);
-}  // end namespace benchmark
-
-#endif  // BENCHMARK_COMMANDLINEFLAGS_H_

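The DECLARE_*/DEFINE_* macros above expand to an ordinary global variable named
FLAGS_<name>; there is no registry behind them. A sketch (the flag name and doc
string are illustrative; the doc argument is unused by the macro itself):

    DEFINE_bool(color_print, true, "print in color");  // bool FLAGS_color_print
    ...
    if (FLAG(color_print)) { /* use color */ }         // reads FLAGS_color_print
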
Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/complexity.cc
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/complexity.cc?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/complexity.cc (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/complexity.cc (removed)
@@ -1,324 +0,0 @@
-// Copyright 2016 Ismael Jimenez Martinez. All rights reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-//     http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-// Source project : https://github.com/ismaelJimenez/cpp.leastsq
-// Adapted to be used with google benchmark
-
-#include "benchmark/benchmark.h"
-
-#include <algorithm>
-#include <cmath>
-#include "check.h"
-#include "complexity.h"
-#include "stat.h"
-
-namespace benchmark {
-
-// Internal function to calculate the different scalability forms
-BigOFunc* FittingCurve(BigO complexity) {
-  switch (complexity) {
-    case oN:
-      return [](int n) -> double { return n; };
-    case oNSquared:
-      return [](int n) -> double { return std::pow(n, 2); };
-    case oNCubed:
-      return [](int n) -> double { return std::pow(n, 3); };
-    case oLogN:
-      return [](int n) { return log2(n); };
-    case oNLogN:
-      return [](int n) { return n * log2(n); };
-    case o1:
-    default:
-      return [](int) { return 1.0; };
-  }
-}
-
-// Function to return a string for the calculated complexity
-std::string GetBigOString(BigO complexity) {
-  switch (complexity) {
-    case oN:
-      return "N";
-    case oNSquared:
-      return "N^2";
-    case oNCubed:
-      return "N^3";
-    case oLogN:
-      return "lgN";
-    case oNLogN:
-      return "NlgN";
-    case o1:
-      return "(1)";
-    default:
-      return "f(N)";
-  }
-}
-
-// Find the coefficient for the high-order term in the running time, by
-// minimizing the sum of squares of relative error, for the fitting curve
-// given by the lambda expression.
-//   - n             : Vector containing the size of the benchmark tests.
-//   - time          : Vector containing the times for the benchmark tests.
-//   - fitting_curve : lambda expression (e.g. [](int n) { return n; }).
-
-// For a deeper explanation of the algorithm, see the README file at
-// http://github.com/ismaelJimenez/Minimal-Cpp-Least-Squared-Fit
-
-LeastSq MinimalLeastSq(const std::vector<int>& n,
-                       const std::vector<double>& time,
-                       BigOFunc* fitting_curve) {
-  double sigma_gn = 0.0;
-  double sigma_gn_squared = 0.0;
-  double sigma_time = 0.0;
-  double sigma_time_gn = 0.0;
-
-  // Calculate least square fitting parameter
-  for (size_t i = 0; i < n.size(); ++i) {
-    double gn_i = fitting_curve(n[i]);
-    sigma_gn += gn_i;
-    sigma_gn_squared += gn_i * gn_i;
-    sigma_time += time[i];
-    sigma_time_gn += time[i] * gn_i;
-  }
-
-  LeastSq result;
-  result.complexity = oLambda;
-
-  // Calculate complexity.
-  result.coef = sigma_time_gn / sigma_gn_squared;
-
-  // Calculate RMS
-  double rms = 0.0;
-  for (size_t i = 0; i < n.size(); ++i) {
-    double fit = result.coef * fitting_curve(n[i]);
-    rms += pow((time[i] - fit), 2);
-  }
-
-  // Normalized RMS by the mean of the observed values
-  double mean = sigma_time / n.size();
-  result.rms = sqrt(rms / n.size()) / mean;
-
-  return result;
-}
-
-// Find the coefficient for the high-order term in the running time, by
-// minimizing the sum of squares of relative error.
-//   - n          : Vector containing the size of the benchmark tests.
-//   - time       : Vector containing the times for the benchmark tests.
-//   - complexity : If different from oAuto, this fitting curve is used for
-//                  the fit. If it is oAuto, the best fitting curve is
-//                  calculated.
-LeastSq MinimalLeastSq(const std::vector<int>& n,
-                       const std::vector<double>& time, const BigO complexity) {
-  CHECK_EQ(n.size(), time.size());
-  CHECK_GE(n.size(), 2);  // Do not compute the fitting curve if fewer than
-                          // two benchmark runs are given
-  CHECK_NE(complexity, oNone);
-
-  LeastSq best_fit;
-
-  if (complexity == oAuto) {
-    std::vector<BigO> fit_curves = {oLogN, oN, oNLogN, oNSquared, oNCubed};
-
-    // Take o1 as default best fitting curve
-    best_fit = MinimalLeastSq(n, time, FittingCurve(o1));
-    best_fit.complexity = o1;
-
-    // Compute all possible fitting curves and stick to the best one
-    for (const auto& fit : fit_curves) {
-      LeastSq current_fit = MinimalLeastSq(n, time, FittingCurve(fit));
-      if (current_fit.rms < best_fit.rms) {
-        best_fit = current_fit;
-        best_fit.complexity = fit;
-      }
-    }
-  } else {
-    best_fit = MinimalLeastSq(n, time, FittingCurve(complexity));
-    best_fit.complexity = complexity;
-  }
-
-  return best_fit;
-}
-
-std::vector<BenchmarkReporter::Run> ComputeStats(
-    const std::vector<BenchmarkReporter::Run>& reports) {
-  typedef BenchmarkReporter::Run Run;
-  std::vector<Run> results;
-
-  auto error_count =
-      std::count_if(reports.begin(), reports.end(),
-                    [](Run const& run) { return run.error_occurred; });
-
-  if (reports.size() - error_count < 2) {
-    // We don't report aggregated data if there was a single run.
-    return results;
-  }
-  // Accumulators.
-  Stat1_d real_accumulated_time_stat;
-  Stat1_d cpu_accumulated_time_stat;
-  Stat1_d bytes_per_second_stat;
-  Stat1_d items_per_second_stat;
-  // All repetitions should be run with the same number of iterations so we
-  // can take this information from the first benchmark.
-  int64_t const run_iterations = reports.front().iterations;
-  // create stats for user counters
-  struct CounterStat {
-    Counter c;
-    Stat1_d s;
-  };
-  std::map< std::string, CounterStat > counter_stats;
-  for(Run const& r : reports) {
-    for(auto const& cnt : r.counters) {
-      auto it = counter_stats.find(cnt.first);
-      if(it == counter_stats.end()) {
-        counter_stats.insert({cnt.first, {cnt.second, Stat1_d{}}});
-      } else {
-        CHECK_EQ(counter_stats[cnt.first].c.flags, cnt.second.flags);
-      }
-    }
-  }
-
-  // Populate the accumulators.
-  for (Run const& run : reports) {
-    CHECK_EQ(reports[0].benchmark_name, run.benchmark_name);
-    CHECK_EQ(run_iterations, run.iterations);
-    if (run.error_occurred) continue;
-    real_accumulated_time_stat +=
-        Stat1_d(run.real_accumulated_time / run.iterations);
-    cpu_accumulated_time_stat +=
-        Stat1_d(run.cpu_accumulated_time / run.iterations);
-    items_per_second_stat += Stat1_d(run.items_per_second);
-    bytes_per_second_stat += Stat1_d(run.bytes_per_second);
-    // user counters
-    for(auto const& cnt : run.counters) {
-      auto it = counter_stats.find(cnt.first);
-      CHECK_NE(it, counter_stats.end());
-      it->second.s += Stat1_d(cnt.second);
-    }
-  }
-
-  // Get the data from the accumulator to BenchmarkReporter::Run's.
-  Run mean_data;
-  mean_data.benchmark_name = reports[0].benchmark_name + "_mean";
-  mean_data.iterations = run_iterations;
-  mean_data.real_accumulated_time =
-      real_accumulated_time_stat.Mean() * run_iterations;
-  mean_data.cpu_accumulated_time =
-      cpu_accumulated_time_stat.Mean() * run_iterations;
-  mean_data.bytes_per_second = bytes_per_second_stat.Mean();
-  mean_data.items_per_second = items_per_second_stat.Mean();
-  mean_data.time_unit = reports[0].time_unit;
-  // user counters
-  for(auto const& kv : counter_stats) {
-    auto c = Counter(kv.second.s.Mean(), counter_stats[kv.first].c.flags);
-    mean_data.counters[kv.first] = c;
-  }
-
-  // Only add label to mean/stddev if it is same for all runs
-  mean_data.report_label = reports[0].report_label;
-  for (std::size_t i = 1; i < reports.size(); i++) {
-    if (reports[i].report_label != reports[0].report_label) {
-      mean_data.report_label = "";
-      break;
-    }
-  }
-
-  Run stddev_data;
-  stddev_data.benchmark_name = reports[0].benchmark_name + "_stddev";
-  stddev_data.report_label = mean_data.report_label;
-  stddev_data.iterations = 0;
-  stddev_data.real_accumulated_time = real_accumulated_time_stat.StdDev();
-  stddev_data.cpu_accumulated_time = cpu_accumulated_time_stat.StdDev();
-  stddev_data.bytes_per_second = bytes_per_second_stat.StdDev();
-  stddev_data.items_per_second = items_per_second_stat.StdDev();
-  stddev_data.time_unit = reports[0].time_unit;
-  // user counters
-  for(auto const& kv : counter_stats) {
-    auto c = Counter(kv.second.s.StdDev(), counter_stats[kv.first].c.flags);
-    stddev_data.counters[kv.first] = c;
-  }
-
-  results.push_back(mean_data);
-  results.push_back(stddev_data);
-  return results;
-}
-
-std::vector<BenchmarkReporter::Run> ComputeBigO(
-    const std::vector<BenchmarkReporter::Run>& reports) {
-  typedef BenchmarkReporter::Run Run;
-  std::vector<Run> results;
-
-  if (reports.size() < 2) return results;
-
-  // Accumulators.
-  std::vector<int> n;
-  std::vector<double> real_time;
-  std::vector<double> cpu_time;
-
-  // Populate the accumulators.
-  for (const Run& run : reports) {
-    CHECK_GT(run.complexity_n, 0) << "Did you forget to call SetComplexityN?";
-    n.push_back(run.complexity_n);
-    real_time.push_back(run.real_accumulated_time / run.iterations);
-    cpu_time.push_back(run.cpu_accumulated_time / run.iterations);
-  }
-
-  LeastSq result_cpu;
-  LeastSq result_real;
-
-  if (reports[0].complexity == oLambda) {
-    result_cpu = MinimalLeastSq(n, cpu_time, reports[0].complexity_lambda);
-    result_real = MinimalLeastSq(n, real_time, reports[0].complexity_lambda);
-  } else {
-    result_cpu = MinimalLeastSq(n, cpu_time, reports[0].complexity);
-    result_real = MinimalLeastSq(n, real_time, result_cpu.complexity);
-  }
-  std::string benchmark_name =
-      reports[0].benchmark_name.substr(0, reports[0].benchmark_name.find('/'));
-
-  // Get the data from the accumulator to BenchmarkReporter::Run's.
-  Run big_o;
-  big_o.benchmark_name = benchmark_name + "_BigO";
-  big_o.iterations = 0;
-  big_o.real_accumulated_time = result_real.coef;
-  big_o.cpu_accumulated_time = result_cpu.coef;
-  big_o.report_big_o = true;
-  big_o.complexity = result_cpu.complexity;
-
-  // All the time results are reported after being multiplied by the
-  // time unit multiplier. But since RMS is a relative quantity it
-  // should not be multiplied at all. So, here, we _divide_ it by the
-  // multiplier so that when it is multiplied later the result is the
-  // correct one.
-  double multiplier = GetTimeUnitMultiplier(reports[0].time_unit);
-
-  // Only add label to mean/stddev if it is same for all runs
-  Run rms;
-  big_o.report_label = reports[0].report_label;
-  rms.benchmark_name = benchmark_name + "_RMS";
-  rms.report_label = big_o.report_label;
-  rms.iterations = 0;
-  rms.real_accumulated_time = result_real.rms / multiplier;
-  rms.cpu_accumulated_time = result_cpu.rms / multiplier;
-  rms.report_rms = true;
-  rms.complexity = result_cpu.complexity;
-  // don't forget to keep the time unit, or we won't be able to
-  // recover the correct value.
-  rms.time_unit = reports[0].time_unit;
-
-  results.push_back(big_o);
-  results.push_back(rms);
-  return results;
-}
-
-}  // end namespace benchmark

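The closed form computed in MinimalLeastSq follows from minimizing the sum of
squared residuals between the measured times t_i and the model c * g(n_i);
setting the derivative with respect to c to zero gives

    \hat{c} = \frac{\sum_i t_i \, g(n_i)}{\sum_i g(n_i)^2}

which is exactly sigma_time_gn / sigma_gn_squared in the code above; the
reported RMS is the root mean squared residual normalized by the mean observed
time.
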
Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/complexity.h
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/complexity.h?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/complexity.h (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/complexity.h (removed)
@@ -1,60 +0,0 @@
-// Copyright 2016 Ismael Jimenez Martinez. All rights reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-//     http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-// Source project : https://github.com/ismaelJimenez/cpp.leastsq
-// Adapted to be used with google benchmark
-
-#ifndef COMPLEXITY_H_
-#define COMPLEXITY_H_
-
-#include <string>
-#include <vector>
-
-#include "benchmark/benchmark.h"
-
-namespace benchmark {
-
-// Return a vector containing the mean and standard deviation information for
-// the specified list of reports. If 'reports' contains fewer than two
-// non-errored runs, an empty vector is returned.
-std::vector<BenchmarkReporter::Run> ComputeStats(
-    const std::vector<BenchmarkReporter::Run>& reports);
-
-// Return a vector containing the bigO and RMS information for the specified
-// list of reports. If 'reports.size() < 2' an empty vector is returned.
-std::vector<BenchmarkReporter::Run> ComputeBigO(
-    const std::vector<BenchmarkReporter::Run>& reports);
-
-// This data structure will contain the result returned by MinimalLeastSq
-//   - coef        : Estimated coefficient for the high-order term as
-//                   interpolated from data.
-//   - rms         : Normalized Root Mean Squared Error.
-//   - complexity  : Scalability form (e.g. oN, oNLogN). In case a scalability
-//                   form has been provided to MinimalLeastSq this will return
-//                   the same value. In case BigO::oAuto has been selected, this
-//                   parameter will return the best fitting curve detected.
-
-struct LeastSq {
-  LeastSq() : coef(0.0), rms(0.0), complexity(oNone) {}
-
-  double coef;
-  double rms;
-  BigO complexity;
-};
-
-// Function to return a string for the calculated complexity
-std::string GetBigOString(BigO complexity);
-
-}  // end namespace benchmark
-#endif  // COMPLEXITY_H_

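From user code this machinery is driven through the public benchmark API: the
benchmark reports its problem size with SetComplexityN and requests a fit via
Complexity(). A minimal sketch using the range-for idiom this upgrade enables
(BM_Sort and the chosen ranges are illustrative):

    #include <algorithm>
    #include <numeric>
    #include <vector>
    #include "benchmark/benchmark.h"

    static void BM_Sort(benchmark::State& state) {
      std::vector<int> v(state.range(0));
      for (auto _ : state) {
        std::iota(v.rbegin(), v.rend(), 0);   // refill with descending values
        std::sort(v.begin(), v.end());
      }
      state.SetComplexityN(state.range(0));   // feeds complexity_n above
    }
    BENCHMARK(BM_Sort)->Range(1 << 10, 1 << 16)->Complexity(benchmark::oNLogN);
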
Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/console_reporter.cc
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/console_reporter.cc?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/console_reporter.cc (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/console_reporter.cc (removed)
@@ -1,180 +0,0 @@
-// Copyright 2015 Google Inc. All rights reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-//     http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-#include "benchmark/benchmark.h"
-#include "complexity.h"
-#include "counter.h"
-
-#include <algorithm>
-#include <cstdint>
-#include <cstdio>
-#include <iostream>
-#include <string>
-#include <tuple>
-#include <vector>
-
-#include "check.h"
-#include "colorprint.h"
-#include "commandlineflags.h"
-#include "internal_macros.h"
-#include "string_util.h"
-#include "timers.h"
-
-namespace benchmark {
-
-bool ConsoleReporter::ReportContext(const Context& context) {
-  name_field_width_ = context.name_field_width;
-  printed_header_ = false;
-  prev_counters_.clear();
-
-  PrintBasicContext(&GetErrorStream(), context);
-
-#ifdef BENCHMARK_OS_WINDOWS
-  if ((output_options_ & OO_Color) && &std::cout != &GetOutputStream()) {
-    GetErrorStream()
-        << "Color printing is only supported for stdout on windows."
-           " Disabling color printing\n";
-    output_options_ = static_cast< OutputOptions >(output_options_ & ~OO_Color);
-  }
-#endif
-
-  return true;
-}
-
-void ConsoleReporter::PrintHeader(const Run& run) {
-  std::string str = FormatString("%-*s %13s %13s %10s", static_cast<int>(name_field_width_),
-                                 "Benchmark", "Time", "CPU", "Iterations");
-  if(!run.counters.empty()) {
-    if(output_options_ & OO_Tabular) {
-      for(auto const& c : run.counters) {
-        str += FormatString(" %10s", c.first.c_str());
-      }
-    } else {
-      str += " UserCounters...";
-    }
-  }
-  str += "\n";
-  std::string line = std::string(str.length(), '-');
-  GetOutputStream() << line << "\n" << str << line << "\n";
-}
-
-void ConsoleReporter::ReportRuns(const std::vector<Run>& reports) {
-  for (const auto& run : reports) {
-    // print the header:
-    // --- if none was printed yet
-    bool print_header = !printed_header_;
-    // --- or if the format is tabular and this run
-    //     has different fields from the prev header
-    print_header |= (output_options_ & OO_Tabular) &&
-                    (!internal::SameNames(run.counters, prev_counters_));
-    if (print_header) {
-      printed_header_ = true;
-      prev_counters_ = run.counters;
-      PrintHeader(run);
-    }
-    // As an alternative to printing the headers like this, we could sort
-    // the benchmarks by header and then print. But this would require
-    // waiting for the full results before printing, or printing twice.
-    PrintRunData(run);
-  }
-}
-
-static void IgnoreColorPrint(std::ostream& out, LogColor, const char* fmt,
-                             ...) {
-  va_list args;
-  va_start(args, fmt);
-  out << FormatString(fmt, args);
-  va_end(args);
-}
-
-void ConsoleReporter::PrintRunData(const Run& result) {
-  typedef void(PrinterFn)(std::ostream&, LogColor, const char*, ...);
-  auto& Out = GetOutputStream();
-  PrinterFn* printer = (output_options_ & OO_Color) ?
-                         (PrinterFn*)ColorPrintf : IgnoreColorPrint;
-  auto name_color =
-      (result.report_big_o || result.report_rms) ? COLOR_BLUE : COLOR_GREEN;
-  printer(Out, name_color, "%-*s ", name_field_width_,
-          result.benchmark_name.c_str());
-
-  if (result.error_occurred) {
-    printer(Out, COLOR_RED, "ERROR OCCURRED: \'%s\'",
-            result.error_message.c_str());
-    printer(Out, COLOR_DEFAULT, "\n");
-    return;
-  }
-  // Format bytes per second
-  std::string rate;
-  if (result.bytes_per_second > 0) {
-    rate = StrCat(" ", HumanReadableNumber(result.bytes_per_second), "B/s");
-  }
-
-  // Format items per second
-  std::string items;
-  if (result.items_per_second > 0) {
-    items =
-        StrCat(" ", HumanReadableNumber(result.items_per_second), " items/s");
-  }
-
-  const double real_time = result.GetAdjustedRealTime();
-  const double cpu_time = result.GetAdjustedCPUTime();
-
-  if (result.report_big_o) {
-    std::string big_o = GetBigOString(result.complexity);
-    printer(Out, COLOR_YELLOW, "%10.2f %s %10.2f %s ", real_time, big_o.c_str(),
-            cpu_time, big_o.c_str());
-  } else if (result.report_rms) {
-    printer(Out, COLOR_YELLOW, "%10.0f %% %10.0f %% ", real_time * 100,
-            cpu_time * 100);
-  } else {
-    const char* timeLabel = GetTimeUnitString(result.time_unit);
-    printer(Out, COLOR_YELLOW, "%10.0f %s %10.0f %s ", real_time, timeLabel,
-            cpu_time, timeLabel);
-  }
-
-  if (!result.report_big_o && !result.report_rms) {
-    printer(Out, COLOR_CYAN, "%10lld", result.iterations);
-  }
-
-  for (auto& c : result.counters) {
-    auto const& s = HumanReadableNumber(c.second.value);
-    if (output_options_ & OO_Tabular) {
-      if (c.second.flags & Counter::kIsRate) {
-        printer(Out, COLOR_DEFAULT, " %8s/s", s.c_str());
-      } else {
-        printer(Out, COLOR_DEFAULT, " %10s", s.c_str());
-      }
-    } else {
-      const char* unit = (c.second.flags & Counter::kIsRate) ? "/s" : "";
-      printer(Out, COLOR_DEFAULT, " %s=%s%s", c.first.c_str(), s.c_str(),
-              unit);
-    }
-  }
-
-  if (!rate.empty()) {
-    printer(Out, COLOR_DEFAULT, " %*s", 13, rate.c_str());
-  }
-
-  if (!items.empty()) {
-    printer(Out, COLOR_DEFAULT, " %*s", 18, items.c_str());
-  }
-
-  if (!result.report_label.empty()) {
-    printer(Out, COLOR_DEFAULT, " %s", result.report_label.c_str());
-  }
-
-  printer(Out, COLOR_DEFAULT, "\n");
-}
-
-}  // end namespace benchmark

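Whether PrintHeader emits one column per user counter (OO_Tabular) or folds
them into a single "UserCounters..." column is a reporter option; on the
command line it is typically toggled with --benchmark_counters_tabular=true
(flag name as documented in the library's README). In tabular mode the output
takes roughly this shape (values and widths illustrative):

    ------------------------------------------------------------
    Benchmark            Time           CPU Iterations     bytes
    ------------------------------------------------------------
    BM_Parse           102 ns        101 ns    6906600    9.8M/s
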
Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/counter.cc
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/counter.cc?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/counter.cc (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/counter.cc (removed)
@@ -1,68 +0,0 @@
-// Copyright 2015 Google Inc. All rights reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-//     http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-#include "counter.h"
-
-namespace benchmark {
-namespace internal {
-
-double Finish(Counter const& c, double cpu_time, double num_threads) {
-  double v = c.value;
-  if (c.flags & Counter::kIsRate) {
-    v /= cpu_time;
-  }
-  if (c.flags & Counter::kAvgThreads) {
-    v /= num_threads;
-  }
-  return v;
-}
-
-void Finish(UserCounters *l, double cpu_time, double num_threads) {
-  for (auto &c : *l) {
-    c.second.value = Finish(c.second, cpu_time, num_threads);
-  }
-}
-
-void Increment(UserCounters *l, UserCounters const& r) {
-  // add counters present in both or just in *l
-  for (auto &c : *l) {
-    auto it = r.find(c.first);
-    if (it != r.end()) {
-      c.second.value = c.second + it->second;
-    }
-  }
-  // add counters present in r, but not in *l
-  for (auto const &tc : r) {
-    auto it = l->find(tc.first);
-    if (it == l->end()) {
-      (*l)[tc.first] = tc.second;
-    }
-  }
-}
-
-bool SameNames(UserCounters const& l, UserCounters const& r) {
-  if (&l == &r) return true;
-  if (l.size() != r.size()) {
-    return false;
-  }
-  for (auto const& c : l) {
-    if (r.find(c.first) == r.end()) {
-      return false;
-    }
-  }
-  return true;
-}
-
-} // end namespace internal
-} // end namespace benchmark

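These helpers post-process counters that benchmarks attach to a run; the
kIsRate division by cpu_time in Finish is what turns an accumulated total into
a per-second figure. A sketch of the producing side (BM_Parse and DoParse are
placeholders, not library code):

    #include "benchmark/benchmark.h"

    static size_t DoParse() { return 64; }  // stand-in for real work

    static void BM_Parse(benchmark::State& state) {
      size_t bytes = 0;
      for (auto _ : state) bytes += DoParse();
      // Finish() will divide this by cpu_time because of kIsRate.
      state.counters["bytes"] = benchmark::Counter(
          static_cast<double>(bytes), benchmark::Counter::kIsRate);
    }
    BENCHMARK(BM_Parse);
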
Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/counter.h
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/counter.h?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/counter.h (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/counter.h (removed)
@@ -1,26 +0,0 @@
-// Copyright 2015 Google Inc. All rights reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-//     http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-#include "benchmark/benchmark.h"
-
-namespace benchmark {
-
-// These counter-related functions are hidden to reduce the API surface.
-namespace internal {
-void Finish(UserCounters *l, double time, double num_threads);
-void Increment(UserCounters *l, UserCounters const& r);
-bool SameNames(UserCounters const& l, UserCounters const& r);
-} // end namespace internal
-
-} //end namespace benchmark

Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/csv_reporter.cc
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/csv_reporter.cc?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/csv_reporter.cc (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/csv_reporter.cc (removed)
@@ -1,149 +0,0 @@
-// Copyright 2015 Google Inc. All rights reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-//     http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-#include "benchmark/benchmark.h"
-#include "complexity.h"
-
-#include <algorithm>
-#include <cstdint>
-#include <iostream>
-#include <string>
-#include <tuple>
-#include <vector>
-
-#include "string_util.h"
-#include "timers.h"
-#include "check.h"
-
-// File format reference: http://edoceo.com/utilitas/csv-file-format.
-
-namespace benchmark {
-
-namespace {
-std::vector<std::string> elements = {
-    "name",           "iterations",       "real_time",        "cpu_time",
-    "time_unit",      "bytes_per_second", "items_per_second", "label",
-    "error_occurred", "error_message"};
-}  // namespace
-
-bool CSVReporter::ReportContext(const Context& context) {
-  PrintBasicContext(&GetErrorStream(), context);
-  return true;
-}
-
-void CSVReporter::ReportRuns(const std::vector<Run> & reports) {
-  std::ostream& Out = GetOutputStream();
-
-  if (!printed_header_) {
-    // save the names of all the user counters
-    for (const auto& run : reports) {
-      for (const auto& cnt : run.counters) {
-        user_counter_names_.insert(cnt.first);
-      }
-    }
-
-    // print the header
-    for (auto B = elements.begin(); B != elements.end();) {
-      Out << *B++;
-      if (B != elements.end()) Out << ",";
-    }
-    for (auto B = user_counter_names_.begin(); B != user_counter_names_.end();) {
-      Out << ",\"" << *B++ << "\"";
-    }
-    Out << "\n";
-
-    printed_header_ = true;
-  } else {
-    // check that all the current counters are saved in the name set
-    for (const auto& run : reports) {
-      for (const auto& cnt : run.counters) {
-        CHECK(user_counter_names_.find(cnt.first) != user_counter_names_.end())
-              << "All counters must be present in each run. "
-              << "Counter named \"" << cnt.first
-              << "\" was not in a run after being added to the header";
-      }
-    }
-  }
-
-  // print results for each run
-  for (const auto& run : reports) {
-    PrintRunData(run);
-  }
-
-}
-
-void CSVReporter::PrintRunData(const Run & run) {
-  std::ostream& Out = GetOutputStream();
-
-  // A field with embedded double-quote characters must have each quote
-  // doubled and the whole field delimited with double-quotes.
-  std::string name = run.benchmark_name;
-  ReplaceAll(&name, "\"", "\"\"");
-  Out << '"' << name << "\",";
-  if (run.error_occurred) {
-    Out << std::string(elements.size() - 3, ',');
-    Out << "true,";
-    std::string msg = run.error_message;
-    ReplaceAll(&msg, "\"", "\"\"");
-    Out << '"' << msg << "\"\n";
-    return;
-  }
-
-  // Do not print iteration on bigO and RMS report
-  if (!run.report_big_o && !run.report_rms) {
-    Out << run.iterations;
-  }
-  Out << ",";
-
-  Out << run.GetAdjustedRealTime() << ",";
-  Out << run.GetAdjustedCPUTime() << ",";
-
-  // Do not print timeLabel on bigO and RMS report
-  if (run.report_big_o) {
-    Out << GetBigOString(run.complexity);
-  } else if (!run.report_rms) {
-    Out << GetTimeUnitString(run.time_unit);
-  }
-  Out << ",";
-
-  if (run.bytes_per_second > 0.0) {
-    Out << run.bytes_per_second;
-  }
-  Out << ",";
-  if (run.items_per_second > 0.0) {
-    Out << run.items_per_second;
-  }
-  Out << ",";
-  if (!run.report_label.empty()) {
-    // A field with embedded double-quote characters must have each quote
-    // doubled and the whole field delimited with double-quotes.
-    std::string label = run.report_label;
-    ReplaceAll(&label, "\"", "\"\"");
-    Out << "\"" << label << "\"";
-  }
-  Out << ",,";  // for error_occurred and error_message
-
-  // Print user counters
-  for (const auto &ucn : user_counter_names_) {
-    auto it = run.counters.find(ucn);
-    if(it == run.counters.end()) {
-      Out << ",";
-    } else {
-      Out << "," << it->second;
-    }
-  }
-  Out << '\n';
-}
-
-}  // end namespace benchmark

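With no user counters registered, the header row printed by ReportRuns is
exactly the elements list above, i.e.:

    name,iterations,real_time,cpu_time,time_unit,bytes_per_second,items_per_second,label,error_occurred,error_message

and each user counter appends one additional double-quoted column.
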
Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/cycleclock.h
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/cycleclock.h?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/cycleclock.h (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/cycleclock.h (removed)
@@ -1,172 +0,0 @@
-// ----------------------------------------------------------------------
-// CycleClock
-//    A CycleClock tells you the current time in Cycles.  The "time"
-//    is actually time since power-on.  This is like time() but doesn't
-//    involve a system call and is much more precise.
-//
-// NOTE: Not all cpu/platform/kernel combinations guarantee that this
-// clock increments at a constant rate or is synchronized across all logical
-// cpus in a system.
-//
-// If you need the above guarantees, please consider using a different
-// API. There are efforts to provide an interface which provides a millisecond
-// granularity and implemented as a memory read. A memory read is generally
-// cheaper than the CycleClock for many architectures.
-//
-// Also, in some out of order CPU implementations, the CycleClock is not
-// serializing. So if you're trying to count at cycles granularity, your
-// data might be inaccurate due to out of order instruction execution.
-// ----------------------------------------------------------------------
-
-#ifndef BENCHMARK_CYCLECLOCK_H_
-#define BENCHMARK_CYCLECLOCK_H_
-
-#include <cstdint>
-
-#include "benchmark/benchmark.h"
-#include "internal_macros.h"
-
-#if defined(BENCHMARK_OS_MACOSX)
-#include <mach/mach_time.h>
-#endif
-// For MSVC, we want to use '_asm rdtsc' when possible (since it works
-// with even ancient MSVC compilers), and when not possible the
-// __rdtsc intrinsic, declared in <intrin.h>.  Unfortunately, in some
-// environments, <windows.h> and <intrin.h> have conflicting
-// declarations of some other intrinsics, breaking compilation.
-// Therefore, we simply declare __rdtsc ourselves. See also
-// http://connect.microsoft.com/VisualStudio/feedback/details/262047
-#if defined(COMPILER_MSVC) && !defined(_M_IX86)
-extern "C" uint64_t __rdtsc();
-#pragma intrinsic(__rdtsc)
-#endif
-
-#ifndef BENCHMARK_OS_WINDOWS
-#include <sys/time.h>
-#include <time.h>
-#endif
-
-#ifdef BENCHMARK_OS_EMSCRIPTEN
-#include <emscripten.h>
-#endif
-
-namespace benchmark {
-// NOTE: only i386 and x86_64 have been well tested.
-// PPC, sparc, alpha, and ia64 are based on
-//    http://peter.kuscsik.com/wordpress/?p=14
-// with modifications by m3b.  See also
-//    https://setisvn.ssl.berkeley.edu/svn/lib/fftw-3.0.1/kernel/cycle.h
-namespace cycleclock {
-// This should return the number of cycles since power-on.  Thread-safe.
-inline BENCHMARK_ALWAYS_INLINE int64_t Now() {
-#if defined(BENCHMARK_OS_MACOSX)
-  // this goes at the top because we need ALL Macs, regardless of
-  // architecture, to return the number of "mach time units" that
-  // have passed since startup.  See sysinfo.cc where
-  // InitializeSystemInfo() sets the supposed cpu clock frequency of
-  // macs to the number of mach time units per second, not actual
-  // CPU clock frequency (which can change in the face of CPU
-  // frequency scaling).  Also note that when the Mac sleeps, this
-  // counter pauses; it does not continue counting, nor does it
-  // reset to zero.
-  return mach_absolute_time();
-#elif defined(BENCHMARK_OS_EMSCRIPTEN)
-  // this goes above x86-specific code because old versions of Emscripten
-  // define __x86_64__, although they have nothing to do with it.
-  return static_cast<int64_t>(emscripten_get_now() * 1e+6);
-#elif defined(__i386__)
-  int64_t ret;
-  __asm__ volatile("rdtsc" : "=A"(ret));
-  return ret;
-#elif defined(__x86_64__) || defined(__amd64__)
-  uint64_t low, high;
-  __asm__ volatile("rdtsc" : "=a"(low), "=d"(high));
-  return (high << 32) | low;
-#elif defined(__powerpc__) || defined(__ppc__)
-  // This returns a time-base, which is not always precisely a cycle-count.
-  int64_t tbl, tbu0, tbu1;
-  asm("mftbu %0" : "=r"(tbu0));
-  asm("mftb  %0" : "=r"(tbl));
-  asm("mftbu %0" : "=r"(tbu1));
-  tbl &= -static_cast<int64_t>(tbu0 == tbu1);
-  // high 32 bits in tbu1; low 32 bits in tbl  (tbu0 is garbage)
-  return (tbu1 << 32) | tbl;
-#elif defined(__sparc__)
-  int64_t tick;
-  asm(".byte 0x83, 0x41, 0x00, 0x00");
-  asm("mov   %%g1, %0" : "=r"(tick));
-  return tick;
-#elif defined(__ia64__)
-  int64_t itc;
-  asm("mov %0 = ar.itc" : "=r"(itc));
-  return itc;
-#elif defined(COMPILER_MSVC) && defined(_M_IX86)
-  // Older MSVC compilers (like 7.x) don't seem to support the
-  // __rdtsc intrinsic properly, so I prefer to use _asm instead
-  // when I know it will work.  Otherwise, I'll use __rdtsc and hope
-  // the code is being compiled with a non-ancient compiler.
-  _asm rdtsc
-#elif defined(COMPILER_MSVC)
-  return __rdtsc();
-#elif defined(BENCHMARK_OS_NACL)
-  // Native Client validator on x86/x86-64 allows RDTSC instructions,
-  // and this case is handled above. Native Client validator on ARM
-  // rejects MRC instructions (used in the ARM-specific sequence below),
-  // so we handle it here. Portable Native Client compiles to
-  // architecture-agnostic bytecode, which doesn't provide any
-  // cycle counter access mnemonics.
-
-  // Native Client does not provide any API to access cycle counter.
-  // Use clock_gettime(CLOCK_MONOTONIC, ...) instead of gettimeofday
-  // because it provides nanosecond resolution (which is noticeable at
-  // least for PNaCl modules running on x86 Mac & Linux).
-  // Initialize to always return 0 if clock_gettime fails.
-  struct timespec ts = { 0, 0 };
-  clock_gettime(CLOCK_MONOTONIC, &ts);
-  return static_cast<int64_t>(ts.tv_sec) * 1000000000 + ts.tv_nsec;
-#elif defined(__aarch64__)
-  // System timer of ARMv8 runs at a different frequency than the CPU's.
-  // The frequency is fixed, typically in the range 1-50MHz.  It can be
-  // read at CNTFRQ special register.  We assume the OS has set up
-  // the virtual timer properly.
-  int64_t virtual_timer_value;
-  asm volatile("mrs %0, cntvct_el0" : "=r"(virtual_timer_value));
-  return virtual_timer_value;
-#elif defined(__ARM_ARCH)
-  // V6 is the earliest arch that has a standard cyclecount
-  // Native Client validator doesn't allow MRC instructions.
-#if (__ARM_ARCH >= 6)
-  uint32_t pmccntr;
-  uint32_t pmuseren;
-  uint32_t pmcntenset;
-  // Read the user mode perf monitor counter access permissions.
-  asm volatile("mrc p15, 0, %0, c9, c14, 0" : "=r"(pmuseren));
-  if (pmuseren & 1) {  // Allows reading perfmon counters for user mode code.
-    asm volatile("mrc p15, 0, %0, c9, c12, 1" : "=r"(pmcntenset));
-    if (pmcntenset & 0x80000000ul) {  // Is it counting?
-      asm volatile("mrc p15, 0, %0, c9, c13, 0" : "=r"(pmccntr));
-      // The counter is set up to count every 64th cycle
-      return static_cast<int64_t>(pmccntr) * 64;  // Should optimize to << 6
-    }
-  }
-#endif
-  struct timeval tv;
-  gettimeofday(&tv, nullptr);
-  return static_cast<int64_t>(tv.tv_sec) * 1000000 + tv.tv_usec;
-#elif defined(__mips__)
-  // mips apparently only allows rdtsc for superusers, so we fall
-  // back to gettimeofday.  It's possible clock_gettime would be better.
-  struct timeval tv;
-  gettimeofday(&tv, nullptr);
-  return static_cast<int64_t>(tv.tv_sec) * 1000000 + tv.tv_usec;
-#else
-// The soft failover to a generic implementation is automatic only for ARM.
-// For other platforms the developer is expected to attempt a fast
-// implementation and use the generic version only if nothing better is available.
-#error You need to define CycleTimer for your OS and CPU
-#endif
-}
-}  // end namespace cycleclock
-}  // end namespace benchmark
-
-#endif  // BENCHMARK_CYCLECLOCK_H_

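Now() returns raw, unscaled ticks, so it is only meaningful as a difference
between two reads. A minimal sketch, assuming the internal header above is
available (Work is a placeholder for the code being measured):

    #include "cycleclock.h"  // internal header shown above

    inline void Work() { /* code under measurement */ }

    int64_t MeasureCycles() {
      int64_t start = benchmark::cycleclock::Now();
      Work();
      return benchmark::cycleclock::Now() - start;  // platform-dependent units
    }
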
Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/internal_macros.h
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/internal_macros.h?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/internal_macros.h (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/internal_macros.h (removed)
@@ -1,57 +0,0 @@
-#ifndef BENCHMARK_INTERNAL_MACROS_H_
-#define BENCHMARK_INTERNAL_MACROS_H_
-
-#include "benchmark/benchmark.h"
-
-#ifndef __has_feature
-#define __has_feature(x) 0
-#endif
-
-#if defined(__clang__)
-#define COMPILER_CLANG
-#elif defined(_MSC_VER)
-#define COMPILER_MSVC
-#elif defined(__GNUC__)
-#define COMPILER_GCC
-#endif
-
-#if __has_feature(cxx_attributes)
-#define BENCHMARK_NORETURN [[noreturn]]
-#elif defined(__GNUC__)
-#define BENCHMARK_NORETURN __attribute__((noreturn))
-#elif defined(COMPILER_MSVC)
-#define BENCHMARK_NORETURN __declspec(noreturn)
-#else
-#define BENCHMARK_NORETURN
-#endif
-
-#if defined(__CYGWIN__)
-#define BENCHMARK_OS_CYGWIN 1
-#elif defined(_WIN32)
-#define BENCHMARK_OS_WINDOWS 1
-#elif defined(__APPLE__)
-#include "TargetConditionals.h"
-  #if defined(TARGET_OS_MAC)
-    #define BENCHMARK_OS_MACOSX 1
-    #if defined(TARGET_OS_IPHONE)
-      #define BENCHMARK_OS_IOS 1
-    #endif
-  #endif
-#elif defined(__FreeBSD__)
-#define BENCHMARK_OS_FREEBSD 1
-#elif defined(__linux__)
-#define BENCHMARK_OS_LINUX 1
-#elif defined(__native_client__)
-#define BENCHMARK_OS_NACL 1
-#elif defined(EMSCRIPTEN)
-#define BENCHMARK_OS_EMSCRIPTEN 1
-#elif defined(__rtems__)
-#define BENCHMARK_OS_RTEMS 1
-#endif
-
-#if !__has_feature(cxx_exceptions) && !defined(__cpp_exceptions) \
-     && !defined(__EXCEPTIONS)
-#define BENCHMARK_HAS_NO_EXCEPTIONS
-#endif
-
-#endif  // BENCHMARK_INTERNAL_MACROS_H_

Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/json_reporter.cc
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/json_reporter.cc?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/json_reporter.cc (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/json_reporter.cc (removed)
@@ -1,168 +0,0 @@
-// Copyright 2015 Google Inc. All rights reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-//     http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-#include "benchmark/benchmark.h"
-#include "complexity.h"
-
-#include <algorithm>
-#include <cstdint>
-#include <iostream>
-#include <string>
-#include <tuple>
-#include <vector>
-
-#include "string_util.h"
-#include "timers.h"
-
-namespace benchmark {
-
-namespace {
-
-std::string FormatKV(std::string const& key, std::string const& value) {
-  return StringPrintF("\"%s\": \"%s\"", key.c_str(), value.c_str());
-}
-
-std::string FormatKV(std::string const& key, const char* value) {
-  return StringPrintF("\"%s\": \"%s\"", key.c_str(), value);
-}
-
-std::string FormatKV(std::string const& key, bool value) {
-  return StringPrintF("\"%s\": %s", key.c_str(), value ? "true" : "false");
-}
-
-std::string FormatKV(std::string const& key, int64_t value) {
-  std::stringstream ss;
-  ss << '"' << key << "\": " << value;
-  return ss.str();
-}
-
-std::string FormatKV(std::string const& key, double value) {
-  return StringPrintF("\"%s\": %.2f", key.c_str(), value);
-}
-
-int64_t RoundDouble(double v) { return static_cast<int64_t>(v + 0.5); }
-
-}  // end namespace
-
-bool JSONReporter::ReportContext(const Context& context) {
-  std::ostream& out = GetOutputStream();
-
-  out << "{\n";
-  std::string inner_indent(2, ' ');
-
-  // Open context block and print context information.
-  out << inner_indent << "\"context\": {\n";
-  std::string indent(4, ' ');
-
-  std::string walltime_value = LocalDateTimeString();
-  out << indent << FormatKV("date", walltime_value) << ",\n";
-
-  out << indent << FormatKV("num_cpus", static_cast<int64_t>(context.num_cpus))
-      << ",\n";
-  out << indent << FormatKV("mhz_per_cpu", RoundDouble(context.mhz_per_cpu))
-      << ",\n";
-  out << indent << FormatKV("cpu_scaling_enabled", context.cpu_scaling_enabled)
-      << ",\n";
-
-#if defined(NDEBUG)
-  const char build_type[] = "release";
-#else
-  const char build_type[] = "debug";
-#endif
-  out << indent << FormatKV("library_build_type", build_type) << "\n";
-  // Close context block and open the list of benchmarks.
-  out << inner_indent << "},\n";
-  out << inner_indent << "\"benchmarks\": [\n";
-  return true;
-}
-
-void JSONReporter::ReportRuns(std::vector<Run> const& reports) {
-  if (reports.empty()) {
-    return;
-  }
-  std::string indent(4, ' ');
-  std::ostream& out = GetOutputStream();
-  if (!first_report_) {
-    out << ",\n";
-  }
-  first_report_ = false;
-
-  for (auto it = reports.begin(); it != reports.end(); ++it) {
-    out << indent << "{\n";
-    PrintRunData(*it);
-    out << indent << '}';
-    auto it_cp = it;
-    if (++it_cp != reports.end()) {
-      out << ",\n";
-    }
-  }
-}
-
-void JSONReporter::Finalize() {
-  // Close the list of benchmarks and the top level object.
-  GetOutputStream() << "\n  ]\n}\n";
-}
-
-void JSONReporter::PrintRunData(Run const& run) {
-  std::string indent(6, ' ');
-  std::ostream& out = GetOutputStream();
-  out << indent << FormatKV("name", run.benchmark_name) << ",\n";
-  if (run.error_occurred) {
-    out << indent << FormatKV("error_occurred", run.error_occurred) << ",\n";
-    out << indent << FormatKV("error_message", run.error_message) << ",\n";
-  }
-  if (!run.report_big_o && !run.report_rms) {
-    out << indent << FormatKV("iterations", run.iterations) << ",\n";
-    out << indent
-        << FormatKV("real_time", RoundDouble(run.GetAdjustedRealTime()))
-        << ",\n";
-    out << indent
-        << FormatKV("cpu_time", RoundDouble(run.GetAdjustedCPUTime()));
-    out << ",\n"
-        << indent << FormatKV("time_unit", GetTimeUnitString(run.time_unit));
-  } else if (run.report_big_o) {
-    out << indent
-        << FormatKV("cpu_coefficient", RoundDouble(run.GetAdjustedCPUTime()))
-        << ",\n";
-    out << indent
-        << FormatKV("real_coefficient", RoundDouble(run.GetAdjustedRealTime()))
-        << ",\n";
-    out << indent << FormatKV("big_o", GetBigOString(run.complexity)) << ",\n";
-    out << indent << FormatKV("time_unit", GetTimeUnitString(run.time_unit));
-  } else if (run.report_rms) {
-    out << indent
-        << FormatKV("rms", run.GetAdjustedCPUTime());
-  }
-  if (run.bytes_per_second > 0.0) {
-    out << ",\n"
-        << indent
-        << FormatKV("bytes_per_second", RoundDouble(run.bytes_per_second));
-  }
-  if (run.items_per_second > 0.0) {
-    out << ",\n"
-        << indent
-        << FormatKV("items_per_second", RoundDouble(run.items_per_second));
-  }
-  for(auto &c : run.counters) {
-    out << ",\n"
-        << indent
-        << FormatKV(c.first, RoundDouble(c.second));
-  }
-  if (!run.report_label.empty()) {
-    out << ",\n" << indent << FormatKV("label", run.report_label);
-  }
-  out << '\n';
-}
-
-} // end namespace benchmark

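For an ordinary run (no error and no big-O/RMS aggregate) PrintRunData
therefore emits an object of this shape (values are illustrative):

    {
      "name": "BM_Foo/8",
      "iterations": 1000000,
      "real_time": 12,
      "cpu_time": 12,
      "time_unit": "ns"
    }
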
Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/log.h
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/log.h?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/log.h (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/log.h (removed)
@@ -1,73 +0,0 @@
-#ifndef BENCHMARK_LOG_H_
-#define BENCHMARK_LOG_H_
-
-#include <iostream>
-#include <ostream>
-
-#include "benchmark/benchmark.h"
-
-namespace benchmark {
-namespace internal {
-
-typedef std::basic_ostream<char>&(EndLType)(std::basic_ostream<char>&);
-
-class LogType {
-  friend LogType& GetNullLogInstance();
-  friend LogType& GetErrorLogInstance();
-
-  // FIXME: Add locking to output.
-  template <class Tp>
-  friend LogType& operator<<(LogType&, Tp const&);
-  friend LogType& operator<<(LogType&, EndLType*);
-
- private:
-  LogType(std::ostream* out) : out_(out) {}
-  std::ostream* out_;
-  BENCHMARK_DISALLOW_COPY_AND_ASSIGN(LogType);
-};
-
-template <class Tp>
-LogType& operator<<(LogType& log, Tp const& value) {
-  if (log.out_) {
-    *log.out_ << value;
-  }
-  return log;
-}
-
-inline LogType& operator<<(LogType& log, EndLType* m) {
-  if (log.out_) {
-    *log.out_ << m;
-  }
-  return log;
-}
-
-inline int& LogLevel() {
-  static int log_level = 0;
-  return log_level;
-}
-
-inline LogType& GetNullLogInstance() {
-  static LogType log(nullptr);
-  return log;
-}
-
-inline LogType& GetErrorLogInstance() {
-  static LogType log(&std::clog);
-  return log;
-}
-
-inline LogType& GetLogInstanceForLevel(int level) {
-  if (level <= LogLevel()) {
-    return GetErrorLogInstance();
-  }
-  return GetNullLogInstance();
-}
-
-}  // end namespace internal
-}  // end namespace benchmark
-
-#define VLOG(x)                                                               \
-  (::benchmark::internal::GetLogInstanceForLevel(x) << "-- LOG(" << x << "):" \
-                                                                         " ")
-
-#endif
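
The VLOG macro above is the library's internal logging entry point:
GetLogInstanceForLevel returns the real error log when the requested level is
at or below LogLevel(), and the null sink otherwise. A minimal usage sketch
(assuming some caller has raised LogLevel() to 2; "n" is a placeholder):

  VLOG(1) << "starting run with " << n << " iterations\n";  // printed
  VLOG(3) << "only printed when LogLevel() >= 3\n";         // discarded

Insertions into the null instance are dropped because LogType::operator<<
checks out_ for nullptr before streaming.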

Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/mutex.h
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/mutex.h?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/mutex.h (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/mutex.h (removed)
@@ -1,155 +0,0 @@
-#ifndef BENCHMARK_MUTEX_H_
-#define BENCHMARK_MUTEX_H_
-
-#include <condition_variable>
-#include <mutex>
-
-#include "check.h"
-
-// Enable thread safety attributes only with clang.
-// The attributes can be safely erased when compiling with other compilers.
-#if defined(HAVE_THREAD_SAFETY_ATTRIBUTES)
-#define THREAD_ANNOTATION_ATTRIBUTE__(x) __attribute__((x))
-#else
-#define THREAD_ANNOTATION_ATTRIBUTE__(x)  // no-op
-#endif
-
-#define CAPABILITY(x) THREAD_ANNOTATION_ATTRIBUTE__(capability(x))
-
-#define SCOPED_CAPABILITY THREAD_ANNOTATION_ATTRIBUTE__(scoped_lockable)
-
-#define GUARDED_BY(x) THREAD_ANNOTATION_ATTRIBUTE__(guarded_by(x))
-
-#define PT_GUARDED_BY(x) THREAD_ANNOTATION_ATTRIBUTE__(pt_guarded_by(x))
-
-#define ACQUIRED_BEFORE(...) \
-  THREAD_ANNOTATION_ATTRIBUTE__(acquired_before(__VA_ARGS__))
-
-#define ACQUIRED_AFTER(...) \
-  THREAD_ANNOTATION_ATTRIBUTE__(acquired_after(__VA_ARGS__))
-
-#define REQUIRES(...) \
-  THREAD_ANNOTATION_ATTRIBUTE__(requires_capability(__VA_ARGS__))
-
-#define REQUIRES_SHARED(...) \
-  THREAD_ANNOTATION_ATTRIBUTE__(requires_shared_capability(__VA_ARGS__))
-
-#define ACQUIRE(...) \
-  THREAD_ANNOTATION_ATTRIBUTE__(acquire_capability(__VA_ARGS__))
-
-#define ACQUIRE_SHARED(...) \
-  THREAD_ANNOTATION_ATTRIBUTE__(acquire_shared_capability(__VA_ARGS__))
-
-#define RELEASE(...) \
-  THREAD_ANNOTATION_ATTRIBUTE__(release_capability(__VA_ARGS__))
-
-#define RELEASE_SHARED(...) \
-  THREAD_ANNOTATION_ATTRIBUTE__(release_shared_capability(__VA_ARGS__))
-
-#define TRY_ACQUIRE(...) \
-  THREAD_ANNOTATION_ATTRIBUTE__(try_acquire_capability(__VA_ARGS__))
-
-#define TRY_ACQUIRE_SHARED(...) \
-  THREAD_ANNOTATION_ATTRIBUTE__(try_acquire_shared_capability(__VA_ARGS__))
-
-#define EXCLUDES(...) THREAD_ANNOTATION_ATTRIBUTE__(locks_excluded(__VA_ARGS__))
-
-#define ASSERT_CAPABILITY(x) THREAD_ANNOTATION_ATTRIBUTE__(assert_capability(x))
-
-#define ASSERT_SHARED_CAPABILITY(x) \
-  THREAD_ANNOTATION_ATTRIBUTE__(assert_shared_capability(x))
-
-#define RETURN_CAPABILITY(x) THREAD_ANNOTATION_ATTRIBUTE__(lock_returned(x))
-
-#define NO_THREAD_SAFETY_ANALYSIS \
-  THREAD_ANNOTATION_ATTRIBUTE__(no_thread_safety_analysis)
-
-namespace benchmark {
-
-typedef std::condition_variable Condition;
-
-// NOTE: Wrappers for std::mutex and std::unique_lock are provided so that
-// we can annotate them with thread safety attributes and use the
-// -Wthread-safety warning with clang. The standard library types cannot be
-// used directly because they do not provide the required annotations.
-class CAPABILITY("mutex") Mutex {
- public:
-  Mutex() {}
-
-  void lock() ACQUIRE() { mut_.lock(); }
-  void unlock() RELEASE() { mut_.unlock(); }
-  std::mutex& native_handle() { return mut_; }
-
- private:
-  std::mutex mut_;
-};
-
-class SCOPED_CAPABILITY MutexLock {
-  typedef std::unique_lock<std::mutex> MutexLockImp;
-
- public:
-  MutexLock(Mutex& m) ACQUIRE(m) : ml_(m.native_handle()) {}
-  ~MutexLock() RELEASE() {}
-  MutexLockImp& native_handle() { return ml_; }
-
- private:
-  MutexLockImp ml_;
-};
-
-class Barrier {
- public:
-  Barrier(int num_threads) : running_threads_(num_threads) {}
-
-  // Called by each thread
-  bool wait() EXCLUDES(lock_) {
-    bool last_thread = false;
-    {
-      MutexLock ml(lock_);
-      last_thread = createBarrier(ml);
-    }
-    if (last_thread) phase_condition_.notify_all();
-    return last_thread;
-  }
-
-  void removeThread() EXCLUDES(lock_) {
-    MutexLock ml(lock_);
-    --running_threads_;
-    if (entered_ != 0) phase_condition_.notify_all();
-  }
-
- private:
-  Mutex lock_;
-  Condition phase_condition_;
-  int running_threads_;
-
-  // State for barrier management
-  int phase_number_ = 0;
-  int entered_ = 0;  // Number of threads that have entered this barrier
-
-  // Enter the barrier and wait until all other threads have also
-  // entered the barrier.  Returns true iff this is the last thread to
-  // enter the barrier.
-  bool createBarrier(MutexLock& ml) REQUIRES(lock_) {
-    CHECK_LT(entered_, running_threads_);
-    entered_++;
-    if (entered_ < running_threads_) {
-      // Wait for all threads to enter
-      int phase_number_cp = phase_number_;
-      auto cb = [this, phase_number_cp]() {
-        return this->phase_number_ > phase_number_cp ||
-               entered_ == running_threads_;  // A thread has aborted in error
-      };
-      phase_condition_.wait(ml.native_handle(), cb);
-      if (phase_number_ > phase_number_cp) return false;
-      // else (running_threads_ == entered_) and we are the last thread.
-    }
-    // Last thread has reached the barrier
-    phase_number_++;
-    entered_ = 0;
-    return true;
-  }
-};
-
-}  // end namespace benchmark
-
-#endif  // BENCHMARK_MUTEX_H_
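
The Barrier above implements a phase-based rendezvous: every thread blocks in
wait(), and the last arrival bumps phase_number_ and wakes the rest. A sketch
of how worker threads would use it (SetUpThread and RunBenchmarkLoop are
hypothetical placeholders, not part of this header):

  benchmark::Barrier barrier(num_threads);
  auto worker = [&barrier]() {
    SetUpThread();
    if (barrier.wait()) {
      // Exactly one thread (the last to arrive) sees `true` and can
      // perform one-time work before everyone proceeds together.
    }
    RunBenchmarkLoop();
  };

The boolean return value gives one thread leader duties without a separate
election step.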

Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/re.h
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/re.h?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/re.h (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/re.h (removed)
@@ -1,140 +0,0 @@
-// Copyright 2015 Google Inc. All rights reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-//     http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-#ifndef BENCHMARK_RE_H_
-#define BENCHMARK_RE_H_
-
-#include "internal_macros.h"
-
-// Prefer C regex libraries when compiling w/o exceptions so that we can
-// correctly report errors.
-#if defined(BENCHMARK_HAS_NO_EXCEPTIONS) && defined(HAVE_STD_REGEX) && \
-    (defined(HAVE_GNU_POSIX_REGEX) || defined(HAVE_POSIX_REGEX))
-#undef HAVE_STD_REGEX
-#endif
-
-#if defined(HAVE_STD_REGEX)
-#include <regex>
-#elif defined(HAVE_GNU_POSIX_REGEX)
-#include <gnuregex.h>
-#elif defined(HAVE_POSIX_REGEX)
-#include <regex.h>
-#else
-#error No regular expression backend was found!
-#endif
-#include <string>
-
-#include "check.h"
-
-namespace benchmark {
-
-// A wrapper around the POSIX regular expression API that provides automatic
-// cleanup
-class Regex {
- public:
-  Regex() : init_(false) {}
-
-  ~Regex();
-
-  // Compile a regular expression matcher from spec.  Returns true on success.
-  //
-  // On failure (and if error is not nullptr), error is populated with a
-  // human-readable error message.
-  bool Init(const std::string& spec, std::string* error);
-
-  // Returns whether str matches the compiled regular expression.
-  bool Match(const std::string& str);
-
- private:
-  bool init_;
-// Underlying regular expression object
-#if defined(HAVE_STD_REGEX)
-  std::regex re_;
-#elif defined(HAVE_POSIX_REGEX) || defined(HAVE_GNU_POSIX_REGEX)
-  regex_t re_;
-#else
-#error No regular expression backend implementation available
-#endif
-};
-
-#if defined(HAVE_STD_REGEX)
-
-inline bool Regex::Init(const std::string& spec, std::string* error) {
-#ifdef BENCHMARK_HAS_NO_EXCEPTIONS
-  ((void)error); // suppress unused warning
-#else
-  try {
-#endif
-    re_ = std::regex(spec, std::regex_constants::extended);
-    init_ = true;
-#ifndef BENCHMARK_HAS_NO_EXCEPTIONS
-  } catch (const std::regex_error& e) {
-    if (error) {
-      *error = e.what();
-    }
-  }
-#endif
-  return init_;
-}
-
-inline Regex::~Regex() {}
-
-inline bool Regex::Match(const std::string& str) {
-  if (!init_) {
-    return false;
-  }
-  return std::regex_search(str, re_);
-}
-
-#else
-inline bool Regex::Init(const std::string& spec, std::string* error) {
-  int ec = regcomp(&re_, spec.c_str(), REG_EXTENDED | REG_NOSUB);
-  if (ec != 0) {
-    if (error) {
-      size_t needed = regerror(ec, &re_, nullptr, 0);
-      char* errbuf = new char[needed];
-      regerror(ec, &re_, errbuf, needed);
-
-      // regerror returns the number of bytes necessary to null terminate
-      // the string, so we move that when assigning to error.
-      CHECK_NE(needed, 0);
-      error->assign(errbuf, needed - 1);
-
-      delete[] errbuf;
-    }
-
-    return false;
-  }
-
-  init_ = true;
-  return true;
-}
-
-inline Regex::~Regex() {
-  if (init_) {
-    regfree(&re_);
-  }
-}
-
-inline bool Regex::Match(const std::string& str) {
-  if (!init_) {
-    return false;
-  }
-  return regexec(&re_, str.c_str(), 0, nullptr, 0) == 0;
-}
-#endif
-
-}  // end namespace benchmark
-
-#endif  // BENCHMARK_RE_H_
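
Whichever backend is selected, the wrapper is used in the same two steps,
Init then Match; a short sketch:

  benchmark::Regex re;
  std::string err;
  if (!re.Init("^BM_.*", &err)) {
    std::cerr << "invalid filter: " << err << "\n";
  } else if (re.Match("BM_memcpy/64")) {
    // name matches the filter; run this benchmark
  }

Note that the std::regex path compiles with regex_constants::extended, so
both the C++ and POSIX backends accept the same extended-POSIX syntax.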

Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/reporter.cc
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/reporter.cc?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/reporter.cc (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/reporter.cc (removed)
@@ -1,68 +0,0 @@
-// Copyright 2015 Google Inc. All rights reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-//     http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-#include "benchmark/benchmark.h"
-#include "timers.h"
-
-#include <cstdlib>
-
-#include <iostream>
-#include <tuple>
-#include <vector>
-
-#include "check.h"
-#include "stat.h"
-
-namespace benchmark {
-
-BenchmarkReporter::BenchmarkReporter()
-    : output_stream_(&std::cout), error_stream_(&std::cerr) {}
-
-BenchmarkReporter::~BenchmarkReporter() {}
-
-void BenchmarkReporter::PrintBasicContext(std::ostream *out,
-                                          Context const &context) {
-  CHECK(out) << "cannot be null";
-  auto &Out = *out;
-
-  Out << "Run on (" << context.num_cpus << " X " << context.mhz_per_cpu
-      << " MHz CPU " << ((context.num_cpus > 1) ? "s" : "") << ")\n";
-
-  Out << LocalDateTimeString() << "\n";
-
-  if (context.cpu_scaling_enabled) {
-    Out << "***WARNING*** CPU scaling is enabled, the benchmark "
-           "real time measurements may be noisy and will incur extra "
-           "overhead.\n";
-  }
-
-#ifndef NDEBUG
-  Out << "***WARNING*** Library was built as DEBUG. Timings may be "
-         "affected.\n";
-#endif
-}
-
-double BenchmarkReporter::Run::GetAdjustedRealTime() const {
-  double new_time = real_accumulated_time * GetTimeUnitMultiplier(time_unit);
-  if (iterations != 0) new_time /= static_cast<double>(iterations);
-  return new_time;
-}
-
-double BenchmarkReporter::Run::GetAdjustedCPUTime() const {
-  double new_time = cpu_accumulated_time * GetTimeUnitMultiplier(time_unit);
-  if (iterations != 0) new_time /= static_cast<double>(iterations);
-  return new_time;
-}
-
-}  // end namespace benchmark
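
The two GetAdjusted*Time helpers above convert an accumulated total into a
per-iteration figure in the run's time unit. As a worked example with
illustrative numbers: a run that accumulated 0.5 s of real time over
1,000,000 iterations with a nanosecond time unit reports
0.5 * 1e9 / 1e6 = 500 ns per iteration.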

Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/sleep.cc
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/sleep.cc?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/sleep.cc (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/sleep.cc (removed)
@@ -1,51 +0,0 @@
-// Copyright 2015 Google Inc. All rights reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-//     http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-#include "sleep.h"
-
-#include <cerrno>
-#include <cstdlib>
-#include <ctime>
-
-#include "internal_macros.h"
-
-#ifdef BENCHMARK_OS_WINDOWS
-#include <Windows.h>
-#endif
-
-namespace benchmark {
-#ifdef BENCHMARK_OS_WINDOWS
-// Windows' Sleep takes a milliseconds argument.
-void SleepForMilliseconds(int milliseconds) { Sleep(milliseconds); }
-void SleepForSeconds(double seconds) {
-  SleepForMilliseconds(static_cast<int>(kNumMillisPerSecond * seconds));
-}
-#else   // BENCHMARK_OS_WINDOWS
-void SleepForMicroseconds(int microseconds) {
-  struct timespec sleep_time;
-  sleep_time.tv_sec = microseconds / kNumMicrosPerSecond;
-  sleep_time.tv_nsec = (microseconds % kNumMicrosPerSecond) * kNumNanosPerMicro;
-  while (nanosleep(&sleep_time, &sleep_time) != 0 && errno == EINTR)
-    ;  // Ignore signals and wait for the full interval to elapse.
-}
-
-void SleepForMilliseconds(int milliseconds) {
-  SleepForMicroseconds(milliseconds * kNumMicrosPerMilli);
-}
-
-void SleepForSeconds(double seconds) {
-  SleepForMicroseconds(static_cast<int>(seconds * kNumMicrosPerSecond));
-}
-#endif  // BENCHMARK_OS_WINDOWS
-}  // end namespace benchmark
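
Note the retry loop in the POSIX branch: nanosleep writes the unslept
remainder back into its second argument when interrupted, so passing the same
struct for both parameters resumes the sleep until the full interval has
elapsed. The public entry points simply scale into that primitive; for
example:

  benchmark::SleepForSeconds(0.5);       // -> SleepForMicroseconds(500000)
  benchmark::SleepForMilliseconds(500);  // same interval
  // SleepForMicroseconds itself is internal to the POSIX branch and is not
  // declared in sleep.h.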

Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/sleep.h
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/sleep.h?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/sleep.h (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/sleep.h (removed)
@@ -1,15 +0,0 @@
-#ifndef BENCHMARK_SLEEP_H_
-#define BENCHMARK_SLEEP_H_
-
-namespace benchmark {
-const int kNumMillisPerSecond = 1000;
-const int kNumMicrosPerMilli = 1000;
-const int kNumMicrosPerSecond = kNumMillisPerSecond * 1000;
-const int kNumNanosPerMicro = 1000;
-const int kNumNanosPerSecond = kNumNanosPerMicro * kNumMicrosPerSecond;
-
-void SleepForMilliseconds(int milliseconds);
-void SleepForSeconds(double seconds);
-}  // end namespace benchmark
-
-#endif  // BENCHMARK_SLEEP_H_

Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/stat.h
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/stat.h?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/stat.h (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/stat.h (removed)
@@ -1,310 +0,0 @@
-#ifndef BENCHMARK_STAT_H_
-#define BENCHMARK_STAT_H_
-
-#include <cmath>
-#include <limits>
-#include <ostream>
-#include <type_traits>
-
-namespace benchmark {
-
-template <typename VType, typename NumType>
-class Stat1;
-
-template <typename VType, typename NumType>
-class Stat1MinMax;
-
-typedef Stat1<float, int64_t> Stat1_f;
-typedef Stat1<double, int64_t> Stat1_d;
-typedef Stat1MinMax<float, int64_t> Stat1MinMax_f;
-typedef Stat1MinMax<double, int64_t> Stat1MinMax_d;
-
-template <typename VType>
-class Vector2;
-template <typename VType>
-class Vector3;
-template <typename VType>
-class Vector4;
-
-template <typename VType, typename NumType>
-class Stat1 {
- public:
-  typedef Stat1<VType, NumType> Self;
-
-  Stat1() { Clear(); }
-  // Create a sample of value dat and weight 1
-  explicit Stat1(const VType &dat) {
-    sum_ = dat;
-    sum_squares_ = Sqr(dat);
-    numsamples_ = 1;
-  }
-  // Create statistics for all the samples between begin (included)
-  // and end(excluded)
-  explicit Stat1(const VType *begin, const VType *end) {
-    Clear();
-    for (const VType *item = begin; item < end; ++item) {
-      (*this) += Stat1(*item);
-    }
-  }
-  // Create a sample of value dat and weight w
-  Stat1(const VType &dat, const NumType &w) {
-    sum_ = w * dat;
-    sum_squares_ = w * Sqr(dat);
-    numsamples_ = w;
-  }
-  // Copy operator
-  Stat1(const Self &stat) {
-    sum_ = stat.sum_;
-    sum_squares_ = stat.sum_squares_;
-    numsamples_ = stat.numsamples_;
-  }
-
-  void Clear() {
-    numsamples_ = NumType();
-    sum_squares_ = sum_ = VType();
-  }
-
-  Self &operator=(const Self &stat) {
-    sum_ = stat.sum_;
-    sum_squares_ = stat.sum_squares_;
-    numsamples_ = stat.numsamples_;
-    return (*this);
-  }
-  // Merge statistics from two sample sets.
-  Self &operator+=(const Self &stat) {
-    sum_ += stat.sum_;
-    sum_squares_ += stat.sum_squares_;
-    numsamples_ += stat.numsamples_;
-    return (*this);
-  }
-  // The operation opposite to +=
-  Self &operator-=(const Self &stat) {
-    sum_ -= stat.sum_;
-    sum_squares_ -= stat.sum_squares_;
-    numsamples_ -= stat.numsamples_;
-    return (*this);
-  }
-  // Multiply the weight of the set of samples by a factor k
-  Self &operator*=(const VType &k) {
-    sum_ *= k;
-    sum_squares_ *= k;
-    numsamples_ *= k;
-    return (*this);
-  }
-
-  // Merge statistics from two sample sets.
-  Self operator+(const Self &stat) const { return Self(*this) += stat; }
-
-  // The operation opposite to +
-  Self operator-(const Self &stat) const { return Self(*this) -= stat; }
-
-  // Multiply the weight of the set of samples by a factor k
-  Self operator*(const VType &k) const { return Self(*this) *= k; }
-
-  // Return the total weight of this sample set
-  NumType numSamples() const { return numsamples_; }
-
-  // Return the sum of this sample set
-  VType Sum() const { return sum_; }
-
-  // Return the mean of this sample set
-  VType Mean() const {
-    if (numsamples_ == 0) return VType();
-    return sum_ * (1.0 / numsamples_);
-  }
-
-  // Return the mean of this sample set and compute the standard deviation at
-  // the same time.
-  VType Mean(VType *stddev) const {
-    if (numsamples_ == 0) return VType();
-    VType mean = sum_ * (1.0 / numsamples_);
-    if (stddev) {
-      // Sample standard deviation is undefined for n = 1
-      if (numsamples_ == 1) {
-        *stddev = VType();
-      } else {
-        VType avg_squares = sum_squares_ * (1.0 / numsamples_);
-        *stddev = Sqrt(numsamples_ / (numsamples_ - 1.0) * (avg_squares - Sqr(mean)));
-      }
-    }
-    return mean;
-  }
-
-  // Return the standard deviation of the sample set
-  VType StdDev() const {
-    VType stddev = VType();
-    Mean(&stddev);
-    return stddev;
-  }
-
- private:
-  static_assert(std::is_integral<NumType>::value &&
-                    !std::is_same<NumType, bool>::value,
-                "NumType must be an integral type that is not bool.");
-  // Let i be the index of the samples provided (using +=)
-  // and weight[i],value[i] be the data of sample #i
-  // then the variables have the following meaning:
-  NumType numsamples_;  // sum of weight[i];
-  VType sum_;           // sum of weight[i]*value[i];
-  VType sum_squares_;   // sum of weight[i]*value[i]^2;
-
-  // Template function used to square a number.
-  // For a vector we square all components
-  template <typename SType>
-  static inline SType Sqr(const SType &dat) {
-    return dat * dat;
-  }
-
-  template <typename SType>
-  static inline Vector2<SType> Sqr(const Vector2<SType> &dat) {
-    return dat.MulComponents(dat);
-  }
-
-  template <typename SType>
-  static inline Vector3<SType> Sqr(const Vector3<SType> &dat) {
-    return dat.MulComponents(dat);
-  }
-
-  template <typename SType>
-  static inline Vector4<SType> Sqr(const Vector4<SType> &dat) {
-    return dat.MulComponents(dat);
-  }
-
-  // Template function used to take the square root of a number.
-  // For a vector we square all components
-  template <typename SType>
-  static inline SType Sqrt(const SType &dat) {
-    // Avoid NaN due to imprecision in the calculations
-    if (dat < 0) return 0;
-    return sqrt(dat);
-  }
-
-  template <typename SType>
-  static inline Vector2<SType> Sqrt(const Vector2<SType> &dat) {
-    // Avoid NaN due to imprecision in the calculations
-    return Max(dat, Vector2<SType>()).Sqrt();
-  }
-
-  template <typename SType>
-  static inline Vector3<SType> Sqrt(const Vector3<SType> &dat) {
-    // Avoid NaN due to imprecision in the calculations
-    return Max(dat, Vector3<SType>()).Sqrt();
-  }
-
-  template <typename SType>
-  static inline Vector4<SType> Sqrt(const Vector4<SType> &dat) {
-    // Avoid NaN due to imprecision in the calculations
-    return Max(dat, Vector4<SType>()).Sqrt();
-  }
-};
-
-// Useful printing function
-template <typename VType, typename NumType>
-std::ostream &operator<<(std::ostream &out, const Stat1<VType, NumType> &s) {
-  out << "{ avg = " << s.Mean() << " std = " << s.StdDev()
-      << " nsamples = " << s.NumSamples() << "}";
-  return out;
-}
-
-// Stat1MinMax: same as Stat1, but it also
-// keeps the Min and Max values; the "-"
-// operator is disabled because it cannot be implemented
-// efficiently
-template <typename VType, typename NumType>
-class Stat1MinMax : public Stat1<VType, NumType> {
- public:
-  typedef Stat1MinMax<VType, NumType> Self;
-
-  Stat1MinMax() { Clear(); }
-  // Create a sample of value dat and weight 1
-  explicit Stat1MinMax(const VType &dat) : Stat1<VType, NumType>(dat) {
-    max_ = dat;
-    min_ = dat;
-  }
-  // Create statistics for all the samples between begin (included)
-  // and end(excluded)
-  explicit Stat1MinMax(const VType *begin, const VType *end) {
-    Clear();
-    for (const VType *item = begin; item < end; ++item) {
-      (*this) += Stat1MinMax(*item);
-    }
-  }
-  // Create a sample of value dat and weight w
-  Stat1MinMax(const VType &dat, const NumType &w)
-      : Stat1<VType, NumType>(dat, w) {
-    max_ = dat;
-    min_ = dat;
-  }
-  // Copy operator
-  Stat1MinMax(const Self &stat) : Stat1<VType, NumType>(stat) {
-    max_ = stat.max_;
-    min_ = stat.min_;
-  }
-
-  void Clear() {
-    Stat1<VType, NumType>::Clear();
-    if (std::numeric_limits<VType>::has_infinity) {
-      min_ = std::numeric_limits<VType>::infinity();
-      max_ = -std::numeric_limits<VType>::infinity();
-    } else {
-      min_ = std::numeric_limits<VType>::max();
-      max_ = std::numeric_limits<VType>::min();
-    }
-  }
-
-  Self &operator=(const Self &stat) {
-    this->Stat1<VType, NumType>::operator=(stat);
-    max_ = stat.max_;
-    min_ = stat.min_;
-    return (*this);
-  }
-  // Merge statistics from two sample sets.
-  Self &operator+=(const Self &stat) {
-    this->Stat1<VType, NumType>::operator+=(stat);
-    if (stat.max_ > max_) max_ = stat.max_;
-    if (stat.min_ < min_) min_ = stat.min_;
-    return (*this);
-  }
-  // Multiply the weight of the set of samples by a factor k
-  Self &operator*=(const VType &stat) {
-    this->Stat1<VType, NumType>::operator*=(stat);
-    return (*this);
-  }
-  // Merge statistics from two sample sets.
-  Self operator+(const Self &stat) const { return Self(*this) += stat; }
-  // Multiply the weight of the set of samples by a factor k
-  Self operator*(const VType &k) const { return Self(*this) *= k; }
-
-  // Return the maximal value in this sample set
-  VType Max() const { return max_; }
-  // Return the minimal value in this sample set
-  VType Min() const { return min_; }
-
- private:
-  // The - operation makes no sense with Min/Max
-  // unless we keep the full list of values (but we don't)
-  // so we make it private and leave it undefined so nobody can call it.
-  Self &operator-=(const Self &stat);  // senseless; left undefined.
-
-  // The operation opposite to -
-  Self operator-(const Self &stat) const;  // senseless; left undefined.
-
-  // Let i be the index of the samples provided (using +=)
-  // and weight[i],value[i] be the data of sample #i
-  // then the variables have the following meaning:
-  VType max_;  // max of value[i]
-  VType min_;  // min of value[i]
-};
-
-// Useful printing function
-template <typename VType, typename NumType>
-std::ostream &operator<<(std::ostream &out,
-                         const Stat1MinMax<VType, NumType> &s) {
-  out << "{ avg = " << s.Mean() << " std = " << s.StdDev()
-      << " nsamples = " << s.NumSamples() << " min = " << s.Min()
-      << " max = " << s.Max() << "}";
-  return out;
-}
-}  // end namespace benchmark
-
-#endif  // BENCHMARK_STAT_H_
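
Stat1 keeps only three running sums (weight, weighted value, weighted value
squared), so mean and standard deviation fall out in O(1). A usage sketch
with hand-checked numbers:

  benchmark::Stat1_d stat;  // double values, int64_t weights
  for (double sample : {1.0, 2.0, 3.0, 4.0})
    stat += benchmark::Stat1_d(sample);
  double stddev;
  double mean = stat.Mean(&stddev);
  // mean == 2.5; stddev == sqrt(4.0/3.0 * (7.5 - 6.25)) ~= 1.29

The numsamples_ / (numsamples_ - 1.0) factor in Mean() is the usual Bessel
correction for a sample (rather than population) standard deviation.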

Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/string_util.cc
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/string_util.cc?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/string_util.cc (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/string_util.cc (removed)
@@ -1,172 +0,0 @@
-#include "string_util.h"
-
-#include <array>
-#include <cmath>
-#include <cstdarg>
-#include <cstdio>
-#include <memory>
-#include <sstream>
-
-#include "arraysize.h"
-
-namespace benchmark {
-namespace {
-
-// kilo, Mega, Giga, Tera, Peta, Exa, Zetta, Yotta.
-const char kBigSIUnits[] = "kMGTPEZY";
-// Kibi, Mebi, Gibi, Tebi, Pebi, Exbi, Zebi, Yobi.
-const char kBigIECUnits[] = "KMGTPEZY";
-// milli, micro, nano, pico, femto, atto, zepto, yocto.
-const char kSmallSIUnits[] = "munpfazy";
-
-// We require that all three arrays have the same size.
-static_assert(arraysize(kBigSIUnits) == arraysize(kBigIECUnits),
-              "SI and IEC unit arrays must be the same size");
-static_assert(arraysize(kSmallSIUnits) == arraysize(kBigSIUnits),
-              "Small SI and Big SI unit arrays must be the same size");
-
-static const int64_t kUnitsSize = arraysize(kBigSIUnits);
-
-}  // end anonymous namespace
-
-void ToExponentAndMantissa(double val, double thresh, int precision,
-                           double one_k, std::string* mantissa,
-                           int64_t* exponent) {
-  std::stringstream mantissa_stream;
-
-  if (val < 0) {
-    mantissa_stream << "-";
-    val = -val;
-  }
-
-  // Adjust threshold so that it never excludes things which can't be rendered
-  // in 'precision' digits.
-  const double adjusted_threshold =
-      std::max(thresh, 1.0 / std::pow(10.0, precision));
-  const double big_threshold = adjusted_threshold * one_k;
-  const double small_threshold = adjusted_threshold;
-  // Values in the open interval (simple_threshold, small_threshold) will be
-  // printed as-is
-  const double simple_threshold = 0.01;
-
-  if (val > big_threshold) {
-    // Positive powers
-    double scaled = val;
-    for (size_t i = 0; i < arraysize(kBigSIUnits); ++i) {
-      scaled /= one_k;
-      if (scaled <= big_threshold) {
-        mantissa_stream << scaled;
-        *exponent = i + 1;
-        *mantissa = mantissa_stream.str();
-        return;
-      }
-    }
-    mantissa_stream << val;
-    *exponent = 0;
-  } else if (val < small_threshold) {
-    // Negative powers
-    if (val < simple_threshold) {
-      double scaled = val;
-      for (size_t i = 0; i < arraysize(kSmallSIUnits); ++i) {
-        scaled *= one_k;
-        if (scaled >= small_threshold) {
-          mantissa_stream << scaled;
-          *exponent = -static_cast<int64_t>(i + 1);
-          *mantissa = mantissa_stream.str();
-          return;
-        }
-      }
-    }
-    mantissa_stream << val;
-    *exponent = 0;
-  } else {
-    mantissa_stream << val;
-    *exponent = 0;
-  }
-  *mantissa = mantissa_stream.str();
-}
-
-std::string ExponentToPrefix(int64_t exponent, bool iec) {
-  if (exponent == 0) return "";
-
-  const int64_t index = (exponent > 0 ? exponent - 1 : -exponent - 1);
-  if (index >= kUnitsSize) return "";
-
-  const char* array =
-      (exponent > 0 ? (iec ? kBigIECUnits : kBigSIUnits) : kSmallSIUnits);
-  if (iec)
-    return array[index] + std::string("i");
-  else
-    return std::string(1, array[index]);
-}
-
-std::string ToBinaryStringFullySpecified(double value, double threshold,
-                                         int precision) {
-  std::string mantissa;
-  int64_t exponent;
-  ToExponentAndMantissa(value, threshold, precision, 1024.0, &mantissa,
-                        &exponent);
-  return mantissa + ExponentToPrefix(exponent, false);
-}
-
-void AppendHumanReadable(int n, std::string* str) {
-  std::stringstream ss;
-  // Round down to the nearest SI prefix.
-  ss << ToBinaryStringFullySpecified(n, 1.0, 0);
-  *str += ss.str();
-}
-
-std::string HumanReadableNumber(double n) {
-  // 1.1 means that figures up to 1.1k should be shown with the next unit down;
-  // this softens edge effects.
-  // 1 means that we should show one decimal place of precision.
-  return ToBinaryStringFullySpecified(n, 1.1, 1);
-}
-
-std::string StringPrintFImp(const char* msg, va_list args) {
-  // we might need a second shot at this, so pre-emptively make a copy
-  va_list args_cp;
-  va_copy(args_cp, args);
-
-  // TODO(ericwf): use std::array for the first attempt to avoid one memory
-  // allocation; guess what the size might be.
-  std::array<char, 256> local_buff;
-  std::size_t size = local_buff.size();
-  // 2015-10-08: vsnprintf is used instead of std::vsnprintf due to a limitation
-  // in the Android NDK
-  auto ret = vsnprintf(local_buff.data(), size, msg, args_cp);
-
-  va_end(args_cp);
-
-  // handle empty expansion
-  if (ret == 0) return std::string{};
-  if (static_cast<std::size_t>(ret) < size)
-    return std::string(local_buff.data());
-
-  // we did not provide a long enough buffer on our first attempt.
-  // add 1 to size to account for null-byte in size cast to prevent overflow
-  size = static_cast<std::size_t>(ret) + 1;
-  auto buff_ptr = std::unique_ptr<char[]>(new char[size]);
-  // 2015-10-08: vsnprintf is used instead of std::vsnprintf due to a limitation
-  // in the Android NDK
-  ret = vsnprintf(buff_ptr.get(), size, msg, args);
-  return std::string(buff_ptr.get());
-}
-
-std::string StringPrintF(const char* format, ...) {
-  va_list args;
-  va_start(args, format);
-  std::string tmp = StringPrintFImp(format, args);
-  va_end(args);
-  return tmp;
-}
-
-void ReplaceAll(std::string* str, const std::string& from,
-                const std::string& to) {
-  std::size_t start = 0;
-  while ((start = str->find(from, start)) != std::string::npos) {
-    str->replace(start, from.length(), to);
-    start += to.length();
-  }
-}
-
-}  // end namespace benchmark

Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/string_util.h
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/string_util.h?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/string_util.h (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/string_util.h (removed)
@@ -1,40 +0,0 @@
-#ifndef BENCHMARK_STRING_UTIL_H_
-#define BENCHMARK_STRING_UTIL_H_
-
-#include <sstream>
-#include <string>
-#include <utility>
-#include "internal_macros.h"
-
-namespace benchmark {
-
-void AppendHumanReadable(int n, std::string* str);
-
-std::string HumanReadableNumber(double n);
-
-std::string StringPrintF(const char* format, ...);
-
-inline std::ostream& StringCatImp(std::ostream& out) BENCHMARK_NOEXCEPT {
-  return out;
-}
-
-template <class First, class... Rest>
-inline std::ostream& StringCatImp(std::ostream& out, First&& f,
-                                  Rest&&... rest) {
-  out << std::forward<First>(f);
-  return StringCatImp(out, std::forward<Rest>(rest)...);
-}
-
-template <class... Args>
-inline std::string StrCat(Args&&... args) {
-  std::ostringstream ss;
-  StringCatImp(ss, std::forward<Args>(args)...);
-  return ss.str();
-}
-
-void ReplaceAll(std::string* str, const std::string& from,
-                const std::string& to);
-
-}  // end namespace benchmark
-
-#endif  // BENCHMARK_STRING_UTIL_H_
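
StrCat above is a variadic fold over a std::ostringstream, so any type with
an operator<< can participate; a sketch:

  std::string path = benchmark::StrCat(
      "/sys/devices/system/cpu/cpu", 3, "/cpufreq/scaling_governor");
  // path == "/sys/devices/system/cpu/cpu3/cpufreq/scaling_governor"

This is exactly the pattern CpuScalingEnabled() in sysinfo.cc uses to build
its per-CPU governor paths.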

Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/sysinfo.cc
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/sysinfo.cc?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/sysinfo.cc (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/sysinfo.cc (removed)
@@ -1,355 +0,0 @@
-// Copyright 2015 Google Inc. All rights reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-//     http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-#include "sysinfo.h"
-#include "internal_macros.h"
-
-#ifdef BENCHMARK_OS_WINDOWS
-#include <Shlwapi.h>
-#include <VersionHelpers.h>
-#include <Windows.h>
-#else
-#include <fcntl.h>
-#include <sys/resource.h>
-#include <sys/time.h>
-#include <sys/types.h>  // this header must be included before 'sys/sysctl.h' to avoid a compilation error on FreeBSD
-#include <unistd.h>
-#if defined BENCHMARK_OS_FREEBSD || defined BENCHMARK_OS_MACOSX
-#include <sys/sysctl.h>
-#endif
-#endif
-
-#include <cerrno>
-#include <cstdint>
-#include <cstdio>
-#include <cstdlib>
-#include <cstring>
-#include <iostream>
-#include <limits>
-#include <mutex>
-
-#include "arraysize.h"
-#include "check.h"
-#include "cycleclock.h"
-#include "internal_macros.h"
-#include "log.h"
-#include "sleep.h"
-#include "string_util.h"
-
-namespace benchmark {
-namespace {
-std::once_flag cpuinfo_init;
-double cpuinfo_cycles_per_second = 1.0;
-int cpuinfo_num_cpus = 1;  // Conservative guess
-
-#if !defined BENCHMARK_OS_MACOSX
-const int64_t estimate_time_ms = 1000;
-
-// Helper function estimates cycles/sec by observing cycles elapsed during
-// sleep(). Using small sleep time decreases accuracy significantly.
-int64_t EstimateCyclesPerSecond() {
-  const int64_t start_ticks = cycleclock::Now();
-  SleepForMilliseconds(estimate_time_ms);
-  return cycleclock::Now() - start_ticks;
-}
-#endif
-
-#if defined BENCHMARK_OS_LINUX || defined BENCHMARK_OS_CYGWIN
-// Helper function for reading an int from a file. Returns true if successful
-// and the memory location pointed to by value is set to the value read.
-bool ReadIntFromFile(const char* file, long* value) {
-  bool ret = false;
-  int fd = open(file, O_RDONLY);
-  if (fd != -1) {
-    char line[1024];
-    char* err;
-    memset(line, '\0', sizeof(line));
-    ssize_t read_err = read(fd, line, sizeof(line) - 1);
-    ((void)read_err); // prevent unused warning
-    CHECK(read_err >= 0);
-    const long temp_value = strtol(line, &err, 10);
-    if (line[0] != '\0' && (*err == '\n' || *err == '\0')) {
-      *value = temp_value;
-      ret = true;
-    }
-    close(fd);
-  }
-  return ret;
-}
-#endif
-
-#if defined BENCHMARK_OS_LINUX || defined BENCHMARK_OS_CYGWIN
-static std::string convertToLowerCase(std::string s) {
-  for (auto& ch : s)
-    ch = std::tolower(ch);
-  return s;
-}
-static bool startsWithKey(std::string Value, std::string Key,
-                          bool IgnoreCase = true) {
-  if (IgnoreCase) {
-    Key = convertToLowerCase(std::move(Key));
-    Value = convertToLowerCase(std::move(Value));
-  }
-  return Value.compare(0, Key.size(), Key) == 0;
-}
-#endif
-
-void InitializeSystemInfo() {
-#if defined BENCHMARK_OS_LINUX || defined BENCHMARK_OS_CYGWIN
-  char line[1024];
-  char* err;
-  long freq;
-
-  bool saw_mhz = false;
-
-  // If the kernel is exporting the tsc frequency use that. There are issues
-  // where cpuinfo_max_freq cannot be relied on because the BIOS may be
-  // exporting an invalid p-state (on x86) or p-states may be used to put the
-  // processor in a new mode (turbo mode). Essentially, those frequencies
-  // cannot always be relied upon. The same reasons apply to /proc/cpuinfo as
-  // well.
-  if (!saw_mhz &&
-      ReadIntFromFile("/sys/devices/system/cpu/cpu0/tsc_freq_khz", &freq)) {
-    // The value is in kHz (as the file name suggests).  For example, on a
-    // 2GHz warpstation, the file contains the value "2000000".
-    cpuinfo_cycles_per_second = freq * 1000.0;
-    saw_mhz = true;
-  }
-
-  // If CPU scaling is in effect, we want to use the *maximum* frequency,
-  // not whatever CPU speed some random processor happens to be using now.
-  if (!saw_mhz &&
-      ReadIntFromFile("/sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_max_freq",
-                      &freq)) {
-    // The value is in kHz.  For example, on a 2GHz warpstation, the file
-    // contains the value "2000000".
-    cpuinfo_cycles_per_second = freq * 1000.0;
-    saw_mhz = true;
-  }
-
-  // Read /proc/cpuinfo for other values, and if there is no cpuinfo_max_freq.
-  const char* pname = "/proc/cpuinfo";
-  int fd = open(pname, O_RDONLY);
-  if (fd == -1) {
-    perror(pname);
-    if (!saw_mhz) {
-      cpuinfo_cycles_per_second =
-          static_cast<double>(EstimateCyclesPerSecond());
-    }
-    return;
-  }
-
-  double bogo_clock = 1.0;
-  bool saw_bogo = false;
-  long max_cpu_id = 0;
-  int num_cpus = 0;
-  line[0] = line[1] = '\0';
-  size_t chars_read = 0;
-  do {  // we'll exit when the last read didn't read anything
-    // Move the next line to the beginning of the buffer
-    const size_t oldlinelen = strlen(line);
-    if (sizeof(line) == oldlinelen + 1)  // oldlinelen took up entire line
-      line[0] = '\0';
-    else  // still other lines left to save
-      memmove(line, line + oldlinelen + 1, sizeof(line) - (oldlinelen + 1));
-    // Terminate the new line, reading more if we can't find the newline
-    char* newline = strchr(line, '\n');
-    if (newline == nullptr) {
-      const size_t linelen = strlen(line);
-      const size_t bytes_to_read = sizeof(line) - 1 - linelen;
-      CHECK(bytes_to_read > 0);  // because the memmove recovered >=1 bytes
-      chars_read = read(fd, line + linelen, bytes_to_read);
-      line[linelen + chars_read] = '\0';
-      newline = strchr(line, '\n');
-    }
-    if (newline != nullptr) *newline = '\0';
-
-    // When parsing the "cpu MHz" and "bogomips" (fallback) entries, we only
-    // accept positive values. Some environments (virtual machines) report zero,
-    // which would cause infinite looping in WallTime_Init.
-    if (!saw_mhz && startsWithKey(line, "cpu MHz")) {
-      const char* freqstr = strchr(line, ':');
-      if (freqstr) {
-        cpuinfo_cycles_per_second = strtod(freqstr + 1, &err) * 1000000.0;
-        if (freqstr[1] != '\0' && *err == '\0' && cpuinfo_cycles_per_second > 0)
-          saw_mhz = true;
-      }
-    } else if (startsWithKey(line, "bogomips")) {
-      const char* freqstr = strchr(line, ':');
-      if (freqstr) {
-        bogo_clock = strtod(freqstr + 1, &err) * 1000000.0;
-        if (freqstr[1] != '\0' && *err == '\0' && bogo_clock > 0)
-          saw_bogo = true;
-      }
-    } else if (startsWithKey(line, "processor", /*IgnoreCase*/false)) {
-      // The above comparison is case-sensitive because ARM kernels often
-      // include a "Processor" line that tells you about the CPU, distinct
-      // from the usual "processor" lines that give you CPU ids. No current
-      // Linux architecture is using "Processor" for CPU ids.
-      num_cpus++;  // count up every time we see a "processor :" entry
-      const char* id_str = strchr(line, ':');
-      if (id_str) {
-        const long cpu_id = strtol(id_str + 1, &err, 10);
-        if (id_str[1] != '\0' && *err == '\0' && max_cpu_id < cpu_id)
-          max_cpu_id = cpu_id;
-      }
-    }
-  } while (chars_read > 0);
-  close(fd);
-
-  if (!saw_mhz) {
-    if (saw_bogo) {
-      // If we didn't find anything better, we'll use bogomips, but
-      // we're not happy about it.
-      cpuinfo_cycles_per_second = bogo_clock;
-    } else {
-      // If we don't even have bogomips, we'll use the slow estimation.
-      cpuinfo_cycles_per_second =
-          static_cast<double>(EstimateCyclesPerSecond());
-    }
-  }
-  if (num_cpus == 0) {
-    fprintf(stderr, "Failed to read num. CPUs correctly from /proc/cpuinfo\n");
-  } else {
-    if ((max_cpu_id + 1) != num_cpus) {
-      fprintf(stderr,
-              "CPU ID assignments in /proc/cpuinfo seem messed up."
-              " This is usually caused by a bad BIOS.\n");
-    }
-    cpuinfo_num_cpus = num_cpus;
-  }
-
-#elif defined BENCHMARK_OS_FREEBSD
-// For this sysctl to work, the machine must be configured without
-// SMP, APIC, or APM support.  hz should be 64-bit in FreeBSD 7.0
-// and later.  Before that, it's a 32-bit quantity (and gives the
-// wrong answer on machines faster than 2^32 Hz).  See
-//  http://lists.freebsd.org/pipermail/freebsd-i386/2004-November/001846.html
-// But also compare FreeBSD 7.0:
-//  http://fxr.watson.org/fxr/source/i386/i386/tsc.c?v=RELENG70#L223
-//  231         error = sysctl_handle_quad(oidp, &freq, 0, req);
-// To FreeBSD 6.3 (it's the same in 6-STABLE):
-//  http://fxr.watson.org/fxr/source/i386/i386/tsc.c?v=RELENG6#L131
-//  139         error = sysctl_handle_int(oidp, &freq, sizeof(freq), req);
-#if __FreeBSD__ >= 7
-  uint64_t hz = 0;
-#else
-  unsigned int hz = 0;
-#endif
-  size_t sz = sizeof(hz);
-  const char* sysctl_path = "machdep.tsc_freq";
-  if (sysctlbyname(sysctl_path, &hz, &sz, nullptr, 0) != 0) {
-    fprintf(stderr, "Unable to determine clock rate from sysctl: %s: %s\n",
-            sysctl_path, strerror(errno));
-    cpuinfo_cycles_per_second = static_cast<double>(EstimateCyclesPerSecond());
-  } else {
-    cpuinfo_cycles_per_second = hz;
-  }
-// TODO: also figure out cpuinfo_num_cpus
-
-#elif defined BENCHMARK_OS_WINDOWS
-  // In NT, read MHz from the registry. If we fail to do so or we're in win9x
-  // then make a crude estimate.
-  DWORD data, data_size = sizeof(data);
-  if (IsWindowsXPOrGreater() &&
-      SUCCEEDED(
-          SHGetValueA(HKEY_LOCAL_MACHINE,
-                      "HARDWARE\\DESCRIPTION\\System\\CentralProcessor\\0",
-                      "~MHz", nullptr, &data, &data_size)))
-    cpuinfo_cycles_per_second =
-        static_cast<double>((int64_t)data * (int64_t)(1000 * 1000));  // was mhz
-  else
-    cpuinfo_cycles_per_second = static_cast<double>(EstimateCyclesPerSecond());
-
-  SYSTEM_INFO sysinfo;
-  // Use memset as opposed to = {} to avoid GCC missing initializer false
-  // positives.
-  std::memset(&sysinfo, 0, sizeof(SYSTEM_INFO));
-  GetSystemInfo(&sysinfo);
-  cpuinfo_num_cpus = sysinfo.dwNumberOfProcessors;  // number of logical
-                                                    // processors in the current
-                                                    // group
-
-#elif defined BENCHMARK_OS_MACOSX
-  int32_t num_cpus = 0;
-  size_t size = sizeof(num_cpus);
-  if (::sysctlbyname("hw.ncpu", &num_cpus, &size, nullptr, 0) == 0 &&
-      (size == sizeof(num_cpus))) {
-    cpuinfo_num_cpus = num_cpus;
-  } else {
-    fprintf(stderr, "%s\n", strerror(errno));
-    std::exit(EXIT_FAILURE);
-  }
-  int64_t cpu_freq = 0;
-  size = sizeof(cpu_freq);
-  if (::sysctlbyname("hw.cpufrequency", &cpu_freq, &size, nullptr, 0) == 0 &&
-      (size == sizeof(cpu_freq))) {
-    cpuinfo_cycles_per_second = cpu_freq;
-  } else {
-    #if defined BENCHMARK_OS_IOS
-    fprintf(stderr, "CPU frequency cannot be detected. \n");
-    cpuinfo_cycles_per_second = 0;
-    #else
-    fprintf(stderr, "%s\n", strerror(errno));
-    std::exit(EXIT_FAILURE);
-    #endif
-  }
-#else
-  // Generic cycles per second counter
-  cpuinfo_cycles_per_second = static_cast<double>(EstimateCyclesPerSecond());
-#endif
-}
-
-}  // end namespace
-
-double CyclesPerSecond(void) {
-  std::call_once(cpuinfo_init, InitializeSystemInfo);
-  return cpuinfo_cycles_per_second;
-}
-
-int NumCPUs(void) {
-  std::call_once(cpuinfo_init, InitializeSystemInfo);
-  return cpuinfo_num_cpus;
-}
-
-// The ""'s catch people who don't pass in a literal for "str"
-#define strliterallen(str) (sizeof("" str "") - 1)
-
-// Must use a string literal for prefix.
-#define memprefix(str, len, prefix)                       \
-  ((((len) >= strliterallen(prefix)) &&                   \
-    std::memcmp(str, prefix, strliterallen(prefix)) == 0) \
-       ? str + strliterallen(prefix)                      \
-       : nullptr)
-
-bool CpuScalingEnabled() {
-#ifndef BENCHMARK_OS_WINDOWS
-  // On Linux, the CPUfreq subsystem exposes CPU information as files on the
-  // local file system. If reading the exported files fails, then we may not be
-  // running on Linux, so we silently ignore all the read errors.
-  for (int cpu = 0, num_cpus = NumCPUs(); cpu < num_cpus; ++cpu) {
-    std::string governor_file =
-        StrCat("/sys/devices/system/cpu/cpu", cpu, "/cpufreq/scaling_governor");
-    FILE* file = fopen(governor_file.c_str(), "r");
-    if (!file) break;
-    char buff[16];
-    size_t bytes_read = fread(buff, 1, sizeof(buff), file);
-    fclose(file);
-    if (memprefix(buff, bytes_read, "performance") == nullptr) return true;
-  }
-#endif
-  return false;
-}
-
-}  // end namespace benchmark
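
To summarize the Linux branch of InitializeSystemInfo above, frequency
detection is a fallback chain: the kernel's tsc_freq_khz, then
cpuinfo_max_freq, then the "cpu MHz" line of /proc/cpuinfo, then bogomips,
and finally EstimateCyclesPerSecond(), which counts elapsed cycles across a
one-second sleep. A compressed sketch of the first two steps (helper names
as in this file):

  long freq;
  if (ReadIntFromFile("/sys/devices/system/cpu/cpu0/tsc_freq_khz", &freq) ||
      ReadIntFromFile(
          "/sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_max_freq", &freq)) {
    cpuinfo_cycles_per_second = freq * 1000.0;  // both files report kHz
  }
  // else: parse /proc/cpuinfo, falling back to bogomips, then to the
  // sleep-based estimate.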

Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/sysinfo.h
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/sysinfo.h?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/sysinfo.h (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/sysinfo.h (removed)
@@ -1,10 +0,0 @@
-#ifndef BENCHMARK_SYSINFO_H_
-#define BENCHMARK_SYSINFO_H_
-
-namespace benchmark {
-int NumCPUs();
-double CyclesPerSecond();
-bool CpuScalingEnabled();
-}  // end namespace benchmark
-
-#endif  // BENCHMARK_SYSINFO_H_

Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/timers.cc
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/timers.cc?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/timers.cc (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/timers.cc (removed)
@@ -1,212 +0,0 @@
-// Copyright 2015 Google Inc. All rights reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-//     http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-#include "timers.h"
-#include "internal_macros.h"
-
-#ifdef BENCHMARK_OS_WINDOWS
-#include <Shlwapi.h>
-#include <VersionHelpers.h>
-#include <Windows.h>
-#else
-#include <fcntl.h>
-#include <sys/resource.h>
-#include <sys/time.h>
-#include <sys/types.h>  // this header must be included before 'sys/sysctl.h' to avoid a compilation error on FreeBSD
-#include <unistd.h>
-#if defined BENCHMARK_OS_FREEBSD || defined BENCHMARK_OS_MACOSX
-#include <sys/sysctl.h>
-#endif
-#if defined(BENCHMARK_OS_MACOSX)
-#include <mach/mach_init.h>
-#include <mach/mach_port.h>
-#include <mach/thread_act.h>
-#endif
-#endif
-
-#ifdef BENCHMARK_OS_EMSCRIPTEN
-#include <emscripten.h>
-#endif
-
-#include <cerrno>
-#include <cstdint>
-#include <cstdio>
-#include <cstdlib>
-#include <cstring>
-#include <ctime>
-#include <iostream>
-#include <limits>
-#include <mutex>
-
-#include "check.h"
-#include "log.h"
-#include "sleep.h"
-#include "string_util.h"
-
-namespace benchmark {
-
-// Suppress unused warnings on helper functions.
-#if defined(__GNUC__)
-#pragma GCC diagnostic ignored "-Wunused-function"
-#endif
-
-namespace {
-#if defined(BENCHMARK_OS_WINDOWS)
-double MakeTime(FILETIME const& kernel_time, FILETIME const& user_time) {
-  ULARGE_INTEGER kernel;
-  ULARGE_INTEGER user;
-  kernel.HighPart = kernel_time.dwHighDateTime;
-  kernel.LowPart = kernel_time.dwLowDateTime;
-  user.HighPart = user_time.dwHighDateTime;
-  user.LowPart = user_time.dwLowDateTime;
-  return (static_cast<double>(kernel.QuadPart) +
-          static_cast<double>(user.QuadPart)) *
-         1e-7;
-}
-#else
-double MakeTime(struct rusage const& ru) {
-  return (static_cast<double>(ru.ru_utime.tv_sec) +
-          static_cast<double>(ru.ru_utime.tv_usec) * 1e-6 +
-          static_cast<double>(ru.ru_stime.tv_sec) +
-          static_cast<double>(ru.ru_stime.tv_usec) * 1e-6);
-}
-#endif
-#if defined(BENCHMARK_OS_MACOSX)
-double MakeTime(thread_basic_info_data_t const& info) {
-  return (static_cast<double>(info.user_time.seconds) +
-          static_cast<double>(info.user_time.microseconds) * 1e-6 +
-          static_cast<double>(info.system_time.seconds) +
-          static_cast<double>(info.system_time.microseconds) * 1e-6);
-}
-#endif
-#if defined(CLOCK_PROCESS_CPUTIME_ID) || defined(CLOCK_THREAD_CPUTIME_ID)
-double MakeTime(struct timespec const& ts) {
-  return ts.tv_sec + (static_cast<double>(ts.tv_nsec) * 1e-9);
-}
-#endif
-
-BENCHMARK_NORETURN static void DiagnoseAndExit(const char* msg) {
-  std::cerr << "ERROR: " << msg << std::endl;
-  std::exit(EXIT_FAILURE);
-}
-
-}  // end namespace
-
-double ProcessCPUUsage() {
-#if defined(BENCHMARK_OS_WINDOWS)
-  HANDLE proc = GetCurrentProcess();
-  FILETIME creation_time;
-  FILETIME exit_time;
-  FILETIME kernel_time;
-  FILETIME user_time;
-  if (GetProcessTimes(proc, &creation_time, &exit_time, &kernel_time,
-                      &user_time))
-    return MakeTime(kernel_time, user_time);
-  DiagnoseAndExit("GetProccessTimes() failed");
-#elif defined(BENCHMARK_OS_EMSCRIPTEN)
-  // clock_gettime(CLOCK_PROCESS_CPUTIME_ID, ...) returns 0 on Emscripten.
-  // Use Emscripten-specific API. Reported CPU time would be exactly the
-  // same as total time, but this is ok because there aren't long-latency
-  // synchronous system calls in Emscripten.
-  return emscripten_get_now() * 1e-3;
-#elif defined(CLOCK_PROCESS_CPUTIME_ID) && !defined(BENCHMARK_OS_MACOSX)
-  // FIXME: We want to use clock_gettime, but it's not available on macOS 10.11. See
-  // https://github.com/google/benchmark/pull/292
-  struct timespec spec;
-  if (clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &spec) == 0)
-    return MakeTime(spec);
-  DiagnoseAndExit("clock_gettime(CLOCK_PROCESS_CPUTIME_ID, ...) failed");
-#else
-  struct rusage ru;
-  if (getrusage(RUSAGE_SELF, &ru) == 0) return MakeTime(ru);
-  DiagnoseAndExit("getrusage(RUSAGE_SELF, ...) failed");
-#endif
-}
-
-double ThreadCPUUsage() {
-#if defined(BENCHMARK_OS_WINDOWS)
-  HANDLE this_thread = GetCurrentThread();
-  FILETIME creation_time;
-  FILETIME exit_time;
-  FILETIME kernel_time;
-  FILETIME user_time;
-  GetThreadTimes(this_thread, &creation_time, &exit_time, &kernel_time,
-                 &user_time);
-  return MakeTime(kernel_time, user_time);
-#elif defined(BENCHMARK_OS_MACOSX)
-  // FIXME: We want to use clock_gettime, but it's not available on macOS 10.11. See
-  // https://github.com/google/benchmark/pull/292
-  mach_msg_type_number_t count = THREAD_BASIC_INFO_COUNT;
-  thread_basic_info_data_t info;
-  mach_port_t thread = pthread_mach_thread_np(pthread_self());
-  if (thread_info(thread, THREAD_BASIC_INFO, (thread_info_t)&info, &count) ==
-      KERN_SUCCESS) {
-    return MakeTime(info);
-  }
-  DiagnoseAndExit("ThreadCPUUsage() failed when evaluating thread_info");
-#elif defined(BENCHMARK_OS_EMSCRIPTEN)
-  // Emscripten doesn't support traditional threads
-  return ProcessCPUUsage();
-#elif defined(BENCHMARK_OS_RTEMS)
-  // RTEMS doesn't support CLOCK_THREAD_CPUTIME_ID. See
-  // https://github.com/RTEMS/rtems/blob/master/cpukit/posix/src/clockgettime.c
-  return ProcessCPUUsage();
-#elif defined(CLOCK_THREAD_CPUTIME_ID)
-  struct timespec ts;
-  if (clock_gettime(CLOCK_THREAD_CPUTIME_ID, &ts) == 0) return MakeTime(ts);
-  DiagnoseAndExit("clock_gettime(CLOCK_THREAD_CPUTIME_ID, ...) failed");
-#else
-#error Per-thread timing is not available on your system.
-#endif
-}
-
-namespace {
-
-std::string DateTimeString(bool local) {
-  typedef std::chrono::system_clock Clock;
-  std::time_t now = Clock::to_time_t(Clock::now());
-  const std::size_t kStorageSize = 128;
-  char storage[kStorageSize];
-  std::size_t written;
-
-  if (local) {
-#if defined(BENCHMARK_OS_WINDOWS)
-    written =
-        std::strftime(storage, sizeof(storage), "%x %X", ::localtime(&now));
-#else
-    std::tm timeinfo;
-    std::memset(&timeinfo, 0, sizeof(std::tm));
-    ::localtime_r(&now, &timeinfo);
-    written = std::strftime(storage, sizeof(storage), "%F %T", &timeinfo);
-#endif
-  } else {
-#if defined(BENCHMARK_OS_WINDOWS)
-    written = std::strftime(storage, sizeof(storage), "%x %X", ::gmtime(&now));
-#else
-    std::tm timeinfo;
-    std::memset(&timeinfo, 0, sizeof(std::tm));
-    ::gmtime_r(&now, &timeinfo);
-    written = std::strftime(storage, sizeof(storage), "%F %T", &timeinfo);
-#endif
-  }
-  CHECK(written < kStorageSize);
-  ((void)written);  // prevent unused variable in optimized mode.
-  return std::string(storage);
-}
-
-}  // end namespace
-
-std::string LocalDateTimeString() { return DateTimeString(true); }
-
-}  // end namespace benchmark

Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/timers.h
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/timers.h?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/timers.h (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/timers.h (removed)
@@ -1,48 +0,0 @@
-#ifndef BENCHMARK_TIMERS_H
-#define BENCHMARK_TIMERS_H
-
-#include <chrono>
-#include <string>
-
-namespace benchmark {
-
-// Return the CPU usage of the current process
-double ProcessCPUUsage();
-
-// Return the CPU usage of the children of the current process
-double ChildrenCPUUsage();
-
-// Return the CPU usage of the current thread
-double ThreadCPUUsage();
-
-#if defined(HAVE_STEADY_CLOCK)
-template <bool HighResIsSteady = std::chrono::high_resolution_clock::is_steady>
-struct ChooseSteadyClock {
-  typedef std::chrono::high_resolution_clock type;
-};
-
-template <>
-struct ChooseSteadyClock<false> {
-  typedef std::chrono::steady_clock type;
-};
-#endif
-
-struct ChooseClockType {
-#if defined(HAVE_STEADY_CLOCK)
-  typedef ChooseSteadyClock<>::type type;
-#else
-  typedef std::chrono::high_resolution_clock type;
-#endif
-};
-
-inline double ChronoClockNow() {
-  typedef ChooseClockType::type ClockType;
-  using FpSeconds = std::chrono::duration<double, std::chrono::seconds::period>;
-  return FpSeconds(ClockType::now().time_since_epoch()).count();
-}
-
-std::string LocalDateTimeString();
-
-}  // end namespace benchmark
-
-#endif  // BENCHMARK_TIMERS_H
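
A minimal usage sketch for the ChronoClockNow() helper declared above (the
surrounding main() and the stand-in workload are illustrative, not part of the
library):

    #include <cstdio>
    #include "timers.h"

    int main() {
      // ChronoClockNow() returns seconds since the clock's epoch as a double,
      // picking the steadiest high-resolution clock the platform offers.
      double start = benchmark::ChronoClockNow();
      volatile long sink = 0;
      for (long i = 0; i < 1000000; ++i) sink += i;  // stand-in workload
      double elapsed = benchmark::ChronoClockNow() - start;
      std::printf("elapsed: %.9f s\n", elapsed);
      return 0;
    }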

Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/CMakeLists.txt
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/CMakeLists.txt?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/CMakeLists.txt (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/CMakeLists.txt (removed)
@@ -1,170 +0,0 @@
-# Enable the tests
-
-find_package(Threads REQUIRED)
-include(CheckCXXCompilerFlag)
-
-# NOTE: Some tests use `<cassert>` to perform the test. Therefore we must
-# strip -DNDEBUG from the default CMake flags in DEBUG mode.
-string(TOUPPER "${CMAKE_BUILD_TYPE}" uppercase_CMAKE_BUILD_TYPE)
-if( NOT uppercase_CMAKE_BUILD_TYPE STREQUAL "DEBUG" )
-  add_definitions( -UNDEBUG )
-  add_definitions(-DTEST_BENCHMARK_LIBRARY_HAS_NO_ASSERTIONS)
-  # Also remove /D NDEBUG to avoid MSVC warnings about conflicting defines.
-  foreach (flags_var_to_scrub
-      CMAKE_CXX_FLAGS_RELEASE
-      CMAKE_CXX_FLAGS_RELWITHDEBINFO
-      CMAKE_CXX_FLAGS_MINSIZEREL
-      CMAKE_C_FLAGS_RELEASE
-      CMAKE_C_FLAGS_RELWITHDEBINFO
-      CMAKE_C_FLAGS_MINSIZEREL)
-    string (REGEX REPLACE "(^| )[/-]D *NDEBUG($| )" " "
-      "${flags_var_to_scrub}" "${${flags_var_to_scrub}}")
-  endforeach()
-endif()
-
-# NOTE: These flags must be added after find_package(Threads REQUIRED); otherwise,
-# they will break the configuration check.
-if (DEFINED BENCHMARK_CXX_LINKER_FLAGS)
-  list(APPEND CMAKE_EXE_LINKER_FLAGS ${BENCHMARK_CXX_LINKER_FLAGS})
-endif()
-
-add_library(output_test_helper STATIC output_test_helper.cc output_test.h)
-
-macro(compile_benchmark_test name)
-  add_executable(${name} "${name}.cc")
-  target_link_libraries(${name} benchmark ${CMAKE_THREAD_LIBS_INIT})
-endmacro(compile_benchmark_test)
-
-
-macro(compile_output_test name)
-  add_executable(${name} "${name}.cc" output_test.h)
-  target_link_libraries(${name} output_test_helper benchmark
-          ${BENCHMARK_CXX_LIBRARIES} ${CMAKE_THREAD_LIBS_INIT})
-endmacro(compile_output_test)
-
-
-# Demonstration executable
-compile_benchmark_test(benchmark_test)
-add_test(benchmark benchmark_test --benchmark_min_time=0.01)
-
-compile_benchmark_test(filter_test)
-macro(add_filter_test name filter expect)
-  add_test(${name} filter_test --benchmark_min_time=0.01 --benchmark_filter=${filter} ${expect})
-  add_test(${name}_list_only filter_test --benchmark_list_tests --benchmark_filter=${filter} ${expect})
-endmacro(add_filter_test)
-
-add_filter_test(filter_simple "Foo" 3)
-add_filter_test(filter_suffix "BM_.*" 4)
-add_filter_test(filter_regex_all ".*" 5)
-add_filter_test(filter_regex_blank "" 5)
-add_filter_test(filter_regex_none "monkey" 0)
-add_filter_test(filter_regex_wildcard ".*Foo.*" 3)
-add_filter_test(filter_regex_begin "^BM_.*" 4)
-add_filter_test(filter_regex_begin2 "^N" 1)
-add_filter_test(filter_regex_end ".*Ba$" 1)
-
-compile_benchmark_test(options_test)
-add_test(options_benchmarks options_test --benchmark_min_time=0.01)
-
-compile_benchmark_test(basic_test)
-add_test(basic_benchmark basic_test --benchmark_min_time=0.01)
-
-compile_benchmark_test(diagnostics_test)
-add_test(diagnostics_test diagnostics_test --benchmark_min_time=0.01)
-
-compile_benchmark_test(skip_with_error_test)
-add_test(skip_with_error_test skip_with_error_test --benchmark_min_time=0.01)
-
-compile_benchmark_test(donotoptimize_test)
-# Some of the issues with DoNotOptimize only occur when optimization is enabled
-check_cxx_compiler_flag(-O3 BENCHMARK_HAS_O3_FLAG)
-if (BENCHMARK_HAS_O3_FLAG)
-  set_target_properties(donotoptimize_test PROPERTIES COMPILE_FLAGS "-O3")
-endif()
-add_test(donotoptimize_test donotoptimize_test --benchmark_min_time=0.01)
-
-compile_benchmark_test(fixture_test)
-add_test(fixture_test fixture_test --benchmark_min_time=0.01)
-
-compile_benchmark_test(register_benchmark_test)
-add_test(register_benchmark_test register_benchmark_test --benchmark_min_time=0.01)
-
-compile_benchmark_test(map_test)
-add_test(map_test map_test --benchmark_min_time=0.01)
-
-compile_benchmark_test(multiple_ranges_test)
-add_test(multiple_ranges_test multiple_ranges_test --benchmark_min_time=0.01)
-
-compile_output_test(reporter_output_test)
-add_test(reporter_output_test reporter_output_test --benchmark_min_time=0.01)
-
-compile_output_test(user_counters_test)
-add_test(user_counters_test user_counters_test --benchmark_min_time=0.01)
-
-compile_output_test(user_counters_tabular_test)
-add_test(user_counters_tabular_test user_counters_tabular_test --benchmark_counters_tabular=true --benchmark_min_time=0.01)
-
-check_cxx_compiler_flag(-std=c++03 BENCHMARK_HAS_CXX03_FLAG)
-if (BENCHMARK_HAS_CXX03_FLAG)
-  set(CXX03_FLAGS "${CMAKE_CXX_FLAGS}")
-  string(REPLACE "-std=c++11" "-std=c++03" CXX03_FLAGS "${CXX03_FLAGS}")
-  string(REPLACE "-std=c++0x" "-std=c++03" CXX03_FLAGS "${CXX03_FLAGS}")
-
-  compile_benchmark_test(cxx03_test)
-  set_target_properties(cxx03_test
-      PROPERTIES COMPILE_FLAGS "${CXX03_FLAGS}")
-  add_test(cxx03 cxx03_test --benchmark_min_time=0.01)
-endif()
-
-# Attempt to work around flaky test failures when running on Appveyor servers.
-if (DEFINED ENV{APPVEYOR})
-  set(COMPLEXITY_MIN_TIME "0.5")
-else()
-  set(COMPLEXITY_MIN_TIME "0.01")
-endif()
-compile_output_test(complexity_test)
-add_test(complexity_benchmark complexity_test --benchmark_min_time=${COMPLEXITY_MIN_TIME})
-
-# Add the coverage command(s)
-if(CMAKE_BUILD_TYPE)
-  string(TOLOWER ${CMAKE_BUILD_TYPE} CMAKE_BUILD_TYPE_LOWER)
-endif()
-if (${CMAKE_BUILD_TYPE_LOWER} MATCHES "coverage")
-  find_program(GCOV gcov)
-  find_program(LCOV lcov)
-  find_program(GENHTML genhtml)
-  find_program(CTEST ctest)
-  if (GCOV AND LCOV AND GENHTML AND CTEST AND HAVE_CXX_FLAG_COVERAGE)
-    add_custom_command(
-      OUTPUT ${CMAKE_BINARY_DIR}/lcov/index.html
-      COMMAND ${LCOV} -q -z -d .
-      COMMAND ${LCOV} -q --no-external -c -b "${CMAKE_SOURCE_DIR}" -d . -o before.lcov -i
-      COMMAND ${CTEST} --force-new-ctest-process
-      COMMAND ${LCOV} -q --no-external -c -b "${CMAKE_SOURCE_DIR}" -d . -o after.lcov
-      COMMAND ${LCOV} -q -a before.lcov -a after.lcov --output-file final.lcov
-      COMMAND ${LCOV} -q -r final.lcov "'${CMAKE_SOURCE_DIR}/test/*'" -o final.lcov
-      COMMAND ${GENHTML} final.lcov -o lcov --demangle-cpp --sort -p "${CMAKE_BINARY_DIR}" -t benchmark
-      DEPENDS filter_test benchmark_test options_test basic_test fixture_test cxx03_test complexity_test
-      WORKING_DIRECTORY ${CMAKE_BINARY_DIR}
-      COMMENT "Running LCOV"
-    )
-    add_custom_target(coverage
-      DEPENDS ${CMAKE_BINARY_DIR}/lcov/index.html
-      COMMENT "LCOV report at lcov/index.html"
-    )
-    message(STATUS "Coverage command added")
-  else()
-    if (HAVE_CXX_FLAG_COVERAGE)
-      set(CXX_FLAG_COVERAGE_MESSAGE supported)
-    else()
-      set(CXX_FLAG_COVERAGE_MESSAGE unavailable)
-    endif()
-    message(WARNING
-      "Coverage not available:\n"
-      "  gcov: ${GCOV}\n"
-      "  lcov: ${LCOV}\n"
-      "  genhtml: ${GENHTML}\n"
-      "  ctest: ${CTEST}\n"
-      "  --coverage flag: ${CXX_FLAG_COVERAGE_MESSAGE}")
-  endif()
-endif()

Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/basic_test.cc
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/basic_test.cc?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/basic_test.cc (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/basic_test.cc (removed)
@@ -1,99 +0,0 @@
-
-#include "benchmark/benchmark.h"
-
-#define BASIC_BENCHMARK_TEST(x) BENCHMARK(x)->Arg(8)->Arg(512)->Arg(8192)
-
-void BM_empty(benchmark::State& state) {
-  while (state.KeepRunning()) {
-    benchmark::DoNotOptimize(state.iterations());
-  }
-}
-BENCHMARK(BM_empty);
-BENCHMARK(BM_empty)->ThreadPerCpu();
-
-void BM_spin_empty(benchmark::State& state) {
-  while (state.KeepRunning()) {
-    for (int x = 0; x < state.range(0); ++x) {
-      benchmark::DoNotOptimize(x);
-    }
-  }
-}
-BASIC_BENCHMARK_TEST(BM_spin_empty);
-BASIC_BENCHMARK_TEST(BM_spin_empty)->ThreadPerCpu();
-
-void BM_spin_pause_before(benchmark::State& state) {
-  for (int i = 0; i < state.range(0); ++i) {
-    benchmark::DoNotOptimize(i);
-  }
-  while (state.KeepRunning()) {
-    for (int i = 0; i < state.range(0); ++i) {
-      benchmark::DoNotOptimize(i);
-    }
-  }
-}
-BASIC_BENCHMARK_TEST(BM_spin_pause_before);
-BASIC_BENCHMARK_TEST(BM_spin_pause_before)->ThreadPerCpu();
-
-void BM_spin_pause_during(benchmark::State& state) {
-  while (state.KeepRunning()) {
-    state.PauseTiming();
-    for (int i = 0; i < state.range(0); ++i) {
-      benchmark::DoNotOptimize(i);
-    }
-    state.ResumeTiming();
-    for (int i = 0; i < state.range(0); ++i) {
-      benchmark::DoNotOptimize(i);
-    }
-  }
-}
-BASIC_BENCHMARK_TEST(BM_spin_pause_during);
-BASIC_BENCHMARK_TEST(BM_spin_pause_during)->ThreadPerCpu();
-
-void BM_pause_during(benchmark::State& state) {
-  while (state.KeepRunning()) {
-    state.PauseTiming();
-    state.ResumeTiming();
-  }
-}
-BENCHMARK(BM_pause_during);
-BENCHMARK(BM_pause_during)->ThreadPerCpu();
-BENCHMARK(BM_pause_during)->UseRealTime();
-BENCHMARK(BM_pause_during)->UseRealTime()->ThreadPerCpu();
-
-void BM_spin_pause_after(benchmark::State& state) {
-  while (state.KeepRunning()) {
-    for (int i = 0; i < state.range(0); ++i) {
-      benchmark::DoNotOptimize(i);
-    }
-  }
-  for (int i = 0; i < state.range(0); ++i) {
-    benchmark::DoNotOptimize(i);
-  }
-}
-BASIC_BENCHMARK_TEST(BM_spin_pause_after);
-BASIC_BENCHMARK_TEST(BM_spin_pause_after)->ThreadPerCpu();
-
-void BM_spin_pause_before_and_after(benchmark::State& state) {
-  for (int i = 0; i < state.range(0); ++i) {
-    benchmark::DoNotOptimize(i);
-  }
-  while (state.KeepRunning()) {
-    for (int i = 0; i < state.range(0); ++i) {
-      benchmark::DoNotOptimize(i);
-    }
-  }
-  for (int i = 0; i < state.range(0); ++i) {
-    benchmark::DoNotOptimize(i);
-  }
-}
-BASIC_BENCHMARK_TEST(BM_spin_pause_before_and_after);
-BASIC_BENCHMARK_TEST(BM_spin_pause_before_and_after)->ThreadPerCpu();
-
-void BM_empty_stop_start(benchmark::State& state) {
-  while (state.KeepRunning()) {
-  }
-}
-BENCHMARK(BM_empty_stop_start);
-BENCHMARK(BM_empty_stop_start)->ThreadPerCpu();
-
-BENCHMARK_MAIN()
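
All of the benchmarks above use the `while (state.KeepRunning())` loop. The
updated idiom this upgrade enables is the range-based form added in 1.3.0;
sketched here for comparison, BM_empty becomes:

    void BM_empty(benchmark::State& state) {
      // Equivalent to while (state.KeepRunning()); the range-based loop
      // simply hides the KeepRunning() call behind an iterator.
      for (auto _ : state) {
        benchmark::DoNotOptimize(state.iterations());
      }
    }
    BENCHMARK(BM_empty);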

Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/benchmark_test.cc
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/benchmark_test.cc?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/benchmark_test.cc (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/benchmark_test.cc (removed)
@@ -1,240 +0,0 @@
-#include "benchmark/benchmark.h"
-
-#include <assert.h>
-#include <math.h>
-#include <stdint.h>
-
-#include <chrono>
-#include <cstdlib>
-#include <iostream>
-#include <limits>
-#include <list>
-#include <map>
-#include <mutex>
-#include <set>
-#include <sstream>
-#include <string>
-#include <thread>
-#include <utility>
-#include <vector>
-
-#if defined(__GNUC__)
-#define BENCHMARK_NOINLINE __attribute__((noinline))
-#else
-#define BENCHMARK_NOINLINE
-#endif
-
-namespace {
-
-int BENCHMARK_NOINLINE Factorial(uint32_t n) {
-  return (n == 1) ? 1 : n * Factorial(n - 1);
-}
-
-double CalculatePi(int depth) {
-  double pi = 0.0;
-  for (int i = 0; i < depth; ++i) {
-    double numerator = static_cast<double>(((i % 2) * 2) - 1);
-    double denominator = static_cast<double>((2 * i) - 1);
-    pi += numerator / denominator;
-  }
-  return (pi - 1.0) * 4;
-}
-
-std::set<int> ConstructRandomSet(int size) {
-  std::set<int> s;
-  for (int i = 0; i < size; ++i) s.insert(i);
-  return s;
-}
-
-std::mutex test_vector_mu;
-std::vector<int>* test_vector = nullptr;
-
-}  // end namespace
-
-static void BM_Factorial(benchmark::State& state) {
-  int fac_42 = 0;
-  while (state.KeepRunning()) fac_42 = Factorial(8);
-  // Prevent compiler optimizations
-  std::stringstream ss;
-  ss << fac_42;
-  state.SetLabel(ss.str());
-}
-BENCHMARK(BM_Factorial);
-BENCHMARK(BM_Factorial)->UseRealTime();
-
-static void BM_CalculatePiRange(benchmark::State& state) {
-  double pi = 0.0;
-  while (state.KeepRunning()) pi = CalculatePi(state.range(0));
-  std::stringstream ss;
-  ss << pi;
-  state.SetLabel(ss.str());
-}
-BENCHMARK_RANGE(BM_CalculatePiRange, 1, 1024 * 1024);
-
-static void BM_CalculatePi(benchmark::State& state) {
-  static const int depth = 1024;
-  while (state.KeepRunning()) {
-    benchmark::DoNotOptimize(CalculatePi(depth));
-  }
-}
-BENCHMARK(BM_CalculatePi)->Threads(8);
-BENCHMARK(BM_CalculatePi)->ThreadRange(1, 32);
-BENCHMARK(BM_CalculatePi)->ThreadPerCpu();
-
-static void BM_SetInsert(benchmark::State& state) {
-  while (state.KeepRunning()) {
-    state.PauseTiming();
-    std::set<int> data = ConstructRandomSet(state.range(0));
-    state.ResumeTiming();
-    for (int j = 0; j < state.range(1); ++j) data.insert(rand());
-  }
-  state.SetItemsProcessed(state.iterations() * state.range(1));
-  state.SetBytesProcessed(state.iterations() * state.range(1) * sizeof(int));
-}
-BENCHMARK(BM_SetInsert)->Ranges({{1 << 10, 8 << 10}, {1, 10}});
-
-template <typename Container,
-          typename ValueType = typename Container::value_type>
-static void BM_Sequential(benchmark::State& state) {
-  ValueType v = 42;
-  while (state.KeepRunning()) {
-    Container c;
-    for (int i = state.range(0); --i;) c.push_back(v);
-  }
-  const size_t items_processed = state.iterations() * state.range(0);
-  state.SetItemsProcessed(items_processed);
-  state.SetBytesProcessed(items_processed * sizeof(v));
-}
-BENCHMARK_TEMPLATE2(BM_Sequential, std::vector<int>, int)
-    ->Range(1 << 0, 1 << 10);
-BENCHMARK_TEMPLATE(BM_Sequential, std::list<int>)->Range(1 << 0, 1 << 10);
-// Test the variadic version of BENCHMARK_TEMPLATE in C++11 and beyond.
-#if __cplusplus >= 201103L
-BENCHMARK_TEMPLATE(BM_Sequential, std::vector<int>, int)->Arg(512);
-#endif
-
-static void BM_StringCompare(benchmark::State& state) {
-  std::string s1(state.range(0), '-');
-  std::string s2(state.range(0), '-');
-  while (state.KeepRunning()) benchmark::DoNotOptimize(s1.compare(s2));
-}
-BENCHMARK(BM_StringCompare)->Range(1, 1 << 20);
-
-static void BM_SetupTeardown(benchmark::State& state) {
-  if (state.thread_index == 0) {
-    // No need to lock test_vector_mu here as this is running single-threaded.
-    test_vector = new std::vector<int>();
-  }
-  int i = 0;
-  while (state.KeepRunning()) {
-    std::lock_guard<std::mutex> l(test_vector_mu);
-    if (i % 2 == 0)
-      test_vector->push_back(i);
-    else
-      test_vector->pop_back();
-    ++i;
-  }
-  if (state.thread_index == 0) {
-    delete test_vector;
-  }
-}
-BENCHMARK(BM_SetupTeardown)->ThreadPerCpu();
-
-static void BM_LongTest(benchmark::State& state) {
-  double tracker = 0.0;
-  while (state.KeepRunning()) {
-    for (int i = 0; i < state.range(0); ++i)
-      benchmark::DoNotOptimize(tracker += i);
-  }
-}
-BENCHMARK(BM_LongTest)->Range(1 << 16, 1 << 28);
-
-static void BM_ParallelMemset(benchmark::State& state) {
-  int size = state.range(0) / static_cast<int>(sizeof(int));
-  int thread_size = size / state.threads;
-  int from = thread_size * state.thread_index;
-  int to = from + thread_size;
-
-  if (state.thread_index == 0) {
-    test_vector = new std::vector<int>(size);
-  }
-
-  while (state.KeepRunning()) {
-    for (int i = from; i < to; i++) {
-      // No need to lock test_vector_mu as ranges
-      // do not overlap between threads.
-      benchmark::DoNotOptimize(test_vector->at(i) = 1);
-    }
-  }
-
-  if (state.thread_index == 0) {
-    delete test_vector;
-  }
-}
-BENCHMARK(BM_ParallelMemset)->Arg(10 << 20)->ThreadRange(1, 4);
-
-static void BM_ManualTiming(benchmark::State& state) {
-  size_t slept_for = 0;
-  int microseconds = state.range(0);
-  std::chrono::duration<double, std::micro> sleep_duration{
-      static_cast<double>(microseconds)};
-
-  while (state.KeepRunning()) {
-    auto start = std::chrono::high_resolution_clock::now();
-    // Simulate some useful workload with a sleep
-    std::this_thread::sleep_for(
-        std::chrono::duration_cast<std::chrono::nanoseconds>(sleep_duration));
-    auto end = std::chrono::high_resolution_clock::now();
-
-    auto elapsed =
-        std::chrono::duration_cast<std::chrono::duration<double>>(end - start);
-
-    state.SetIterationTime(elapsed.count());
-    slept_for += microseconds;
-  }
-  state.SetItemsProcessed(slept_for);
-}
-BENCHMARK(BM_ManualTiming)->Range(1, 1 << 14)->UseRealTime();
-BENCHMARK(BM_ManualTiming)->Range(1, 1 << 14)->UseManualTime();
-
-#if __cplusplus >= 201103L
-
-template <class... Args>
-void BM_with_args(benchmark::State& state, Args&&...) {
-  while (state.KeepRunning()) {
-  }
-}
-BENCHMARK_CAPTURE(BM_with_args, int_test, 42, 43, 44);
-BENCHMARK_CAPTURE(BM_with_args, string_and_pair_test, std::string("abc"),
-                  std::pair<int, double>(42, 3.8));
-
-void BM_non_template_args(benchmark::State& state, int, double) {
-  while(state.KeepRunning()) {}
-}
-BENCHMARK_CAPTURE(BM_non_template_args, basic_test, 0, 0);
-
-#endif  // __cplusplus >= 201103L
-
-static void BM_DenseThreadRanges(benchmark::State& st) {
-  switch (st.range(0)) {
-    case 1:
-      assert(st.threads == 1 || st.threads == 2 || st.threads == 3);
-      break;
-    case 2:
-      assert(st.threads == 1 || st.threads == 3 || st.threads == 4);
-      break;
-    case 3:
-      assert(st.threads == 5 || st.threads == 8 || st.threads == 11 ||
-             st.threads == 14);
-      break;
-    default:
-      assert(false && "Invalid test case number");
-  }
-  while (st.KeepRunning()) {
-  }
-}
-BENCHMARK(BM_DenseThreadRanges)->Arg(1)->DenseThreadRange(1, 3);
-BENCHMARK(BM_DenseThreadRanges)->Arg(2)->DenseThreadRange(1, 4, 2);
-BENCHMARK(BM_DenseThreadRanges)->Arg(3)->DenseThreadRange(5, 14, 3);
-
-BENCHMARK_MAIN()
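
The essence of the manual-timing pattern exercised by BM_ManualTiming above,
distilled (BM_ManualSketch and its workload are illustrative, not part of the
test):

    #include <chrono>
    #include "benchmark/benchmark.h"

    static void BM_ManualSketch(benchmark::State& state) {
      while (state.KeepRunning()) {
        auto start = std::chrono::high_resolution_clock::now();
        volatile int sink = 0;
        for (int i = 0; i < 1000; ++i) sink += i;  // stand-in workload
        auto end = std::chrono::high_resolution_clock::now();
        // With UseManualTime() the library aggregates these reported values
        // instead of timing the loop body itself.
        state.SetIterationTime(
            std::chrono::duration<double>(end - start).count());
      }
    }
    BENCHMARK(BM_ManualSketch)->UseManualTime();
    BENCHMARK_MAIN()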

Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/complexity_test.cc
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/complexity_test.cc?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/complexity_test.cc (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/complexity_test.cc (removed)
@@ -1,167 +0,0 @@
-#undef NDEBUG
-#include <algorithm>
-#include <cassert>
-#include <cmath>
-#include <cstdlib>
-#include <vector>
-#include "benchmark/benchmark.h"
-#include "output_test.h"
-
-namespace {
-
-#define ADD_COMPLEXITY_CASES(...) \
-  int CONCAT(dummy, __LINE__) = AddComplexityTest(__VA_ARGS__)
-
-int AddComplexityTest(std::string big_o_test_name, std::string rms_test_name,
-                      std::string big_o) {
-  SetSubstitutions({{"%bigo_name", big_o_test_name},
-                    {"%rms_name", rms_test_name},
-                    {"%bigo_str", "[ ]* %float " + big_o},
-                    {"%bigo", big_o},
-                    {"%rms", "[ ]*[0-9]+ %"}});
-  AddCases(
-      TC_ConsoleOut,
-      {{"^%bigo_name %bigo_str %bigo_str[ ]*$"},
-       {"^%bigo_name", MR_Not},  // Assert we we didn't only matched a name.
-       {"^%rms_name %rms %rms[ ]*$", MR_Next}});
-  AddCases(TC_JSONOut, {{"\"name\": \"%bigo_name\",$"},
-                        {"\"cpu_coefficient\": [0-9]+,$", MR_Next},
-                        {"\"real_coefficient\": [0-9]{1,5},$", MR_Next},
-                        {"\"big_o\": \"%bigo\",$", MR_Next},
-                        {"\"time_unit\": \"ns\"$", MR_Next},
-                        {"}", MR_Next},
-                        {"\"name\": \"%rms_name\",$"},
-                        {"\"rms\": %float$", MR_Next},
-                        {"}", MR_Next}});
-  AddCases(TC_CSVOut, {{"^\"%bigo_name\",,%float,%float,%bigo,,,,,$"},
-                       {"^\"%bigo_name\"", MR_Not},
-                       {"^\"%rms_name\",,%float,%float,,,,,,$", MR_Next}});
-  return 0;
-}
-
-}  // end namespace
-
-// ========================================================================= //
-// --------------------------- Testing BigO O(1) --------------------------- //
-// ========================================================================= //
-
-void BM_Complexity_O1(benchmark::State& state) {
-  while (state.KeepRunning()) {
-    for (int i = 0; i < 1024; ++i) {
-      benchmark::DoNotOptimize(&i);
-    }
-  }
-  state.SetComplexityN(state.range(0));
-}
-BENCHMARK(BM_Complexity_O1)->Range(1, 1 << 18)->Complexity(benchmark::o1);
-BENCHMARK(BM_Complexity_O1)->Range(1, 1 << 18)->Complexity();
-BENCHMARK(BM_Complexity_O1)->Range(1, 1 << 18)->Complexity([](int) {
-  return 1.0;
-});
-
-const char *big_o_1_test_name = "BM_Complexity_O1_BigO";
-const char *rms_o_1_test_name = "BM_Complexity_O1_RMS";
-const char *enum_big_o_1 = "\\([0-9]+\\)";
-// FIXME: Tolerate both '(1)' and 'lgN' as output when the complexity is auto
-// deduced.
-// See https://github.com/google/benchmark/issues/272
-const char *auto_big_o_1 = "(\\([0-9]+\\))|(lgN)";
-const char *lambda_big_o_1 = "f\\(N\\)";
-
-// Add enum tests
-ADD_COMPLEXITY_CASES(big_o_1_test_name, rms_o_1_test_name, enum_big_o_1);
-
-// Add auto enum tests
-ADD_COMPLEXITY_CASES(big_o_1_test_name, rms_o_1_test_name, auto_big_o_1);
-
-// Add lambda tests
-ADD_COMPLEXITY_CASES(big_o_1_test_name, rms_o_1_test_name, lambda_big_o_1);
-
-// ========================================================================= //
-// --------------------------- Testing BigO O(N) --------------------------- //
-// ========================================================================= //
-
-std::vector<int> ConstructRandomVector(int size) {
-  std::vector<int> v;
-  v.reserve(size);
-  for (int i = 0; i < size; ++i) {
-    v.push_back(std::rand() % size);
-  }
-  return v;
-}
-
-void BM_Complexity_O_N(benchmark::State& state) {
-  auto v = ConstructRandomVector(state.range(0));
-  const int item_not_in_vector =
-      state.range(0) * 2;  // Test worst case scenario (item not in vector)
-  while (state.KeepRunning()) {
-    benchmark::DoNotOptimize(std::find(v.begin(), v.end(), item_not_in_vector));
-  }
-  state.SetComplexityN(state.range(0));
-}
-BENCHMARK(BM_Complexity_O_N)
-    ->RangeMultiplier(2)
-    ->Range(1 << 10, 1 << 16)
-    ->Complexity(benchmark::oN);
-BENCHMARK(BM_Complexity_O_N)
-    ->RangeMultiplier(2)
-    ->Range(1 << 10, 1 << 16)
-    ->Complexity([](int n) -> double { return n; });
-BENCHMARK(BM_Complexity_O_N)
-    ->RangeMultiplier(2)
-    ->Range(1 << 10, 1 << 16)
-    ->Complexity();
-
-const char *big_o_n_test_name = "BM_Complexity_O_N_BigO";
-const char *rms_o_n_test_name = "BM_Complexity_O_N_RMS";
-const char *enum_auto_big_o_n = "N";
-const char *lambda_big_o_n = "f\\(N\\)";
-
-// Add enum tests
-ADD_COMPLEXITY_CASES(big_o_n_test_name, rms_o_n_test_name, enum_auto_big_o_n);
-
-// Add lambda tests
-ADD_COMPLEXITY_CASES(big_o_n_test_name, rms_o_n_test_name, lambda_big_o_n);
-
-// ========================================================================= //
-// ------------------------- Testing BigO O(N*lgN) ------------------------- //
-// ========================================================================= //
-
-static void BM_Complexity_O_N_log_N(benchmark::State& state) {
-  auto v = ConstructRandomVector(state.range(0));
-  while (state.KeepRunning()) {
-    std::sort(v.begin(), v.end());
-  }
-  state.SetComplexityN(state.range(0));
-}
-BENCHMARK(BM_Complexity_O_N_log_N)
-    ->RangeMultiplier(2)
-    ->Range(1 << 10, 1 << 16)
-    ->Complexity(benchmark::oNLogN);
-BENCHMARK(BM_Complexity_O_N_log_N)
-    ->RangeMultiplier(2)
-    ->Range(1 << 10, 1 << 16)
-    ->Complexity([](int n) { return n * log2(n); });
-BENCHMARK(BM_Complexity_O_N_log_N)
-    ->RangeMultiplier(2)
-    ->Range(1 << 10, 1 << 16)
-    ->Complexity();
-
-const char *big_o_n_lg_n_test_name = "BM_Complexity_O_N_log_N_BigO";
-const char *rms_o_n_lg_n_test_name = "BM_Complexity_O_N_log_N_RMS";
-const char *enum_auto_big_o_n_lg_n = "NlgN";
-const char *lambda_big_o_n_lg_n = "f\\(N\\)";
-
-// Add enum tests
-ADD_COMPLEXITY_CASES(big_o_n_lg_n_test_name, rms_o_n_lg_n_test_name,
-                     enum_auto_big_o_n_lg_n);
-
-// Add lambda tests
-ADD_COMPLEXITY_CASES(big_o_n_lg_n_test_name, rms_o_n_lg_n_test_name,
-                     lambda_big_o_n_lg_n);
-
-// ========================================================================= //
-// --------------------------- TEST CASES END ------------------------------ //
-// ========================================================================= //
-
-int main(int argc, char *argv[]) { RunOutputTests(argc, argv); }
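
Condensed, the user-facing API these cases exercise looks like this (BM_Find
is an illustrative name):

    #include <algorithm>
    #include <vector>
    #include "benchmark/benchmark.h"

    static void BM_Find(benchmark::State& state) {
      std::vector<int> v(state.range(0), 1);
      while (state.KeepRunning())
        benchmark::DoNotOptimize(std::find(v.begin(), v.end(), 0));
      state.SetComplexityN(state.range(0));  // record N for the fit
    }
    BENCHMARK(BM_Find)
        ->RangeMultiplier(2)
        ->Range(1 << 10, 1 << 16)
        ->Complexity(benchmark::oN);  // or ->Complexity() to auto-deduce
    BENCHMARK_MAIN()

The three registration styles tested above (enum, lambda, auto-deduction)
only differ in how the fitting curve is chosen; the benchmark body is the
same.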

Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/cxx03_test.cc
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/cxx03_test.cc?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/cxx03_test.cc (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/cxx03_test.cc (removed)
@@ -1,48 +0,0 @@
-#undef NDEBUG
-#include <cassert>
-#include <cstddef>
-
-#include "benchmark/benchmark.h"
-
-#if __cplusplus >= 201103L
-#error C++11 or greater detected. Should be C++03.
-#endif
-
-void BM_empty(benchmark::State& state) {
-  while (state.KeepRunning()) {
-    volatile std::size_t x = state.iterations();
-    ((void)x);
-  }
-}
-BENCHMARK(BM_empty);
-
-// The new C++11 interface for args/ranges requires initializer list support.
-// Therefore we provide the old interface to support C++03.
-void BM_old_arg_range_interface(benchmark::State& state) {
-  assert((state.range(0) == 1 && state.range(1) == 2) ||
-         (state.range(0) == 5 && state.range(1) == 6));
-  while (state.KeepRunning()) {
-  }
-}
-BENCHMARK(BM_old_arg_range_interface)->ArgPair(1, 2)->RangePair(5, 5, 6, 6);
-
-template <class T, class U>
-void BM_template2(benchmark::State& state) {
-  BM_empty(state);
-}
-BENCHMARK_TEMPLATE2(BM_template2, int, long);
-
-template <class T>
-void BM_template1(benchmark::State& state) {
-  BM_empty(state);
-}
-BENCHMARK_TEMPLATE(BM_template1, long);
-BENCHMARK_TEMPLATE1(BM_template1, int);
-
-void BM_counters(benchmark::State& state) {
-    BM_empty(state);
-    state.counters["Foo"] = 2;
-}
-BENCHMARK(BM_counters);
-
-BENCHMARK_MAIN()
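
Side by side, the C++03 interface tested here and the C++11 replacement that
expresses the same registrations (the second line requires initializer-list
support and is shown only for comparison; it would not compile in this file):

    // C++03: positional pair APIs.
    BENCHMARK(BM_old_arg_range_interface)->ArgPair(1, 2)->RangePair(5, 5, 6, 6);
    // C++11 and later: initializer lists.
    BENCHMARK(BM_old_arg_range_interface)->Args({1, 2})->Ranges({{5, 5}, {6, 6}});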

Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/diagnostics_test.cc
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/diagnostics_test.cc?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/diagnostics_test.cc (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/diagnostics_test.cc (removed)
@@ -1,64 +0,0 @@
-// Testing:
-//   State::PauseTiming()
-//   State::ResumeTiming()
-// Test that CHECK's within these functions diagnose when they are called
-// outside of the KeepRunning() loop.
-//
-// NOTE: Users should NOT include or use src/check.h. This is only done in
-// order to test library internals.
-
-#include <cstdlib>
-#include <stdexcept>
-
-#include "../src/check.h"
-#include "benchmark/benchmark.h"
-
-#if defined(__GNUC__) && !defined(__EXCEPTIONS)
-#define TEST_HAS_NO_EXCEPTIONS
-#endif
-
-void TestHandler() {
-#ifndef TEST_HAS_NO_EXCEPTIONS
-  throw std::logic_error("");
-#else
-  std::abort();
-#endif
-}
-
-void try_invalid_pause_resume(benchmark::State& state) {
-#if !defined(TEST_BENCHMARK_LIBRARY_HAS_NO_ASSERTIONS) && !defined(TEST_HAS_NO_EXCEPTIONS)
-  try {
-    state.PauseTiming();
-    std::abort();
-  } catch (std::logic_error const&) {
-  }
-  try {
-    state.ResumeTiming();
-    std::abort();
-  } catch (std::logic_error const&) {
-  }
-#else
-  (void)state;  // avoid unused warning
-#endif
-}
-
-void BM_diagnostic_test(benchmark::State& state) {
-  static bool called_once = false;
-
-  if (called_once == false) try_invalid_pause_resume(state);
-
-  while (state.KeepRunning()) {
-    benchmark::DoNotOptimize(state.iterations());
-  }
-
-  if (called_once == false) try_invalid_pause_resume(state);
-
-  called_once = true;
-}
-BENCHMARK(BM_diagnostic_test);
-
-int main(int argc, char* argv[]) {
-  benchmark::internal::GetAbortHandler() = &TestHandler;
-  benchmark::Initialize(&argc, argv);
-  benchmark::RunSpecifiedBenchmarks();
-}
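
The invariant this test enforces, shown positively: PauseTiming() and
ResumeTiming() are only legal inside the KeepRunning() loop. A minimal sketch
(the helper functions are hypothetical stand-ins):

    static void ExpensiveSetup() {}  // hypothetical untimed per-iteration setup
    static void TimedWork() {}       // hypothetical timed region

    static void BM_PausedSetup(benchmark::State& state) {
      while (state.KeepRunning()) {
        state.PauseTiming();   // legal here, inside the loop
        ExpensiveSetup();
        state.ResumeTiming();
        TimedWork();
      }
    }
    BENCHMARK(BM_PausedSetup);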

Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/donotoptimize_test.cc
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/donotoptimize_test.cc?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/donotoptimize_test.cc (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/donotoptimize_test.cc (removed)
@@ -1,52 +0,0 @@
-#include "benchmark/benchmark.h"
-
-#include <cstdint>
-
-namespace {
-#if defined(__GNUC__)
-std::uint64_t double_up(const std::uint64_t x) __attribute__((const));
-#endif
-std::uint64_t double_up(const std::uint64_t x) { return x * 2; }
-}
-
-// Using DoNotOptimize on types like BitRef seems to cause a lot of problems
-// with the inline assembly on both GCC and Clang.
-struct BitRef {
-  int index;
-  unsigned char &byte;
-
-public:
-  static BitRef Make() {
-    static unsigned char arr[2] = {};
-    BitRef b(1, arr[0]);
-    return b;
-  }
-private:
-  BitRef(int i, unsigned char& b) : index(i), byte(b) {}
-};
-
-int main(int, char*[]) {
-  // this test verifies compilation of DoNotOptimize() for some types
-
-  char buffer8[8];
-  benchmark::DoNotOptimize(buffer8);
-
-  char buffer20[20];
-  benchmark::DoNotOptimize(buffer20);
-
-  char buffer1024[1024];
-  benchmark::DoNotOptimize(buffer1024);
-  benchmark::DoNotOptimize(&buffer1024[0]);
-
-  int x = 123;
-  benchmark::DoNotOptimize(x);
-  benchmark::DoNotOptimize(&x);
-  benchmark::DoNotOptimize(x += 42);
-
-  benchmark::DoNotOptimize(double_up(x));
-
-  // These tests are to ensure DoNotOptimize() compiles with class types like BitRef.
-  benchmark::DoNotOptimize(BitRef::Make());
-  BitRef lval = BitRef::Make();
-  benchmark::DoNotOptimize(lval);
-}
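
For context, the usual benchmarking role of DoNotOptimize() that these
compile-only checks protect (BM_Increment is an illustrative name):

    #include "benchmark/benchmark.h"

    static void BM_Increment(benchmark::State& state) {
      int x = 0;
      while (state.KeepRunning()) {
        // Without DoNotOptimize the compiler could hoist or delete the work.
        benchmark::DoNotOptimize(x += 1);
      }
    }
    BENCHMARK(BM_Increment);
    BENCHMARK_MAIN()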

Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/filter_test.cc
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/filter_test.cc?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/filter_test.cc (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/filter_test.cc (removed)
@@ -1,104 +0,0 @@
-#include "benchmark/benchmark.h"
-
-#include <cassert>
-#include <cmath>
-#include <cstdint>
-#include <cstdlib>
-
-#include <iostream>
-#include <limits>
-#include <sstream>
-#include <string>
-
-namespace {
-
-class TestReporter : public benchmark::ConsoleReporter {
- public:
-  virtual bool ReportContext(const Context& context) {
-    return ConsoleReporter::ReportContext(context);
-  };
-
-  virtual void ReportRuns(const std::vector<Run>& report) {
-    ++count_;
-    ConsoleReporter::ReportRuns(report);
-  };
-
-  TestReporter() : count_(0) {}
-
-  virtual ~TestReporter() {}
-
-  size_t GetCount() const { return count_; }
-
- private:
-  mutable size_t count_;
-};
-
-}  // end namespace
-
-static void NoPrefix(benchmark::State& state) {
-  while (state.KeepRunning()) {
-  }
-}
-BENCHMARK(NoPrefix);
-
-static void BM_Foo(benchmark::State& state) {
-  while (state.KeepRunning()) {
-  }
-}
-BENCHMARK(BM_Foo);
-
-static void BM_Bar(benchmark::State& state) {
-  while (state.KeepRunning()) {
-  }
-}
-BENCHMARK(BM_Bar);
-
-static void BM_FooBar(benchmark::State& state) {
-  while (state.KeepRunning()) {
-  }
-}
-BENCHMARK(BM_FooBar);
-
-static void BM_FooBa(benchmark::State& state) {
-  while (state.KeepRunning()) {
-  }
-}
-BENCHMARK(BM_FooBa);
-
-int main(int argc, char **argv) {
-  bool list_only = false;
-  for (int i = 0; i < argc; ++i)
-    list_only |= std::string(argv[i]).find("--benchmark_list_tests") !=
-                 std::string::npos;
-
-  benchmark::Initialize(&argc, argv);
-
-  TestReporter test_reporter;
-  const size_t returned_count =
-      benchmark::RunSpecifiedBenchmarks(&test_reporter);
-
-  if (argc == 2) {
-    // Make sure we ran all of the tests
-    std::stringstream ss(argv[1]);
-    size_t expected_return;
-    ss >> expected_return;
-
-    if (returned_count != expected_return) {
-      std::cerr << "ERROR: Expected " << expected_return
-                << " tests to match the filter but returned_count = "
-                << returned_count << std::endl;
-      return -1;
-    }
-
-    const size_t expected_reports = list_only ? 0 : expected_return;
-    const size_t reports_count = test_reporter.GetCount();
-    if (reports_count != expected_reports) {
-      std::cerr << "ERROR: Expected " << expected_reports
-                << " tests to be run but reported_count = " << reports_count
-                << std::endl;
-      return -1;
-    }
-  }
-
-  return 0;
-}
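
How the expectation counts passed by add_filter_test in test/CMakeLists.txt
above line up with the five benchmarks defined here:

    //   filter "Foo"    matches BM_Foo, BM_FooBar, BM_FooBa          -> 3
    //   filter "BM_.*"  matches BM_Foo, BM_Bar, BM_FooBar, BM_FooBa  -> 4
    //   filter ".*"     matches all five, including NoPrefix         -> 5
    //   filter "^N"     matches NoPrefix only                        -> 1
    //   filter ".*Ba$"  matches BM_FooBa only                        -> 1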

Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/fixture_test.cc
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/fixture_test.cc?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/fixture_test.cc (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/fixture_test.cc (removed)
@@ -1,49 +0,0 @@
-
-#include "benchmark/benchmark.h"
-
-#include <cassert>
-#include <memory>
-
-class MyFixture : public ::benchmark::Fixture {
- public:
-  void SetUp(const ::benchmark::State& state) {
-    if (state.thread_index == 0) {
-      assert(data.get() == nullptr);
-      data.reset(new int(42));
-    }
-  }
-
-  void TearDown(const ::benchmark::State& state) {
-    if (state.thread_index == 0) {
-      assert(data.get() != nullptr);
-      data.reset();
-    }
-  }
-
-  ~MyFixture() { assert(data == nullptr); }
-
-  std::unique_ptr<int> data;
-};
-
-BENCHMARK_F(MyFixture, Foo)(benchmark::State &st) {
-  assert(data.get() != nullptr);
-  assert(*data == 42);
-  while (st.KeepRunning()) {
-  }
-}
-
-BENCHMARK_DEFINE_F(MyFixture, Bar)(benchmark::State& st) {
-  if (st.thread_index == 0) {
-    assert(data.get() != nullptr);
-    assert(*data == 42);
-  }
-  while (st.KeepRunning()) {
-    assert(data.get() != nullptr);
-    assert(*data == 42);
-  }
-  st.SetItemsProcessed(st.range(0));
-}
-BENCHMARK_REGISTER_F(MyFixture, Bar)->Arg(42);
-BENCHMARK_REGISTER_F(MyFixture, Bar)->Arg(42)->ThreadPerCpu();
-
-BENCHMARK_MAIN()
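
The two registration styles above differ in when customization happens:
BENCHMARK_F defines and registers in one step with default arguments, while
the BENCHMARK_DEFINE_F / BENCHMARK_REGISTER_F pair splits the two so builder
calls can be chained, as the Bar registrations show. Sketched (Baz is an
illustrative name):

    // One step: registered immediately, no builder calls possible.
    BENCHMARK_F(MyFixture, Foo)(benchmark::State& st) {
      while (st.KeepRunning()) {}
    }

    // Two steps: define first, then register with customization.
    BENCHMARK_DEFINE_F(MyFixture, Baz)(benchmark::State& st) {
      while (st.KeepRunning()) {}
    }
    BENCHMARK_REGISTER_F(MyFixture, Baz)->Arg(42)->ThreadPerCpu();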

Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/map_test.cc
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/map_test.cc?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/map_test.cc (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/map_test.cc (removed)
@@ -1,56 +0,0 @@
-#include "benchmark/benchmark.h"
-
-#include <cstdlib>
-#include <map>
-
-namespace {
-
-std::map<int, int> ConstructRandomMap(int size) {
-  std::map<int, int> m;
-  for (int i = 0; i < size; ++i) {
-    m.insert(std::make_pair(rand() % size, rand() % size));
-  }
-  return m;
-}
-
-}  // namespace
-
-// Basic version.
-static void BM_MapLookup(benchmark::State& state) {
-  const int size = state.range(0);
-  while (state.KeepRunning()) {
-    state.PauseTiming();
-    std::map<int, int> m = ConstructRandomMap(size);
-    state.ResumeTiming();
-    for (int i = 0; i < size; ++i) {
-      benchmark::DoNotOptimize(m.find(rand() % size));
-    }
-  }
-  state.SetItemsProcessed(state.iterations() * size);
-}
-BENCHMARK(BM_MapLookup)->Range(1 << 3, 1 << 12);
-
-// Using fixtures.
-class MapFixture : public ::benchmark::Fixture {
- public:
-  void SetUp(const ::benchmark::State& st) {
-    m = ConstructRandomMap(st.range(0));
-  }
-
-  void TearDown(const ::benchmark::State&) { m.clear(); }
-
-  std::map<int, int> m;
-};
-
-BENCHMARK_DEFINE_F(MapFixture, Lookup)(benchmark::State& state) {
-  const int size = state.range(0);
-  while (state.KeepRunning()) {
-    for (int i = 0; i < size; ++i) {
-      benchmark::DoNotOptimize(m.find(rand() % size));
-    }
-  }
-  state.SetItemsProcessed(state.iterations() * size);
-}
-BENCHMARK_REGISTER_F(MapFixture, Lookup)->Range(1 << 3, 1 << 12);
-
-BENCHMARK_MAIN()

Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/multiple_ranges_test.cc
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/multiple_ranges_test.cc?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/multiple_ranges_test.cc (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/multiple_ranges_test.cc (removed)
@@ -1,74 +0,0 @@
-#include "benchmark/benchmark.h"
-
-#include <cassert>
-#include <set>
-
-class MultipleRangesFixture : public ::benchmark::Fixture {
- public:
-  MultipleRangesFixture()
-      : expectedValues({{1, 3, 5},
-                        {1, 3, 8},
-                        {1, 3, 15},
-                        {2, 3, 5},
-                        {2, 3, 8},
-                        {2, 3, 15},
-                        {1, 4, 5},
-                        {1, 4, 8},
-                        {1, 4, 15},
-                        {2, 4, 5},
-                        {2, 4, 8},
-                        {2, 4, 15},
-                        {1, 7, 5},
-                        {1, 7, 8},
-                        {1, 7, 15},
-                        {2, 7, 5},
-                        {2, 7, 8},
-                        {2, 7, 15},
-                        {7, 6, 3}}) {}
-
-  void SetUp(const ::benchmark::State& state) {
-    std::vector<int> ranges = {state.range(0), state.range(1), state.range(2)};
-
-    assert(expectedValues.find(ranges) != expectedValues.end());
-
-    actualValues.insert(ranges);
-  }
-
-  virtual ~MultipleRangesFixture() {
-    assert(actualValues.size() == expectedValues.size());
-  }
-
-  std::set<std::vector<int>> expectedValues;
-  std::set<std::vector<int>> actualValues;
-};
-
-BENCHMARK_DEFINE_F(MultipleRangesFixture, Empty)(benchmark::State& state) {
-  while (state.KeepRunning()) {
-    int product = state.range(0) * state.range(1) * state.range(2);
-    for (int x = 0; x < product; x++) {
-      benchmark::DoNotOptimize(x);
-    }
-  }
-}
-
-BENCHMARK_REGISTER_F(MultipleRangesFixture, Empty)
-    ->RangeMultiplier(2)
-    ->Ranges({{1, 2}, {3, 7}, {5, 15}})
-    ->Args({7, 6, 3});
-
-void BM_CheckDefaultArgument(benchmark::State& state) {
-  // Test that 'range()' without an argument is the same as 'range(0)'.
-  assert(state.range() == state.range(0));
-  assert(state.range() != state.range(1));
-  while (state.KeepRunning()) {
-  }
-}
-BENCHMARK(BM_CheckDefaultArgument)->Ranges({{1, 5}, {6, 10}});
-
-static void BM_MultipleRanges(benchmark::State& st) {
-  while (st.KeepRunning()) {
-  }
-}
-BENCHMARK(BM_MultipleRanges)->Ranges({{5, 5}, {6, 6}});
-
-BENCHMARK_MAIN()
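
Why expectedValues has exactly 19 entries: Ranges() appears to generate, per
dimension, the endpoints plus the intermediate powers of the RangeMultiplier,
then take the cross product, while Args() appends one exact tuple. As a
comment:

    // Per-dimension values from ->RangeMultiplier(2)->Ranges({{1,2},{3,7},{5,15}}):
    //   {1, 2} x {3, 4, 7} x {5, 8, 15}  ->  2 * 3 * 3 = 18 combinations,
    // plus the exact tuple {7, 6, 3} from ->Args()       ->  19 runs total,
    // matching the 19 entries in expectedValues above.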

Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/options_test.cc
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/options_test.cc?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/options_test.cc (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/options_test.cc (removed)
@@ -1,65 +0,0 @@
-#include "benchmark/benchmark.h"
-#include <chrono>
-#include <thread>
-
-#if defined(NDEBUG)
-#undef NDEBUG
-#endif
-#include <cassert>
-
-void BM_basic(benchmark::State& state) {
-  while (state.KeepRunning()) {
-  }
-}
-
-void BM_basic_slow(benchmark::State& state) {
-  std::chrono::milliseconds sleep_duration(state.range(0));
-  while (state.KeepRunning()) {
-    std::this_thread::sleep_for(
-        std::chrono::duration_cast<std::chrono::nanoseconds>(sleep_duration));
-  }
-}
-
-BENCHMARK(BM_basic);
-BENCHMARK(BM_basic)->Arg(42);
-BENCHMARK(BM_basic_slow)->Arg(10)->Unit(benchmark::kNanosecond);
-BENCHMARK(BM_basic_slow)->Arg(100)->Unit(benchmark::kMicrosecond);
-BENCHMARK(BM_basic_slow)->Arg(1000)->Unit(benchmark::kMillisecond);
-BENCHMARK(BM_basic)->Range(1, 8);
-BENCHMARK(BM_basic)->RangeMultiplier(2)->Range(1, 8);
-BENCHMARK(BM_basic)->DenseRange(10, 15);
-BENCHMARK(BM_basic)->Args({42, 42});
-BENCHMARK(BM_basic)->Ranges({{64, 512}, {64, 512}});
-BENCHMARK(BM_basic)->MinTime(0.7);
-BENCHMARK(BM_basic)->UseRealTime();
-BENCHMARK(BM_basic)->ThreadRange(2, 4);
-BENCHMARK(BM_basic)->ThreadPerCpu();
-BENCHMARK(BM_basic)->Repetitions(3);
-
-void CustomArgs(benchmark::internal::Benchmark* b) {
-  for (int i = 0; i < 10; ++i) {
-    b->Arg(i);
-  }
-}
-
-BENCHMARK(BM_basic)->Apply(CustomArgs);
-
-void BM_explicit_iteration_count(benchmark::State& st) {
-  // Test that benchmarks specified with an explicit iteration count are
-  // only run once.
-  static bool invoked_before = false;
-  assert(!invoked_before);
-  invoked_before = true;
-
-  // Test that the requested iteration count is respected.
-  assert(st.max_iterations == 42);
-  size_t actual_iterations = 0;
-  while (st.KeepRunning())
-    ++actual_iterations;
-  assert(st.iterations() == st.max_iterations);
-  assert(st.iterations() == 42);
-
-}
-BENCHMARK(BM_explicit_iteration_count)->Iterations(42);
-
-BENCHMARK_MAIN()

Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/output_test.h
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/output_test.h?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/output_test.h (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/output_test.h (removed)
@@ -1,201 +0,0 @@
-#ifndef TEST_OUTPUT_TEST_H
-#define TEST_OUTPUT_TEST_H
-
-#undef NDEBUG
-#include <initializer_list>
-#include <memory>
-#include <string>
-#include <utility>
-#include <vector>
-#include <functional>
-#include <sstream>
-
-#include "../src/re.h"
-#include "benchmark/benchmark.h"
-
-#define CONCAT2(x, y) x##y
-#define CONCAT(x, y) CONCAT2(x, y)
-
-#define ADD_CASES(...) int CONCAT(dummy, __LINE__) = ::AddCases(__VA_ARGS__)
-
-#define SET_SUBSTITUTIONS(...) \
-  int CONCAT(dummy, __LINE__) = ::SetSubstitutions(__VA_ARGS__)
-
-enum MatchRules {
-  MR_Default,  // Skip non-matching lines until a match is found.
-  MR_Next,     // Match must occur on the next line.
-  MR_Not  // No line between the current position and the next match matches
-          // the regex
-};
-
-struct TestCase {
-  TestCase(std::string re, int rule = MR_Default);
-
-  std::string regex_str;
-  int match_rule;
-  std::string substituted_regex;
-  std::shared_ptr<benchmark::Regex> regex;
-};
-
-enum TestCaseID {
-  TC_ConsoleOut,
-  TC_ConsoleErr,
-  TC_JSONOut,
-  TC_JSONErr,
-  TC_CSVOut,
-  TC_CSVErr,
-
-  TC_NumID  // PRIVATE
-};
-
-// Add a list of test cases to be run against the output specified by
-// 'ID'
-int AddCases(TestCaseID ID, std::initializer_list<TestCase> il);
-
-// Add or set a list of substitutions to be performed on constructed regexes.
-// See 'output_test_helper.cc' for a list of default substitutions.
-int SetSubstitutions(
-    std::initializer_list<std::pair<std::string, std::string>> il);
-
-// Run all output tests.
-void RunOutputTests(int argc, char* argv[]);
-
-// ========================================================================= //
-// ------------------------- Results checking ------------------------------ //
-// ========================================================================= //
-
-// Call this macro to register a benchmark for checking its results. This
-// should be all that's needed. It subscribes a function to check the (CSV)
-// results of a benchmark. This is done only after verifying that the output
-// strings are really as expected.
-// bm_name_pattern: a name or a regex pattern which will be matched against
-//                  all the benchmark names. Matching benchmarks
-//                  will be the subject of a call to checker_function
-// checker_function: should be of type ResultsCheckFn (see below)
-#define CHECK_BENCHMARK_RESULTS(bm_name_pattern, checker_function) \
-    size_t CONCAT(dummy, __LINE__) = AddChecker(bm_name_pattern, checker_function)
-
-struct Results;
-typedef std::function< void(Results const&) > ResultsCheckFn;
-
-size_t AddChecker(const char* bm_name_pattern, ResultsCheckFn fn);
-
-// Class holding the results of a benchmark.
-// It is passed in calls to checker functions.
-struct Results {
-
-  // the benchmark name
-  std::string name;
-  // the benchmark fields
-  std::map< std::string, std::string > values;
-
-  Results(const std::string& n) : name(n) {}
-
-  int NumThreads() const;
-
-  typedef enum { kCpuTime, kRealTime } BenchmarkTime;
-
-  // get cpu_time or real_time in seconds
-  double GetTime(BenchmarkTime which) const;
-
-  // get the real_time duration of the benchmark in seconds.
-  // it is better to use fuzzy float checks for this, as the float
-  // ASCII formatting is lossy.
-  double DurationRealTime() const {
-    return GetAs< double >("iterations") * GetTime(kRealTime);
-  }
-  // get the cpu_time duration of the benchmark in seconds
-  double DurationCPUTime() const {
-    return GetAs< double >("iterations") * GetTime(kCpuTime);
-  }
-
-  // get the string for a result by name, or nullptr if the name
-  // is not found
-  const std::string* Get(const char* entry_name) const {
-    auto it = values.find(entry_name);
-    if(it == values.end()) return nullptr;
-    return &it->second;
-  }
-
-  // get a result by name, parsed as a specific type.
-  // NOTE: for counters, use GetCounterAs instead.
-  template <class T>
-  T GetAs(const char* entry_name) const;
-
-  // counters are written as doubles, so they have to be read first
-  // as a double, and only then converted to the requested type.
-  template <class T>
-  T GetCounterAs(const char* entry_name) const {
-    double dval = GetAs< double >(entry_name);
-    T tval = static_cast< T >(dval);
-    return tval;
-  }
-};
-
-template <class T>
-T Results::GetAs(const char* entry_name) const {
-  auto *sv = Get(entry_name);
-  CHECK(sv != nullptr && !sv->empty());
-  std::stringstream ss;
-  ss << *sv;
-  T out;
-  ss >> out;
-  CHECK(!ss.fail());
-  return out;
-}
-
-//----------------------------------
-// Macros to help in result checking. Do not use them with arguments causing
-// side-effects.
-
-#define _CHECK_RESULT_VALUE(entry, getfn, var_type, var_name, relationship, value) \
-    CONCAT(CHECK_, relationship)                                        \
-    (entry.getfn< var_type >(var_name), (value)) << "\n"                \
-    << __FILE__ << ":" << __LINE__ << ": " << (entry).name << ":\n"     \
-    << __FILE__ << ":" << __LINE__ << ": "                              \
-    << "expected (" << #var_type << ")" << (var_name)                   \
-    << "=" << (entry).getfn< var_type >(var_name)                       \
-    << " to be " #relationship " to " << (value) << "\n"
-
-// check with tolerance. eps_factor is the tolerance window, which is
-// interpreted relative to value (e.g., 0.1 means 10% of value).
-#define _CHECK_FLOAT_RESULT_VALUE(entry, getfn, var_type, var_name, relationship, value, eps_factor) \
-    CONCAT(CHECK_FLOAT_, relationship)                                  \
-    (entry.getfn< var_type >(var_name), (value), (eps_factor) * (value)) << "\n" \
-    << __FILE__ << ":" << __LINE__ << ": " << (entry).name << ":\n"     \
-    << __FILE__ << ":" << __LINE__ << ": "                              \
-    << "expected (" << #var_type << ")" << (var_name)                   \
-    << "=" << (entry).getfn< var_type >(var_name)                       \
-    << " to be " #relationship " to " << (value) << "\n"                \
-    << __FILE__ << ":" << __LINE__ << ": "                              \
-    << "with tolerance of " << (eps_factor) * (value)                   \
-    << " (" << (eps_factor)*100. << "%), "                              \
-    << "but delta was " << ((entry).getfn< var_type >(var_name) - (value)) \
-    << " (" << (((entry).getfn< var_type >(var_name) - (value))         \
-               /                                                        \
-               ((value) > 1.e-5 || value < -1.e-5 ? value : 1.e-5)*100.) \
-    << "%)"
-
-#define CHECK_RESULT_VALUE(entry, var_type, var_name, relationship, value) \
-    _CHECK_RESULT_VALUE(entry, GetAs, var_type, var_name, relationship, value)
-
-#define CHECK_COUNTER_VALUE(entry, var_type, var_name, relationship, value) \
-    _CHECK_RESULT_VALUE(entry, GetCounterAs, var_type, var_name, relationship, value)
-
-#define CHECK_FLOAT_RESULT_VALUE(entry, var_name, relationship, value, eps_factor) \
-    _CHECK_FLOAT_RESULT_VALUE(entry, GetAs, double, var_name, relationship, value, eps_factor)
-
-#define CHECK_FLOAT_COUNTER_VALUE(entry, var_name, relationship, value, eps_factor) \
-    _CHECK_FLOAT_RESULT_VALUE(entry, GetCounterAs, double, var_name, relationship, value, eps_factor)
-
-// ========================================================================= //
-// --------------------------- Misc Utilities ------------------------------ //
-// ========================================================================= //
-
-namespace {
-
-const char* const dec_re = "[0-9]*[.]?[0-9]+([eE][-+][0-9]+)?";
-
-}  //  end namespace
-
-#endif  // TEST_OUTPUT_TEST_H
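
A sketch of how a test file subscribes a results checker through the hooks
declared above (BM_Mine and CheckMine are illustrative names):

    void CheckMine(Results const& r) {
      // Every benchmark whose name matches the pattern must have run at
      // least one iteration.
      CHECK_RESULT_VALUE(r, int, "iterations", GE, 1);
    }
    CHECK_BENCHMARK_RESULTS("BM_Mine", &CheckMine);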

Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/output_test_helper.cc
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/output_test_helper.cc?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/output_test_helper.cc (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/output_test_helper.cc (removed)
@@ -1,423 +0,0 @@
-#include <iostream>
-#include <map>
-#include <memory>
-#include <sstream>
-#include <cstring>
-
-#include "../src/check.h"  // NOTE: check.h is for internal use only!
-#include "../src/re.h"     // NOTE: re.h is for internal use only
-#include "output_test.h"
-#include "../src/benchmark_api_internal.h"
-
-// ========================================================================= //
-// ------------------------------ Internals -------------------------------- //
-// ========================================================================= //
-namespace internal {
-namespace {
-
-using TestCaseList = std::vector<TestCase>;
-
-// Use a vector because the order in which elements are added matters during iteration.
-// std::map/unordered_map don't guarantee that.
-// For example:
-//  SetSubstitutions({{"%HelloWorld", "Hello"}, {"%Hello", "Hi"}});
-//     Substitute("%HelloWorld") // Always expands to Hello.
-using SubMap = std::vector<std::pair<std::string, std::string>>;
-
-TestCaseList& GetTestCaseList(TestCaseID ID) {
-  // Uses function-local statics to ensure initialization occurs
-  // before first use.
-  static TestCaseList lists[TC_NumID];
-  return lists[ID];
-}
-
-SubMap& GetSubstitutions() {
-  // Don't use 'dec_re' from header because it may not yet be initialized.
-  static std::string safe_dec_re = "[0-9]*[.]?[0-9]+([eE][-+][0-9]+)?";
-  static SubMap map = {
-      {"%float", "[0-9]*[.]?[0-9]+([eE][-+][0-9]+)?"},
-      // human-readable float
-      {"%hrfloat", "[0-9]*[.]?[0-9]+([eE][-+][0-9]+)?[kMGTPEZYmunpfazy]?"},
-      {"%int", "[ ]*[0-9]+"},
-      {" %s ", "[ ]+"},
-      {"%time", "[ ]*[0-9]{1,5} ns"},
-      {"%console_report", "[ ]*[0-9]{1,5} ns [ ]*[0-9]{1,5} ns [ ]*[0-9]+"},
-      {"%console_us_report", "[ ]*[0-9] us [ ]*[0-9] us [ ]*[0-9]+"},
-      {"%csv_header",
-       "name,iterations,real_time,cpu_time,time_unit,bytes_per_second,"
-       "items_per_second,label,error_occurred,error_message"},
-      {"%csv_report", "[0-9]+," + safe_dec_re + "," + safe_dec_re + ",ns,,,,,"},
-      {"%csv_us_report", "[0-9]+," + safe_dec_re + "," + safe_dec_re + ",us,,,,,"},
-      {"%csv_bytes_report",
-       "[0-9]+," + safe_dec_re + "," + safe_dec_re + ",ns," + safe_dec_re + ",,,,"},
-      {"%csv_items_report",
-       "[0-9]+," + safe_dec_re + "," + safe_dec_re + ",ns,," + safe_dec_re + ",,,"},
-      {"%csv_bytes_items_report",
-       "[0-9]+," + safe_dec_re + "," + safe_dec_re + ",ns," + safe_dec_re +
-       "," + safe_dec_re + ",,,"},
-      {"%csv_label_report_begin", "[0-9]+," + safe_dec_re + "," + safe_dec_re + ",ns,,,"},
-      {"%csv_label_report_end", ",,"}};
-  return map;
-}
-
-std::string PerformSubstitutions(std::string source) {
-  SubMap const& subs = GetSubstitutions();
-  using SizeT = std::string::size_type;
-  for (auto const& KV : subs) {
-    SizeT pos;
-    SizeT next_start = 0;
-    while ((pos = source.find(KV.first, next_start)) != std::string::npos) {
-      next_start = pos + KV.second.size();
-      source.replace(pos, KV.first.size(), KV.second);
-    }
-  }
-  return source;
-}
-
-void CheckCase(std::stringstream& remaining_output, TestCase const& TC,
-               TestCaseList const& not_checks) {
-  std::string first_line;
-  bool on_first = true;
-  std::string line;
-  while (remaining_output.eof() == false) {
-    CHECK(remaining_output.good());
-    std::getline(remaining_output, line);
-    if (on_first) {
-      first_line = line;
-      on_first = false;
-    }
-    for (const auto& NC : not_checks) {
-      CHECK(!NC.regex->Match(line))
-          << "Unexpected match for line \"" << line << "\" for MR_Not regex \""
-          << NC.regex_str << "\""
-          << "\n    actual regex string \"" << TC.substituted_regex << "\""
-          << "\n    started matching near: " << first_line;
-    }
-    if (TC.regex->Match(line)) return;
-    CHECK(TC.match_rule != MR_Next)
-        << "Expected line \"" << line << "\" to match regex \"" << TC.regex_str
-        << "\""
-        << "\n    actual regex string \"" << TC.substituted_regex << "\""
-        << "\n    started matching near: " << first_line;
-  }
-  CHECK(remaining_output.eof() == false)
-      << "End of output reached before match for regex \"" << TC.regex_str
-      << "\" was found"
-      << "\n    actual regex string \"" << TC.substituted_regex << "\""
-      << "\n    started matching near: " << first_line;
-}
-
-void CheckCases(TestCaseList const& checks, std::stringstream& output) {
-  std::vector<TestCase> not_checks;
-  for (size_t i = 0; i < checks.size(); ++i) {
-    const auto& TC = checks[i];
-    if (TC.match_rule == MR_Not) {
-      not_checks.push_back(TC);
-      continue;
-    }
-    CheckCase(output, TC, not_checks);
-    not_checks.clear();
-  }
-}
-
-class TestReporter : public benchmark::BenchmarkReporter {
- public:
-  TestReporter(std::vector<benchmark::BenchmarkReporter*> reps)
-      : reporters_(reps) {}
-
-  virtual bool ReportContext(const Context& context) {
-    bool last_ret = false;
-    bool first = true;
-    for (auto rep : reporters_) {
-      bool new_ret = rep->ReportContext(context);
-      CHECK(first || new_ret == last_ret)
-          << "Reports return different values for ReportContext";
-      first = false;
-      last_ret = new_ret;
-    }
-    (void)first;
-    return last_ret;
-  }
-
-  void ReportRuns(const std::vector<Run>& report) {
-    for (auto rep : reporters_) rep->ReportRuns(report);
-  }
-  void Finalize() {
-    for (auto rep : reporters_) rep->Finalize();
-  }
-
- private:
-  std::vector<benchmark::BenchmarkReporter *> reporters_;
-};
-}  // end namespace
-
-}  // end namespace internal
-
-// ========================================================================= //
-// -------------------------- Results checking ----------------------------- //
-// ========================================================================= //
-
-namespace internal {
-
-// Utility class to manage subscribers for checking benchmark results.
-// It works by parsing the CSV output to read the results.
-class ResultsChecker {
- public:
-
-  struct PatternAndFn : public TestCase { // reusing TestCase for its regexes
-    PatternAndFn(const std::string& rx, ResultsCheckFn fn_)
-    : TestCase(rx), fn(fn_) {}
-    ResultsCheckFn fn;
-  };
-
-  std::vector< PatternAndFn > check_patterns;
-  std::vector< Results > results;
-  std::vector< std::string > field_names;
-
-  void Add(const std::string& entry_pattern, ResultsCheckFn fn);
-
-  void CheckResults(std::stringstream& output);
-
- private:
-
-  void SetHeader_(const std::string& csv_header);
-  void SetValues_(const std::string& entry_csv_line);
-
-  std::vector< std::string > SplitCsv_(const std::string& line);
-
-};
-
-// store the static ResultsChecker in a function to prevent initialization
-// order problems
-ResultsChecker& GetResultsChecker() {
-  static ResultsChecker rc;
-  return rc;
-}
-
-// add a results checker for a benchmark
-void ResultsChecker::Add(const std::string& entry_pattern, ResultsCheckFn fn) {
-  check_patterns.emplace_back(entry_pattern, fn);
-}
-
-// check the results of all subscribed benchmarks
-void ResultsChecker::CheckResults(std::stringstream& output) {
-  // first reset the stream to the start
-  {
-    auto start = std::ios::streampos(0);
-    // clear before calling tellg()
-    output.clear();
-    // seek to zero only when needed
-    if(output.tellg() > start) output.seekg(start);
-    // and just in case
-    output.clear();
-  }
-  // now go over every line and publish it to the ResultsChecker
-  std::string line;
-  bool on_first = true;
-  while (output.eof() == false) {
-    CHECK(output.good());
-    std::getline(output, line);
-    if (on_first) {
-      SetHeader_(line); // the first line is the CSV header
-      on_first = false;
-      continue;
-    }
-    SetValues_(line);
-  }
-  // finally we can call the subscribed check functions
-  for(const auto& p : check_patterns) {
-    VLOG(2) << "--------------------------------\n";
-    VLOG(2) << "checking for benchmarks matching " << p.regex_str << "...\n";
-    for(const auto& r : results) {
-      if(!p.regex->Match(r.name)) {
-        VLOG(2) << p.regex_str << " is not matched by " << r.name << "\n";
-        continue;
-      } else {
-        VLOG(2) << p.regex_str << " is matched by " << r.name << "\n";
-      }
-      VLOG(1) << "Checking results of " << r.name << ": ... \n";
-      p.fn(r);
-      VLOG(1) << "Checking results of " << r.name << ": OK.\n";
-    }
-  }
-}
-
-// prepare for the names in this header
-void ResultsChecker::SetHeader_(const std::string& csv_header) {
-  field_names = SplitCsv_(csv_header);
-}
-
-// set the values for a benchmark
-void ResultsChecker::SetValues_(const std::string& entry_csv_line) {
-  if(entry_csv_line.empty()) return; // some lines are empty
-  CHECK(!field_names.empty());
-  auto vals = SplitCsv_(entry_csv_line);
-  CHECK_EQ(vals.size(), field_names.size());
-  results.emplace_back(vals[0]); // vals[0] is the benchmark name
-  auto &entry = results.back();
-  for (size_t i = 1, e = vals.size(); i < e; ++i) {
-    entry.values[field_names[i]] = vals[i];
-  }
-}
-
-// a quick'n'dirty csv splitter (eliminating quotes)
-std::vector< std::string > ResultsChecker::SplitCsv_(const std::string& line) {
-  std::vector< std::string > out;
-  if(line.empty()) return out;
-  if(!field_names.empty()) out.reserve(field_names.size());
-  size_t prev = 0, pos = line.find_first_of(','), curr = pos;
-  while(pos != line.npos) {
-    CHECK(curr > 0);
-    if(line[prev] == '"') ++prev;
-    if(line[curr-1] == '"') --curr;
-    out.push_back(line.substr(prev, curr-prev));
-    prev = pos + 1;
-    pos = line.find_first_of(',', pos + 1);
-    curr = pos;
-  }
-  curr = line.size();
-  if(line[prev] == '"') ++prev;
-  if(line[curr-1] == '"') --curr;
-  out.push_back(line.substr(prev, curr-prev));
-  return out;
-}
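-
-// e.g. (illustrative): SplitCsv_("\"BM_basic\",100,25,ns") yields
-// {"BM_basic", "100", "25", "ns"} -- fields split on ',' with quotes stripped.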
-
-}  // end namespace internal
-
-size_t AddChecker(const char* bm_name, ResultsCheckFn fn)
-{
-  auto &rc = internal::GetResultsChecker();
-  rc.Add(bm_name, fn);
-  return rc.results.size();
-}
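-
-// Illustrative use (hypothetical benchmark name and check function):
-//   void CheckFoo(Results const& r) { CHECK_COUNTER_VALUE(r, int, "foo", EQ, 1); }
-//   size_t n = AddChecker("BM_foo", &CheckFoo);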
-
-int Results::NumThreads() const {
-  auto pos = name.find("/threads:");
-  if(pos == name.npos) return 1;
-  auto end = name.find('/', pos + 9);
-  std::stringstream ss;
-  // substr takes (pos, count); extract just the digits after "/threads:"
-  ss << name.substr(pos + 9, end - (pos + 9));
-  int num = 1;
-  ss >> num;
-  CHECK(!ss.fail());
-  return num;
-}
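-
-// e.g. (illustrative): a Results entry named "BM_error/2/threads:4" yields
-// NumThreads() == 4; a name without "/threads:" yields 1.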
-
-double Results::GetTime(BenchmarkTime which) const {
-  CHECK(which == kCpuTime || which == kRealTime);
-  const char *which_str = which == kCpuTime ? "cpu_time" : "real_time";
-  double val = GetAs< double >(which_str);
-  auto unit = Get("time_unit");
-  CHECK(unit);
-  if(*unit == "ns") {
-    return val * 1.e-9;
-  } else if(*unit == "us") {
-    return val * 1.e-6;
-  } else if(*unit == "ms") {
-    return val * 1.e-3;
-  } else if(*unit == "s") {
-    return val;
-  } else {
-    CHECK(1 == 0) << "unknown time unit: " << *unit;
-    return 0;
-  }
-}
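-
-// e.g. (illustrative): cpu_time == 250 with time_unit "ns" gives
-// GetTime(kCpuTime) == 250 * 1e-9 == 2.5e-7 seconds.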
-
-// ========================================================================= //
-// -------------------------- Public API Definitions------------------------ //
-// ========================================================================= //
-
-TestCase::TestCase(std::string re, int rule)
-    : regex_str(std::move(re)),
-      match_rule(rule),
-      substituted_regex(internal::PerformSubstitutions(regex_str)),
-      regex(std::make_shared<benchmark::Regex>()) {
-  std::string err_str;
-  regex->Init(substituted_regex, &err_str);
-  CHECK(err_str.empty()) << "Could not construct regex \"" << substituted_regex
-                         << "\""
-                         << "\n    originally \"" << regex_str << "\""
-                         << "\n    got error: " << err_str;
-}
-
-int AddCases(TestCaseID ID, std::initializer_list<TestCase> il) {
-  auto& L = internal::GetTestCaseList(ID);
-  L.insert(L.end(), il);
-  return 0;
-}
-
-int SetSubstitutions(
-    std::initializer_list<std::pair<std::string, std::string>> il) {
-  auto& subs = internal::GetSubstitutions();
-  for (auto KV : il) {
-    bool exists = false;
-    KV.second = internal::PerformSubstitutions(KV.second);
-    for (auto& EKV : subs) {
-      if (EKV.first == KV.first) {
-        EKV.second = std::move(KV.second);
-        exists = true;
-        break;
-      }
-    }
-    if (!exists) subs.push_back(std::move(KV));
-  }
-  return 0;
-}
-
-void RunOutputTests(int argc, char* argv[]) {
-  using internal::GetTestCaseList;
-  benchmark::Initialize(&argc, argv);
-  auto options = benchmark::internal::GetOutputOptions(/*force_no_color*/true);
-  benchmark::ConsoleReporter CR(options);
-  benchmark::JSONReporter JR;
-  benchmark::CSVReporter CSVR;
-  struct ReporterTest {
-    const char* name;
-    std::vector<TestCase>& output_cases;
-    std::vector<TestCase>& error_cases;
-    benchmark::BenchmarkReporter& reporter;
-    std::stringstream out_stream;
-    std::stringstream err_stream;
-
-    ReporterTest(const char* n, std::vector<TestCase>& out_tc,
-                 std::vector<TestCase>& err_tc,
-                 benchmark::BenchmarkReporter& br)
-        : name(n), output_cases(out_tc), error_cases(err_tc), reporter(br) {
-      reporter.SetOutputStream(&out_stream);
-      reporter.SetErrorStream(&err_stream);
-    }
-  } TestCases[] = {
-      {"ConsoleReporter", GetTestCaseList(TC_ConsoleOut),
-       GetTestCaseList(TC_ConsoleErr), CR},
-      {"JSONReporter", GetTestCaseList(TC_JSONOut), GetTestCaseList(TC_JSONErr),
-       JR},
-      {"CSVReporter", GetTestCaseList(TC_CSVOut), GetTestCaseList(TC_CSVErr),
-       CSVR},
-  };
-
-  // Create the test reporter and run the benchmarks.
-  std::cout << "Running benchmarks...\n";
-  internal::TestReporter test_rep({&CR, &JR, &CSVR});
-  benchmark::RunSpecifiedBenchmarks(&test_rep);
-
-  for (auto& rep_test : TestCases) {
-    std::string msg = std::string("\nTesting ") + rep_test.name + " Output\n";
-    std::string banner(msg.size() - 1, '-');
-    std::cout << banner << msg << banner << "\n";
-
-    std::cerr << rep_test.err_stream.str();
-    std::cout << rep_test.out_stream.str();
-
-    internal::CheckCases(rep_test.error_cases, rep_test.err_stream);
-    internal::CheckCases(rep_test.output_cases, rep_test.out_stream);
-
-    std::cout << "\n";
-  }
-
-  // now that we know the output is as expected, we can dispatch
-  // the checks to the subscribers.
-  auto &csv = TestCases[2];
-  // comparing the const char* with == would compare pointers; gcc warns about it
-  CHECK(std::strcmp(csv.name, "CSVReporter") == 0);
-  internal::GetResultsChecker().CheckResults(csv.out_stream);
-}

Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/register_benchmark_test.cc
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/register_benchmark_test.cc?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/register_benchmark_test.cc (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/register_benchmark_test.cc (removed)
@@ -1,182 +0,0 @@
-
-#undef NDEBUG
-#include <cassert>
-#include <vector>
-
-#include "../src/check.h"  // NOTE: check.h is for internal use only!
-#include "benchmark/benchmark.h"
-
-namespace {
-
-class TestReporter : public benchmark::ConsoleReporter {
- public:
-  virtual void ReportRuns(const std::vector<Run>& report) {
-    all_runs_.insert(all_runs_.end(), begin(report), end(report));
-    ConsoleReporter::ReportRuns(report);
-  }
-
-  std::vector<Run> all_runs_;
-};
-
-struct TestCase {
-  std::string name;
-  const char* label;
-  // Note: not explicit as we rely on it being converted through ADD_CASES.
-  TestCase(const char* xname) : TestCase(xname, nullptr) {}
-  TestCase(const char* xname, const char* xlabel)
-      : name(xname), label(xlabel) {}
-
-  typedef benchmark::BenchmarkReporter::Run Run;
-
-  void CheckRun(Run const& run) const {
-    CHECK(name == run.benchmark_name) << "expected " << name << " got "
-                                      << run.benchmark_name;
-    if (label) {
-      CHECK(run.report_label == label) << "expected " << label << " got "
-                                       << run.report_label;
-    } else {
-      CHECK(run.report_label == "");
-    }
-  }
-};
-
-std::vector<TestCase> ExpectedResults;
-
-int AddCases(std::initializer_list<TestCase> const& v) {
-  for (auto N : v) {
-    ExpectedResults.push_back(N);
-  }
-  return 0;
-}
-
-#define CONCAT(x, y) CONCAT2(x, y)
-#define CONCAT2(x, y) x##y
-#define ADD_CASES(...) int CONCAT(dummy, __LINE__) = AddCases({__VA_ARGS__})
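-// e.g. (illustrative): on source line 42, ADD_CASES({"BM_function"}) expands
-// to: int dummy42 = AddCases({{"BM_function"}});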
-
-}  // end namespace
-
-typedef benchmark::internal::Benchmark* ReturnVal;
-
-//----------------------------------------------------------------------------//
-// Test RegisterBenchmark with no additional arguments
-//----------------------------------------------------------------------------//
-void BM_function(benchmark::State& state) {
-  while (state.KeepRunning()) {
-  }
-}
-BENCHMARK(BM_function);
-ReturnVal dummy = benchmark::RegisterBenchmark(
-    "BM_function_manual_registration", BM_function);
-ADD_CASES({"BM_function"}, {"BM_function_manual_registration"});
-
-//----------------------------------------------------------------------------//
-// Test RegisterBenchmark with additional arguments
-// Note: GCC versions <= 4.8 do not support this form of RegisterBenchmark
-//       because they reject the variadic pack expansion of lambda captures.
-//----------------------------------------------------------------------------//
-#ifndef BENCHMARK_HAS_NO_VARIADIC_REGISTER_BENCHMARK
-
-void BM_extra_args(benchmark::State& st, const char* label) {
-  while (st.KeepRunning()) {
-  }
-  st.SetLabel(label);
-}
-int RegisterFromFunction() {
-  std::pair<const char*, const char*> cases[] = {
-      {"test1", "One"}, {"test2", "Two"}, {"test3", "Three"}};
-  for (auto const& c : cases)
-    benchmark::RegisterBenchmark(c.first, &BM_extra_args, c.second);
-  return 0;
-}
-int dummy2 = RegisterFromFunction();
-ADD_CASES({"test1", "One"}, {"test2", "Two"}, {"test3", "Three"});
-
-#endif  // BENCHMARK_HAS_NO_VARIADIC_REGISTER_BENCHMARK
-
-//----------------------------------------------------------------------------//
-// Test RegisterBenchmark with different callable types
-//----------------------------------------------------------------------------//
-
-struct CustomFixture {
-  void operator()(benchmark::State& st) {
-    while (st.KeepRunning()) {
-    }
-  }
-};
-
-void TestRegistrationAtRuntime() {
-#ifdef BENCHMARK_HAS_CXX11
-  {
-    CustomFixture fx;
-    benchmark::RegisterBenchmark("custom_fixture", fx);
-    AddCases({"custom_fixture"});
-  }
-#endif
-#ifndef BENCHMARK_HAS_NO_VARIADIC_REGISTER_BENCHMARK
-  {
-    const char* x = "42";
-    auto capturing_lam = [=](benchmark::State& st) {
-      while (st.KeepRunning()) {
-      }
-      st.SetLabel(x);
-    };
-    benchmark::RegisterBenchmark("lambda_benchmark", capturing_lam);
-    AddCases({{"lambda_benchmark", x}});
-  }
-#endif
-}
-
-// Test that all benchmarks, registered either during static init or at
-// runtime, are run and that the results are passed to the reporter.
-void RunTestOne() {
-  TestRegistrationAtRuntime();
-
-  TestReporter test_reporter;
-  benchmark::RunSpecifiedBenchmarks(&test_reporter);
-
-  typedef benchmark::BenchmarkReporter::Run Run;
-  auto EB = ExpectedResults.begin();
-
-  for (Run const& run : test_reporter.all_runs_) {
-    assert(EB != ExpectedResults.end());
-    EB->CheckRun(run);
-    ++EB;
-  }
-  assert(EB == ExpectedResults.end());
-}
-
-// Test that ClearRegisteredBenchmarks() clears all previously registered
-// benchmarks.
-// Also test that new benchmarks can be registered and run afterwards.
-void RunTestTwo() {
-  assert(ExpectedResults.size() != 0 &&
-         "must have at least one registered benchmark");
-  ExpectedResults.clear();
-  benchmark::ClearRegisteredBenchmarks();
-
-  TestReporter test_reporter;
-  size_t num_ran = benchmark::RunSpecifiedBenchmarks(&test_reporter);
-  assert(num_ran == 0);
-  assert(test_reporter.all_runs_.begin() == test_reporter.all_runs_.end());
-
-  TestRegistrationAtRuntime();
-  num_ran = benchmark::RunSpecifiedBenchmarks(&test_reporter);
-  assert(num_ran == ExpectedResults.size());
-
-  typedef benchmark::BenchmarkReporter::Run Run;
-  auto EB = ExpectedResults.begin();
-
-  for (Run const& run : test_reporter.all_runs_) {
-    assert(EB != ExpectedResults.end());
-    EB->CheckRun(run);
-    ++EB;
-  }
-  assert(EB == ExpectedResults.end());
-}
-
-int main(int argc, char* argv[]) {
-  benchmark::Initialize(&argc, argv);
-
-  RunTestOne();
-  RunTestTwo();
-}

Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/reporter_output_test.cc
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/reporter_output_test.cc?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/reporter_output_test.cc (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/reporter_output_test.cc (removed)
@@ -1,256 +0,0 @@
-
-#undef NDEBUG
-#include <utility>
-
-#include "benchmark/benchmark.h"
-#include "output_test.h"
-
-// ========================================================================= //
-// ---------------------- Testing Prologue Output -------------------------- //
-// ========================================================================= //
-
-ADD_CASES(TC_ConsoleOut,
-          {{"^[-]+$", MR_Next},
-           {"^Benchmark %s Time %s CPU %s Iterations$", MR_Next},
-           {"^[-]+$", MR_Next}});
-ADD_CASES(TC_CSVOut, {{"%csv_header"}});
-
-// ========================================================================= //
-// ------------------------ Testing Basic Output --------------------------- //
-// ========================================================================= //
-
-void BM_basic(benchmark::State& state) {
-  while (state.KeepRunning()) {
-  }
-}
-BENCHMARK(BM_basic);
-
-ADD_CASES(TC_ConsoleOut, {{"^BM_basic %console_report$"}});
-ADD_CASES(TC_JSONOut, {{"\"name\": \"BM_basic\",$"},
-                       {"\"iterations\": %int,$", MR_Next},
-                       {"\"real_time\": %int,$", MR_Next},
-                       {"\"cpu_time\": %int,$", MR_Next},
-                       {"\"time_unit\": \"ns\"$", MR_Next},
-                       {"}", MR_Next}});
-ADD_CASES(TC_CSVOut, {{"^\"BM_basic\",%csv_report$"}});
-
-// ========================================================================= //
-// ------------------------ Testing Bytes per Second Output ---------------- //
-// ========================================================================= //
-
-void BM_bytes_per_second(benchmark::State& state) {
-  while (state.KeepRunning()) {
-  }
-  state.SetBytesProcessed(1);
-}
-BENCHMARK(BM_bytes_per_second);
-
-ADD_CASES(TC_ConsoleOut,
-          {{"^BM_bytes_per_second %console_report +%floatB/s$"}});
-ADD_CASES(TC_JSONOut, {{"\"name\": \"BM_bytes_per_second\",$"},
-                       {"\"iterations\": %int,$", MR_Next},
-                       {"\"real_time\": %int,$", MR_Next},
-                       {"\"cpu_time\": %int,$", MR_Next},
-                       {"\"time_unit\": \"ns\",$", MR_Next},
-                       {"\"bytes_per_second\": %int$", MR_Next},
-                       {"}", MR_Next}});
-ADD_CASES(TC_CSVOut, {{"^\"BM_bytes_per_second\",%csv_bytes_report$"}});
-
-// ========================================================================= //
-// ------------------------ Testing Items per Second Output ---------------- //
-// ========================================================================= //
-
-void BM_items_per_second(benchmark::State& state) {
-  while (state.KeepRunning()) {
-  }
-  state.SetItemsProcessed(1);
-}
-BENCHMARK(BM_items_per_second);
-
-ADD_CASES(TC_ConsoleOut,
-          {{"^BM_items_per_second %console_report +%float items/s$"}});
-ADD_CASES(TC_JSONOut, {{"\"name\": \"BM_items_per_second\",$"},
-                       {"\"iterations\": %int,$", MR_Next},
-                       {"\"real_time\": %int,$", MR_Next},
-                       {"\"cpu_time\": %int,$", MR_Next},
-                       {"\"time_unit\": \"ns\",$", MR_Next},
-                       {"\"items_per_second\": %int$", MR_Next},
-                       {"}", MR_Next}});
-ADD_CASES(TC_CSVOut, {{"^\"BM_items_per_second\",%csv_items_report$"}});
-
-// ========================================================================= //
-// ------------------------ Testing Label Output --------------------------- //
-// ========================================================================= //
-
-void BM_label(benchmark::State& state) {
-  while (state.KeepRunning()) {
-  }
-  state.SetLabel("some label");
-}
-BENCHMARK(BM_label);
-
-ADD_CASES(TC_ConsoleOut, {{"^BM_label %console_report some label$"}});
-ADD_CASES(TC_JSONOut, {{"\"name\": \"BM_label\",$"},
-                       {"\"iterations\": %int,$", MR_Next},
-                       {"\"real_time\": %int,$", MR_Next},
-                       {"\"cpu_time\": %int,$", MR_Next},
-                       {"\"time_unit\": \"ns\",$", MR_Next},
-                       {"\"label\": \"some label\"$", MR_Next},
-                       {"}", MR_Next}});
-ADD_CASES(TC_CSVOut, {{"^\"BM_label\",%csv_label_report_begin\"some "
-                       "label\"%csv_label_report_end$"}});
-
-// ========================================================================= //
-// ------------------------ Testing Error Output --------------------------- //
-// ========================================================================= //
-
-void BM_error(benchmark::State& state) {
-  state.SkipWithError("message");
-  while (state.KeepRunning()) {
-  }
-}
-BENCHMARK(BM_error);
-ADD_CASES(TC_ConsoleOut, {{"^BM_error[ ]+ERROR OCCURRED: 'message'$"}});
-ADD_CASES(TC_JSONOut, {{"\"name\": \"BM_error\",$"},
-                       {"\"error_occurred\": true,$", MR_Next},
-                       {"\"error_message\": \"message\",$", MR_Next}});
-
-ADD_CASES(TC_CSVOut, {{"^\"BM_error\",,,,,,,,true,\"message\"$"}});
-
-// ========================================================================= //
-// ---------------------- Testing No Arg Name Output ----------------------- //
-// ========================================================================= //
-
-void BM_no_arg_name(benchmark::State& state) {
-  while (state.KeepRunning()) {
-  }
-}
-BENCHMARK(BM_no_arg_name)->Arg(3);
-ADD_CASES(TC_ConsoleOut, {{"^BM_no_arg_name/3 %console_report$"}});
-ADD_CASES(TC_JSONOut, {{"\"name\": \"BM_no_arg_name/3\",$"}});
-ADD_CASES(TC_CSVOut, {{"^\"BM_no_arg_name/3\",%csv_report$"}});
-
-// ========================================================================= //
-// ------------------------ Testing Arg Name Output ----------------------- //
-// ========================================================================= //
-
-void BM_arg_name(benchmark::State& state) {
-  while (state.KeepRunning()) {
-  }
-}
-BENCHMARK(BM_arg_name)->ArgName("first")->Arg(3);
-ADD_CASES(TC_ConsoleOut, {{"^BM_arg_name/first:3 %console_report$"}});
-ADD_CASES(TC_JSONOut, {{"\"name\": \"BM_arg_name/first:3\",$"}});
-ADD_CASES(TC_CSVOut, {{"^\"BM_arg_name/first:3\",%csv_report$"}});
-
-// ========================================================================= //
-// ------------------------ Testing Arg Names Output ----------------------- //
-// ========================================================================= //
-
-void BM_arg_names(benchmark::State& state) {
-  while (state.KeepRunning()) {
-  }
-}
-BENCHMARK(BM_arg_names)->Args({2, 5, 4})->ArgNames({"first", "", "third"});
-ADD_CASES(TC_ConsoleOut,
-          {{"^BM_arg_names/first:2/5/third:4 %console_report$"}});
-ADD_CASES(TC_JSONOut, {{"\"name\": \"BM_arg_names/first:2/5/third:4\",$"}});
-ADD_CASES(TC_CSVOut, {{"^\"BM_arg_names/first:2/5/third:4\",%csv_report$"}});
-
-// ========================================================================= //
-// ----------------------- Testing Complexity Output ----------------------- //
-// ========================================================================= //
-
-void BM_Complexity_O1(benchmark::State& state) {
-  while (state.KeepRunning()) {
-  }
-  state.SetComplexityN(state.range(0));
-}
-BENCHMARK(BM_Complexity_O1)->Range(1, 1 << 18)->Complexity(benchmark::o1);
-SET_SUBSTITUTIONS({{"%bigOStr", "[ ]* %float \\([0-9]+\\)"},
-                   {"%RMS", "[ ]*[0-9]+ %"}});
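-// Illustrative match: once %float is itself substituted, %bigOStr matches
-// console fragments such as " 3.45 (256)" and %RMS fragments such as " 12 %".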
-ADD_CASES(TC_ConsoleOut, {{"^BM_Complexity_O1_BigO %bigOStr %bigOStr[ ]*$"},
-                          {"^BM_Complexity_O1_RMS %RMS %RMS[ ]*$"}});
-
-// ========================================================================= //
-// ----------------------- Testing Aggregate Output ------------------------ //
-// ========================================================================= //
-
-// Test that non-aggregate data is printed by default
-void BM_Repeat(benchmark::State& state) {
-  while (state.KeepRunning()) {
-  }
-}
-BENCHMARK(BM_Repeat)->Repetitions(3);
-ADD_CASES(TC_ConsoleOut, {{"^BM_Repeat/repeats:3 %console_report$"},
-                          {"^BM_Repeat/repeats:3 %console_report$"},
-                          {"^BM_Repeat/repeats:3 %console_report$"},
-                          {"^BM_Repeat/repeats:3_mean %console_report$"},
-                          {"^BM_Repeat/repeats:3_stddev %console_report$"}});
-ADD_CASES(TC_JSONOut, {{"\"name\": \"BM_Repeat/repeats:3\",$"},
-                       {"\"name\": \"BM_Repeat/repeats:3\",$"},
-                       {"\"name\": \"BM_Repeat/repeats:3\",$"},
-                       {"\"name\": \"BM_Repeat/repeats:3_mean\",$"},
-                       {"\"name\": \"BM_Repeat/repeats:3_stddev\",$"}});
-ADD_CASES(TC_CSVOut, {{"^\"BM_Repeat/repeats:3\",%csv_report$"},
-                      {"^\"BM_Repeat/repeats:3\",%csv_report$"},
-                      {"^\"BM_Repeat/repeats:3\",%csv_report$"},
-                      {"^\"BM_Repeat/repeats:3_mean\",%csv_report$"},
-                      {"^\"BM_Repeat/repeats:3_stddev\",%csv_report$"}});
-
-// Test that a non-repeated test still prints non-aggregate results even when
-// only-aggregate reports have been requested
-void BM_RepeatOnce(benchmark::State& state) {
-  while (state.KeepRunning()) {
-  }
-}
-BENCHMARK(BM_RepeatOnce)->Repetitions(1)->ReportAggregatesOnly();
-ADD_CASES(TC_ConsoleOut, {{"^BM_RepeatOnce/repeats:1 %console_report$"}});
-ADD_CASES(TC_JSONOut, {{"\"name\": \"BM_RepeatOnce/repeats:1\",$"}});
-ADD_CASES(TC_CSVOut, {{"^\"BM_RepeatOnce/repeats:1\",%csv_report$"}});
-
-// Test that non-aggregate data is not reported
-void BM_SummaryRepeat(benchmark::State& state) {
-  while (state.KeepRunning()) {
-  }
-}
-BENCHMARK(BM_SummaryRepeat)->Repetitions(3)->ReportAggregatesOnly();
-ADD_CASES(TC_ConsoleOut,
-          {{".*BM_SummaryRepeat/repeats:3 ", MR_Not},
-           {"^BM_SummaryRepeat/repeats:3_mean %console_report$"},
-           {"^BM_SummaryRepeat/repeats:3_stddev %console_report$"}});
-ADD_CASES(TC_JSONOut, {{".*BM_SummaryRepeat/repeats:3 ", MR_Not},
-                       {"\"name\": \"BM_SummaryRepeat/repeats:3_mean\",$"},
-                       {"\"name\": \"BM_SummaryRepeat/repeats:3_stddev\",$"}});
-ADD_CASES(TC_CSVOut, {{".*BM_SummaryRepeat/repeats:3 ", MR_Not},
-                      {"^\"BM_SummaryRepeat/repeats:3_mean\",%csv_report$"},
-                      {"^\"BM_SummaryRepeat/repeats:3_stddev\",%csv_report$"}});
-
-void BM_RepeatTimeUnit(benchmark::State& state) {
-  while (state.KeepRunning()) {
-  }
-}
-BENCHMARK(BM_RepeatTimeUnit)
-    ->Repetitions(3)
-    ->ReportAggregatesOnly()
-    ->Unit(benchmark::kMicrosecond);
-ADD_CASES(TC_ConsoleOut,
-          {{".*BM_RepeatTimeUnit/repeats:3 ", MR_Not},
-           {"^BM_RepeatTimeUnit/repeats:3_mean %console_us_report$"},
-           {"^BM_RepeatTimeUnit/repeats:3_stddev %console_us_report$"}});
-ADD_CASES(TC_JSONOut, {{".*BM_RepeatTimeUnit/repeats:3 ", MR_Not},
-                       {"\"name\": \"BM_RepeatTimeUnit/repeats:3_mean\",$"},
-                       {"\"time_unit\": \"us\",?$"},
-                       {"\"name\": \"BM_RepeatTimeUnit/repeats:3_stddev\",$"},
-                       {"\"time_unit\": \"us\",?$"}});
-ADD_CASES(TC_CSVOut,
-          {{".*BM_RepeatTimeUnit/repeats:3 ", MR_Not},
-           {"^\"BM_RepeatTimeUnit/repeats:3_mean\",%csv_us_report$"},
-           {"^\"BM_RepeatTimeUnit/repeats:3_stddev\",%csv_us_report$"}});
-
-// ========================================================================= //
-// --------------------------- TEST CASES END ------------------------------ //
-// ========================================================================= //
-
-int main(int argc, char* argv[]) { RunOutputTests(argc, argv); }

Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/skip_with_error_test.cc
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/skip_with_error_test.cc?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/skip_with_error_test.cc (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/skip_with_error_test.cc (removed)
@@ -1,150 +0,0 @@
-
-#undef NDEBUG
-#include <cassert>
-#include <vector>
-
-#include "../src/check.h"  // NOTE: check.h is for internal use only!
-#include "benchmark/benchmark.h"
-
-namespace {
-
-class TestReporter : public benchmark::ConsoleReporter {
- public:
-  virtual bool ReportContext(const Context& context) {
-    return ConsoleReporter::ReportContext(context);
-  };
-
-  virtual void ReportRuns(const std::vector<Run>& report) {
-    all_runs_.insert(all_runs_.end(), begin(report), end(report));
-    ConsoleReporter::ReportRuns(report);
-  }
-
-  TestReporter() {}
-  virtual ~TestReporter() {}
-
-  mutable std::vector<Run> all_runs_;
-};
-
-struct TestCase {
-  std::string name;
-  bool error_occurred;
-  std::string error_message;
-
-  typedef benchmark::BenchmarkReporter::Run Run;
-
-  void CheckRun(Run const& run) const {
-    CHECK(name == run.benchmark_name) << "expected " << name << " got "
-                                      << run.benchmark_name;
-    CHECK(error_occurred == run.error_occurred);
-    CHECK(error_message == run.error_message);
-    if (error_occurred) {
-      // CHECK(run.iterations == 0);
-    } else {
-      CHECK(run.iterations != 0);
-    }
-  }
-};
-
-std::vector<TestCase> ExpectedResults;
-
-int AddCases(const char* base_name, std::initializer_list<TestCase> const& v) {
-  for (auto TC : v) {
-    TC.name = base_name + TC.name;
-    ExpectedResults.push_back(std::move(TC));
-  }
-  return 0;
-}
-
-#define CONCAT(x, y) CONCAT2(x, y)
-#define CONCAT2(x, y) x##y
-#define ADD_CASES(...) int CONCAT(dummy, __LINE__) = AddCases(__VA_ARGS__)
-
-}  // end namespace
-
-void BM_error_before_running(benchmark::State& state) {
-  state.SkipWithError("error message");
-  while (state.KeepRunning()) {
-    assert(false);
-  }
-}
-BENCHMARK(BM_error_before_running);
-ADD_CASES("BM_error_before_running", {{"", true, "error message"}});
-
-void BM_error_during_running(benchmark::State& state) {
-  bool first_iter = true;
-  while (state.KeepRunning()) {
-    if (state.range(0) == 1 && state.thread_index <= (state.threads / 2)) {
-      assert(first_iter);
-      first_iter = false;
-      state.SkipWithError("error message");
-    } else {
-      state.PauseTiming();
-      state.ResumeTiming();
-    }
-  }
-}
-BENCHMARK(BM_error_during_running)->Arg(1)->Arg(2)->ThreadRange(1, 8);
-ADD_CASES("BM_error_during_running", {{"/1/threads:1", true, "error message"},
-                                      {"/1/threads:2", true, "error message"},
-                                      {"/1/threads:4", true, "error message"},
-                                      {"/1/threads:8", true, "error message"},
-                                      {"/2/threads:1", false, ""},
-                                      {"/2/threads:2", false, ""},
-                                      {"/2/threads:4", false, ""},
-                                      {"/2/threads:8", false, ""}});
-
-void BM_error_after_running(benchmark::State& state) {
-  while (state.KeepRunning()) {
-    benchmark::DoNotOptimize(state.iterations());
-  }
-  if (state.thread_index <= (state.threads / 2))
-    state.SkipWithError("error message");
-}
-BENCHMARK(BM_error_after_running)->ThreadRange(1, 8);
-ADD_CASES("BM_error_after_running", {{"/threads:1", true, "error message"},
-                                     {"/threads:2", true, "error message"},
-                                     {"/threads:4", true, "error message"},
-                                     {"/threads:8", true, "error message"}});
-
-void BM_error_while_paused(benchmark::State& state) {
-  bool first_iter = true;
-  while (state.KeepRunning()) {
-    if (state.range(0) == 1 && state.thread_index <= (state.threads / 2)) {
-      assert(first_iter);
-      first_iter = false;
-      state.PauseTiming();
-      state.SkipWithError("error message");
-    } else {
-      state.PauseTiming();
-      state.ResumeTiming();
-    }
-  }
-}
-BENCHMARK(BM_error_while_paused)->Arg(1)->Arg(2)->ThreadRange(1, 8);
-ADD_CASES("BM_error_while_paused", {{"/1/threads:1", true, "error message"},
-                                    {"/1/threads:2", true, "error message"},
-                                    {"/1/threads:4", true, "error message"},
-                                    {"/1/threads:8", true, "error message"},
-                                    {"/2/threads:1", false, ""},
-                                    {"/2/threads:2", false, ""},
-                                    {"/2/threads:4", false, ""},
-                                    {"/2/threads:8", false, ""}});
-
-int main(int argc, char* argv[]) {
-  benchmark::Initialize(&argc, argv);
-
-  TestReporter test_reporter;
-  benchmark::RunSpecifiedBenchmarks(&test_reporter);
-
-  typedef benchmark::BenchmarkReporter::Run Run;
-  auto EB = ExpectedResults.begin();
-
-  for (Run const& run : test_reporter.all_runs_) {
-    assert(EB != ExpectedResults.end());
-    EB->CheckRun(run);
-    ++EB;
-  }
-  assert(EB == ExpectedResults.end());
-
-  return 0;
-}

Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/user_counters_tabular_test.cc
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/user_counters_tabular_test.cc?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/user_counters_tabular_test.cc (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/user_counters_tabular_test.cc (removed)
@@ -1,250 +0,0 @@
-
-#undef NDEBUG
-
-#include "benchmark/benchmark.h"
-#include "output_test.h"
-
-// @todo: <jpmag> this checks the full output at once; the rule for
-// CounterSet1 was failing because it was not matching "^[-]+$".
-// @todo: <jpmag> check that the counters are vertically aligned.
-ADD_CASES(TC_ConsoleOut, {
-// keeping these lines long improves readability, so:
-// clang-format off
-    {"^[-]+$", MR_Next},
-    {"^Benchmark %s Time %s CPU %s Iterations %s Bar %s Bat %s Baz %s Foo %s Frob %s Lob$", MR_Next},
-    {"^[-]+$", MR_Next},
-    {"^BM_Counters_Tabular/threads:%int %console_report [ ]*%hrfloat [ ]*%hrfloat [ ]*%hrfloat [ ]*%hrfloat [ ]*%hrfloat [ ]*%hrfloat$", MR_Next},
-    {"^BM_Counters_Tabular/threads:%int %console_report [ ]*%hrfloat [ ]*%hrfloat [ ]*%hrfloat [ ]*%hrfloat [ ]*%hrfloat [ ]*%hrfloat$", MR_Next},
-    {"^BM_Counters_Tabular/threads:%int %console_report [ ]*%hrfloat [ ]*%hrfloat [ ]*%hrfloat [ ]*%hrfloat [ ]*%hrfloat [ ]*%hrfloat$", MR_Next},
-    {"^BM_Counters_Tabular/threads:%int %console_report [ ]*%hrfloat [ ]*%hrfloat [ ]*%hrfloat [ ]*%hrfloat [ ]*%hrfloat [ ]*%hrfloat$", MR_Next},
-    {"^BM_Counters_Tabular/threads:%int %console_report [ ]*%hrfloat [ ]*%hrfloat [ ]*%hrfloat [ ]*%hrfloat [ ]*%hrfloat [ ]*%hrfloat$", MR_Next},
-    {"^BM_CounterRates_Tabular/threads:%int %console_report [ ]*%hrfloat/s [ ]*%hrfloat/s [ ]*%hrfloat/s [ ]*%hrfloat/s [ ]*%hrfloat/s [ ]*%hrfloat/s$", MR_Next},
-    {"^BM_CounterRates_Tabular/threads:%int %console_report [ ]*%hrfloat/s [ ]*%hrfloat/s [ ]*%hrfloat/s [ ]*%hrfloat/s [ ]*%hrfloat/s [ ]*%hrfloat/s$", MR_Next},
-    {"^BM_CounterRates_Tabular/threads:%int %console_report [ ]*%hrfloat/s [ ]*%hrfloat/s [ ]*%hrfloat/s [ ]*%hrfloat/s [ ]*%hrfloat/s [ ]*%hrfloat/s$", MR_Next},
-    {"^BM_CounterRates_Tabular/threads:%int %console_report [ ]*%hrfloat/s [ ]*%hrfloat/s [ ]*%hrfloat/s [ ]*%hrfloat/s [ ]*%hrfloat/s [ ]*%hrfloat/s$", MR_Next},
-    {"^BM_CounterRates_Tabular/threads:%int %console_report [ ]*%hrfloat/s [ ]*%hrfloat/s [ ]*%hrfloat/s [ ]*%hrfloat/s [ ]*%hrfloat/s [ ]*%hrfloat/s$", MR_Next},
-    {"^[-]+$", MR_Next},
-    {"^Benchmark %s Time %s CPU %s Iterations %s Bar %s Baz %s Foo$", MR_Next},
-    {"^[-]+$", MR_Next},
-    {"^BM_CounterSet0_Tabular/threads:%int %console_report [ ]*%hrfloat [ ]*%hrfloat [ ]*%hrfloat$", MR_Next},
-    {"^BM_CounterSet0_Tabular/threads:%int %console_report [ ]*%hrfloat [ ]*%hrfloat [ ]*%hrfloat$", MR_Next},
-    {"^BM_CounterSet0_Tabular/threads:%int %console_report [ ]*%hrfloat [ ]*%hrfloat [ ]*%hrfloat$", MR_Next},
-    {"^BM_CounterSet0_Tabular/threads:%int %console_report [ ]*%hrfloat [ ]*%hrfloat [ ]*%hrfloat$", MR_Next},
-    {"^BM_CounterSet0_Tabular/threads:%int %console_report [ ]*%hrfloat [ ]*%hrfloat [ ]*%hrfloat$", MR_Next},
-    {"^BM_CounterSet1_Tabular/threads:%int %console_report [ ]*%hrfloat [ ]*%hrfloat [ ]*%hrfloat$", MR_Next},
-    {"^BM_CounterSet1_Tabular/threads:%int %console_report [ ]*%hrfloat [ ]*%hrfloat [ ]*%hrfloat$", MR_Next},
-    {"^BM_CounterSet1_Tabular/threads:%int %console_report [ ]*%hrfloat [ ]*%hrfloat [ ]*%hrfloat$", MR_Next},
-    {"^BM_CounterSet1_Tabular/threads:%int %console_report [ ]*%hrfloat [ ]*%hrfloat [ ]*%hrfloat$", MR_Next},
-    {"^BM_CounterSet1_Tabular/threads:%int %console_report [ ]*%hrfloat [ ]*%hrfloat [ ]*%hrfloat$", MR_Next},
-    {"^[-]+$", MR_Next},
-    {"^Benchmark %s Time %s CPU %s Iterations %s Bat %s Baz %s Foo$", MR_Next},
-    {"^[-]+$", MR_Next},
-    {"^BM_CounterSet2_Tabular/threads:%int %console_report [ ]*%hrfloat [ ]*%hrfloat [ ]*%hrfloat$", MR_Next},
-    {"^BM_CounterSet2_Tabular/threads:%int %console_report [ ]*%hrfloat [ ]*%hrfloat [ ]*%hrfloat$", MR_Next},
-    {"^BM_CounterSet2_Tabular/threads:%int %console_report [ ]*%hrfloat [ ]*%hrfloat [ ]*%hrfloat$", MR_Next},
-    {"^BM_CounterSet2_Tabular/threads:%int %console_report [ ]*%hrfloat [ ]*%hrfloat [ ]*%hrfloat$", MR_Next},
-    {"^BM_CounterSet2_Tabular/threads:%int %console_report [ ]*%hrfloat [ ]*%hrfloat [ ]*%hrfloat$"},
-// clang-format on
-});
-ADD_CASES(TC_CSVOut, {{"%csv_header,"
-                       "\"Bar\",\"Bat\",\"Baz\",\"Foo\",\"Frob\",\"Lob\""}});
-
-// ========================================================================= //
-// ------------------------- Tabular Counters Output ----------------------- //
-// ========================================================================= //
-
-void BM_Counters_Tabular(benchmark::State& state) {
-  while (state.KeepRunning()) {
-  }
-  namespace bm = benchmark;
-  state.counters.insert({
-    {"Foo",  { 1, bm::Counter::kAvgThreads}},
-    {"Bar",  { 2, bm::Counter::kAvgThreads}},
-    {"Baz",  { 4, bm::Counter::kAvgThreads}},
-    {"Bat",  { 8, bm::Counter::kAvgThreads}},
-    {"Frob", {16, bm::Counter::kAvgThreads}},
-    {"Lob",  {32, bm::Counter::kAvgThreads}},
-  });
-}
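-// Note (illustrative): kAvgThreads averages a counter over the running
-// threads, so each thread contributing Foo=1 still reports Foo=1 overall,
-// which is what CheckTabular below verifies.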
-BENCHMARK(BM_Counters_Tabular)->ThreadRange(1, 16);
-ADD_CASES(TC_JSONOut, {{"\"name\": \"BM_Counters_Tabular/threads:%int\",$"},
-                       {"\"iterations\": %int,$", MR_Next},
-                       {"\"real_time\": %int,$", MR_Next},
-                       {"\"cpu_time\": %int,$", MR_Next},
-                       {"\"time_unit\": \"ns\",$", MR_Next},
-                       {"\"Bar\": %float,$", MR_Next},
-                       {"\"Bat\": %float,$", MR_Next},
-                       {"\"Baz\": %float,$", MR_Next},
-                       {"\"Foo\": %float,$", MR_Next},
-                       {"\"Frob\": %float,$", MR_Next},
-                       {"\"Lob\": %float$", MR_Next},
-                       {"}", MR_Next}});
-ADD_CASES(TC_CSVOut, {{"^\"BM_Counters_Tabular/threads:%int\",%csv_report,"
-                       "%float,%float,%float,%float,%float,%float$"}});
-// VS2013 does not allow this function to be passed as a lambda argument
-// to CHECK_BENCHMARK_RESULTS()
-void CheckTabular(Results const& e) {
-  CHECK_COUNTER_VALUE(e, int, "Foo", EQ, 1);
-  CHECK_COUNTER_VALUE(e, int, "Bar", EQ, 2);
-  CHECK_COUNTER_VALUE(e, int, "Baz", EQ, 4);
-  CHECK_COUNTER_VALUE(e, int, "Bat", EQ, 8);
-  CHECK_COUNTER_VALUE(e, int, "Frob", EQ, 16);
-  CHECK_COUNTER_VALUE(e, int, "Lob", EQ, 32);
-}
-CHECK_BENCHMARK_RESULTS("BM_Counters_Tabular/threads:%int", &CheckTabular);
-
-// ========================================================================= //
-// -------------------- Tabular+Rate Counters Output ----------------------- //
-// ========================================================================= //
-
-void BM_CounterRates_Tabular(benchmark::State& state) {
-  while (state.KeepRunning()) {
-  }
-  namespace bm = benchmark;
-  state.counters.insert({
-    {"Foo",  { 1, bm::Counter::kAvgThreadsRate}},
-    {"Bar",  { 2, bm::Counter::kAvgThreadsRate}},
-    {"Baz",  { 4, bm::Counter::kAvgThreadsRate}},
-    {"Bat",  { 8, bm::Counter::kAvgThreadsRate}},
-    {"Frob", {16, bm::Counter::kAvgThreadsRate}},
-    {"Lob",  {32, bm::Counter::kAvgThreadsRate}},
-  });
-}
-BENCHMARK(BM_CounterRates_Tabular)->ThreadRange(1, 16);
-ADD_CASES(TC_JSONOut, {{"\"name\": \"BM_CounterRates_Tabular/threads:%int\",$"},
-                       {"\"iterations\": %int,$", MR_Next},
-                       {"\"real_time\": %int,$", MR_Next},
-                       {"\"cpu_time\": %int,$", MR_Next},
-                       {"\"time_unit\": \"ns\",$", MR_Next},
-                       {"\"Bar\": %float,$", MR_Next},
-                       {"\"Bat\": %float,$", MR_Next},
-                       {"\"Baz\": %float,$", MR_Next},
-                       {"\"Foo\": %float,$", MR_Next},
-                       {"\"Frob\": %float,$", MR_Next},
-                       {"\"Lob\": %float$", MR_Next},
-                       {"}", MR_Next}});
-ADD_CASES(TC_CSVOut, {{"^\"BM_CounterRates_Tabular/threads:%int\",%csv_report,"
-                       "%float,%float,%float,%float,%float,%float$"}});
-// VS2013 does not allow this function to be passed as a lambda argument
-// to CHECK_BENCHMARK_RESULTS()
-void CheckTabularRate(Results const& e) {
-  double t = e.DurationCPUTime();
-  CHECK_FLOAT_COUNTER_VALUE(e, "Foo", EQ, 1./t, 0.001);
-  CHECK_FLOAT_COUNTER_VALUE(e, "Bar", EQ, 2./t, 0.001);
-  CHECK_FLOAT_COUNTER_VALUE(e, "Baz", EQ, 4./t, 0.001);
-  CHECK_FLOAT_COUNTER_VALUE(e, "Bat", EQ, 8./t, 0.001);
-  CHECK_FLOAT_COUNTER_VALUE(e, "Frob", EQ, 16./t, 0.001);
-  CHECK_FLOAT_COUNTER_VALUE(e, "Lob", EQ, 32./t, 0.001);
-}
-CHECK_BENCHMARK_RESULTS("BM_CounterRates_Tabular/threads:%int",
-                        &CheckTabularRate);
-
-// ========================================================================= //
-// ------------------------- Tabular Counters Output ----------------------- //
-// ========================================================================= //
-
-// set only some of the counters
-void BM_CounterSet0_Tabular(benchmark::State& state) {
-  while (state.KeepRunning()) {
-  }
-  namespace bm = benchmark;
-  state.counters.insert({
-    {"Foo", {10, bm::Counter::kAvgThreads}},
-    {"Bar", {20, bm::Counter::kAvgThreads}},
-    {"Baz", {40, bm::Counter::kAvgThreads}},
-  });
-}
-BENCHMARK(BM_CounterSet0_Tabular)->ThreadRange(1, 16);
-ADD_CASES(TC_JSONOut, {{"\"name\": \"BM_CounterSet0_Tabular/threads:%int\",$"},
-                       {"\"iterations\": %int,$", MR_Next},
-                       {"\"real_time\": %int,$", MR_Next},
-                       {"\"cpu_time\": %int,$", MR_Next},
-                       {"\"time_unit\": \"ns\",$", MR_Next},
-                       {"\"Bar\": %float,$", MR_Next},
-                       {"\"Baz\": %float,$", MR_Next},
-                       {"\"Foo\": %float$", MR_Next},
-                       {"}", MR_Next}});
-ADD_CASES(TC_CSVOut, {{"^\"BM_CounterSet0_Tabular/threads:%int\",%csv_report,"
-                       "%float,,%float,%float,,"}});
-// VS2013 does not allow this function to be passed as a lambda argument
-// to CHECK_BENCHMARK_RESULTS()
-void CheckSet0(Results const& e) {
-  CHECK_COUNTER_VALUE(e, int, "Foo", EQ, 10);
-  CHECK_COUNTER_VALUE(e, int, "Bar", EQ, 20);
-  CHECK_COUNTER_VALUE(e, int, "Baz", EQ, 40);
-}
-CHECK_BENCHMARK_RESULTS("BM_CounterSet0_Tabular", &CheckSet0);
-
-// again, with a different set of values.
-void BM_CounterSet1_Tabular(benchmark::State& state) {
-  while (state.KeepRunning()) {
-  }
-  namespace bm = benchmark;
-  state.counters.insert({
-    {"Foo", {15, bm::Counter::kAvgThreads}},
-    {"Bar", {25, bm::Counter::kAvgThreads}},
-    {"Baz", {45, bm::Counter::kAvgThreads}},
-  });
-}
-BENCHMARK(BM_CounterSet1_Tabular)->ThreadRange(1, 16);
-ADD_CASES(TC_JSONOut, {{"\"name\": \"BM_CounterSet1_Tabular/threads:%int\",$"},
-                       {"\"iterations\": %int,$", MR_Next},
-                       {"\"real_time\": %int,$", MR_Next},
-                       {"\"cpu_time\": %int,$", MR_Next},
-                       {"\"time_unit\": \"ns\",$", MR_Next},
-                       {"\"Bar\": %float,$", MR_Next},
-                       {"\"Baz\": %float,$", MR_Next},
-                       {"\"Foo\": %float$", MR_Next},
-                       {"}", MR_Next}});
-ADD_CASES(TC_CSVOut, {{"^\"BM_CounterSet1_Tabular/threads:%int\",%csv_report,"
-                       "%float,,%float,%float,,"}});
-// VS2013 does not allow this function to be passed as a lambda argument
-// to CHECK_BENCHMARK_RESULTS()
-void CheckSet1(Results const& e) {
-  CHECK_COUNTER_VALUE(e, int, "Foo", EQ, 15);
-  CHECK_COUNTER_VALUE(e, int, "Bar", EQ, 25);
-  CHECK_COUNTER_VALUE(e, int, "Baz", EQ, 45);
-}
-CHECK_BENCHMARK_RESULTS("BM_CounterSet1_Tabular/threads:%int", &CheckSet1);
-
-// ========================================================================= //
-// ------------------------- Tabular Counters Output ----------------------- //
-// ========================================================================= //
-
-// set only some of the counters, using a different subset this time.
-void BM_CounterSet2_Tabular(benchmark::State& state) {
-  while (state.KeepRunning()) {
-  }
-  namespace bm = benchmark;
-  state.counters.insert({
-    {"Foo", {10, bm::Counter::kAvgThreads}},
-    {"Bat", {30, bm::Counter::kAvgThreads}},
-    {"Baz", {40, bm::Counter::kAvgThreads}},
-  });
-}
-BENCHMARK(BM_CounterSet2_Tabular)->ThreadRange(1, 16);
-ADD_CASES(TC_JSONOut, {{"\"name\": \"BM_CounterSet2_Tabular/threads:%int\",$"},
-                       {"\"iterations\": %int,$", MR_Next},
-                       {"\"real_time\": %int,$", MR_Next},
-                       {"\"cpu_time\": %int,$", MR_Next},
-                       {"\"time_unit\": \"ns\",$", MR_Next},
-                       {"\"Bat\": %float,$", MR_Next},
-                       {"\"Baz\": %float,$", MR_Next},
-                       {"\"Foo\": %float$", MR_Next},
-                       {"}", MR_Next}});
-ADD_CASES(TC_CSVOut, {{"^\"BM_CounterSet2_Tabular/threads:%int\",%csv_report,"
-                       ",%float,%float,%float,,"}});
-// VS2013 does not allow this function to be passed as a lambda argument
-// to CHECK_BENCHMARK_RESULTS()
-void CheckSet2(Results const& e) {
-  CHECK_COUNTER_VALUE(e, int, "Foo", EQ, 10);
-  CHECK_COUNTER_VALUE(e, int, "Bat", EQ, 30);
-  CHECK_COUNTER_VALUE(e, int, "Baz", EQ, 40);
-}
-CHECK_BENCHMARK_RESULTS("BM_CounterSet2_Tabular", &CheckSet2);
-
-// ========================================================================= //
-// --------------------------- TEST CASES END ------------------------------ //
-// ========================================================================= //
-
-int main(int argc, char* argv[]) { RunOutputTests(argc, argv); }

Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/user_counters_test.cc
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/user_counters_test.cc?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/user_counters_test.cc (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/user_counters_test.cc (removed)
@@ -1,217 +0,0 @@
-
-#undef NDEBUG
-
-#include "benchmark/benchmark.h"
-#include "output_test.h"
-
-// ========================================================================= //
-// ---------------------- Testing Prologue Output -------------------------- //
-// ========================================================================= //
-
-ADD_CASES(TC_ConsoleOut,
-          {{"^[-]+$", MR_Next},
-           {"^Benchmark %s Time %s CPU %s Iterations UserCounters...$", MR_Next},
-           {"^[-]+$", MR_Next}});
-ADD_CASES(TC_CSVOut, {{"%csv_header,\"bar\",\"foo\""}});
-
-// ========================================================================= //
-// ------------------------- Simple Counters Output ------------------------ //
-// ========================================================================= //
-
-void BM_Counters_Simple(benchmark::State& state) {
-  while (state.KeepRunning()) {
-  }
-  state.counters["foo"] = 1;
-  state.counters["bar"] = 2 * (double)state.iterations();
-}
-BENCHMARK(BM_Counters_Simple);
-ADD_CASES(TC_ConsoleOut, {{"^BM_Counters_Simple %console_report bar=%hrfloat foo=%hrfloat$"}});
-ADD_CASES(TC_JSONOut, {{"\"name\": \"BM_Counters_Simple\",$"},
-                       {"\"iterations\": %int,$", MR_Next},
-                       {"\"real_time\": %int,$", MR_Next},
-                       {"\"cpu_time\": %int,$", MR_Next},
-                       {"\"time_unit\": \"ns\",$", MR_Next},
-                       {"\"bar\": %float,$", MR_Next},
-                       {"\"foo\": %float$", MR_Next},
-                       {"}", MR_Next}});
-ADD_CASES(TC_CSVOut, {{"^\"BM_Counters_Simple\",%csv_report,%float,%float$"}});
-// VS2013 does not allow this function to be passed as a lambda argument
-// to CHECK_BENCHMARK_RESULTS()
-void CheckSimple(Results const& e) {
-  double its = e.GetAs< double >("iterations");
-  CHECK_COUNTER_VALUE(e, int, "foo", EQ, 1);
-  // check that the value of bar is within 0.1% of the expected value
-  CHECK_FLOAT_COUNTER_VALUE(e, "bar", EQ, 2.*its, 0.001);
-}
-CHECK_BENCHMARK_RESULTS("BM_Counters_Simple", &CheckSimple);
-
-// ========================================================================= //
-// --------------------- Counters+Items+Bytes/s Output --------------------- //
-// ========================================================================= //
-
-namespace { int num_calls1 = 0; }
-void BM_Counters_WithBytesAndItemsPSec(benchmark::State& state) {
-  while (state.KeepRunning()) {
-  }
-  state.counters["foo"] = 1;
-  state.counters["bar"] = ++num_calls1;
-  state.SetBytesProcessed(364);
-  state.SetItemsProcessed(150);
-}
-BENCHMARK(BM_Counters_WithBytesAndItemsPSec);
-ADD_CASES(TC_ConsoleOut,
-          {{"^BM_Counters_WithBytesAndItemsPSec %console_report "
-            "bar=%hrfloat foo=%hrfloat +%hrfloatB/s +%hrfloat items/s$"}});
-ADD_CASES(TC_JSONOut, {{"\"name\": \"BM_Counters_WithBytesAndItemsPSec\",$"},
-                       {"\"iterations\": %int,$", MR_Next},
-                       {"\"real_time\": %int,$", MR_Next},
-                       {"\"cpu_time\": %int,$", MR_Next},
-                       {"\"time_unit\": \"ns\",$", MR_Next},
-                       {"\"bytes_per_second\": %int,$", MR_Next},
-                       {"\"items_per_second\": %int,$", MR_Next},
-                       {"\"bar\": %float,$", MR_Next},
-                       {"\"foo\": %float$", MR_Next},
-                       {"}", MR_Next}});
-ADD_CASES(TC_CSVOut, {{"^\"BM_Counters_WithBytesAndItemsPSec\","
-                       "%csv_bytes_items_report,%float,%float$"}});
-// VS2013 does not allow this function to be passed as a lambda argument
-// to CHECK_BENCHMARK_RESULTS()
-void CheckBytesAndItemsPSec(Results const& e) {
-  double t = e.DurationCPUTime(); // this (and not real time) is the time used
-  CHECK_COUNTER_VALUE(e, int, "foo", EQ, 1);
-  CHECK_COUNTER_VALUE(e, int, "bar", EQ, num_calls1);
-  // check that the values are within 0.1% of the expected values
-  CHECK_FLOAT_RESULT_VALUE(e, "bytes_per_second", EQ, 364./t, 0.001);
-  CHECK_FLOAT_RESULT_VALUE(e, "items_per_second", EQ, 150./t, 0.001);
-}
-CHECK_BENCHMARK_RESULTS("BM_Counters_WithBytesAndItemsPSec",
-                        &CheckBytesAndItemsPSec);
-
-// ========================================================================= //
-// ------------------------- Rate Counters Output -------------------------- //
-// ========================================================================= //
-
-void BM_Counters_Rate(benchmark::State& state) {
-  while (state.KeepRunning()) {
-  }
-  namespace bm = benchmark;
-  state.counters["foo"] = bm::Counter{1, bm::Counter::kIsRate};
-  state.counters["bar"] = bm::Counter{2, bm::Counter::kIsRate};
-}
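-// Note (illustrative): kIsRate divides the counter's value by the elapsed
-// CPU time, so "foo"=1 is reported as roughly 1/t per second, as CheckRate
-// below verifies.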
-BENCHMARK(BM_Counters_Rate);
-ADD_CASES(TC_ConsoleOut, {{"^BM_Counters_Rate %console_report bar=%hrfloat/s foo=%hrfloat/s$"}});
-ADD_CASES(TC_JSONOut, {{"\"name\": \"BM_Counters_Rate\",$"},
-                       {"\"iterations\": %int,$", MR_Next},
-                       {"\"real_time\": %int,$", MR_Next},
-                       {"\"cpu_time\": %int,$", MR_Next},
-                       {"\"time_unit\": \"ns\",$", MR_Next},
-                       {"\"bar\": %float,$", MR_Next},
-                       {"\"foo\": %float$", MR_Next},
-                       {"}", MR_Next}});
-ADD_CASES(TC_CSVOut, {{"^\"BM_Counters_Rate\",%csv_report,%float,%float$"}});
-// VS2013 does not allow this function to be passed as a lambda argument
-// to CHECK_BENCHMARK_RESULTS()
-void CheckRate(Results const& e) {
-  double t = e.DurationCPUTime(); // this (and not real time) is the time used
-  // check that the values are within 0.1% of the expected values
-  CHECK_FLOAT_COUNTER_VALUE(e, "foo", EQ, 1./t, 0.001);
-  CHECK_FLOAT_COUNTER_VALUE(e, "bar", EQ, 2./t, 0.001);
-}
-CHECK_BENCHMARK_RESULTS("BM_Counters_Rate", &CheckRate);
-
-// ========================================================================= //
-// ------------------------- Thread Counters Output ------------------------ //
-// ========================================================================= //
-
-void BM_Counters_Threads(benchmark::State& state) {
-  while (state.KeepRunning()) {
-  }
-  state.counters["foo"] = 1;
-  state.counters["bar"] = 2;
-}
-BENCHMARK(BM_Counters_Threads)->ThreadRange(1, 8);
-ADD_CASES(TC_ConsoleOut, {{"^BM_Counters_Threads/threads:%int %console_report bar=%hrfloat foo=%hrfloat$"}});
-ADD_CASES(TC_JSONOut, {{"\"name\": \"BM_Counters_Threads/threads:%int\",$"},
-                       {"\"iterations\": %int,$", MR_Next},
-                       {"\"real_time\": %int,$", MR_Next},
-                       {"\"cpu_time\": %int,$", MR_Next},
-                       {"\"time_unit\": \"ns\",$", MR_Next},
-                       {"\"bar\": %float,$", MR_Next},
-                       {"\"foo\": %float$", MR_Next},
-                       {"}", MR_Next}});
-ADD_CASES(TC_CSVOut, {{"^\"BM_Counters_Threads/threads:%int\",%csv_report,%float,%float$"}});
-// VS2013 does not allow this function to be passed as a lambda argument
-// to CHECK_BENCHMARK_RESULTS()
-void CheckThreads(Results const& e) {
-  CHECK_COUNTER_VALUE(e, int, "foo", EQ, e.NumThreads());
-  CHECK_COUNTER_VALUE(e, int, "bar", EQ, 2 * e.NumThreads());
-}
-CHECK_BENCHMARK_RESULTS("BM_Counters_Threads/threads:%int", &CheckThreads);
-
-// ========================================================================= //
-// ---------------------- ThreadAvg Counters Output ------------------------ //
-// ========================================================================= //
-
-void BM_Counters_AvgThreads(benchmark::State& state) {
-  while (state.KeepRunning()) {
-  }
-  namespace bm = benchmark;
-  state.counters["foo"] = bm::Counter{1, bm::Counter::kAvgThreads};
-  state.counters["bar"] = bm::Counter{2, bm::Counter::kAvgThreads};
-}
-BENCHMARK(BM_Counters_AvgThreads)->ThreadRange(1, 8);
-ADD_CASES(TC_ConsoleOut, {{"^BM_Counters_AvgThreads/threads:%int %console_report bar=%hrfloat foo=%hrfloat$"}});
-ADD_CASES(TC_JSONOut, {{"\"name\": \"BM_Counters_AvgThreads/threads:%int\",$"},
-                       {"\"iterations\": %int,$", MR_Next},
-                       {"\"real_time\": %int,$", MR_Next},
-                       {"\"cpu_time\": %int,$", MR_Next},
-                       {"\"time_unit\": \"ns\",$", MR_Next},
-                       {"\"bar\": %float,$", MR_Next},
-                       {"\"foo\": %float$", MR_Next},
-                       {"}", MR_Next}});
-ADD_CASES(TC_CSVOut, {{"^\"BM_Counters_AvgThreads/threads:%int\",%csv_report,%float,%float$"}});
-// VS2013 does not allow this function to be passed as a lambda argument
-// to CHECK_BENCHMARK_RESULTS()
-void CheckAvgThreads(Results const& e) {
-  CHECK_COUNTER_VALUE(e, int, "foo", EQ, 1);
-  CHECK_COUNTER_VALUE(e, int, "bar", EQ, 2);
-}
-CHECK_BENCHMARK_RESULTS("BM_Counters_AvgThreads/threads:%int",
-                        &CheckAvgThreads);
-
-// ========================================================================= //
-// ---------------------- ThreadAvg Counters Output ------------------------ //
-// ========================================================================= //
-
-void BM_Counters_AvgThreadsRate(benchmark::State& state) {
-  while (state.KeepRunning()) {
-  }
-  namespace bm = benchmark;
-  state.counters["foo"] = bm::Counter{1, bm::Counter::kAvgThreadsRate};
-  state.counters["bar"] = bm::Counter{2, bm::Counter::kAvgThreadsRate};
-}
-BENCHMARK(BM_Counters_AvgThreadsRate)->ThreadRange(1, 8);
-ADD_CASES(TC_ConsoleOut, {{"^BM_Counters_AvgThreadsRate/threads:%int %console_report bar=%hrfloat/s foo=%hrfloat/s$"}});
-ADD_CASES(TC_JSONOut, {{"\"name\": \"BM_Counters_AvgThreadsRate/threads:%int\",$"},
-                       {"\"iterations\": %int,$", MR_Next},
-                       {"\"real_time\": %int,$", MR_Next},
-                       {"\"cpu_time\": %int,$", MR_Next},
-                       {"\"time_unit\": \"ns\",$", MR_Next},
-                       {"\"bar\": %float,$", MR_Next},
-                       {"\"foo\": %float$", MR_Next},
-                       {"}", MR_Next}});
-ADD_CASES(TC_CSVOut, {{"^\"BM_Counters_AvgThreadsRate/threads:%int\",%csv_report,%float,%float$"}});
-// VS2013 does not allow this function to be passed as a lambda argument
-// to CHECK_BENCHMARK_RESULTS()
-void CheckAvgThreadsRate(Results const& e) {
-  CHECK_FLOAT_COUNTER_VALUE(e, "foo", EQ, 1./e.DurationCPUTime(), 0.001);
-  CHECK_FLOAT_COUNTER_VALUE(e, "bar", EQ, 2./e.DurationCPUTime(), 0.001);
-}
-CHECK_BENCHMARK_RESULTS("BM_Counters_AvgThreadsRate/threads:%int",
-                        &CheckAvgThreadsRate);
-
-// ========================================================================= //
-// --------------------------- TEST CASES END ------------------------------ //
-// ========================================================================= //
-
-int main(int argc, char* argv[]) { RunOutputTests(argc, argv); }
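
Taken together, the checks in the removed test pin down the counter semantics it exercised: a plain counter is summed across threads, `kAvgThreads` divides that sum by the thread count, and `kAvgThreadsRate` additionally divides by the measured CPU time. With `T` threads each contributing a value `c`, the expectations above amount to the following (a sketch inferred from the CHECK_* assertions, not a statement of the library's documented API):

```latex
\text{plain: } \sum_{i=1}^{T} c = T c, \qquad
\text{kAvgThreads: } \frac{T c}{T} = c, \qquad
\text{kAvgThreadsRate: } \frac{c}{t_{\mathrm{CPU}}}
```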

Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/tools/compare_bench.py
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/tools/compare_bench.py?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/tools/compare_bench.py (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/tools/compare_bench.py (removed)
@@ -1,68 +0,0 @@
-#!/usr/bin/env python
-"""
-compare_bench.py - Compare two benchmarks or their results and report the
-                   difference.
-"""
-import argparse
-from argparse import ArgumentParser
-import sys
-import gbench
-from gbench import util, report
-from gbench.util import *
-
-def check_inputs(in1, in2, flags):
-    """
-    Perform checking on the user provided inputs and diagnose any abnormalities
-    """
-    in1_kind, in1_err = classify_input_file(in1)
-    in2_kind, in2_err = classify_input_file(in2)
-    output_file = find_benchmark_flag('--benchmark_out=', flags)
-    output_type = find_benchmark_flag('--benchmark_out_format=', flags)
-    if in1_kind == IT_Executable and in2_kind == IT_Executable and output_file:
-        print(("WARNING: '--benchmark_out=%s' will be passed to both "
-              "benchmarks causing it to be overwritten") % output_file)
-    if in1_kind == IT_JSON and in2_kind == IT_JSON and len(flags) > 0:
-        print("WARNING: passing --benchmark flags has no effect since both "
-              "inputs are JSON")
-    if output_type is not None and output_type != 'json':
-        print(("ERROR: passing '--benchmark_out_format=%s' to 'compare_bench.py`"
-              " is not supported.") % output_type)
-        sys.exit(1)
-
-
-def main():
-    parser = ArgumentParser(
-        description='compare the results of two benchmarks')
-    parser.add_argument(
-        'test1', metavar='test1', type=str, nargs=1,
-        help='A benchmark executable or JSON output file')
-    parser.add_argument(
-        'test2', metavar='test2', type=str, nargs=1,
-        help='A benchmark executable or JSON output file')
-    # FIXME this is a dummy argument which will never actually match
-    # any --benchmark flags but it helps generate a better usage message
-    parser.add_argument(
-        'benchmark_options', metavar='benchmark_option', nargs='*',
-        help='Arguments to pass when running benchmark executables'
-    )
-    args, unknown_args = parser.parse_known_args()
-    # Parse the command line flags
-    test1 = args.test1[0]
-    test2 = args.test2[0]
-    if args.benchmark_options:
-        print("Unrecognized positional argument arguments: '%s'"
-              % args.benchmark_options)
-        exit(1)
-    benchmark_options = unknown_args
-    check_inputs(test1, test2, benchmark_options)
-    # Run the benchmarks and report the results
-    json1 = gbench.util.run_or_load_benchmark(test1, benchmark_options)
-    json2 = gbench.util.run_or_load_benchmark(test2, benchmark_options)
-    output_lines = gbench.report.generate_difference_report(json1, json2)
-    print('Comparing %s to %s' % (test1, test2))
-    for ln in output_lines:
-        print(ln)
-
-
-if __name__ == '__main__':
-    main()
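
For reference, the control flow of the removed script reduces to two calls into the `gbench` helpers whose removals follow below; a minimal sketch of the same comparison, assuming the `gbench` package is importable from the working directory:

```python
# Sketch of the compare_bench.py flow, using the gbench helpers removed below.
import gbench.report
import gbench.util


def compare(test1, test2, benchmark_flags=None):
    benchmark_flags = benchmark_flags or []
    # Run each benchmark executable, or load its JSON output directly.
    json1 = gbench.util.run_or_load_benchmark(test1, benchmark_flags)
    json2 = gbench.util.run_or_load_benchmark(test2, benchmark_flags)
    # Print the per-benchmark difference table.
    print('Comparing %s to %s' % (test1, test2))
    for line in gbench.report.generate_difference_report(json1, json2):
        print(line)
```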

Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/tools/gbench/Inputs/test1_run1.json
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/tools/gbench/Inputs/test1_run1.json?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/tools/gbench/Inputs/test1_run1.json (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/tools/gbench/Inputs/test1_run1.json (removed)
@@ -1,60 +0,0 @@
-{
-  "context": {
-    "date": "2016-08-02 17:44:46",
-    "num_cpus": 4,
-    "mhz_per_cpu": 4228,
-    "cpu_scaling_enabled": false,
-    "library_build_type": "release"
-  },
-  "benchmarks": [
-    {
-      "name": "BM_SameTimes",
-      "iterations": 1000,
-      "real_time": 10,
-      "cpu_time": 10,
-      "time_unit": "ns"
-    },
-    {
-      "name": "BM_2xFaster",
-      "iterations": 1000,
-      "real_time": 50,
-      "cpu_time": 50,
-      "time_unit": "ns"
-    },
-    {
-      "name": "BM_2xSlower",
-      "iterations": 1000,
-      "real_time": 50,
-      "cpu_time": 50,
-      "time_unit": "ns"
-    },
-    {
-      "name": "BM_10PercentFaster",
-      "iterations": 1000,
-      "real_time": 100,
-      "cpu_time": 100,
-      "time_unit": "ns"
-    },
-    {
-      "name": "BM_10PercentSlower",
-      "iterations": 1000,
-      "real_time": 100,
-      "cpu_time": 100,
-      "time_unit": "ns"
-    },
-    {
-      "name": "BM_100xSlower",
-      "iterations": 1000,
-      "real_time": 100,
-      "cpu_time": 100,
-      "time_unit": "ns"
-    },
-    {
-      "name": "BM_100xFaster",
-      "iterations": 1000,
-      "real_time": 10000,
-      "cpu_time": 10000,
-      "time_unit": "ns"
-    }
-  ]
-}
\ No newline at end of file
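
This fixture pairs with test1_run2.json below: each BM_* name encodes the expected change between the two runs (BM_2xFaster drops from 50 ns to 25 ns, BM_100xSlower climbs from 100 ns to 10000 ns, and so on). A quick way to eyeball the pairs, assuming both files sit in a local Inputs/ directory:

```python
import json

with open('Inputs/test1_run1.json') as f1, open('Inputs/test1_run2.json') as f2:
    run1, run2 = json.load(f1), json.load(f2)

# Index the second run by benchmark name, then print old vs. new times.
new_times = {b['name']: b['real_time'] for b in run2['benchmarks']}
for b in run1['benchmarks']:
    print('%-20s %8g ns -> %8g ns' % (b['name'], b['real_time'],
                                      new_times[b['name']]))
```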

Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/tools/gbench/Inputs/test1_run2.json
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/tools/gbench/Inputs/test1_run2.json?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/tools/gbench/Inputs/test1_run2.json (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/tools/gbench/Inputs/test1_run2.json (removed)
@@ -1,60 +0,0 @@
-{
-  "context": {
-    "date": "2016-08-02 17:44:46",
-    "num_cpus": 4,
-    "mhz_per_cpu": 4228,
-    "cpu_scaling_enabled": false,
-    "library_build_type": "release"
-  },
-  "benchmarks": [
-    {
-      "name": "BM_SameTimes",
-      "iterations": 1000,
-      "real_time": 10,
-      "cpu_time": 10,
-      "time_unit": "ns"
-    },
-    {
-      "name": "BM_2xFaster",
-      "iterations": 1000,
-      "real_time": 25,
-      "cpu_time": 25,
-      "time_unit": "ns"
-    },
-    {
-      "name": "BM_2xSlower",
-      "iterations": 20833333,
-      "real_time": 100,
-      "cpu_time": 100,
-      "time_unit": "ns"
-    },
-    {
-      "name": "BM_10PercentFaster",
-      "iterations": 1000,
-      "real_time": 90,
-      "cpu_time": 90,
-      "time_unit": "ns"
-    },
-    {
-      "name": "BM_10PercentSlower",
-      "iterations": 1000,
-      "real_time": 110,
-      "cpu_time": 110,
-      "time_unit": "ns"
-    },
-    {
-      "name": "BM_100xSlower",
-      "iterations": 1000,
-      "real_time": 10000,
-      "cpu_time": 10000,
-      "time_unit": "ns"
-    },
-    {
-      "name": "BM_100xFaster",
-      "iterations": 1000,
-      "real_time": 100,
-      "cpu_time": 100,
-      "time_unit": "ns"
-    }
-  ]
-}
\ No newline at end of file

Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/tools/gbench/__init__.py
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/tools/gbench/__init__.py?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/tools/gbench/__init__.py (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/tools/gbench/__init__.py (removed)
@@ -1,8 +0,0 @@
-"""Google Benchmark tooling"""
-
-__author__ = 'Eric Fiselier'
-__email__ = 'eric at efcs.ca'
-__versioninfo__ = (0, 5, 0)
-__version__ = '.'.join(str(v) for v in __versioninfo__) + 'dev'
-
-__all__ = []

Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/tools/gbench/report.py
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/tools/gbench/report.py?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/tools/gbench/report.py (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/tools/gbench/report.py (removed)
@@ -1,146 +0,0 @@
-"""report.py - Utilities for reporting statistics about benchmark results
-"""
-import os
-
-class BenchmarkColor(object):
-    def __init__(self, name, code):
-        self.name = name
-        self.code = code
-
-    def __repr__(self):
-        return '%s%r' % (self.__class__.__name__,
-                         (self.name, self.code))
-
-    def __format__(self, format):
-        return self.code
-
-# Benchmark Colors Enumeration
-BC_NONE = BenchmarkColor('NONE', '')
-BC_MAGENTA = BenchmarkColor('MAGENTA', '\033[95m')
-BC_CYAN = BenchmarkColor('CYAN', '\033[96m')
-BC_OKBLUE = BenchmarkColor('OKBLUE', '\033[94m')
-BC_HEADER = BenchmarkColor('HEADER', '\033[92m')
-BC_WARNING = BenchmarkColor('WARNING', '\033[93m')
-BC_WHITE = BenchmarkColor('WHITE', '\033[97m')
-BC_FAIL = BenchmarkColor('FAIL', '\033[91m')
-BC_ENDC = BenchmarkColor('ENDC', '\033[0m')
-BC_BOLD = BenchmarkColor('BOLD', '\033[1m')
-BC_UNDERLINE = BenchmarkColor('UNDERLINE', '\033[4m')
-
-def color_format(use_color, fmt_str, *args, **kwargs):
-    """
-    Return the result of 'fmt_str.format(*args, **kwargs)' after transforming
-    'args' and 'kwargs' according to the value of 'use_color'. If 'use_color'
-    is False then all color codes in 'args' and 'kwargs' are replaced with
-    the empty string.
-    """
-    assert use_color is True or use_color is False
-    if not use_color:
-        args = [arg if not isinstance(arg, BenchmarkColor) else BC_NONE
-                for arg in args]
-        kwargs = {key: arg if not isinstance(arg, BenchmarkColor) else BC_NONE
-                  for key, arg in kwargs.items()}
-    return fmt_str.format(*args, **kwargs)
-
-
-def find_longest_name(benchmark_list):
-    """
-    Return the length of the longest benchmark name in a given list of
-    benchmark JSON objects
-    """
-    longest_name = 1
-    for bc in benchmark_list:
-        if len(bc['name']) > longest_name:
-            longest_name = len(bc['name'])
-    return longest_name
-
-
-def calculate_change(old_val, new_val):
-    """
-    Return a float representing the decimal change between old_val and new_val.
-    """
-    if old_val == 0 and new_val == 0:
-        return 0.0
-    if old_val == 0:
-        return float(new_val - old_val) / (float(old_val + new_val) / 2)
-    return float(new_val - old_val) / abs(old_val)
-
-
-def generate_difference_report(json1, json2, use_color=True):
-    """
-    Calculate and report the difference between each test of two benchmark
-    runs specified as 'json1' and 'json2'.
-    """
-    first_col_width = find_longest_name(json1['benchmarks']) + 5
-    def find_test(name):
-        for b in json2['benchmarks']:
-            if b['name'] == name:
-                return b
-        return None
-    first_line = "{:<{}s}     Time           CPU           Old           New".format(
-        'Benchmark', first_col_width)
-    output_strs = [first_line, '-' * len(first_line)]
-
-    gen = (bn for bn in json1['benchmarks'] if 'real_time' in bn and 'cpu_time' in bn)
-    for bn in gen:
-        other_bench = find_test(bn['name'])
-        if not other_bench:
-            continue
-
-        def get_color(res):
-            if res > 0.05:
-                return BC_FAIL
-            elif res > -0.07:
-                return BC_WHITE
-            else:
-                return BC_CYAN
-        fmt_str = "{}{:<{}s}{endc}{}{:+9.2f}{endc}{}{:+14.2f}{endc}{:14d}{:14d}"
-        tres = calculate_change(bn['real_time'], other_bench['real_time'])
-        cpures = calculate_change(bn['cpu_time'], other_bench['cpu_time'])
-        output_strs += [color_format(use_color, fmt_str,
-            BC_HEADER, bn['name'], first_col_width,
-            get_color(tres), tres, get_color(cpures), cpures,
-            bn['cpu_time'], other_bench['cpu_time'],
-            endc=BC_ENDC)]
-    return output_strs
-
-###############################################################################
-# Unit tests
-
-import unittest
-
-class TestReportDifference(unittest.TestCase):
-    def load_results(self):
-        import json
-        testInputs = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'Inputs')
-        testOutput1 = os.path.join(testInputs, 'test1_run1.json')
-        testOutput2 = os.path.join(testInputs, 'test1_run2.json')
-        with open(testOutput1, 'r') as f:
-            json1 = json.load(f)
-        with open(testOutput2, 'r') as f:
-            json2 = json.load(f)
-        return json1, json2
-
-    def test_basic(self):
-        expect_lines = [
-            ['BM_SameTimes', '+0.00', '+0.00', '10', '10'],
-            ['BM_2xFaster', '-0.50', '-0.50', '50', '25'],
-            ['BM_2xSlower', '+1.00', '+1.00', '50', '100'],
-            ['BM_10PercentFaster', '-0.10', '-0.10', '100', '90'],
-            ['BM_10PercentSlower', '+0.10', '+0.10', '100', '110'],
-            ['BM_100xSlower', '+99.00', '+99.00', '100', '10000'],
-            ['BM_100xFaster', '-0.99', '-0.99', '10000', '100'],
-        ]
-        json1, json2 = self.load_results()
-        output_lines_with_header = generate_difference_report(json1, json2, use_color=False)
-        output_lines = output_lines_with_header[2:]
-        print("\n".join(output_lines_with_header))
-        self.assertEqual(len(output_lines), len(expect_lines))
-        for i in xrange(0, len(output_lines)):
-            parts = [x for x in output_lines[i].split(' ') if x]
-            self.assertEqual(len(parts), 5)
-            self.assertEqual(parts, expect_lines[i])
-
-
-if __name__ == '__main__':
-    unittest.main()
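
The expected rows above double as a worked example of `calculate_change`: the change is expressed relative to the old value, so 50 ns -> 100 ns reports +1.00 (twice as slow) while 10000 ns -> 100 ns reports -0.99 rather than a symmetric -100x. A few spot checks, assuming the function is imported from the module just removed:

```python
from gbench.report import calculate_change

assert calculate_change(50, 100) == 1.0       # BM_2xSlower:        +1.00
assert calculate_change(100, 90) == -0.1      # BM_10PercentFaster: -0.10
assert calculate_change(10000, 100) == -0.99  # BM_100xFaster:      -0.99
assert calculate_change(0, 0) == 0.0          # degenerate case is defined as 0.0
```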

Removed: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/tools/gbench/util.py
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/tools/gbench/util.py?rev=318475&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/tools/gbench/util.py (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/tools/gbench/util.py (removed)
@@ -1,159 +0,0 @@
-"""util.py - General utilities for running, loading, and processing benchmarks
-"""
-import json
-import os
-import tempfile
-import subprocess
-import sys
-
-# Input file type enumeration
-IT_Invalid    = 0
-IT_JSON       = 1
-IT_Executable = 2
-
-_num_magic_bytes = 2 if sys.platform.startswith('win') else 4
-def is_executable_file(filename):
-    """
-    Return 'True' if 'filename' names a valid file which is likely
-    an executable. A file is considered an executable if it starts with the
-    magic bytes for an EXE, Mach-O, or ELF file.
-    """
-    if not os.path.isfile(filename):
-        return False
-    with open(filename, mode='rb') as f:
-        magic_bytes = f.read(_num_magic_bytes)
-    if sys.platform == 'darwin':
-        return magic_bytes in [
-            b'\xfe\xed\xfa\xce',  # MH_MAGIC
-            b'\xce\xfa\xed\xfe',  # MH_CIGAM
-            b'\xfe\xed\xfa\xcf',  # MH_MAGIC_64
-            b'\xcf\xfa\xed\xfe',  # MH_CIGAM_64
-            b'\xca\xfe\xba\xbe',  # FAT_MAGIC
-            b'\xbe\xba\xfe\xca'   # FAT_CIGAM
-        ]
-    elif sys.platform.startswith('win'):
-        return magic_bytes == b'MZ'
-    else:
-        return magic_bytes == b'\x7FELF'
-
-
-def is_json_file(filename):
-    """
-    Returns 'True' if 'filename' names a valid JSON output file.
-    'False' otherwise.
-    """
-    try:
-        with open(filename, 'r') as f:
-            json.load(f)
-        return True
-    except:
-        pass
-    return False
-
-
-def classify_input_file(filename):
-    """
-    Return a tuple (type, msg) where 'type' specifies the classified type
-    of 'filename'. If 'type' is 'IT_Invalid' then 'msg' is a human readable
-    string representing the error.
-    """
-    ftype = IT_Invalid
-    err_msg = None
-    if not os.path.exists(filename):
-        err_msg = "'%s' does not exist" % filename
-    elif not os.path.isfile(filename):
-        err_msg = "'%s' does not name a file" % filename
-    elif is_executable_file(filename):
-        ftype = IT_Executable
-    elif is_json_file(filename):
-        ftype = IT_JSON
-    else:
-        err_msg = "'%s' does not name a valid benchmark executable or JSON file" % filename
-    return ftype, err_msg
-
-
-def check_input_file(filename):
-    """
-    Classify the file named by 'filename' and return the classification.
-    If the file is classified as 'IT_Invalid' print an error message and exit
-    the program.
-    """
-    ftype, msg = classify_input_file(filename)
-    if ftype == IT_Invalid:
-        print("Invalid input file: %s" % msg)
-        sys.exit(1)
-    return ftype
-
-def find_benchmark_flag(prefix, benchmark_flags):
-    """
-    Search the specified list of flags for a flag matching `<prefix><arg>` and
-    if it is found return the arg it specifies. If specified more than once the
-    last value is returned. If the flag is not found None is returned.
-    """
-    assert prefix.startswith('--') and prefix.endswith('=')
-    result = None
-    for f in benchmark_flags:
-        if f.startswith(prefix):
-            result = f[len(prefix):]
-    return result
-
-def remove_benchmark_flags(prefix, benchmark_flags):
-    """
-    Return a new list containing the specified benchmark_flags except those
-    with the specified prefix.
-    """
-    assert prefix.startswith('--') and prefix.endswith('=')
-    return [f for f in benchmark_flags if not f.startswith(prefix)]
-
-def load_benchmark_results(fname):
-    """
-    Read benchmark output from a file and return the JSON object.
-    REQUIRES: 'fname' names a file containing JSON benchmark output.
-    """
-    with open(fname, 'r') as f:
-        return json.load(f)
-
-
-def run_benchmark(exe_name, benchmark_flags):
-    """
-    Run a benchmark specified by 'exe_name' with the specified
-    'benchmark_flags'. The benchmark is run directly as a subprocess to preserve
-    real time console output.
-    RETURNS: A JSON object representing the benchmark output
-    """
-    output_name = find_benchmark_flag('--benchmark_out=',
-                                      benchmark_flags)
-    is_temp_output = False
-    if output_name is None:
-        is_temp_output = True
-        thandle, output_name = tempfile.mkstemp()
-        os.close(thandle)
-        benchmark_flags = list(benchmark_flags) + \
-                          ['--benchmark_out=%s' % output_name]
-
-    cmd = [exe_name] + benchmark_flags
-    print("RUNNING: %s" % ' '.join(cmd))
-    exitCode = subprocess.call(cmd)
-    if exitCode != 0:
-        print('TEST FAILED...')
-        sys.exit(exitCode)
-    json_res = load_benchmark_results(output_name)
-    if is_temp_output:
-        os.unlink(output_name)
-    return json_res
-
-
-def run_or_load_benchmark(filename, benchmark_flags):
-    """
-    Get the results for a specified benchmark. If 'filename' specifies
-    an executable benchmark then the results are generated by running the
-    benchmark. Otherwise 'filename' must name a valid JSON output file,
-    which is loaded and the result returned.
-    """
-    ftype = check_input_file(filename)
-    if ftype == IT_JSON:
-        return load_benchmark_results(filename)
-    elif ftype == IT_Executable:
-        return run_benchmark(filename, benchmark_flags)
-    else:
-        assert False # This branch is unreachable
\ No newline at end of file
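
One subtlety in the removed helpers is worth calling out: `find_benchmark_flag` returns the value of the last matching flag, while `remove_benchmark_flags` strips every flag carrying the prefix. A small illustration, assuming the functions are imported from the module above:

```python
from gbench.util import find_benchmark_flag, remove_benchmark_flags

flags = ['--benchmark_filter=BM_memcpy',
         '--benchmark_out=run1.json',
         '--benchmark_out=run2.json']

# When a flag is repeated, the last occurrence wins.
assert find_benchmark_flag('--benchmark_out=', flags) == 'run2.json'
# A flag that never appears yields None.
assert find_benchmark_flag('--benchmark_out_format=', flags) is None
# Removal drops every flag with the given prefix.
assert remove_benchmark_flags('--benchmark_out=', flags) == \
    ['--benchmark_filter=BM_memcpy']
```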

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/.clang-format (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/.clang-format)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/.clang-format?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/.clang-format&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/.clang-format&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
    (empty)

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/.gitignore (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/.gitignore)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/.gitignore?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/.gitignore&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/.gitignore&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
    (empty)

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/.travis-libcxx-setup.sh (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/.travis-libcxx-setup.sh)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/.travis-libcxx-setup.sh?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/.travis-libcxx-setup.sh&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/.travis-libcxx-setup.sh&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
    (empty)

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/.travis.yml (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/.travis.yml)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/.travis.yml?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/.travis.yml&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/.travis.yml&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
    (empty)

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/.ycm_extra_conf.py (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/.ycm_extra_conf.py)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/.ycm_extra_conf.py?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/.ycm_extra_conf.py&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/.ycm_extra_conf.py&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
    (empty)

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/AUTHORS (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/AUTHORS)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/AUTHORS?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/AUTHORS&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/AUTHORS&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/AUTHORS (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/AUTHORS Thu Nov 16 16:16:06 2017
@@ -10,9 +10,11 @@
 
 Albert Pretorius <pretoalb at gmail.com>
 Arne Beer <arne at twobeer.de>
+Carto
 Christopher Seymour <chris.j.seymour at hotmail.com>
 David Coeurjolly <david.coeurjolly at liris.cnrs.fr>
-Dominic Hamon <dma at stripysock.com>
+Dirac Research 
+Dominik Czarnota <dominik.b.czarnota at gmail.com>
 Eric Fiselier <eric at efcs.ca>
 Eugene Zhuk <eugene.zhuk at gmail.com>
 Evgeny Safronov <division494 at gmail.com>
@@ -21,8 +23,8 @@ Google Inc.
 International Business Machines Corporation
 Ismael Jimenez Martinez <ismael.jimenez.martinez at gmail.com>
 Jern-Kuan Leong <jernkuan at gmail.com>
-Joao Paulo Magalhaes <joaoppmagalhaes at gmail.com>
 JianXiong Zhou <zhoujianxiong2 at gmail.com>
+Joao Paulo Magalhaes <joaoppmagalhaes at gmail.com>
 Jussi Knuuttila <jussi.knuuttila at gmail.com>
 Kaito Udagawa <umireon at gmail.com>
 Lei Xu <eddyxu at gmail.com>
@@ -32,9 +34,8 @@ Nick Hutchinson <nshutchinson at gmail.com>
 Oleksandr Sochka <sasha.sochka at gmail.com>
 Paul Redmond <paul.redmond at gmail.com>
 Radoslav Yovchev <radoslav.tm at gmail.com>
+Roman Lebedev <lebedev.ri at gmail.com>
 Shuo Chen <chenshuo at chenshuo.com>
 Yixuan Qiu <yixuanq at gmail.com>
 Yusuke Suzuki <utatane.tea at gmail.com>
-Dirac Research 
 Zbigniew Skowron <zbychs at gmail.com>
-Dominik Czarnota <dominik.b.czarnota at gmail.com>

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/CMakeLists.txt (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/CMakeLists.txt)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/CMakeLists.txt?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/CMakeLists.txt&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/CMakeLists.txt&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/CMakeLists.txt (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/CMakeLists.txt Thu Nov 16 16:16:06 2017
@@ -15,7 +15,8 @@ option(BENCHMARK_ENABLE_TESTING "Enable
 option(BENCHMARK_ENABLE_EXCEPTIONS "Enable the use of exceptions in the benchmark library." ON)
 option(BENCHMARK_ENABLE_LTO "Enable link time optimisation of the benchmark library." OFF)
 option(BENCHMARK_USE_LIBCXX "Build and test using libc++ as the standard library." OFF)
-option(BENCHMARK_BUILD_32_BITS "Build a 32 bit version of the library" OFF)
+option(BENCHMARK_BUILD_32_BITS "Build a 32 bit version of the library." OFF)
+option(BENCHMARK_ENABLE_INSTALL "Enable installation of benchmark. (Projects embedding benchmark may want to turn this OFF.)" ON)
 
 # Make sure we can import out CMake functions
 list(APPEND CMAKE_MODULE_PATH "${CMAKE_CURRENT_SOURCE_DIR}/cmake")
@@ -135,24 +136,21 @@ else()
   endif()
 
   # Coverage build type
-  set(CMAKE_CXX_FLAGS_COVERAGE "${CMAKE_CXX_FLAGS_DEBUG}" CACHE STRING
-    "Flags used by the C++ compiler during coverage builds."
+  set(BENCHMARK_CXX_FLAGS_COVERAGE "${CMAKE_CXX_FLAGS_DEBUG}"
+    CACHE STRING "Flags used by the C++ compiler during coverage builds."
     FORCE)
-  set(CMAKE_EXE_LINKER_FLAGS_COVERAGE
-    "${CMAKE_EXE_LINKER_FLAGS_DEBUG}" CACHE STRING
-    "Flags used for linking binaries during coverage builds."
+  set(BENCHMARK_EXE_LINKER_FLAGS_COVERAGE "${CMAKE_EXE_LINKER_FLAGS_DEBUG}"
+    CACHE STRING "Flags used for linking binaries during coverage builds."
     FORCE)
-  set(CMAKE_SHARED_LINKER_FLAGS_COVERAGE
-    "${CMAKE_SHARED_LINKER_FLAGS_DEBUG}" CACHE STRING
-    "Flags used by the shared libraries linker during coverage builds."
+  set(BENCHMARK_SHARED_LINKER_FLAGS_COVERAGE "${CMAKE_SHARED_LINKER_FLAGS_DEBUG}"
+    CACHE STRING "Flags used by the shared libraries linker during coverage builds."
     FORCE)
   mark_as_advanced(
-    CMAKE_CXX_FLAGS_COVERAGE
-    CMAKE_EXE_LINKER_FLAGS_COVERAGE
-    CMAKE_SHARED_LINKER_FLAGS_COVERAGE)
+    BENCHMARK_CXX_FLAGS_COVERAGE
+    BENCHMARK_EXE_LINKER_FLAGS_COVERAGE
+    BENCHMARK_SHARED_LINKER_FLAGS_COVERAGE)
   set(CMAKE_BUILD_TYPE "${CMAKE_BUILD_TYPE}" CACHE STRING
-    "Choose the type of build, options are: None Debug Release RelWithDebInfo MinSizeRel Coverage."
-    FORCE)
+    "Choose the type of build, options are: None Debug Release RelWithDebInfo MinSizeRel Coverage.")
   add_cxx_compiler_flag(--coverage COVERAGE)
 endif()
 

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/CONTRIBUTING.md (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/CONTRIBUTING.md)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/CONTRIBUTING.md?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/CONTRIBUTING.md&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/CONTRIBUTING.md&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
    (empty)

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/CONTRIBUTORS (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/CONTRIBUTORS)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/CONTRIBUTORS?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/CONTRIBUTORS&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/CONTRIBUTORS&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/CONTRIBUTORS (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/CONTRIBUTORS Thu Nov 16 16:16:06 2017
@@ -28,18 +28,19 @@ Billy Robert O'Neal III <billy.oneal at gma
 Chris Kennelly <ckennelly at google.com> <ckennelly at ckennelly.com>
 Christopher Seymour <chris.j.seymour at hotmail.com>
 David Coeurjolly <david.coeurjolly at liris.cnrs.fr>
-Dominic Hamon <dma at stripysock.com>
+Dominic Hamon <dma at stripysock.com> <dominic at google.com>
+Dominik Czarnota <dominik.b.czarnota at gmail.com>
 Eric Fiselier <eric at efcs.ca>
 Eugene Zhuk <eugene.zhuk at gmail.com>
 Evgeny Safronov <division494 at gmail.com>
 Felix Homann <linuxaudio at showlabor.de>
 Ismael Jimenez Martinez <ismael.jimenez.martinez at gmail.com>
 Jern-Kuan Leong <jernkuan at gmail.com>
-Joao Paulo Magalhaes <joaoppmagalhaes at gmail.com>
 JianXiong Zhou <zhoujianxiong2 at gmail.com>
+Joao Paulo Magalhaes <joaoppmagalhaes at gmail.com>
 Jussi Knuuttila <jussi.knuuttila at gmail.com>
-Kaito Udagawa <umireon at gmail.com>
 Kai Wolf <kai.wolf at gmail.com>
+Kaito Udagawa <umireon at gmail.com>
 Lei Xu <eddyxu at gmail.com>
 Matt Clarkson <mattyclarkson at gmail.com>
 Maxim Vafin <maxvafin at gmail.com>
@@ -49,11 +50,12 @@ Pascal Leroy <phl at google.com>
 Paul Redmond <paul.redmond at gmail.com>
 Pierre Phaneuf <pphaneuf at google.com>
 Radoslav Yovchev <radoslav.tm at gmail.com>
+Raul Marin <rmrodriguez at cartodb.com>
 Ray Glover <ray.glover at uk.ibm.com>
+Roman Lebedev <lebedev.ri at gmail.com>
 Shuo Chen <chenshuo at chenshuo.com>
+Tobias Ulvgård <tobias.ulvgard at dirac.se>
 Tom Madams <tom.ej.madams at gmail.com> <tmadams at google.com>
 Yixuan Qiu <yixuanq at gmail.com>
 Yusuke Suzuki <utatane.tea at gmail.com>
-Tobias Ulvgård <tobias.ulvgard at dirac.se>
 Zbigniew Skowron <zbychs at gmail.com>
-Dominik Czarnota <dominik.b.czarnota at gmail.com>

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/LICENSE (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/LICENSE)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/LICENSE?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/LICENSE&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/LICENSE&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
    (empty)

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/README.md (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/README.md)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/README.md?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/README.md&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/README.md&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/README.md (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/README.md Thu Nov 16 16:16:06 2017
@@ -18,8 +18,10 @@ IRC channel: https://freenode.net #googl
 Define a function that executes the code to be measured.
 
 ```c++
+#include <benchmark/benchmark.h>
+
 static void BM_StringCreation(benchmark::State& state) {
-  while (state.KeepRunning())
+  for (auto _ : state)
     std::string empty_string;
 }
 // Register the function as a benchmark
@@ -28,7 +30,7 @@ BENCHMARK(BM_StringCreation);
 // Define another benchmark
 static void BM_StringCopy(benchmark::State& state) {
   std::string x = "hello";
-  while (state.KeepRunning())
+  for (auto _ : state)
     std::string copy(x);
 }
 BENCHMARK(BM_StringCopy);
@@ -36,6 +38,8 @@ BENCHMARK(BM_StringCopy);
 BENCHMARK_MAIN();
 ```
 
+Don't forget to tell your linker to link against the benchmark library, e.g. via the `-lbenchmark` flag.
+
 ### Passing arguments
 Sometimes a family of benchmarks can be implemented with just one routine that
 takes an extra argument to specify which one of the family of benchmarks to
@@ -47,7 +51,7 @@ static void BM_memcpy(benchmark::State&
   char* src = new char[state.range(0)];
   char* dst = new char[state.range(0)];
   memset(src, 'x', state.range(0));
-  while (state.KeepRunning())
+  for (auto _ : state)
     memcpy(dst, src, state.range(0));
   state.SetBytesProcessed(int64_t(state.iterations()) *
                           int64_t(state.range(0)));
@@ -80,22 +84,23 @@ insertion.
 
 ```c++
 static void BM_SetInsert(benchmark::State& state) {
-  while (state.KeepRunning()) {
+  std::set<int> data;
+  for (auto _ : state) {
     state.PauseTiming();
-    std::set<int> data = ConstructRandomSet(state.range(0));
+    data = ConstructRandomSet(state.range(0));
     state.ResumeTiming();
     for (int j = 0; j < state.range(1); ++j)
       data.insert(RandomNumber());
   }
 }
 BENCHMARK(BM_SetInsert)
-    ->Args({1<<10, 1})
-    ->Args({1<<10, 8})
-    ->Args({1<<10, 64})
+    ->Args({1<<10, 128})
+    ->Args({2<<10, 128})
+    ->Args({4<<10, 128})
+    ->Args({8<<10, 128})
     ->Args({1<<10, 512})
-    ->Args({8<<10, 1})
-    ->Args({8<<10, 8})
-    ->Args({8<<10, 64})
+    ->Args({2<<10, 512})
+    ->Args({4<<10, 512})
     ->Args({8<<10, 512});
 ```
 
@@ -105,7 +110,7 @@ product of the two specified ranges and
 pair.
 
 ```c++
-BENCHMARK(BM_SetInsert)->Ranges({{1<<10, 8<<10}, {1, 512}});
+BENCHMARK(BM_SetInsert)->Ranges({{1<<10, 8<<10}, {128, 512}});
 ```
 
 For more complex patterns of inputs, passing a custom function to `Apply` allows
@@ -131,7 +136,7 @@ running time and the normalized root-mea
 static void BM_StringCompare(benchmark::State& state) {
   std::string s1(state.range(0), '-');
   std::string s2(state.range(0), '-');
-  while (state.KeepRunning()) {
+  for (auto _ : state) {
     benchmark::DoNotOptimize(s1.compare(s2));
   }
   state.SetComplexityN(state.range(0));
@@ -165,7 +170,7 @@ absence of multiprogramming.
 template <class Q> int BM_Sequential(benchmark::State& state) {
   Q q;
   typename Q::value_type v;
-  while (state.KeepRunning()) {
+  for (auto _ : state) {
     for (int i = state.range(0); i--; )
       q.push(v);
     for (int e = state.range(0); e--; )
@@ -181,7 +186,7 @@ BENCHMARK_TEMPLATE(BM_Sequential, WaitQu
 Three macros are provided for adding benchmark templates.
 
 ```c++
-#if __cplusplus >= 201103L // C++11 and greater.
+#ifdef BENCHMARK_HAS_CXX11
 #define BENCHMARK_TEMPLATE(func, ...) // Takes any number of parameters.
 #else // C++ < C++11
 #define BENCHMARK_TEMPLATE(func, arg1)
@@ -190,6 +195,62 @@ Three macros are provided for adding ben
 #define BENCHMARK_TEMPLATE2(func, arg1, arg2)
 ```
 
+### A Faster KeepRunning loop
+
+In C++11 mode, a range-based for loop should be used in preference to
+the `KeepRunning` loop for running the benchmarks. For example:
+
+```c++
+static void BM_Fast(benchmark::State &state) {
+  for (auto _ : state) {
+    FastOperation();
+  }
+}
+BENCHMARK(BM_Fast);
+```
+
+The reason the ranged-for loop is faster than using `KeepRunning` is
+that `KeepRunning` requires a memory load and store of the iteration count
+every iteration, whereas the ranged-for variant is able to keep the iteration
+count in a register.
+
+For example, an empty inner loop using the range-based for method looks like:
+
+```asm
+# Loop Init
+  mov rbx, qword ptr [r14 + 104]
+  call benchmark::State::StartKeepRunning()
+  test rbx, rbx
+  je .LoopEnd
+.LoopHeader: # =>This Inner Loop Header: Depth=1
+  add rbx, -1
+  jne .LoopHeader
+.LoopEnd:
+```
+
+Compared to an empty `KeepRunning` loop, which looks like:
+
+```asm
+.LoopHeader: # in Loop: Header=BB0_3 Depth=1
+  cmp byte ptr [rbx], 1
+  jne .LoopInit
+.LoopBody: # =>This Inner Loop Header: Depth=1
+  mov rax, qword ptr [rbx + 8]
+  lea rcx, [rax + 1]
+  mov qword ptr [rbx + 8], rcx
+  cmp rax, qword ptr [rbx + 104]
+  jb .LoopHeader
+  jmp .LoopEnd
+.LoopInit:
+  mov rdi, rbx
+  call benchmark::State::StartKeepRunning()
+  jmp .LoopBody
+.LoopEnd:
+```
+
+Unless C++03 compatibility is required, the ranged-for variant of the
+benchmark loop should be preferred.
+
 ## Passing arbitrary arguments to a benchmark
 In C++11 it is possible to define a benchmark that takes an arbitrary number
 of extra arguments. The `BENCHMARK_CAPTURE(func, test_case_name, ...args)`
@@ -199,11 +260,11 @@ The `test_case_name` is appended to the
 should describe the values passed.
 
 ```c++
-template <class ...ExtraArgs>`
+template <class ...ExtraArgs>
 void BM_takes_args(benchmark::State& state, ExtraArgs&&... extra_args) {
   [...]
 }
-// Registers a benchmark named "BM_takes_args/int_string_test` that passes
+// Registers a benchmark named "BM_takes_args/int_string_test" that passes
 // the specified values to `extra_args`.
 BENCHMARK_CAPTURE(BM_takes_args, int_string_test, 42, std::string("abc"));
 ```
@@ -223,8 +284,7 @@ scope, the `RegisterBenchmark` can be ca
 benchmark tests to be registered programmatically.
 
 Additionally `RegisterBenchmark` allows any callable object to be registered
-as a benchmark. Including capturing lambdas and function objects. This
-allows the creation
+as a benchmark, including capturing lambdas and function objects.
 
 For Example:
 ```c++
@@ -240,9 +300,10 @@ int main(int argc, char** argv) {
 
 ### Multithreaded benchmarks
 In a multithreaded test (benchmark invoked by multiple threads simultaneously),
-it is guaranteed that none of the threads will start until all have called
-`KeepRunning`, and all will have finished before KeepRunning returns false. As
-such, any global setup or teardown can be wrapped in a check against the thread
+it is guaranteed that none of the threads will start until all have reached
+the start of the benchmark loop, and all will have finished before any thread
+exits the benchmark loop. (This behavior is also provided by the `KeepRunning()`
+API.) As such, any global setup or teardown can be wrapped in a check against the thread
 index:
 
 ```c++
@@ -250,7 +311,7 @@ static void BM_MultiThreaded(benchmark::
   if (state.thread_index == 0) {
     // Setup code here.
   }
-  while (state.KeepRunning()) {
+  for (auto _ : state) {
     // Run the test as normal.
   }
   if (state.thread_index == 0) {
@@ -274,10 +335,10 @@ Without `UseRealTime`, CPU time is used
 ## Manual timing
 For benchmarking something for which neither CPU time nor real-time are
 correct or accurate enough, completely manual timing is supported using
-the `UseManualTime` function. 
+the `UseManualTime` function.
 
 When `UseManualTime` is used, the benchmarked code must call
-`SetIterationTime` once per iteration of the `KeepRunning` loop to
+`SetIterationTime` once per iteration of the benchmark loop to
 report the manually measured time.
 
 An example use case for this is benchmarking GPU execution (e.g. OpenCL
@@ -293,7 +354,7 @@ static void BM_ManualTiming(benchmark::S
     static_cast<double>(microseconds)
   };
 
-  while (state.KeepRunning()) {
+  for (auto _ : state) {
     auto start = std::chrono::high_resolution_clock::now();
     // Simulate some useful workload with a sleep
     std::this_thread::sleep_for(sleep_duration);
@@ -316,7 +377,7 @@ functions can be used.
 
 ```c++
 static void BM_test(benchmark::State& state) {
-  while (state.KeepRunning()) {
+  for (auto _ : state) {
       int x = 0;
       for (int i=0; i < 64; ++i) {
         benchmark::DoNotOptimize(x += i);
@@ -355,7 +416,7 @@ away.
 
 ```c++
 static void BM_vector_push_back(benchmark::State& state) {
-  while (state.KeepRunning()) {
+  for (auto _ : state) {
     std::vector<int> v;
     v.reserve(1);
     benchmark::DoNotOptimize(v.data()); // Allow v.data() to be clobbered.
@@ -384,7 +445,7 @@ the minimum time, or the wallclock time
 set as a flag `--benchmark_min_time` or per benchmark by calling `MinTime` on
 the registered benchmark object.
 
-## Reporting the mean and standard devation by repeated benchmarks
+## Reporting the mean, median and standard deviation by repeated benchmarks
 By default each benchmark is run once and that single result is reported.
 However benchmarks are often noisy and a single result may not be representative
 of the overall behavior. For this reason it's possible to repeatedly rerun the
@@ -392,19 +453,42 @@ benchmark.
 
 The number of runs of each benchmark is specified globally by the
 `--benchmark_repetitions` flag or on a per benchmark basis by calling
-`Repetitions` on the registered benchmark object. When a benchmark is run
-more than once the mean and standard deviation of the runs will be reported.
+`Repetitions` on the registered benchmark object. When a benchmark is run more
+than once, the mean, median and standard deviation of the runs will be reported.
 
 Additionally the `--benchmark_report_aggregates_only={true|false}` flag or
 `ReportAggregatesOnly(bool)` function can be used to change how repeated tests
 are reported. By default the result of each repeated run is reported. When this
-option is 'true' only the mean and standard deviation of the runs is reported.
+option is `true`, only the mean, median and standard deviation of the runs are reported.
 Calling `ReportAggregatesOnly(bool)` on a registered benchmark object overrides
 the value of the flag for that benchmark.
 
+## User-defined statistics for repeated benchmarks
+While having mean, median and standard deviation is nice, this may not be
+enough for everyone. For example, you may want to know the largest
+observation, e.g. because you have some real-time constraints. This is easy.
+The following code will specify a custom statistic to be calculated, defined
+by a lambda function.
+
+```c++
+void BM_spin_empty(benchmark::State& state) {
+  for (auto _ : state) {
+    for (int x = 0; x < state.range(0); ++x) {
+      benchmark::DoNotOptimize(x);
+    }
+  }
+}
+
+BENCHMARK(BM_spin_empty)
+  ->ComputeStatistics("max", [](const std::vector<double>& v) -> double {
+    return *(std::max_element(std::begin(v), std::end(v)));
+  })
+  ->Arg(512);
+```
+
 ## Fixtures
 Fixture tests are created by
-first defining a type that derives from ::benchmark::Fixture and then
+first defining a type that derives from `::benchmark::Fixture` and then
 creating/registering the tests using the following macros:
 
 * `BENCHMARK_F(ClassName, Method)`
@@ -417,13 +501,13 @@ For Example:
 class MyFixture : public benchmark::Fixture {};
 
 BENCHMARK_F(MyFixture, FooTest)(benchmark::State& st) {
-   while (st.KeepRunning()) {
+   for (auto _ : st) {
      ...
   }
 }
 
 BENCHMARK_DEFINE_F(MyFixture, BarTest)(benchmark::State& st) {
-   while (st.KeepRunning()) {
+   for (auto _ : st) {
      ...
   }
 }
@@ -432,6 +516,31 @@ BENCHMARK_REGISTER_F(MyFixture, BarTest)
 /* BarTest is now registered */
 ```
 
+### Templated fixtures
+You can also create templated fixtures by using the following macros:
+
+* `BENCHMARK_TEMPLATE_F(ClassName, Method, ...)`
+* `BENCHMARK_TEMPLATE_DEFINE_F(ClassName, Method, ...)`
+
+For example:
+```c++
+template<typename T>
+class MyFixture : public benchmark::Fixture {};
+
+BENCHMARK_TEMPLATE_F(MyFixture, IntTest, int)(benchmark::State& st) {
+   for (auto _ : st) {
+     ...
+  }
+}
+
+BENCHMARK_TEMPLATE_DEFINE_F(MyFixture, DoubleTest, double)(benchmark::State& st) {
+   for (auto _ : st) {
+     ...
+  }
+}
+
+BENCHMARK_REGISTER_F(MyFixture, DoubleTest)->Threads(2);
+```
 
 ## User-defined counters
 
@@ -441,7 +550,7 @@ will add columns "Foo", "Bar" and "Baz"
 ```c++
 static void UserCountersExample1(benchmark::State& state) {
   double numFoos = 0, numBars = 0, numBazs = 0;
-  while (state.KeepRunning()) {
+  for (auto _ : state) {
     // ... count Foo,Bar,Baz events
   }
   state.counters["Foo"] = numFoos;
@@ -564,11 +673,12 @@ When errors caused by external influence
 communication, occur within a benchmark, the
 `State::SkipWithError(const char* msg)` function can be used to skip that run
 of the benchmark and report the error. Note that only future iterations of the
-`KeepRunning()` are skipped. Users may explicitly return to exit the
-benchmark immediately.
+`KeepRunning()` are skipped. For the ranged-for version of the benchmark loop,
+users must explicitly exit the loop; otherwise all iterations will be performed.
+Users may explicitly return to exit the benchmark immediately.
 
 The `SkipWithError(...)` function may be used at any point within the benchmark,
-including before and after the `KeepRunning()` loop.
+including before and after the benchmark loop.
 
 For example:
 
@@ -579,7 +689,7 @@ static void BM_test(benchmark::State& st
       state.SkipWithError("Resource is not good!");
       // KeepRunning() loop will not be entered.
   }
-  while (state.KeepRunning()) {
+  while (state.KeepRunning()) {
       auto data = resource.read_data();
       if (!resource.good()) {
         state.SkipWithError("Failed to read data!");
@@ -588,6 +698,14 @@ static void BM_test(benchmark::State& st
      do_stuff(data);
   }
 }
+
+static void BM_test_ranged_for(benchmark::State& state) {
+  state.SkipWithError("test will not be entered");
+  for (auto _ : state) {
+    state.SkipWithError("Failed!");
+    break; // REQUIRED to prevent all further iterations.
+  }
+}
 ```
 
 ## Running a subset of the benchmarks
@@ -614,7 +732,7 @@ The library supports multiple output for
 is the default format.
 
 The Console format is intended to be a human readable format. By default
-the format generates color output. Context is output on stderr and the 
+the format generates color output. Context is output on stderr and the
 tabular data on stdout. Example tabular output looks like:
 ```
 Benchmark                               Time(ns)    CPU(ns) Iterations
@@ -717,6 +835,18 @@ Anything older *may* work.
 Note: Using the library and its headers in C++03 is supported. C++11 is only
 required to build the library.
 
+## Disable CPU frequency scaling
+If you see this error:
+```
+***WARNING*** CPU scaling is enabled, the benchmark real time measurements may be noisy and will incur extra overhead.
+```
+you might want to disable the CPU frequency scaling while running the benchmark:
+```bash
+sudo cpupower frequency-set --governor performance
+./mybench
+sudo cpupower frequency-set --governor powersave
+```
+
 # Known Issues
 
 ### Windows

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/appveyor.yml (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/appveyor.yml)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/appveyor.yml?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/appveyor.yml&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/appveyor.yml&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
    (empty)

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/cmake/AddCXXCompilerFlag.cmake (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/cmake/AddCXXCompilerFlag.cmake)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/cmake/AddCXXCompilerFlag.cmake?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/cmake/AddCXXCompilerFlag.cmake&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/cmake/AddCXXCompilerFlag.cmake&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/cmake/AddCXXCompilerFlag.cmake (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/cmake/AddCXXCompilerFlag.cmake Thu Nov 16 16:16:06 2017
@@ -38,7 +38,7 @@ function(add_cxx_compiler_flag FLAG)
     if(ARGV1)
       string(TOUPPER "_${VARIANT}" VARIANT)
     endif()
-    set(CMAKE_CXX_FLAGS${VARIANT} "${CMAKE_CXX_FLAGS${VARIANT}} ${FLAG}" PARENT_SCOPE)
+    set(CMAKE_CXX_FLAGS${VARIANT} "${CMAKE_CXX_FLAGS${VARIANT}} ${BENCHMARK_CXX_FLAGS${VARIANT}} ${FLAG}" PARENT_SCOPE)
   endif()
 endfunction()
 

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/cmake/CXXFeatureCheck.cmake (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/cmake/CXXFeatureCheck.cmake)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/cmake/CXXFeatureCheck.cmake?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/cmake/CXXFeatureCheck.cmake&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/cmake/CXXFeatureCheck.cmake&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/cmake/CXXFeatureCheck.cmake (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/cmake/CXXFeatureCheck.cmake Thu Nov 16 16:16:06 2017
@@ -22,18 +22,35 @@ function(cxx_feature_check FILE)
   string(TOUPPER ${FILE} VAR)
   string(TOUPPER "HAVE_${VAR}" FEATURE)
   if (DEFINED HAVE_${VAR})
-    set(HAVE_${VAR} 1 CACHE INTERNAL "Feature test for ${FILE}" PARENT_SCOPE)
+    set(HAVE_${VAR} 1 PARENT_SCOPE)
     add_definitions(-DHAVE_${VAR})
     return()
   endif()
+
   message("-- Performing Test ${FEATURE}")
-  try_run(RUN_${FEATURE} COMPILE_${FEATURE}
-          ${CMAKE_BINARY_DIR} ${CMAKE_CURRENT_SOURCE_DIR}/cmake/${FILE}.cpp
-          CMAKE_FLAGS ${BENCHMARK_CXX_LINKER_FLAGS}
-          LINK_LIBRARIES ${BENCHMARK_CXX_LIBRARIES})
+  if(CMAKE_CROSSCOMPILING)
+    try_compile(COMPILE_${FEATURE}
+            ${CMAKE_BINARY_DIR} ${CMAKE_CURRENT_SOURCE_DIR}/cmake/${FILE}.cpp
+            CMAKE_FLAGS ${BENCHMARK_CXX_LINKER_FLAGS}
+            LINK_LIBRARIES ${BENCHMARK_CXX_LIBRARIES})
+    if(COMPILE_${FEATURE})
+      message(WARNING
+            "If you see build failures due to cross compilation, try setting HAVE_${VAR} to 0")
+      set(RUN_${FEATURE} 0)
+    else()
+      set(RUN_${FEATURE} 1)
+    endif()
+  else()
+    message("-- Performing Test ${FEATURE}")
+    try_run(RUN_${FEATURE} COMPILE_${FEATURE}
+            ${CMAKE_BINARY_DIR} ${CMAKE_CURRENT_SOURCE_DIR}/cmake/${FILE}.cpp
+            CMAKE_FLAGS ${BENCHMARK_CXX_LINKER_FLAGS}
+            LINK_LIBRARIES ${BENCHMARK_CXX_LIBRARIES})
+  endif()
+
   if(RUN_${FEATURE} EQUAL 0)
     message("-- Performing Test ${FEATURE} -- success")
-    set(HAVE_${VAR} 1 CACHE INTERNAL "Feature test for ${FILE}" PARENT_SCOPE)
+    set(HAVE_${VAR} 1 PARENT_SCOPE)
     add_definitions(-DHAVE_${VAR})
   else()
     if(NOT COMPILE_${FEATURE})
@@ -43,4 +60,3 @@ function(cxx_feature_check FILE)
     endif()
   endif()
 endfunction()
-

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/cmake/Config.cmake.in (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/cmake/Config.cmake.in)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/cmake/Config.cmake.in?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/cmake/Config.cmake.in&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/cmake/Config.cmake.in&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
    (empty)

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/cmake/GetGitVersion.cmake (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/cmake/GetGitVersion.cmake)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/cmake/GetGitVersion.cmake?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/cmake/GetGitVersion.cmake&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/cmake/GetGitVersion.cmake&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
    (empty)

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/cmake/gnu_posix_regex.cpp (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/cmake/gnu_posix_regex.cpp)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/cmake/gnu_posix_regex.cpp?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/cmake/gnu_posix_regex.cpp&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/cmake/gnu_posix_regex.cpp&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
    (empty)

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/cmake/posix_regex.cpp (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/cmake/posix_regex.cpp)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/cmake/posix_regex.cpp?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/cmake/posix_regex.cpp&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/cmake/posix_regex.cpp&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
    (empty)

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/cmake/std_regex.cpp (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/cmake/std_regex.cpp)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/cmake/std_regex.cpp?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/cmake/std_regex.cpp&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/cmake/std_regex.cpp&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
    (empty)

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/cmake/steady_clock.cpp (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/cmake/steady_clock.cpp)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/cmake/steady_clock.cpp?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/cmake/steady_clock.cpp&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/cmake/steady_clock.cpp&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
    (empty)

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/cmake/thread_safety_attributes.cpp (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/cmake/thread_safety_attributes.cpp)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/cmake/thread_safety_attributes.cpp?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/cmake/thread_safety_attributes.cpp&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/cmake/thread_safety_attributes.cpp&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
    (empty)

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/docs/tools.md (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/docs/tools.md)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/docs/tools.md?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/docs/tools.md&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/docs/tools.md&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
    (empty)

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/include/benchmark/benchmark.h (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/include/benchmark/benchmark.h)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/include/benchmark/benchmark.h?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/include/benchmark/benchmark.h&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/include/benchmark/benchmark.h&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/include/benchmark/benchmark.h (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/include/benchmark/benchmark.h Thu Nov 16 16:16:06 2017
@@ -18,7 +18,7 @@
 // Define a function that executes the code to be measured a
 // specified number of times:
 static void BM_StringCreation(benchmark::State& state) {
-  while (state.KeepRunning())
+  for (auto _ : state)
     std::string empty_string;
 }
 
@@ -28,7 +28,7 @@ BENCHMARK(BM_StringCreation);
 // Define another benchmark
 static void BM_StringCopy(benchmark::State& state) {
   std::string x = "hello";
-  while (state.KeepRunning())
+  for (auto _ : state)
     std::string copy(x);
 }
 BENCHMARK(BM_StringCopy);
@@ -54,7 +54,7 @@ int main(int argc, char** argv) {
 static void BM_memcpy(benchmark::State& state) {
   char* src = new char[state.range(0)]; char* dst = new char[state.range(0)];
   memset(src, 'x', state.range(0));
-  while (state.KeepRunning())
+  for (auto _ : state)
     memcpy(dst, src, state.range(0));
   state.SetBytesProcessed(int64_t(state.iterations()) *
                           int64_t(state.range(0)));
@@ -72,29 +72,30 @@ BENCHMARK(BM_memcpy)->Range(8, 8<<10);
 // example, the following code defines a family of microbenchmarks for
 // measuring the speed of set insertion.
 static void BM_SetInsert(benchmark::State& state) {
-  while (state.KeepRunning()) {
+  set<int> data;
+  for (auto _ : state) {
     state.PauseTiming();
-    set<int> data = ConstructRandomSet(state.range(0));
+    data = ConstructRandomSet(state.range(0));
     state.ResumeTiming();
     for (int j = 0; j < state.range(1); ++j)
       data.insert(RandomNumber());
   }
 }
 BENCHMARK(BM_SetInsert)
-   ->Args({1<<10, 1})
-   ->Args({1<<10, 8})
-   ->Args({1<<10, 64})
+   ->Args({1<<10, 128})
+   ->Args({2<<10, 128})
+   ->Args({4<<10, 128})
+   ->Args({8<<10, 128})
    ->Args({1<<10, 512})
-   ->Args({8<<10, 1})
-   ->Args({8<<10, 8})
-   ->Args({8<<10, 64})
+   ->Args({2<<10, 512})
+   ->Args({4<<10, 512})
    ->Args({8<<10, 512});
 
 // The preceding code is quite repetitive, and can be replaced with
 // the following short-hand.  The following macro will pick a few
 // appropriate arguments in the product of the two specified ranges
 // and will generate a microbenchmark for each such pair.
-BENCHMARK(BM_SetInsert)->Ranges({{1<<10, 8<<10}, {1, 512}});
+BENCHMARK(BM_SetInsert)->Ranges({{1<<10, 8<<10}, {128, 512}});
 
 // For more complex patterns of inputs, passing a custom function
 // to Apply allows programmatic specification of an
@@ -114,7 +115,7 @@ BENCHMARK(BM_SetInsert)->Apply(CustomArg
 template <class Q> int BM_Sequential(benchmark::State& state) {
   Q q;
   typename Q::value_type v;
-  while (state.KeepRunning()) {
+  for (auto _ : state) {
     for (int i = state.range(0); i--; )
       q.push(v);
     for (int e = state.range(0); e--; )
@@ -135,15 +136,15 @@ void BM_test(benchmark::State& state) {
 BENCHMARK(BM_test)->MinTime(2.0); // Run for at least 2 seconds.
 
 In a multithreaded test, it is guaranteed that none of the threads will start
-until all have called KeepRunning, and all will have finished before KeepRunning
-returns false. As such, any global setup or teardown you want to do can be
-wrapped in a check against the thread index:
+until all have reached the loop start, and all will have finished before any
+thread exits the loop body. As such, any global setup or teardown you want to
+do can be wrapped in a check against the thread index:
 
 static void BM_MultiThreaded(benchmark::State& state) {
   if (state.thread_index == 0) {
     // Setup code here.
   }
-  while (state.KeepRunning()) {
+  for (auto _ : state) {
     // Run the test as normal.
   }
   if (state.thread_index == 0) {
@@ -164,7 +165,8 @@ BENCHMARK(BM_test)->Unit(benchmark::kMil
 #define BENCHMARK_BENCHMARK_H_
 
 
-#if __cplusplus >= 201103L
+// The _MSVC_LANG check should detect Visual Studio 2015 Update 3 and newer.
+#if __cplusplus >= 201103L || (defined(_MSVC_LANG) && _MSVC_LANG >= 201103L)
 #define BENCHMARK_HAS_CXX11
 #endif
 
@@ -237,7 +239,6 @@ BENCHMARK(BM_test)->Unit(benchmark::kMil
 #define BENCHMARK_GCC_VERSION (__GNUC__ * 100 + __GNUC_MINOR__)
 #endif
 
-
 namespace benchmark {
 class BenchmarkReporter;
 
@@ -289,10 +290,12 @@ BENCHMARK_UNUSED static int stream_init_
 }  // namespace internal
 
 
-#if !defined(__GNUC__) || defined(__pnacl__) || defined(EMSCRIPTN)
+#if (!defined(__GNUC__) && !defined(__clang__)) || defined(__pnacl__) || \
+    defined(EMSCRIPTN)
 # define BENCHMARK_HAS_NO_INLINE_ASSEMBLY
 #endif
 
+
 // The DoNotOptimize(...) function can be used to prevent a value or
 // expression from being optimized away by the compiler. This function is
 // intended to add little to no overhead.
@@ -378,6 +381,18 @@ enum BigO { oNone, o1, oN, oNSquared, oN
 // computational complexity for the benchmark.
 typedef double(BigOFunc)(int);
 
+// StatisticsFunc is passed to a benchmark in order to compute some descriptive
+// statistics over all the measurements of some type.
+typedef double(StatisticsFunc)(const std::vector<double>&);
+
+struct Statistics {
+  std::string name_;
+  StatisticsFunc* compute_;
+
+  Statistics(std::string name, StatisticsFunc* compute)
+    : name_(name), compute_(compute) {}
+};
+
 namespace internal {
 class ThreadTimer;
 class ThreadManager;
@@ -398,6 +413,19 @@ enum ReportMode
 // benchmark to use.
 class State {
  public:
+  struct StateIterator;
+  friend struct StateIterator;
+
+  // Returns iterators used to run each iteration of a benchmark using a
+  // C++11 range-based for loop. These functions should not be called directly.
+  //
+  // REQUIRES: The benchmark has not started running yet. Neither begin nor end
+  // have been called previously.
+  //
+  // NOTE: KeepRunning may not be used after calling either of these functions.
+  BENCHMARK_ALWAYS_INLINE StateIterator begin();
+  BENCHMARK_ALWAYS_INLINE StateIterator end();
+
   // Returns true if the benchmark should continue through another iteration.
   // NOTE: A benchmark may not return from the test until KeepRunning() has
   // returned false.
@@ -405,7 +433,7 @@ class State {
     if (BENCHMARK_BUILTIN_EXPECT(!started_, false)) {
       StartKeepRunning();
     }
-    bool const res = total_iterations_++ < max_iterations;
+    bool const res = --total_iterations_;
     if (BENCHMARK_BUILTIN_EXPECT(!res, false)) {
       FinishKeepRunning();
     }
@@ -415,7 +443,7 @@ class State {
   // REQUIRES: timer is running and 'SkipWithError(...)' has not been called
   //           by the current thread.
   // Stop the benchmark timer.  If not called, the timer will be
-  // automatically stopped after KeepRunning() returns false for the first time.
+  // automatically stopped after the last iteration of the benchmark loop.
   //
   // For threaded benchmarks the PauseTiming() function only pauses the timing
   // for the current thread.
@@ -431,7 +459,8 @@ class State {
   // REQUIRES: timer is not running and 'SkipWithError(...)' has not been called
   //           by the current thread.
   // Start the benchmark timer.  The timer is NOT running on entrance to the
-  // benchmark function. It begins running after the first call to KeepRunning()
+  // benchmark function. It begins running after control flow enters the
+  // benchmark loop.
   //
   // NOTE: PauseTiming()/ResumeTiming() are relatively
   // heavyweight, and so their use should generally be avoided
@@ -440,9 +469,13 @@ class State {
 
   // REQUIRES: 'SkipWithError(...)' has not been called previously by the
   //            current thread.
-  // Skip any future iterations of the 'KeepRunning()' loop in the current
-  // thread and report an error with the specified 'msg'. After this call
-  // the user may explicitly 'return' from the benchmark.
+  // Report the benchmark as resulting in an error with the specified 'msg'.
+  // After this call the user may explicitly 'return' from the benchmark.
+  //
+  // If the ranged-for style of benchmark loop is used, the user must explicitly
+  // break from the loop, otherwise all future iterations will be run.
+  // If the 'KeepRunning()' loop is used the current thread will automatically
+  // exit the loop at the end of the current iteration.
   //
   // For threaded benchmarks only the current thread stops executing and future
   // calls to `KeepRunning()` will block until all threads have completed
@@ -455,7 +488,7 @@ class State {
   // responsibility to exit the scope as needed.
   void SkipWithError(const char* msg);
 
-  // REQUIRES: called exactly once per iteration of the KeepRunning loop.
+  // REQUIRES: called exactly once per iteration of the benchmarking loop.
   // Set the manually measured time for this benchmark iteration, which
   // is used instead of automatically measured time if UseManualTime() was
   // specified.
@@ -470,7 +503,7 @@ class State {
   // value > 0, the report is printed in MB/sec instead of nanoseconds
   // per iteration.
   //
-  // REQUIRES: a benchmark has exited its KeepRunning loop.
+  // REQUIRES: a benchmark has exited its benchmarking loop.
   BENCHMARK_ALWAYS_INLINE
   void SetBytesProcessed(size_t bytes) { bytes_processed_ = bytes; }
 
@@ -493,7 +526,7 @@ class State {
   // executing benchmark. It is typically called at the end of a processing
   // benchmark where a processing items/second output is desired.
   //
-  // REQUIRES: a benchmark has exited its KeepRunning loop.
+  // REQUIRES: a benchmark has exited its benchmarking loop.
   BENCHMARK_ALWAYS_INLINE
   void SetItemsProcessed(size_t items) { items_processed_ = items; }
 
@@ -511,7 +544,7 @@ class State {
   // Produces output that looks like:
   //  BM_Compress   50         50   14115038  compress:27.3%
   //
-  // REQUIRES: a benchmark has exited its KeepRunning loop.
+  // REQUIRES: a benchmark has exited its benchmarking loop.
   void SetLabel(const char* label);
 
   void BENCHMARK_ALWAYS_INLINE SetLabel(const std::string& str) {
@@ -532,7 +565,7 @@ class State {
   int range_y() const { return range(1); }
 
   BENCHMARK_ALWAYS_INLINE
-  size_t iterations() const { return total_iterations_; }
+  size_t iterations() const { return (max_iterations - total_iterations_) + 1; }
 
  private:
   bool started_;
@@ -570,6 +603,53 @@ class State {
   BENCHMARK_DISALLOW_COPY_AND_ASSIGN(State);
 };
 
+struct State::StateIterator {
+  struct BENCHMARK_UNUSED Value {};
+  typedef std::forward_iterator_tag iterator_category;
+  typedef Value value_type;
+  typedef Value reference;
+  typedef Value pointer;
+
+ private:
+  friend class State;
+  BENCHMARK_ALWAYS_INLINE
+  StateIterator() : cached_(0), parent_() {}
+
+  BENCHMARK_ALWAYS_INLINE
+  explicit StateIterator(State* st)
+      : cached_(st->error_occurred_ ? 0 : st->max_iterations), parent_(st) {}
+
+ public:
+  BENCHMARK_ALWAYS_INLINE
+  Value operator*() const { return Value(); }
+
+  BENCHMARK_ALWAYS_INLINE
+  StateIterator& operator++() {
+    assert(cached_ > 0);
+    --cached_;
+    return *this;
+  }
+
+  BENCHMARK_ALWAYS_INLINE
+  bool operator!=(StateIterator const&) const {
+    if (BENCHMARK_BUILTIN_EXPECT(cached_ != 0, true)) return true;
+    parent_->FinishKeepRunning();
+    return false;
+  }
+
+ private:
+  size_t cached_;
+  State* const parent_;
+};
+
+BENCHMARK_ALWAYS_INLINE inline State::StateIterator State::begin() {
+  return StateIterator(this);
+}
+BENCHMARK_ALWAYS_INLINE inline State::StateIterator State::end() {
+  StartKeepRunning();
+  return StateIterator();
+}
+
 namespace internal {
 
 typedef void(Function)(State&);
@@ -698,6 +778,9 @@ class Benchmark {
   // the asymptotic computational complexity will be shown on the output.
   Benchmark* Complexity(BigOFunc* complexity);
 
+  // Add this statistic to be computed over all the values of the benchmark run.
+  Benchmark* ComputeStatistics(std::string name, StatisticsFunc* statistics);
+
   // Support for running multiple copies of the same benchmark concurrently
   // in multiple threads.  This may be useful when measuring the scaling
   // of some piece of code.
@@ -758,6 +841,7 @@ class Benchmark {
   bool use_manual_time_;
   BigO complexity_;
   BigOFunc* complexity_lambda_;
+  std::vector<Statistics> statistics_;
   std::vector<int> thread_counts_;
 
   Benchmark& operator=(Benchmark const&);
@@ -905,7 +989,7 @@ class Fixture : public internal::Benchma
 #define BENCHMARK_RANGE2(n, l1, h1, l2, h2) \
   BENCHMARK(n)->RangePair({{(l1), (h1)}, {(l2), (h2)}})
 
-#if __cplusplus >= 201103L
+#ifdef BENCHMARK_HAS_CXX11
 
 // Register a benchmark which invokes the function specified by `func`
 // with the additional arguments specified by `...`.
@@ -925,7 +1009,7 @@ class Fixture : public internal::Benchma
               #func "/" #test_case_name,                 \
               [](::benchmark::State& st) { func(st, __VA_ARGS__); })))
 
-#endif  // __cplusplus >= 11
+#endif  // BENCHMARK_HAS_CXX11
 
 // This will register a benchmark for a templatized function.  For example:
 //
@@ -946,7 +1030,7 @@ class Fixture : public internal::Benchma
           new ::benchmark::internal::FunctionBenchmark(#n "<" #a "," #b ">", \
                                                        n<a, b>)))
 
-#if __cplusplus >= 201103L
+#ifdef BENCHMARK_HAS_CXX11
 #define BENCHMARK_TEMPLATE(n, ...)                       \
   BENCHMARK_PRIVATE_DECLARE(n) =                         \
       (::benchmark::internal::RegisterBenchmarkInternal( \
@@ -967,10 +1051,63 @@ class Fixture : public internal::Benchma
     virtual void BenchmarkCase(::benchmark::State&);          \
   };
 
+#define BENCHMARK_TEMPLATE1_PRIVATE_DECLARE_F(BaseClass, Method, a) \
+  class BaseClass##_##Method##_Benchmark : public BaseClass<a> {    \
+   public:                                                          \
+    BaseClass##_##Method##_Benchmark() : BaseClass<a>() {           \
+      this->SetName(#BaseClass"<" #a ">/" #Method);                 \
+    }                                                               \
+                                                                    \
+   protected:                                                       \
+    virtual void BenchmarkCase(::benchmark::State&);                \
+  };
+
+#define BENCHMARK_TEMPLATE2_PRIVATE_DECLARE_F(BaseClass, Method, a, b) \
+  class BaseClass##_##Method##_Benchmark : public BaseClass<a, b> {    \
+   public:                                                             \
+    BaseClass##_##Method##_Benchmark() : BaseClass<a, b>() {           \
+      this->SetName(#BaseClass"<" #a "," #b ">/" #Method);             \
+    }                                                                  \
+                                                                       \
+   protected:                                                          \
+    virtual void BenchmarkCase(::benchmark::State&);                   \
+  };
+
+#ifdef BENCHMARK_HAS_CXX11
+#define BENCHMARK_TEMPLATE_PRIVATE_DECLARE_F(BaseClass, Method, ...)       \
+  class BaseClass##_##Method##_Benchmark : public BaseClass<__VA_ARGS__> { \
+   public:                                                                 \
+    BaseClass##_##Method##_Benchmark() : BaseClass<__VA_ARGS__>() {        \
+      this->SetName(#BaseClass"<" #__VA_ARGS__ ">/" #Method);              \
+    }                                                                      \
+                                                                           \
+   protected:                                                              \
+    virtual void BenchmarkCase(::benchmark::State&);                       \
+  };
+#else
+#define BENCHMARK_TEMPLATE_PRIVATE_DECLARE_F(n, a) BENCHMARK_TEMPLATE1_PRIVATE_DECLARE_F(n, a)
+#endif
+
 #define BENCHMARK_DEFINE_F(BaseClass, Method)    \
   BENCHMARK_PRIVATE_DECLARE_F(BaseClass, Method) \
   void BaseClass##_##Method##_Benchmark::BenchmarkCase
 
+#define BENCHMARK_TEMPLATE1_DEFINE_F(BaseClass, Method, a)    \
+  BENCHMARK_TEMPLATE1_PRIVATE_DECLARE_F(BaseClass, Method, a) \
+  void BaseClass##_##Method##_Benchmark::BenchmarkCase
+
+#define BENCHMARK_TEMPLATE2_DEFINE_F(BaseClass, Method, a, b)    \
+  BENCHMARK_TEMPLATE2_PRIVATE_DECLARE_F(BaseClass, Method, a, b) \
+  void BaseClass##_##Method##_Benchmark::BenchmarkCase
+
+#ifdef BENCHMARK_HAS_CXX11
+#define BENCHMARK_TEMPLATE_DEFINE_F(BaseClass, Method, ...)            \
+  BENCHMARK_TEMPLATE_PRIVATE_DECLARE_F(BaseClass, Method, __VA_ARGS__) \
+  void BaseClass##_##Method##_Benchmark::BenchmarkCase
+#else
+#define BENCHMARK_TEMPLATE_DEFINE_F(BaseClass, Method, a) BENCHMARK_TEMPLATE1_DEFINE_F(BaseClass, Method, a)
+#endif
+
 #define BENCHMARK_REGISTER_F(BaseClass, Method) \
   BENCHMARK_PRIVATE_REGISTER_F(BaseClass##_##Method##_Benchmark)
 
@@ -984,6 +1121,25 @@ class Fixture : public internal::Benchma
   BENCHMARK_REGISTER_F(BaseClass, Method);       \
   void BaseClass##_##Method##_Benchmark::BenchmarkCase
 
+#define BENCHMARK_TEMPLATE1_F(BaseClass, Method, a)           \
+  BENCHMARK_TEMPLATE1_PRIVATE_DECLARE_F(BaseClass, Method, a) \
+  BENCHMARK_REGISTER_F(BaseClass, Method);                    \
+  void BaseClass##_##Method##_Benchmark::BenchmarkCase
+
+#define BENCHMARK_TEMPLATE2_F(BaseClass, Method, a, b)           \
+  BENCHMARK_TEMPLATE2_PRIVATE_DECLARE_F(BaseClass, Method, a, b) \
+  BENCHMARK_REGISTER_F(BaseClass, Method);                       \
+  void BaseClass##_##Method##_Benchmark::BenchmarkCase
+
+#ifdef BENCHMARK_HAS_CXX11
+#define BENCHMARK_TEMPLATE_F(BaseClass, Method, ...)           \
+  BENCHMARK_TEMPLATE_PRIVATE_DECLARE_F(BaseClass, Method, __VA_ARGS__) \
+  BENCHMARK_REGISTER_F(BaseClass, Method);                     \
+  void BaseClass##_##Method##_Benchmark::BenchmarkCase
+#else
+#define BENCHMARK_TEMPLATE_F(BaseClass, Method, a) BENCHMARK_TEMPLATE1_F(BaseClass, Method, a)
+#endif
+
 // Helper macro to create a main routine in a test that runs the benchmarks
 #define BENCHMARK_MAIN()                   \
   int main(int argc, char** argv) {        \
@@ -1065,6 +1221,9 @@ class BenchmarkReporter {
     BigOFunc* complexity_lambda;
     int complexity_n;
 
+    // what statistics to compute from the measurements
+    const std::vector<Statistics>* statistics;
+
     // Inform print function whether the current run is a complexity report
     bool report_big_o;
     bool report_rms;
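
For reference, the updated idiom mentioned in the summary reads as follows in a complete translation unit. This is a minimal sketch against the benchmark-1.3.0 header above; BM_CreateVector is a hypothetical name used only for illustration, not part of this commit.

  #include <benchmark/benchmark.h>

  #include <vector>

  // Minimal sketch of the 1.3.0 range-based loop idiom documented above.
  // BM_CreateVector is a hypothetical name, not part of this commit.
  static void BM_CreateVector(benchmark::State& state) {
    for (auto _ : state) {  // replaces while (state.KeepRunning())
      std::vector<int> v(64, 0);
      benchmark::DoNotOptimize(v.data());
    }
  }
  BENCHMARK(BM_CreateVector);

  BENCHMARK_MAIN();

The loop body still executes exactly max_iterations times: StateIterator keeps the countdown in its cached_ member and only touches State once the count reaches zero, as visible in operator!=() above.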

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/include/benchmark/benchmark_api.h (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/include/benchmark/benchmark_api.h)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/include/benchmark/benchmark_api.h?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/include/benchmark/benchmark_api.h&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/include/benchmark/benchmark_api.h&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
    (empty)
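
The header above also gains templated-fixture macros (BENCHMARK_TEMPLATE1_DEFINE_F, BENCHMARK_TEMPLATE2_DEFINE_F, and a variadic BENCHMARK_TEMPLATE_DEFINE_F under BENCHMARK_HAS_CXX11), which pair with the existing BENCHMARK_REGISTER_F. A minimal sketch, assuming the 1.3.0 header; ContainerFixture and PushBack are hypothetical names:

  #include <benchmark/benchmark.h>

  #include <vector>

  // Hypothetical templated fixture, not part of this commit.
  template <typename Container>
  class ContainerFixture : public benchmark::Fixture {
   public:
    Container c;
  };

  // Expands to a ContainerFixture_PushBack_Benchmark class deriving from
  // ContainerFixture<std::vector<int>>, with BenchmarkCase defined inline.
  BENCHMARK_TEMPLATE_DEFINE_F(ContainerFixture, PushBack, std::vector<int>)
  (benchmark::State& state) {
    for (auto _ : state) {
      c.push_back(1);
    }
  }
  BENCHMARK_REGISTER_F(ContainerFixture, PushBack);

  BENCHMARK_MAIN();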

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/include/benchmark/reporter.h (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/include/benchmark/reporter.h)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/include/benchmark/reporter.h?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/include/benchmark/reporter.h&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/include/benchmark/reporter.h&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
    (empty)

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/mingw.py (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/mingw.py)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/mingw.py?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/mingw.py&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/mingw.py&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
    (empty)

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/CMakeLists.txt (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/CMakeLists.txt)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/CMakeLists.txt?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/CMakeLists.txt&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/CMakeLists.txt&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/CMakeLists.txt (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/CMakeLists.txt Thu Nov 16 16:16:06 2017
@@ -54,25 +54,27 @@ write_basic_package_version_file(
 
 configure_file("${PROJECT_SOURCE_DIR}/cmake/Config.cmake.in" "${project_config}" @ONLY)
 
-# Install target (will install the library to specified CMAKE_INSTALL_PREFIX variable)
-install(
-  TARGETS benchmark
-  EXPORT ${targets_export_name}
-  ARCHIVE DESTINATION ${lib_install_dir}
-  LIBRARY DESTINATION ${lib_install_dir}
-  RUNTIME DESTINATION ${bin_install_dir}
-  INCLUDES DESTINATION ${include_install_dir})
-
-install(
-  DIRECTORY "${PROJECT_SOURCE_DIR}/include/benchmark"
-  DESTINATION ${include_install_dir}
-  FILES_MATCHING PATTERN "*.*h")
-
-install(
-    FILES "${project_config}" "${version_config}"
-    DESTINATION "${config_install_dir}")
-
-install(
-    EXPORT "${targets_export_name}"
-    NAMESPACE "${namespace}"
-    DESTINATION "${config_install_dir}")
+if (BENCHMARK_ENABLE_INSTALL)
+  # Install target (will install the library to specified CMAKE_INSTALL_PREFIX variable)
+  install(
+    TARGETS benchmark
+    EXPORT ${targets_export_name}
+    ARCHIVE DESTINATION ${lib_install_dir}
+    LIBRARY DESTINATION ${lib_install_dir}
+    RUNTIME DESTINATION ${bin_install_dir}
+    INCLUDES DESTINATION ${include_install_dir})
+  
+  install(
+    DIRECTORY "${PROJECT_SOURCE_DIR}/include/benchmark"
+    DESTINATION ${include_install_dir}
+    FILES_MATCHING PATTERN "*.*h")
+  
+  install(
+      FILES "${project_config}" "${version_config}"
+      DESTINATION "${config_install_dir}")
+  
+  install(
+      EXPORT "${targets_export_name}"
+      NAMESPACE "${namespace}"
+      DESTINATION "${config_install_dir}")
+endif()

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/arraysize.h (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/arraysize.h)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/arraysize.h?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/arraysize.h&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/arraysize.h&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
    (empty)

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/benchmark.cc (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/benchmark.cc)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/benchmark.cc?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/benchmark.cc&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/benchmark.cc&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/benchmark.cc (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/benchmark.cc Thu Nov 16 16:16:06 2017
@@ -37,11 +37,11 @@
 #include "colorprint.h"
 #include "commandlineflags.h"
 #include "complexity.h"
+#include "statistics.h"
 #include "counter.h"
 #include "log.h"
 #include "mutex.h"
 #include "re.h"
-#include "stat.h"
 #include "string_util.h"
 #include "sysinfo.h"
 #include "timers.h"
@@ -99,20 +99,15 @@ DEFINE_bool(benchmark_counters_tabular,
 DEFINE_int32(v, 0, "The level of verbose logging to output");
 
 namespace benchmark {
-namespace internal {
-
-void UseCharPointer(char const volatile*) {}
-
-}  // end namespace internal
 
 namespace {
-
 static const size_t kMaxIterations = 1000000000;
-
 }  // end namespace
 
 namespace internal {
 
+void UseCharPointer(char const volatile*) {}
+
 class ThreadManager {
  public:
   ThreadManager(int num_threads)
@@ -256,6 +251,7 @@ BenchmarkReporter::Run CreateRunReport(
     report.complexity_n = results.complexity_n;
     report.complexity = b.complexity;
     report.complexity_lambda = b.complexity_lambda;
+    report.statistics = b.statistics;
     report.counters = results.counters;
     internal::Finish(&report.counters, seconds, b.threads);
   }
@@ -401,7 +397,7 @@ State::State(size_t max_iters, const std
              internal::ThreadManager* manager)
     : started_(false),
       finished_(false),
-      total_iterations_(0),
+      total_iterations_(max_iters + 1),
       range_(ranges),
       bytes_processed_(0),
       items_processed_(0),
@@ -414,6 +410,7 @@ State::State(size_t max_iters, const std
       timer_(timer),
       manager_(manager) {
   CHECK(max_iterations != 0) << "At least one iteration must be run";
+  CHECK(total_iterations_ != 0) << "max iterations wrapped around";
   CHECK_LT(thread_index, threads) << "thread_index must be less than threads";
 }
 
@@ -438,7 +435,7 @@ void State::SkipWithError(const char* ms
       manager_->results.has_error_ = true;
     }
   }
-  total_iterations_ = max_iterations;
+  total_iterations_ = 1;
   if (timer_->running()) timer_->StopTimer();
 }
 
@@ -463,8 +460,8 @@ void State::FinishKeepRunning() {
   if (!error_occurred_) {
     PauseTiming();
   }
-  // Total iterations now is one greater than max iterations. Fix this.
-  total_iterations_ = max_iterations;
+  // Total iterations has now wrapped around zero. Fix this.
+  total_iterations_ = 1;
   finished_ = true;
   manager_->StartStopBarrier();
 }
@@ -481,17 +478,21 @@ void RunBenchmarks(const std::vector<Ben
   // Determine the width of the name field using a minimum width of 10.
   bool has_repetitions = FLAGS_benchmark_repetitions > 1;
   size_t name_field_width = 10;
+  size_t stat_field_width = 0;
   for (const Benchmark::Instance& benchmark : benchmarks) {
     name_field_width =
         std::max<size_t>(name_field_width, benchmark.name.size());
     has_repetitions |= benchmark.repetitions > 1;
+
+    for(const auto& Stat : *benchmark.statistics)
+      stat_field_width = std::max<size_t>(stat_field_width, Stat.name_.size());
   }
-  if (has_repetitions) name_field_width += std::strlen("_stddev");
+  if (has_repetitions) name_field_width += 1 + stat_field_width;
 
   // Print header here
   BenchmarkReporter::Context context;
   context.num_cpus = NumCPUs();
-  context.mhz_per_cpu = CyclesPerSecond() / 1000000.0f;
+  context.mhz_per_cpu = CyclesPerSecond() / 1000000.0;
 
   context.cpu_scaling_enabled = CpuScalingEnabled();
   context.name_field_width = name_field_width;
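
The State changes above invert the iteration bookkeeping: total_iterations_ now starts at max_iters + 1 and KeepRunning() decrements it, so the per-iteration check is a plain non-zero test rather than a comparison against max_iterations, and iterations() recovers the completed count as (max_iterations - total_iterations_) + 1. A standalone model of the same scheme; CountdownState is a hypothetical stand-in for benchmark::State, not the library class:

  #include <cassert>
  #include <cstddef>

  // Hypothetical stand-in modelling the countdown bookkeeping above.
  struct CountdownState {
    std::size_t max_iterations;
    std::size_t total_iterations;  // starts at max_iterations + 1

    explicit CountdownState(std::size_t n)
        : max_iterations(n), total_iterations(n + 1) {}

    // Mirrors the new State::KeepRunning(): decrement-and-test, no compare.
    bool KeepRunning() { return --total_iterations != 0; }

    // Mirrors the new State::iterations() while the loop is running.
    std::size_t iterations() const {
      return (max_iterations - total_iterations) + 1;
    }
  };

  int main() {
    CountdownState st(3);
    std::size_t runs = 0;
    while (st.KeepRunning()) ++runs;
    assert(runs == 3);  // the body still runs exactly max_iterations times
    // total_iterations has now "wrapped around" to zero; the real
    // FinishKeepRunning() resets it to 1, as the hunk above shows.
    return 0;
  }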

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/benchmark_api_internal.h (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/benchmark_api_internal.h)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/benchmark_api_internal.h?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/benchmark_api_internal.h&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/benchmark_api_internal.h&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/benchmark_api_internal.h (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/benchmark_api_internal.h Thu Nov 16 16:16:06 2017
@@ -25,6 +25,7 @@ struct Benchmark::Instance {
   BigO complexity;
   BigOFunc* complexity_lambda;
   UserCounters counters;
+  const std::vector<Statistics>* statistics;
   bool last_benchmark_instance;
   int repetitions;
   double min_time;

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/benchmark_register.cc (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/benchmark_register.cc)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/benchmark_register.cc?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/benchmark_register.cc&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/benchmark_register.cc&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/benchmark_register.cc (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/benchmark_register.cc Thu Nov 16 16:16:06 2017
@@ -37,10 +37,10 @@
 #include "check.h"
 #include "commandlineflags.h"
 #include "complexity.h"
+#include "statistics.h"
 #include "log.h"
 #include "mutex.h"
 #include "re.h"
-#include "stat.h"
 #include "string_util.h"
 #include "sysinfo.h"
 #include "timers.h"
@@ -159,6 +159,7 @@ bool BenchmarkFamilies::FindBenchmarks(
         instance.use_manual_time = family->use_manual_time_;
         instance.complexity = family->complexity_;
         instance.complexity_lambda = family->complexity_lambda_;
+        instance.statistics = &family->statistics_;
         instance.threads = num_threads;
 
         // Add arguments to instance name
@@ -236,7 +237,11 @@ Benchmark::Benchmark(const char* name)
       use_real_time_(false),
       use_manual_time_(false),
       complexity_(oNone),
-      complexity_lambda_(nullptr) {}
+      complexity_lambda_(nullptr) {
+  ComputeStatistics("mean", StatisticsMean);
+  ComputeStatistics("median", StatisticsMedian);
+  ComputeStatistics("stddev", StatisticsStdDev);
+}
 
 Benchmark::~Benchmark() {}
 
@@ -409,6 +414,12 @@ Benchmark* Benchmark::Complexity(BigOFun
   return this;
 }
 
+Benchmark* Benchmark::ComputeStatistics(std::string name,
+                                        StatisticsFunc* statistics) {
+  statistics_.emplace_back(name, statistics);
+  return this;
+}
+
 Benchmark* Benchmark::Threads(int t) {
   CHECK_GT(t, 0);
   thread_counts_.push_back(t);
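
Every Benchmark now registers mean, median, and stddev statistics by default (the constructor above), and ComputeStatistics() attaches further reductions over a benchmark's repetitions. A minimal sketch registering a "max" statistic; BM_Spin is a hypothetical name:

  #include <benchmark/benchmark.h>

  #include <algorithm>
  #include <vector>

  // Hypothetical benchmark used only to demonstrate ComputeStatistics().
  static void BM_Spin(benchmark::State& state) {
    for (auto _ : state) {
      double x = 1.0;
      benchmark::DoNotOptimize(x);
    }
  }

  // StatisticsFunc is double(const std::vector<double>&); a captureless
  // lambda converts to that function-pointer type.
  BENCHMARK(BM_Spin)
      ->Repetitions(4)
      ->ComputeStatistics("max", [](const std::vector<double>& v) -> double {
        return *std::max_element(v.begin(), v.end());
      });

  BENCHMARK_MAIN();

With repetitions enabled, the aggregated report then contains a BM_Spin_max row alongside the default BM_Spin_mean, BM_Spin_median, and BM_Spin_stddev rows (see src/statistics.cc below).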

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/check.h (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/check.h)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/check.h?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/check.h&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/check.h&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
    (empty)

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/colorprint.cc (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/colorprint.cc)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/colorprint.cc?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/colorprint.cc&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/colorprint.cc&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
    (empty)

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/colorprint.h (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/colorprint.h)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/colorprint.h?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/colorprint.h&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/colorprint.h&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
    (empty)

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/commandlineflags.cc (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/commandlineflags.cc)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/commandlineflags.cc?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/commandlineflags.cc&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/commandlineflags.cc&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
    (empty)

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/commandlineflags.h (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/commandlineflags.h)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/commandlineflags.h?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/commandlineflags.h&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/commandlineflags.h&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
    (empty)

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/complexity.cc (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/complexity.cc)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/complexity.cc?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/complexity.cc&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/complexity.cc&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/complexity.cc (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/complexity.cc Thu Nov 16 16:16:06 2017
@@ -21,7 +21,6 @@
 #include <cmath>
 #include "check.h"
 #include "complexity.h"
-#include "stat.h"
 
 namespace benchmark {
 
@@ -150,109 +149,6 @@ LeastSq MinimalLeastSq(const std::vector
   return best_fit;
 }
 
-std::vector<BenchmarkReporter::Run> ComputeStats(
-    const std::vector<BenchmarkReporter::Run>& reports) {
-  typedef BenchmarkReporter::Run Run;
-  std::vector<Run> results;
-
-  auto error_count =
-      std::count_if(reports.begin(), reports.end(),
-                    [](Run const& run) { return run.error_occurred; });
-
-  if (reports.size() - error_count < 2) {
-    // We don't report aggregated data if there was a single run.
-    return results;
-  }
-  // Accumulators.
-  Stat1_d real_accumulated_time_stat;
-  Stat1_d cpu_accumulated_time_stat;
-  Stat1_d bytes_per_second_stat;
-  Stat1_d items_per_second_stat;
-  // All repetitions should be run with the same number of iterations so we
-  // can take this information from the first benchmark.
-  int64_t const run_iterations = reports.front().iterations;
-  // create stats for user counters
-  struct CounterStat {
-    Counter c;
-    Stat1_d s;
-  };
-  std::map< std::string, CounterStat > counter_stats;
-  for(Run const& r : reports) {
-    for(auto const& cnt : r.counters) {
-      auto it = counter_stats.find(cnt.first);
-      if(it == counter_stats.end()) {
-        counter_stats.insert({cnt.first, {cnt.second, Stat1_d{}}});
-      } else {
-        CHECK_EQ(counter_stats[cnt.first].c.flags, cnt.second.flags);
-      }
-    }
-  }
-
-  // Populate the accumulators.
-  for (Run const& run : reports) {
-    CHECK_EQ(reports[0].benchmark_name, run.benchmark_name);
-    CHECK_EQ(run_iterations, run.iterations);
-    if (run.error_occurred) continue;
-    real_accumulated_time_stat +=
-        Stat1_d(run.real_accumulated_time / run.iterations);
-    cpu_accumulated_time_stat +=
-        Stat1_d(run.cpu_accumulated_time / run.iterations);
-    items_per_second_stat += Stat1_d(run.items_per_second);
-    bytes_per_second_stat += Stat1_d(run.bytes_per_second);
-    // user counters
-    for(auto const& cnt : run.counters) {
-      auto it = counter_stats.find(cnt.first);
-      CHECK_NE(it, counter_stats.end());
-      it->second.s += Stat1_d(cnt.second);
-    }
-  }
-
-  // Get the data from the accumulator to BenchmarkReporter::Run's.
-  Run mean_data;
-  mean_data.benchmark_name = reports[0].benchmark_name + "_mean";
-  mean_data.iterations = run_iterations;
-  mean_data.real_accumulated_time =
-      real_accumulated_time_stat.Mean() * run_iterations;
-  mean_data.cpu_accumulated_time =
-      cpu_accumulated_time_stat.Mean() * run_iterations;
-  mean_data.bytes_per_second = bytes_per_second_stat.Mean();
-  mean_data.items_per_second = items_per_second_stat.Mean();
-  mean_data.time_unit = reports[0].time_unit;
-  // user counters
-  for(auto const& kv : counter_stats) {
-    auto c = Counter(kv.second.s.Mean(), counter_stats[kv.first].c.flags);
-    mean_data.counters[kv.first] = c;
-  }
-
-  // Only add label to mean/stddev if it is same for all runs
-  mean_data.report_label = reports[0].report_label;
-  for (std::size_t i = 1; i < reports.size(); i++) {
-    if (reports[i].report_label != reports[0].report_label) {
-      mean_data.report_label = "";
-      break;
-    }
-  }
-
-  Run stddev_data;
-  stddev_data.benchmark_name = reports[0].benchmark_name + "_stddev";
-  stddev_data.report_label = mean_data.report_label;
-  stddev_data.iterations = 0;
-  stddev_data.real_accumulated_time = real_accumulated_time_stat.StdDev();
-  stddev_data.cpu_accumulated_time = cpu_accumulated_time_stat.StdDev();
-  stddev_data.bytes_per_second = bytes_per_second_stat.StdDev();
-  stddev_data.items_per_second = items_per_second_stat.StdDev();
-  stddev_data.time_unit = reports[0].time_unit;
-  // user counters
-  for(auto const& kv : counter_stats) {
-    auto c = Counter(kv.second.s.StdDev(), counter_stats[kv.first].c.flags);
-    stddev_data.counters[kv.first] = c;
-  }
-
-  results.push_back(mean_data);
-  results.push_back(stddev_data);
-  return results;
-}
-
 std::vector<BenchmarkReporter::Run> ComputeBigO(
     const std::vector<BenchmarkReporter::Run>& reports) {
   typedef BenchmarkReporter::Run Run;

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/complexity.h (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/complexity.h)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/complexity.h?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/complexity.h&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/complexity.h&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/complexity.h (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/complexity.h Thu Nov 16 16:16:06 2017
@@ -25,12 +25,6 @@
 
 namespace benchmark {
 
-// Return a vector containing the mean and standard devation information for
-// the specified list of reports. If 'reports' contains less than two
-// non-errored runs an empty vector is returned
-std::vector<BenchmarkReporter::Run> ComputeStats(
-    const std::vector<BenchmarkReporter::Run>& reports);
-
 // Return a vector containing the bigO and RMS information for the specified
 // list of reports. If 'reports.size() < 2' an empty vector is returned.
 std::vector<BenchmarkReporter::Run> ComputeBigO(
@@ -57,4 +51,5 @@ struct LeastSq {
 std::string GetBigOString(BigO complexity);
 
 }  // end namespace benchmark
+
 #endif  // COMPLEXITY_H_

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/console_reporter.cc (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/console_reporter.cc)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/console_reporter.cc?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/console_reporter.cc&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/console_reporter.cc&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/console_reporter.cc (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/console_reporter.cc Thu Nov 16 16:16:06 2017
@@ -148,7 +148,7 @@ void ConsoleReporter::PrintRunData(const
   }
 
   for (auto& c : result.counters) {
-    auto const& s = HumanReadableNumber(c.second.value);
+    auto const& s = HumanReadableNumber(c.second.value, 1000);
     if (output_options_ & OO_Tabular) {
       if (c.second.flags & Counter::kIsRate) {
         printer(Out, COLOR_DEFAULT, " %8s/s", s.c_str());

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/counter.cc (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/counter.cc)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/counter.cc?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/counter.cc&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/counter.cc&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
    (empty)

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/counter.h (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/counter.h)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/counter.h?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/counter.h&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/counter.h&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
    (empty)

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/csv_reporter.cc (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/csv_reporter.cc)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/csv_reporter.cc?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/csv_reporter.cc&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/csv_reporter.cc&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
    (empty)

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/cycleclock.h (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/cycleclock.h)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/cycleclock.h?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/cycleclock.h&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/cycleclock.h&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
    (empty)

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/internal_macros.h (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/internal_macros.h)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/internal_macros.h?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/internal_macros.h&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/internal_macros.h&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
    (empty)

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/json_reporter.cc (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/json_reporter.cc)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/json_reporter.cc?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/json_reporter.cc&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/json_reporter.cc&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/json_reporter.cc (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/json_reporter.cc Thu Nov 16 16:16:06 2017
@@ -21,6 +21,8 @@
 #include <string>
 #include <tuple>
 #include <vector>
+#include <iomanip> // for setprecision
+#include <limits>
 
 #include "string_util.h"
 #include "timers.h"
@@ -48,7 +50,14 @@ std::string FormatKV(std::string const&
 }
 
 std::string FormatKV(std::string const& key, double value) {
-  return StringPrintF("\"%s\": %.2f", key.c_str(), value);
+  std::stringstream ss;
+  ss << '"' << key << "\": ";
+
+  const auto max_digits10 = std::numeric_limits<decltype (value)>::max_digits10;
+  const auto max_fractional_digits10 = max_digits10 - 1;
+
+  ss << std::scientific << std::setprecision(max_fractional_digits10) << value;
+  return ss.str();
 }
 
 int64_t RoundDouble(double v) { return static_cast<int64_t>(v + 0.5); }
@@ -125,18 +134,18 @@ void JSONReporter::PrintRunData(Run cons
   if (!run.report_big_o && !run.report_rms) {
     out << indent << FormatKV("iterations", run.iterations) << ",\n";
     out << indent
-        << FormatKV("real_time", RoundDouble(run.GetAdjustedRealTime()))
+        << FormatKV("real_time", run.GetAdjustedRealTime())
         << ",\n";
     out << indent
-        << FormatKV("cpu_time", RoundDouble(run.GetAdjustedCPUTime()));
+        << FormatKV("cpu_time", run.GetAdjustedCPUTime());
     out << ",\n"
         << indent << FormatKV("time_unit", GetTimeUnitString(run.time_unit));
   } else if (run.report_big_o) {
     out << indent
-        << FormatKV("cpu_coefficient", RoundDouble(run.GetAdjustedCPUTime()))
+        << FormatKV("cpu_coefficient", run.GetAdjustedCPUTime())
         << ",\n";
     out << indent
-        << FormatKV("real_coefficient", RoundDouble(run.GetAdjustedRealTime()))
+        << FormatKV("real_coefficient", run.GetAdjustedRealTime())
         << ",\n";
     out << indent << FormatKV("big_o", GetBigOString(run.complexity)) << ",\n";
     out << indent << FormatKV("time_unit", GetTimeUnitString(run.time_unit));
@@ -147,17 +156,17 @@ void JSONReporter::PrintRunData(Run cons
   if (run.bytes_per_second > 0.0) {
     out << ",\n"
         << indent
-        << FormatKV("bytes_per_second", RoundDouble(run.bytes_per_second));
+        << FormatKV("bytes_per_second", run.bytes_per_second);
   }
   if (run.items_per_second > 0.0) {
     out << ",\n"
         << indent
-        << FormatKV("items_per_second", RoundDouble(run.items_per_second));
+        << FormatKV("items_per_second", run.items_per_second);
   }
   for(auto &c : run.counters) {
     out << ",\n"
         << indent
-        << FormatKV(c.first, RoundDouble(c.second));
+        << FormatKV(c.first, c.second);
   }
   if (!run.report_label.empty()) {
     out << ",\n" << indent << FormatKV("label", run.report_label);
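
FormatKV() above stops rounding doubles through RoundDouble() and instead prints them in scientific notation with max_digits10 - 1 fractional digits, i.e. max_digits10 significant digits, which is enough for an exact parse-back. A standalone sketch of the same formatting; format_double is a hypothetical helper name:

  #include <iomanip>
  #include <iostream>
  #include <limits>
  #include <sstream>
  #include <string>

  // Hypothetical helper mirroring the FormatKV() double formatting above.
  std::string format_double(double value) {
    // Scientific notation places one digit before the decimal point, so
    // max_digits10 - 1 fractional digits yield max_digits10 significant
    // digits -- the number needed to round-trip any double.
    const int fractional_digits =
        std::numeric_limits<double>::max_digits10 - 1;
    std::stringstream ss;
    ss << std::scientific << std::setprecision(fractional_digits) << value;
    return ss.str();
  }

  int main() {
    const double t = 123456.789012345;
    const std::string s = format_double(t);
    std::cout << s << '\n';                    // 17 significant digits
    std::cout << (std::stod(s) == t) << '\n';  // 1: the value round-trips
    return 0;
  }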

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/log.h (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/log.h)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/log.h?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/log.h&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/log.h&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
    (empty)

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/mutex.h (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/mutex.h)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/mutex.h?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/mutex.h&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/mutex.h&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
    (empty)

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/re.h (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/re.h)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/re.h?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/re.h&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/re.h&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
    (empty)

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/reporter.cc (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/reporter.cc)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/reporter.cc?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/reporter.cc&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/reporter.cc&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/reporter.cc (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/reporter.cc Thu Nov 16 16:16:06 2017
@@ -22,7 +22,6 @@
 #include <vector>
 
 #include "check.h"
-#include "stat.h"
 
 namespace benchmark {
 

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/sleep.cc (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/sleep.cc)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/sleep.cc?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/sleep.cc&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/sleep.cc&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
    (empty)

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/sleep.h (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/sleep.h)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/sleep.h?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/sleep.h&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/sleep.h&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
    (empty)

Added: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/statistics.cc
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/statistics.cc?rev=318476&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/statistics.cc (added)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/statistics.cc Thu Nov 16 16:16:06 2017
@@ -0,0 +1,175 @@
+// Copyright 2016 Ismael Jimenez Martinez. All rights reserved.
+// Copyright 2017 Roman Lebedev. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+#include "benchmark/benchmark.h"
+
+#include <algorithm>
+#include <cmath>
+#include <string>
+#include <vector>
+#include <numeric>
+#include "check.h"
+#include "statistics.h"
+
+namespace benchmark {
+
+auto StatisticsSum = [](const std::vector<double>& v) {
+  return std::accumulate(v.begin(), v.end(), 0.0);
+};
+
+double StatisticsMean(const std::vector<double>& v) {
+  if (v.size() == 0) return 0.0;
+  return StatisticsSum(v) * (1.0 / v.size());
+}
+
+double StatisticsMedian(const std::vector<double>& v) {
+  if (v.size() < 3) return StatisticsMean(v);
+  std::vector<double> partial;
+  // we need roundDown(count/2)+1 slots
+  partial.resize(1 + (v.size() / 2));
+  std::partial_sort_copy(v.begin(), v.end(), partial.begin(), partial.end());
+  // did we have an odd number of samples?
+  // if yes, then the last element of the partially-sorted vector is the median
+  // if no, then the average of the last two elements is the median
+  if(v.size() % 2 == 1)
+    return partial.back();
+  return (partial[partial.size() - 2] + partial[partial.size() - 1]) / 2.0;
+}
+
+// Return the sum of the squares of this sample set
+auto SumSquares = [](const std::vector<double>& v) {
+  return std::inner_product(v.begin(), v.end(), v.begin(), 0.0);
+};
+
+auto Sqr = [](const double dat) { return dat * dat; };
+auto Sqrt = [](const double dat) {
+  // Avoid NaN due to imprecision in the calculations
+  if (dat < 0.0) return 0.0;
+  return std::sqrt(dat);
+};
+
+double StatisticsStdDev(const std::vector<double>& v) {
+  const auto mean = StatisticsMean(v);
+  if (v.size() == 0) return mean;
+
+  // Sample standard deviation is undefined for n = 1
+  if (v.size() == 1)
+    return 0.0;
+
+  const double avg_squares = SumSquares(v) * (1.0 / v.size());
+  return Sqrt(v.size() / (v.size() - 1.0) * (avg_squares - Sqr(mean)));
+}
+
+std::vector<BenchmarkReporter::Run> ComputeStats(
+    const std::vector<BenchmarkReporter::Run>& reports) {
+  typedef BenchmarkReporter::Run Run;
+  std::vector<Run> results;
+
+  auto error_count =
+      std::count_if(reports.begin(), reports.end(),
+                    [](Run const& run) { return run.error_occurred; });
+
+  if (reports.size() - error_count < 2) {
+    // We don't report aggregated data if there was a single run.
+    return results;
+  }
+
+  // Accumulators.
+  std::vector<double> real_accumulated_time_stat;
+  std::vector<double> cpu_accumulated_time_stat;
+  std::vector<double> bytes_per_second_stat;
+  std::vector<double> items_per_second_stat;
+
+  real_accumulated_time_stat.reserve(reports.size());
+  cpu_accumulated_time_stat.reserve(reports.size());
+  bytes_per_second_stat.reserve(reports.size());
+  items_per_second_stat.reserve(reports.size());
+
+  // All repetitions should be run with the same number of iterations so we
+  // can take this information from the first benchmark.
+  int64_t const run_iterations = reports.front().iterations;
+  // create stats for user counters
+  struct CounterStat {
+    Counter c;
+    std::vector<double> s;
+  };
+  std::map< std::string, CounterStat > counter_stats;
+  for(Run const& r : reports) {
+    for(auto const& cnt : r.counters) {
+      auto it = counter_stats.find(cnt.first);
+      if(it == counter_stats.end()) {
+        counter_stats.insert({cnt.first, {cnt.second, std::vector<double>{}}});
+        it = counter_stats.find(cnt.first);
+        it->second.s.reserve(reports.size());
+      } else {
+        CHECK_EQ(counter_stats[cnt.first].c.flags, cnt.second.flags);
+      }
+    }
+  }
+
+  // Populate the accumulators.
+  for (Run const& run : reports) {
+    CHECK_EQ(reports[0].benchmark_name, run.benchmark_name);
+    CHECK_EQ(run_iterations, run.iterations);
+    if (run.error_occurred) continue;
+    real_accumulated_time_stat.emplace_back(run.real_accumulated_time);
+    cpu_accumulated_time_stat.emplace_back(run.cpu_accumulated_time);
+    items_per_second_stat.emplace_back(run.items_per_second);
+    bytes_per_second_stat.emplace_back(run.bytes_per_second);
+    // user counters
+    for(auto const& cnt : run.counters) {
+      auto it = counter_stats.find(cnt.first);
+      CHECK_NE(it, counter_stats.end());
+      it->second.s.emplace_back(cnt.second);
+    }
+  }
+
+  // Only add the label if it is the same for all runs.
+  std::string report_label = reports[0].report_label;
+  for (std::size_t i = 1; i < reports.size(); i++) {
+    if (reports[i].report_label != report_label) {
+      report_label = "";
+      break;
+    }
+  }
+
+  for(const auto& Stat : *reports[0].statistics) {
+    // Move the accumulated data into a BenchmarkReporter::Run.
+    Run data;
+    data.benchmark_name = reports[0].benchmark_name + "_" + Stat.name_;
+    data.report_label = report_label;
+    data.iterations = run_iterations;
+
+    data.real_accumulated_time = Stat.compute_(real_accumulated_time_stat);
+    data.cpu_accumulated_time = Stat.compute_(cpu_accumulated_time_stat);
+    data.bytes_per_second = Stat.compute_(bytes_per_second_stat);
+    data.items_per_second = Stat.compute_(items_per_second_stat);
+
+    data.time_unit = reports[0].time_unit;
+
+    // user counters
+    for(auto const& kv : counter_stats) {
+      const auto uc_stat = Stat.compute_(kv.second.s);
+      auto c = Counter(uc_stat, counter_stats[kv.first].c.flags);
+      data.counters[kv.first] = c;
+    }
+
+    results.push_back(data);
+  }
+
+  return results;
+}
+
+}  // end namespace benchmark
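
A note on the statistics.cc code above: StatisticsStdDev computes the sample
standard deviation in a single pass via the identity

    s = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}(x_i-\bar{x})^2}
      = \sqrt{\frac{n}{n-1}\left(\frac{1}{n}\sum_{i=1}^{n}x_i^2 - \bar{x}^2\right)}

which is what Sqrt(v.size() / (v.size() - 1.0) * (avg_squares - Sqr(mean)))
evaluates, with avg_squares = SumSquares(v) / n. The Sqrt helper clamps small
negative arguments to zero because avg_squares - Sqr(mean) can dip just below
zero through floating-point rounding.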

Added: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/statistics.h
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/statistics.h?rev=318476&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/statistics.h (added)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/statistics.h Thu Nov 16 16:16:06 2017
@@ -0,0 +1,37 @@
+// Copyright 2016 Ismael Jimenez Martinez. All rights reserved.
+// Copyright 2017 Roman Lebedev. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+#ifndef STATISTICS_H_
+#define STATISTICS_H_
+
+#include <vector>
+
+#include "benchmark/benchmark.h"
+
+namespace benchmark {
+
+// Return a vector containing the mean, median and standard deviation information
+// (and any user-specified info) for the specified list of reports. If 'reports'
+// contains fewer than two non-errored runs, an empty vector is returned.
+std::vector<BenchmarkReporter::Run> ComputeStats(
+    const std::vector<BenchmarkReporter::Run>& reports);
+
+double StatisticsMean(const std::vector<double>& v);
+double StatisticsMedian(const std::vector<double>& v);
+double StatisticsStdDev(const std::vector<double>& v);
+
+}  // end namespace benchmark
+
+#endif  // STATISTICS_H_
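
For readers wondering why StatisticsMedian only partially sorts its input:
the median of n samples lives entirely in the smallest floor(n/2) + 1
elements, so a full sort is unnecessary. A minimal standalone sketch of the
same trick (a hypothetical driver for illustration, not part of this commit):

    #include <algorithm>
    #include <iostream>
    #include <numeric>
    #include <vector>

    double Median(const std::vector<double>& v) {
      // For 0, 1 or 2 samples the mean doubles as the median.
      if (v.size() < 3)
        return v.empty() ? 0.0
                         : std::accumulate(v.begin(), v.end(), 0.0) / v.size();
      // Sort only the smallest floor(n/2) + 1 elements into 'partial'.
      std::vector<double> partial(1 + v.size() / 2);
      std::partial_sort_copy(v.begin(), v.end(), partial.begin(), partial.end());
      if (v.size() % 2 == 1) return partial.back();  // odd n: last slot
      // Even n: average of the two middle order statistics.
      return (partial[partial.size() - 2] + partial.back()) / 2.0;
    }

    int main() {
      std::cout << Median({5.0, 1.0, 4.0, 2.0, 3.0}) << "\n";  // prints 3
      std::cout << Median({4.0, 1.0, 3.0, 2.0}) << "\n";       // prints 2.5
    }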

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/string_util.cc (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/string_util.cc)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/string_util.cc?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/string_util.cc&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/string_util.cc&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/string_util.cc (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/string_util.cc Thu Nov 16 16:16:06 2017
@@ -27,8 +27,6 @@ static_assert(arraysize(kSmallSIUnits) =
 
 static const int64_t kUnitsSize = arraysize(kBigSIUnits);
 
-}  // end anonymous namespace
-
 void ToExponentAndMantissa(double val, double thresh, int precision,
                            double one_k, std::string* mantissa,
                            int64_t* exponent) {
@@ -100,14 +98,16 @@ std::string ExponentToPrefix(int64_t exp
 }
 
 std::string ToBinaryStringFullySpecified(double value, double threshold,
-                                         int precision) {
+                                         int precision, double one_k = 1024.0) {
   std::string mantissa;
   int64_t exponent;
-  ToExponentAndMantissa(value, threshold, precision, 1024.0, &mantissa,
+  ToExponentAndMantissa(value, threshold, precision, one_k, &mantissa,
                         &exponent);
   return mantissa + ExponentToPrefix(exponent, false);
 }
 
+}  // end namespace
+
 void AppendHumanReadable(int n, std::string* str) {
   std::stringstream ss;
   // Round down to the nearest SI prefix.
@@ -115,11 +115,11 @@ void AppendHumanReadable(int n, std::str
   *str += ss.str();
 }
 
-std::string HumanReadableNumber(double n) {
+std::string HumanReadableNumber(double n, double one_k) {
   // 1.1 means that figures up to 1.1k should be shown with the next unit down;
   // this softens edge effects.
   // 1 means that we should show one decimal place of precision.
-  return ToBinaryStringFullySpecified(n, 1.1, 1);
+  return ToBinaryStringFullySpecified(n, 1.1, 1, one_k);
 }
 
 std::string StringPrintFImp(const char* msg, va_list args) {

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/string_util.h (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/string_util.h)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/string_util.h?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/string_util.h&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/string_util.h&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/string_util.h (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/string_util.h Thu Nov 16 16:16:06 2017
@@ -10,7 +10,7 @@ namespace benchmark {
 
 void AppendHumanReadable(int n, std::string* str);
 
-std::string HumanReadableNumber(double n);
+std::string HumanReadableNumber(double n, double one_k = 1024.0);
 
 std::string StringPrintF(const char* format, ...);
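
The new one_k parameter threaded through string_util picks the scaling base
for human-readable numbers: the default 1024.0 keeps the existing binary
behavior, while a caller can pass 1000.0 to get SI-style scaling. A rough
standalone illustration of the effect (simplified bucketing only; the real
ToBinaryStringFullySpecified also applies the 1.1 threshold shown above):

    #include <cstdio>

    // Bucket a value into k/M/G prefixes using a configurable base, the way
    // the new one_k parameter does (1024.0 = binary, 1000.0 = SI).
    void Show(double value, double one_k) {
      static const char* kPrefixes[] = {"", "k", "M", "G"};
      int exp = 0;
      while (value >= one_k && exp < 3) {
        value /= one_k;
        ++exp;
      }
      std::printf("%.1f%s\n", value, kPrefixes[exp]);
    }

    int main() {
      Show(1536.0, 1024.0);     // 1.5k (1536 / 1024 = 1.5)
      Show(1536.0, 1000.0);     // 1.5k (1536 / 1000 ~= 1.54)
      Show(2000000.0, 1024.0);  // 1.9M (2000000 / 1024^2 ~= 1.91)
      Show(2000000.0, 1000.0);  // 2.0M
    }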
 

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/sysinfo.cc (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/sysinfo.cc)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/sysinfo.cc?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/sysinfo.cc&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/sysinfo.cc&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
    (empty)

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/sysinfo.h (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/sysinfo.h)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/sysinfo.h?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/sysinfo.h&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/sysinfo.h&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
    (empty)

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/timers.cc (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/timers.cc)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/timers.cc?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/timers.cc&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/timers.cc&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
    (empty)

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/timers.h (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/timers.h)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/timers.h?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/src/timers.h&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/src/timers.h&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
    (empty)

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/CMakeLists.txt (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/CMakeLists.txt)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/CMakeLists.txt?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/CMakeLists.txt&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/CMakeLists.txt&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/CMakeLists.txt (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/CMakeLists.txt Thu Nov 16 16:16:06 2017
@@ -98,6 +98,9 @@ add_test(multiple_ranges_test multiple_r
 compile_output_test(reporter_output_test)
 add_test(reporter_output_test reporter_output_test --benchmark_min_time=0.01)
 
+compile_output_test(templated_fixture_test)
+add_test(templated_fixture_test templated_fixture_test --benchmark_min_time=0.01)
+
 compile_output_test(user_counters_test)
 add_test(user_counters_test user_counters_test --benchmark_min_time=0.01)
 
@@ -106,13 +109,20 @@ add_test(user_counters_tabular_test user
 
 check_cxx_compiler_flag(-std=c++03 BENCHMARK_HAS_CXX03_FLAG)
 if (BENCHMARK_HAS_CXX03_FLAG)
-  set(CXX03_FLAGS "${CMAKE_CXX_FLAGS}")
-  string(REPLACE "-std=c++11" "-std=c++03" CXX03_FLAGS "${CXX03_FLAGS}")
-  string(REPLACE "-std=c++0x" "-std=c++03" CXX03_FLAGS "${CXX03_FLAGS}")
-
   compile_benchmark_test(cxx03_test)
   set_target_properties(cxx03_test
-      PROPERTIES COMPILE_FLAGS "${CXX03_FLAGS}")
+      PROPERTIES
+      COMPILE_FLAGS "-std=c++03")
+  # libstdc++ provides different definitions within <map> between dialects. When
+  # LTO is enabled and -Werror is specified, GCC diagnoses this ODR violation,
+  # causing the test to fail to compile. To prevent this, we explicitly disable
+  # the warning.
+  check_cxx_compiler_flag(-Wno-odr BENCHMARK_HAS_WNO_ODR)
+  if (BENCHMARK_ENABLE_LTO AND BENCHMARK_HAS_WNO_ODR)
+    set_target_properties(cxx03_test
+        PROPERTIES
+        LINK_FLAGS "-Wno-odr")
+  endif()
   add_test(cxx03 cxx03_test --benchmark_min_time=0.01)
 endif()
 

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/basic_test.cc (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/basic_test.cc)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/basic_test.cc?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/basic_test.cc&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/basic_test.cc&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/basic_test.cc (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/basic_test.cc Thu Nov 16 16:16:06 2017
@@ -4,7 +4,7 @@
 #define BASIC_BENCHMARK_TEST(x) BENCHMARK(x)->Arg(8)->Arg(512)->Arg(8192)
 
 void BM_empty(benchmark::State& state) {
-  while (state.KeepRunning()) {
+  for (auto _ : state) {
     benchmark::DoNotOptimize(state.iterations());
   }
 }
@@ -12,7 +12,7 @@ BENCHMARK(BM_empty);
 BENCHMARK(BM_empty)->ThreadPerCpu();
 
 void BM_spin_empty(benchmark::State& state) {
-  while (state.KeepRunning()) {
+  for (auto _ : state) {
     for (int x = 0; x < state.range(0); ++x) {
       benchmark::DoNotOptimize(x);
     }
@@ -25,7 +25,7 @@ void BM_spin_pause_before(benchmark::Sta
   for (int i = 0; i < state.range(0); ++i) {
     benchmark::DoNotOptimize(i);
   }
-  while (state.KeepRunning()) {
+  for (auto _ : state) {
     for (int i = 0; i < state.range(0); ++i) {
       benchmark::DoNotOptimize(i);
     }
@@ -35,7 +35,7 @@ BASIC_BENCHMARK_TEST(BM_spin_pause_befor
 BASIC_BENCHMARK_TEST(BM_spin_pause_before)->ThreadPerCpu();
 
 void BM_spin_pause_during(benchmark::State& state) {
-  while (state.KeepRunning()) {
+  for (auto _ : state) {
     state.PauseTiming();
     for (int i = 0; i < state.range(0); ++i) {
       benchmark::DoNotOptimize(i);
@@ -50,7 +50,7 @@ BASIC_BENCHMARK_TEST(BM_spin_pause_durin
 BASIC_BENCHMARK_TEST(BM_spin_pause_during)->ThreadPerCpu();
 
 void BM_pause_during(benchmark::State& state) {
-  while (state.KeepRunning()) {
+  for (auto _ : state) {
     state.PauseTiming();
     state.ResumeTiming();
   }
@@ -61,7 +61,7 @@ BENCHMARK(BM_pause_during)->UseRealTime(
 BENCHMARK(BM_pause_during)->UseRealTime()->ThreadPerCpu();
 
 void BM_spin_pause_after(benchmark::State& state) {
-  while (state.KeepRunning()) {
+  for (auto _ : state) {
     for (int i = 0; i < state.range(0); ++i) {
       benchmark::DoNotOptimize(i);
     }
@@ -77,7 +77,7 @@ void BM_spin_pause_before_and_after(benc
   for (int i = 0; i < state.range(0); ++i) {
     benchmark::DoNotOptimize(i);
   }
-  while (state.KeepRunning()) {
+  for (auto _ : state) {
     for (int i = 0; i < state.range(0); ++i) {
       benchmark::DoNotOptimize(i);
     }
@@ -90,10 +90,29 @@ BASIC_BENCHMARK_TEST(BM_spin_pause_befor
 BASIC_BENCHMARK_TEST(BM_spin_pause_before_and_after)->ThreadPerCpu();
 
 void BM_empty_stop_start(benchmark::State& state) {
-  while (state.KeepRunning()) {
+  for (auto _ : state) {
   }
 }
 BENCHMARK(BM_empty_stop_start);
 BENCHMARK(BM_empty_stop_start)->ThreadPerCpu();
 
+
+void BM_KeepRunning(benchmark::State& state) {
+  size_t iter_count = 0;
+  while (state.KeepRunning()) {
+    ++iter_count;
+  }
+  assert(iter_count == state.max_iterations);
+}
+BENCHMARK(BM_KeepRunning);
+
+void BM_RangedFor(benchmark::State& state) {
+  size_t iter_count = 0;
+  for (auto _ : state) {
+    ++iter_count;
+  }
+  assert(iter_count == state.max_iterations);
+}
+BENCHMARK(BM_RangedFor);
+
 BENCHMARK_MAIN()
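
The BM_KeepRunning / BM_RangedFor pair added above pins down the contract the
two loop styles share: each drives the body exactly state.max_iterations
times. For comparison, a minimal user-side benchmark in the new ranged-for
idiom might look like this (hypothetical example, assuming the
benchmark-1.3.0 headers):

    #include "benchmark/benchmark.h"

    #include <string>

    // One pass through the loop body per measured iteration.
    static void BM_StringCopy(benchmark::State& state) {
      std::string src(64, 'x');
      for (auto _ : state) {
        std::string copy(src);
        benchmark::DoNotOptimize(copy);
      }
    }
    BENCHMARK(BM_StringCopy);

    BENCHMARK_MAIN()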

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/benchmark_test.cc (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/benchmark_test.cc)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/benchmark_test.cc?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/benchmark_test.cc&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/benchmark_test.cc&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/benchmark_test.cc (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/benchmark_test.cc Thu Nov 16 16:16:06 2017
@@ -42,7 +42,7 @@ double CalculatePi(int depth) {
 
 std::set<int> ConstructRandomSet(int size) {
   std::set<int> s;
-  for (int i = 0; i < size; ++i) s.insert(i);
+  for (int i = 0; i < size; ++i) s.insert(s.end(), i);
   return s;
 }
 
@@ -53,7 +53,7 @@ std::vector<int>* test_vector = nullptr;
 
 static void BM_Factorial(benchmark::State& state) {
   int fac_42 = 0;
-  while (state.KeepRunning()) fac_42 = Factorial(8);
+  for (auto _ : state) fac_42 = Factorial(8);
   // Prevent compiler optimizations
   std::stringstream ss;
   ss << fac_42;
@@ -64,7 +64,7 @@ BENCHMARK(BM_Factorial)->UseRealTime();
 
 static void BM_CalculatePiRange(benchmark::State& state) {
   double pi = 0.0;
-  while (state.KeepRunning()) pi = CalculatePi(state.range(0));
+  for (auto _ : state) pi = CalculatePi(state.range(0));
   std::stringstream ss;
   ss << pi;
   state.SetLabel(ss.str());
@@ -73,7 +73,7 @@ BENCHMARK_RANGE(BM_CalculatePiRange, 1,
 
 static void BM_CalculatePi(benchmark::State& state) {
   static const int depth = 1024;
-  while (state.KeepRunning()) {
+  for (auto _ : state) {
     benchmark::DoNotOptimize(CalculatePi(depth));
   }
 }
@@ -82,22 +82,26 @@ BENCHMARK(BM_CalculatePi)->ThreadRange(1
 BENCHMARK(BM_CalculatePi)->ThreadPerCpu();
 
 static void BM_SetInsert(benchmark::State& state) {
-  while (state.KeepRunning()) {
+  std::set<int> data;
+  for (auto _ : state) {
     state.PauseTiming();
-    std::set<int> data = ConstructRandomSet(state.range(0));
+    data = ConstructRandomSet(state.range(0));
     state.ResumeTiming();
     for (int j = 0; j < state.range(1); ++j) data.insert(rand());
   }
   state.SetItemsProcessed(state.iterations() * state.range(1));
   state.SetBytesProcessed(state.iterations() * state.range(1) * sizeof(int));
 }
-BENCHMARK(BM_SetInsert)->Ranges({{1 << 10, 8 << 10}, {1, 10}});
+
+// Test many inserts at once to reduce the total iterations needed. Otherwise, the slower,
+// non-timed part of each iteration will make the benchmark take forever.
+BENCHMARK(BM_SetInsert)->Ranges({{1 << 10, 8 << 10}, {128, 512}});
 
 template <typename Container,
           typename ValueType = typename Container::value_type>
 static void BM_Sequential(benchmark::State& state) {
   ValueType v = 42;
-  while (state.KeepRunning()) {
+  for (auto _ : state) {
     Container c;
     for (int i = state.range(0); --i;) c.push_back(v);
   }
@@ -109,14 +113,14 @@ BENCHMARK_TEMPLATE2(BM_Sequential, std::
     ->Range(1 << 0, 1 << 10);
 BENCHMARK_TEMPLATE(BM_Sequential, std::list<int>)->Range(1 << 0, 1 << 10);
 // Test the variadic version of BENCHMARK_TEMPLATE in C++11 and beyond.
-#if __cplusplus >= 201103L
+#ifdef BENCHMARK_HAS_CXX11
 BENCHMARK_TEMPLATE(BM_Sequential, std::vector<int>, int)->Arg(512);
 #endif
 
 static void BM_StringCompare(benchmark::State& state) {
   std::string s1(state.range(0), '-');
   std::string s2(state.range(0), '-');
-  while (state.KeepRunning()) benchmark::DoNotOptimize(s1.compare(s2));
+  for (auto _ : state) benchmark::DoNotOptimize(s1.compare(s2));
 }
 BENCHMARK(BM_StringCompare)->Range(1, 1 << 20);
 
@@ -126,7 +130,7 @@ static void BM_SetupTeardown(benchmark::
     test_vector = new std::vector<int>();
   }
   int i = 0;
-  while (state.KeepRunning()) {
+  for (auto _ : state) {
     std::lock_guard<std::mutex> l(test_vector_mu);
     if (i % 2 == 0)
       test_vector->push_back(i);
@@ -142,7 +146,7 @@ BENCHMARK(BM_SetupTeardown)->ThreadPerCp
 
 static void BM_LongTest(benchmark::State& state) {
   double tracker = 0.0;
-  while (state.KeepRunning()) {
+  for (auto _ : state) {
     for (int i = 0; i < state.range(0); ++i)
       benchmark::DoNotOptimize(tracker += i);
   }
@@ -159,7 +163,7 @@ static void BM_ParallelMemset(benchmark:
     test_vector = new std::vector<int>(size);
   }
 
-  while (state.KeepRunning()) {
+  for (auto _ : state) {
     for (int i = from; i < to; i++) {
       // No need to lock test_vector_mu as ranges
       // do not overlap between threads.
@@ -179,7 +183,7 @@ static void BM_ManualTiming(benchmark::S
   std::chrono::duration<double, std::micro> sleep_duration{
       static_cast<double>(microseconds)};
 
-  while (state.KeepRunning()) {
+  for (auto _ : state) {
     auto start = std::chrono::high_resolution_clock::now();
     // Simulate some useful workload with a sleep
     std::this_thread::sleep_for(
@@ -197,11 +201,11 @@ static void BM_ManualTiming(benchmark::S
 BENCHMARK(BM_ManualTiming)->Range(1, 1 << 14)->UseRealTime();
 BENCHMARK(BM_ManualTiming)->Range(1, 1 << 14)->UseManualTime();
 
-#if __cplusplus >= 201103L
+#ifdef BENCHMARK_HAS_CXX11
 
 template <class... Args>
 void BM_with_args(benchmark::State& state, Args&&...) {
-  while (state.KeepRunning()) {
+  for (auto _ : state) {
   }
 }
 BENCHMARK_CAPTURE(BM_with_args, int_test, 42, 43, 44);
@@ -213,7 +217,7 @@ void BM_non_template_args(benchmark::Sta
 }
 BENCHMARK_CAPTURE(BM_non_template_args, basic_test, 0, 0);
 
-#endif  // __cplusplus >= 201103L
+#endif  // BENCHMARK_HAS_CXX11
 
 static void BM_DenseThreadRanges(benchmark::State& st) {
   switch (st.range(0)) {

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/complexity_test.cc (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/complexity_test.cc)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/complexity_test.cc?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/complexity_test.cc&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/complexity_test.cc&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/complexity_test.cc (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/complexity_test.cc Thu Nov 16 16:16:06 2017
@@ -25,8 +25,8 @@ int AddComplexityTest(std::string big_o_
        {"^%bigo_name", MR_Not},  // Assert we we didn't only matched a name.
        {"^%rms_name %rms %rms[ ]*$", MR_Next}});
   AddCases(TC_JSONOut, {{"\"name\": \"%bigo_name\",$"},
-                        {"\"cpu_coefficient\": [0-9]+,$", MR_Next},
-                        {"\"real_coefficient\": [0-9]{1,5},$", MR_Next},
+                        {"\"cpu_coefficient\": %float,$", MR_Next},
+                        {"\"real_coefficient\": %float,$", MR_Next},
                         {"\"big_o\": \"%bigo\",$", MR_Next},
                         {"\"time_unit\": \"ns\"$", MR_Next},
                         {"}", MR_Next},
@@ -46,7 +46,7 @@ int AddComplexityTest(std::string big_o_
 // ========================================================================= //
 
 void BM_Complexity_O1(benchmark::State& state) {
-  while (state.KeepRunning()) {
+  for (auto _ : state) {
     for (int i = 0; i < 1024; ++i) {
       benchmark::DoNotOptimize(&i);
     }
@@ -94,7 +94,7 @@ void BM_Complexity_O_N(benchmark::State&
   auto v = ConstructRandomVector(state.range(0));
   const int item_not_in_vector =
       state.range(0) * 2;  // Test worst case scenario (item not in vector)
-  while (state.KeepRunning()) {
+  for (auto _ : state) {
     benchmark::DoNotOptimize(std::find(v.begin(), v.end(), item_not_in_vector));
   }
   state.SetComplexityN(state.range(0));
@@ -129,7 +129,7 @@ ADD_COMPLEXITY_CASES(big_o_n_test_name,
 
 static void BM_Complexity_O_N_log_N(benchmark::State& state) {
   auto v = ConstructRandomVector(state.range(0));
-  while (state.KeepRunning()) {
+  for (auto _ : state) {
     std::sort(v.begin(), v.end());
   }
   state.SetComplexityN(state.range(0));

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/cxx03_test.cc (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/cxx03_test.cc)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/cxx03_test.cc?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/cxx03_test.cc&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/cxx03_test.cc&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/cxx03_test.cc (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/cxx03_test.cc Thu Nov 16 16:16:06 2017
@@ -8,6 +8,10 @@
 #error C++11 or greater detected. Should be C++03.
 #endif
 
+#ifdef BENCHMARK_HAS_CXX11
+#error C++11 or greater detected by the library. BENCHMARK_HAS_CXX11 is defined.
+#endif
+
 void BM_empty(benchmark::State& state) {
   while (state.KeepRunning()) {
     volatile std::size_t x = state.iterations();
@@ -39,6 +43,17 @@ void BM_template1(benchmark::State& stat
 BENCHMARK_TEMPLATE(BM_template1, long);
 BENCHMARK_TEMPLATE1(BM_template1, int);
 
+template <class T>
+struct BM_Fixture : public ::benchmark::Fixture {
+};
+
+BENCHMARK_TEMPLATE_F(BM_Fixture, BM_template1, long)(benchmark::State& state) {
+  BM_empty(state);
+}
+BENCHMARK_TEMPLATE1_F(BM_Fixture, BM_template2, int)(benchmark::State& state) {
+  BM_empty(state);
+}
+
 void BM_counters(benchmark::State& state) {
     BM_empty(state);
     state.counters["Foo"] = 2;

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/diagnostics_test.cc (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/diagnostics_test.cc)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/diagnostics_test.cc?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/diagnostics_test.cc&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/diagnostics_test.cc&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/diagnostics_test.cc (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/diagnostics_test.cc Thu Nov 16 16:16:06 2017
@@ -47,7 +47,7 @@ void BM_diagnostic_test(benchmark::State
 
   if (called_once == false) try_invalid_pause_resume(state);
 
-  while (state.KeepRunning()) {
+  for (auto _ : state) {
     benchmark::DoNotOptimize(state.iterations());
   }
 
@@ -57,6 +57,22 @@ void BM_diagnostic_test(benchmark::State
 }
 BENCHMARK(BM_diagnostic_test);
 
+
+void BM_diagnostic_test_keep_running(benchmark::State& state) {
+  static bool called_once = false;
+
+  if (called_once == false) try_invalid_pause_resume(state);
+
+  while(state.KeepRunning()) {
+    benchmark::DoNotOptimize(state.iterations());
+  }
+
+  if (called_once == false) try_invalid_pause_resume(state);
+
+  called_once = true;
+}
+BENCHMARK(BM_diagnostic_test_keep_running);
+
 int main(int argc, char* argv[]) {
   benchmark::internal::GetAbortHandler() = &TestHandler;
   benchmark::Initialize(&argc, argv);

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/donotoptimize_test.cc (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/donotoptimize_test.cc)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/donotoptimize_test.cc?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/donotoptimize_test.cc&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/donotoptimize_test.cc&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
    (empty)

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/filter_test.cc (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/filter_test.cc)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/filter_test.cc?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/filter_test.cc&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/filter_test.cc&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/filter_test.cc (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/filter_test.cc Thu Nov 16 16:16:06 2017
@@ -36,31 +36,31 @@ class TestReporter : public benchmark::C
 }  // end namespace
 
 static void NoPrefix(benchmark::State& state) {
-  while (state.KeepRunning()) {
+  for (auto _ : state) {
   }
 }
 BENCHMARK(NoPrefix);
 
 static void BM_Foo(benchmark::State& state) {
-  while (state.KeepRunning()) {
+  for (auto _ : state) {
   }
 }
 BENCHMARK(BM_Foo);
 
 static void BM_Bar(benchmark::State& state) {
-  while (state.KeepRunning()) {
+  for (auto _ : state) {
   }
 }
 BENCHMARK(BM_Bar);
 
 static void BM_FooBar(benchmark::State& state) {
-  while (state.KeepRunning()) {
+  for (auto _ : state) {
   }
 }
 BENCHMARK(BM_FooBar);
 
 static void BM_FooBa(benchmark::State& state) {
-  while (state.KeepRunning()) {
+  for (auto _ : state) {
   }
 }
 BENCHMARK(BM_FooBa);

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/fixture_test.cc (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/fixture_test.cc)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/fixture_test.cc?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/fixture_test.cc&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/fixture_test.cc&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/fixture_test.cc (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/fixture_test.cc Thu Nov 16 16:16:06 2017
@@ -28,7 +28,7 @@ class MyFixture : public ::benchmark::Fi
 BENCHMARK_F(MyFixture, Foo)(benchmark::State &st) {
   assert(data.get() != nullptr);
   assert(*data == 42);
-  while (st.KeepRunning()) {
+  for (auto _ : st) {
   }
 }
 
@@ -37,7 +37,7 @@ BENCHMARK_DEFINE_F(MyFixture, Bar)(bench
     assert(data.get() != nullptr);
     assert(*data == 42);
   }
-  while (st.KeepRunning()) {
+  for (auto _ : st) {
     assert(data.get() != nullptr);
     assert(*data == 42);
   }

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/map_test.cc (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/map_test.cc)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/map_test.cc?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/map_test.cc&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/map_test.cc&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/map_test.cc (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/map_test.cc Thu Nov 16 16:16:06 2017
@@ -18,9 +18,10 @@ std::map<int, int> ConstructRandomMap(in
 // Basic version.
 static void BM_MapLookup(benchmark::State& state) {
   const int size = state.range(0);
-  while (state.KeepRunning()) {
+  std::map<int, int> m;
+  for (auto _ : state) {
     state.PauseTiming();
-    std::map<int, int> m = ConstructRandomMap(size);
+    m = ConstructRandomMap(size);
     state.ResumeTiming();
     for (int i = 0; i < size; ++i) {
       benchmark::DoNotOptimize(m.find(rand() % size));
@@ -44,7 +45,7 @@ class MapFixture : public ::benchmark::F
 
 BENCHMARK_DEFINE_F(MapFixture, Lookup)(benchmark::State& state) {
   const int size = state.range(0);
-  while (state.KeepRunning()) {
+  for (auto _ : state) {
     for (int i = 0; i < size; ++i) {
       benchmark::DoNotOptimize(m.find(rand() % size));
     }

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/multiple_ranges_test.cc (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/multiple_ranges_test.cc)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/multiple_ranges_test.cc?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/multiple_ranges_test.cc&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/multiple_ranges_test.cc&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/multiple_ranges_test.cc (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/multiple_ranges_test.cc Thu Nov 16 16:16:06 2017
@@ -43,7 +43,7 @@ class MultipleRangesFixture : public ::b
 };
 
 BENCHMARK_DEFINE_F(MultipleRangesFixture, Empty)(benchmark::State& state) {
-  while (state.KeepRunning()) {
+  for (auto _ : state) {
     int product = state.range(0) * state.range(1) * state.range(2);
     for (int x = 0; x < product; x++) {
       benchmark::DoNotOptimize(x);
@@ -60,13 +60,13 @@ void BM_CheckDefaultArgument(benchmark::
   // Test that the 'range()' without an argument is the same as 'range(0)'.
   assert(state.range() == state.range(0));
   assert(state.range() != state.range(1));
-  while (state.KeepRunning()) {
+  for (auto _ : state) {
   }
 }
 BENCHMARK(BM_CheckDefaultArgument)->Ranges({{1, 5}, {6, 10}});
 
 static void BM_MultipleRanges(benchmark::State& st) {
-  while (st.KeepRunning()) {
+  for (auto _ : st) {
   }
 }
 BENCHMARK(BM_MultipleRanges)->Ranges({{5, 5}, {6, 6}});

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/options_test.cc (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/options_test.cc)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/options_test.cc?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/options_test.cc&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/options_test.cc&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/options_test.cc (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/options_test.cc Thu Nov 16 16:16:06 2017
@@ -8,13 +8,13 @@
 #include <cassert>
 
 void BM_basic(benchmark::State& state) {
-  while (state.KeepRunning()) {
+  for (auto _ : state) {
   }
 }
 
 void BM_basic_slow(benchmark::State& state) {
   std::chrono::milliseconds sleep_duration(state.range(0));
-  while (state.KeepRunning()) {
+  for (auto _ : state) {
     std::this_thread::sleep_for(
         std::chrono::duration_cast<std::chrono::nanoseconds>(sleep_duration));
   }
@@ -44,7 +44,7 @@ void CustomArgs(benchmark::internal::Ben
 
 BENCHMARK(BM_basic)->Apply(CustomArgs);
 
-void BM_explicit_iteration_count(benchmark::State& st) {
+void BM_explicit_iteration_count(benchmark::State& state) {
   // Test that benchmarks specified with an explicit iteration count are
   // only run once.
   static bool invoked_before = false;
@@ -52,12 +52,12 @@ void BM_explicit_iteration_count(benchma
   invoked_before = true;
 
   // Test that the requested iteration count is respected.
-  assert(st.max_iterations == 42);
+  assert(state.max_iterations == 42);
   size_t actual_iterations = 0;
-  while (st.KeepRunning())
+  for (auto _ : state)
     ++actual_iterations;
-  assert(st.iterations() == st.max_iterations);
-  assert(st.iterations() == 42);
+  assert(state.iterations() == state.max_iterations);
+  assert(state.iterations() == 42);
 
 }
 BENCHMARK(BM_explicit_iteration_count)->Iterations(42);

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/output_test.h (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/output_test.h)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/output_test.h?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/output_test.h&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/output_test.h&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
    (empty)

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/output_test_helper.cc (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/output_test_helper.cc)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/output_test_helper.cc?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/output_test_helper.cc&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/output_test_helper.cc&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
    (empty)

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/register_benchmark_test.cc (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/register_benchmark_test.cc)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/register_benchmark_test.cc?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/register_benchmark_test.cc&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/register_benchmark_test.cc&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/register_benchmark_test.cc (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/register_benchmark_test.cc Thu Nov 16 16:16:06 2017
@@ -61,7 +61,7 @@ typedef benchmark::internal::Benchmark*
 // Test RegisterBenchmark with no additional arguments
 //----------------------------------------------------------------------------//
 void BM_function(benchmark::State& state) {
-  while (state.KeepRunning()) {
+  for (auto _ : state) {
   }
 }
 BENCHMARK(BM_function);
@@ -77,7 +77,7 @@ ADD_CASES({"BM_function"}, {"BM_function
 #ifndef BENCHMARK_HAS_NO_VARIADIC_REGISTER_BENCHMARK
 
 void BM_extra_args(benchmark::State& st, const char* label) {
-  while (st.KeepRunning()) {
+  for (auto _ : st) {
   }
   st.SetLabel(label);
 }
@@ -99,7 +99,7 @@ ADD_CASES({"test1", "One"}, {"test2", "T
 
 struct CustomFixture {
   void operator()(benchmark::State& st) {
-    while (st.KeepRunning()) {
+    for (auto _ : st) {
     }
   }
 };
@@ -116,7 +116,7 @@ void TestRegistrationAtRuntime() {
   {
     const char* x = "42";
     auto capturing_lam = [=](benchmark::State& st) {
-      while (st.KeepRunning()) {
+      for (auto _ : st) {
       }
       st.SetLabel(x);
     };

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/reporter_output_test.cc (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/reporter_output_test.cc)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/reporter_output_test.cc?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/reporter_output_test.cc&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/reporter_output_test.cc&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/reporter_output_test.cc (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/reporter_output_test.cc Thu Nov 16 16:16:06 2017
@@ -20,7 +20,7 @@ ADD_CASES(TC_CSVOut, {{"%csv_header"}});
 // ========================================================================= //
 
 void BM_basic(benchmark::State& state) {
-  while (state.KeepRunning()) {
+  for (auto _ : state) {
   }
 }
 BENCHMARK(BM_basic);
@@ -28,8 +28,8 @@ BENCHMARK(BM_basic);
 ADD_CASES(TC_ConsoleOut, {{"^BM_basic %console_report$"}});
 ADD_CASES(TC_JSONOut, {{"\"name\": \"BM_basic\",$"},
                        {"\"iterations\": %int,$", MR_Next},
-                       {"\"real_time\": %int,$", MR_Next},
-                       {"\"cpu_time\": %int,$", MR_Next},
+                       {"\"real_time\": %float,$", MR_Next},
+                       {"\"cpu_time\": %float,$", MR_Next},
                        {"\"time_unit\": \"ns\"$", MR_Next},
                        {"}", MR_Next}});
 ADD_CASES(TC_CSVOut, {{"^\"BM_basic\",%csv_report$"}});
@@ -39,20 +39,20 @@ ADD_CASES(TC_CSVOut, {{"^\"BM_basic\",%c
 // ========================================================================= //
 
 void BM_bytes_per_second(benchmark::State& state) {
-  while (state.KeepRunning()) {
+  for (auto _ : state) {
   }
   state.SetBytesProcessed(1);
 }
 BENCHMARK(BM_bytes_per_second);
 
 ADD_CASES(TC_ConsoleOut,
-          {{"^BM_bytes_per_second %console_report +%floatB/s$"}});
+          {{"^BM_bytes_per_second %console_report +%float[kM]{0,1}B/s$"}});
 ADD_CASES(TC_JSONOut, {{"\"name\": \"BM_bytes_per_second\",$"},
                        {"\"iterations\": %int,$", MR_Next},
-                       {"\"real_time\": %int,$", MR_Next},
-                       {"\"cpu_time\": %int,$", MR_Next},
+                       {"\"real_time\": %float,$", MR_Next},
+                       {"\"cpu_time\": %float,$", MR_Next},
                        {"\"time_unit\": \"ns\",$", MR_Next},
-                       {"\"bytes_per_second\": %int$", MR_Next},
+                       {"\"bytes_per_second\": %float$", MR_Next},
                        {"}", MR_Next}});
 ADD_CASES(TC_CSVOut, {{"^\"BM_bytes_per_second\",%csv_bytes_report$"}});
 
@@ -61,20 +61,20 @@ ADD_CASES(TC_CSVOut, {{"^\"BM_bytes_per_
 // ========================================================================= //
 
 void BM_items_per_second(benchmark::State& state) {
-  while (state.KeepRunning()) {
+  for (auto _ : state) {
   }
   state.SetItemsProcessed(1);
 }
 BENCHMARK(BM_items_per_second);
 
 ADD_CASES(TC_ConsoleOut,
-          {{"^BM_items_per_second %console_report +%float items/s$"}});
+          {{"^BM_items_per_second %console_report +%float[kM]{0,1} items/s$"}});
 ADD_CASES(TC_JSONOut, {{"\"name\": \"BM_items_per_second\",$"},
                        {"\"iterations\": %int,$", MR_Next},
-                       {"\"real_time\": %int,$", MR_Next},
-                       {"\"cpu_time\": %int,$", MR_Next},
+                       {"\"real_time\": %float,$", MR_Next},
+                       {"\"cpu_time\": %float,$", MR_Next},
                        {"\"time_unit\": \"ns\",$", MR_Next},
-                       {"\"items_per_second\": %int$", MR_Next},
+                       {"\"items_per_second\": %float$", MR_Next},
                        {"}", MR_Next}});
 ADD_CASES(TC_CSVOut, {{"^\"BM_items_per_second\",%csv_items_report$"}});
 
@@ -83,7 +83,7 @@ ADD_CASES(TC_CSVOut, {{"^\"BM_items_per_
 // ========================================================================= //
 
 void BM_label(benchmark::State& state) {
-  while (state.KeepRunning()) {
+  for (auto _ : state) {
   }
   state.SetLabel("some label");
 }
@@ -92,8 +92,8 @@ BENCHMARK(BM_label);
 ADD_CASES(TC_ConsoleOut, {{"^BM_label %console_report some label$"}});
 ADD_CASES(TC_JSONOut, {{"\"name\": \"BM_label\",$"},
                        {"\"iterations\": %int,$", MR_Next},
-                       {"\"real_time\": %int,$", MR_Next},
-                       {"\"cpu_time\": %int,$", MR_Next},
+                       {"\"real_time\": %float,$", MR_Next},
+                       {"\"cpu_time\": %float,$", MR_Next},
                        {"\"time_unit\": \"ns\",$", MR_Next},
                        {"\"label\": \"some label\"$", MR_Next},
                        {"}", MR_Next}});
@@ -106,7 +106,7 @@ ADD_CASES(TC_CSVOut, {{"^\"BM_label\",%c
 
 void BM_error(benchmark::State& state) {
   state.SkipWithError("message");
-  while (state.KeepRunning()) {
+  for (auto _ : state) {
   }
 }
 BENCHMARK(BM_error);
@@ -123,7 +123,7 @@ ADD_CASES(TC_CSVOut, {{"^\"BM_error\",,,
 // ========================================================================= //
 
 void BM_no_arg_name(benchmark::State& state) {
-  while (state.KeepRunning()) {
+  for (auto _ : state) {
   }
 }
 BENCHMARK(BM_no_arg_name)->Arg(3);
@@ -136,7 +136,7 @@ ADD_CASES(TC_CSVOut, {{"^\"BM_no_arg_nam
 // ========================================================================= //
 
 void BM_arg_name(benchmark::State& state) {
-  while (state.KeepRunning()) {
+  for (auto _ : state) {
   }
 }
 BENCHMARK(BM_arg_name)->ArgName("first")->Arg(3);
@@ -149,7 +149,7 @@ ADD_CASES(TC_CSVOut, {{"^\"BM_arg_name/f
 // ========================================================================= //
 
 void BM_arg_names(benchmark::State& state) {
-  while (state.KeepRunning()) {
+  for (auto _ : state) {
   }
 }
 BENCHMARK(BM_arg_names)->Args({2, 5, 4})->ArgNames({"first", "", "third"});
@@ -163,7 +163,7 @@ ADD_CASES(TC_CSVOut, {{"^\"BM_arg_names/
 // ========================================================================= //
 
 void BM_Complexity_O1(benchmark::State& state) {
-  while (state.KeepRunning()) {
+  for (auto _ : state) {
   }
   state.SetComplexityN(state.range(0));
 }
@@ -179,30 +179,74 @@ ADD_CASES(TC_ConsoleOut, {{"^BM_Complexi
 
 // Test that non-aggregate data is printed by default
 void BM_Repeat(benchmark::State& state) {
-  while (state.KeepRunning()) {
+  for (auto _ : state) {
   }
 }
+// We need at least two repetitions to get any aggregate output.
+BENCHMARK(BM_Repeat)->Repetitions(2);
+ADD_CASES(TC_ConsoleOut, {{"^BM_Repeat/repeats:2 %console_report$"},
+                          {"^BM_Repeat/repeats:2 %console_report$"},
+                          {"^BM_Repeat/repeats:2_mean %console_report$"},
+                          {"^BM_Repeat/repeats:2_median %console_report$"},
+                          {"^BM_Repeat/repeats:2_stddev %console_report$"}});
+ADD_CASES(TC_JSONOut, {{"\"name\": \"BM_Repeat/repeats:2\",$"},
+                       {"\"name\": \"BM_Repeat/repeats:2\",$"},
+                       {"\"name\": \"BM_Repeat/repeats:2_mean\",$"},
+                       {"\"name\": \"BM_Repeat/repeats:2_median\",$"},
+                       {"\"name\": \"BM_Repeat/repeats:2_stddev\",$"}});
+ADD_CASES(TC_CSVOut, {{"^\"BM_Repeat/repeats:2\",%csv_report$"},
+                      {"^\"BM_Repeat/repeats:2\",%csv_report$"},
+                      {"^\"BM_Repeat/repeats:2_mean\",%csv_report$"},
+                      {"^\"BM_Repeat/repeats:2_median\",%csv_report$"},
+                      {"^\"BM_Repeat/repeats:2_stddev\",%csv_report$"}});
+// But for two repetitions the mean and median are the same, so repeat with three.
 BENCHMARK(BM_Repeat)->Repetitions(3);
 ADD_CASES(TC_ConsoleOut, {{"^BM_Repeat/repeats:3 %console_report$"},
                           {"^BM_Repeat/repeats:3 %console_report$"},
                           {"^BM_Repeat/repeats:3 %console_report$"},
                           {"^BM_Repeat/repeats:3_mean %console_report$"},
+                          {"^BM_Repeat/repeats:3_median %console_report$"},
                           {"^BM_Repeat/repeats:3_stddev %console_report$"}});
 ADD_CASES(TC_JSONOut, {{"\"name\": \"BM_Repeat/repeats:3\",$"},
                        {"\"name\": \"BM_Repeat/repeats:3\",$"},
                        {"\"name\": \"BM_Repeat/repeats:3\",$"},
                        {"\"name\": \"BM_Repeat/repeats:3_mean\",$"},
+                       {"\"name\": \"BM_Repeat/repeats:3_median\",$"},
                        {"\"name\": \"BM_Repeat/repeats:3_stddev\",$"}});
 ADD_CASES(TC_CSVOut, {{"^\"BM_Repeat/repeats:3\",%csv_report$"},
                       {"^\"BM_Repeat/repeats:3\",%csv_report$"},
                       {"^\"BM_Repeat/repeats:3\",%csv_report$"},
                       {"^\"BM_Repeat/repeats:3_mean\",%csv_report$"},
+                      {"^\"BM_Repeat/repeats:3_median\",%csv_report$"},
                       {"^\"BM_Repeat/repeats:3_stddev\",%csv_report$"}});
+// The median differs between even and odd repetition counts, so cover both.
+BENCHMARK(BM_Repeat)->Repetitions(4);
+ADD_CASES(TC_ConsoleOut, {{"^BM_Repeat/repeats:4 %console_report$"},
+                          {"^BM_Repeat/repeats:4 %console_report$"},
+                          {"^BM_Repeat/repeats:4 %console_report$"},
+                          {"^BM_Repeat/repeats:4 %console_report$"},
+                          {"^BM_Repeat/repeats:4_mean %console_report$"},
+                          {"^BM_Repeat/repeats:4_median %console_report$"},
+                          {"^BM_Repeat/repeats:4_stddev %console_report$"}});
+ADD_CASES(TC_JSONOut, {{"\"name\": \"BM_Repeat/repeats:4\",$"},
+                       {"\"name\": \"BM_Repeat/repeats:4\",$"},
+                       {"\"name\": \"BM_Repeat/repeats:4\",$"},
+                       {"\"name\": \"BM_Repeat/repeats:4\",$"},
+                       {"\"name\": \"BM_Repeat/repeats:4_mean\",$"},
+                       {"\"name\": \"BM_Repeat/repeats:4_median\",$"},
+                       {"\"name\": \"BM_Repeat/repeats:4_stddev\",$"}});
+ADD_CASES(TC_CSVOut, {{"^\"BM_Repeat/repeats:4\",%csv_report$"},
+                      {"^\"BM_Repeat/repeats:4\",%csv_report$"},
+                      {"^\"BM_Repeat/repeats:4\",%csv_report$"},
+                      {"^\"BM_Repeat/repeats:4\",%csv_report$"},
+                      {"^\"BM_Repeat/repeats:4_mean\",%csv_report$"},
+                      {"^\"BM_Repeat/repeats:4_median\",%csv_report$"},
+                      {"^\"BM_Repeat/repeats:4_stddev\",%csv_report$"}});
 
 // Test that a non-repeated test still prints non-aggregate results even when
 // only-aggregate reports have been requested
 void BM_RepeatOnce(benchmark::State& state) {
-  while (state.KeepRunning()) {
+  for (auto _ : state) {
   }
 }
 BENCHMARK(BM_RepeatOnce)->Repetitions(1)->ReportAggregatesOnly();
@@ -212,23 +256,26 @@ ADD_CASES(TC_CSVOut, {{"^\"BM_RepeatOnce
 
 // Test that non-aggregate data is not reported
 void BM_SummaryRepeat(benchmark::State& state) {
-  while (state.KeepRunning()) {
+  for (auto _ : state) {
   }
 }
 BENCHMARK(BM_SummaryRepeat)->Repetitions(3)->ReportAggregatesOnly();
 ADD_CASES(TC_ConsoleOut,
           {{".*BM_SummaryRepeat/repeats:3 ", MR_Not},
            {"^BM_SummaryRepeat/repeats:3_mean %console_report$"},
+           {"^BM_SummaryRepeat/repeats:3_median %console_report$"},
            {"^BM_SummaryRepeat/repeats:3_stddev %console_report$"}});
 ADD_CASES(TC_JSONOut, {{".*BM_SummaryRepeat/repeats:3 ", MR_Not},
                        {"\"name\": \"BM_SummaryRepeat/repeats:3_mean\",$"},
+                       {"\"name\": \"BM_SummaryRepeat/repeats:3_median\",$"},
                        {"\"name\": \"BM_SummaryRepeat/repeats:3_stddev\",$"}});
 ADD_CASES(TC_CSVOut, {{".*BM_SummaryRepeat/repeats:3 ", MR_Not},
                       {"^\"BM_SummaryRepeat/repeats:3_mean\",%csv_report$"},
+                      {"^\"BM_SummaryRepeat/repeats:3_median\",%csv_report$"},
                       {"^\"BM_SummaryRepeat/repeats:3_stddev\",%csv_report$"}});
 
 void BM_RepeatTimeUnit(benchmark::State& state) {
-  while (state.KeepRunning()) {
+  for (auto _ : state) {
   }
 }
 BENCHMARK(BM_RepeatTimeUnit)
@@ -238,18 +285,60 @@ BENCHMARK(BM_RepeatTimeUnit)
 ADD_CASES(TC_ConsoleOut,
           {{".*BM_RepeatTimeUnit/repeats:3 ", MR_Not},
            {"^BM_RepeatTimeUnit/repeats:3_mean %console_us_report$"},
+           {"^BM_RepeatTimeUnit/repeats:3_median %console_us_report$"},
            {"^BM_RepeatTimeUnit/repeats:3_stddev %console_us_report$"}});
 ADD_CASES(TC_JSONOut, {{".*BM_RepeatTimeUnit/repeats:3 ", MR_Not},
                        {"\"name\": \"BM_RepeatTimeUnit/repeats:3_mean\",$"},
                        {"\"time_unit\": \"us\",?$"},
+                       {"\"name\": \"BM_RepeatTimeUnit/repeats:3_median\",$"},
+                       {"\"time_unit\": \"us\",?$"},
                        {"\"name\": \"BM_RepeatTimeUnit/repeats:3_stddev\",$"},
                        {"\"time_unit\": \"us\",?$"}});
 ADD_CASES(TC_CSVOut,
           {{".*BM_RepeatTimeUnit/repeats:3 ", MR_Not},
            {"^\"BM_RepeatTimeUnit/repeats:3_mean\",%csv_us_report$"},
+           {"^\"BM_RepeatTimeUnit/repeats:3_median\",%csv_us_report$"},
            {"^\"BM_RepeatTimeUnit/repeats:3_stddev\",%csv_us_report$"}});
 
 // ========================================================================= //
+// -------------------- Testing user-provided statistics ------------------- //
+// ========================================================================= //
+
+const auto UserStatistics = [](const std::vector<double>& v) {
+  return v.back();
+};
+void BM_UserStats(benchmark::State& state) {
+  for (auto _ : state) {
+  }
+}
+BENCHMARK(BM_UserStats)
+    ->Repetitions(3)
+    ->ComputeStatistics("", UserStatistics);
+// Check that the user-provided statistic is calculated, and comes after the
+// default ones. The empty-string name is intentional: it would sort before
+// anything else.
+ADD_CASES(TC_ConsoleOut, {{"^BM_UserStats/repeats:3 %console_report$"},
+                          {"^BM_UserStats/repeats:3 %console_report$"},
+                          {"^BM_UserStats/repeats:3 %console_report$"},
+                          {"^BM_UserStats/repeats:3_mean %console_report$"},
+                          {"^BM_UserStats/repeats:3_median %console_report$"},
+                          {"^BM_UserStats/repeats:3_stddev %console_report$"},
+                          {"^BM_UserStats/repeats:3_ %console_report$"}});
+ADD_CASES(TC_JSONOut, {{"\"name\": \"BM_UserStats/repeats:3\",$"},
+                       {"\"name\": \"BM_UserStats/repeats:3\",$"},
+                       {"\"name\": \"BM_UserStats/repeats:3\",$"},
+                       {"\"name\": \"BM_UserStats/repeats:3_mean\",$"},
+                       {"\"name\": \"BM_UserStats/repeats:3_median\",$"},
+                       {"\"name\": \"BM_UserStats/repeats:3_stddev\",$"},
+                       {"\"name\": \"BM_UserStats/repeats:3_\",$"}});
+ADD_CASES(TC_CSVOut, {{"^\"BM_UserStats/repeats:3\",%csv_report$"},
+                      {"^\"BM_UserStats/repeats:3\",%csv_report$"},
+                      {"^\"BM_UserStats/repeats:3\",%csv_report$"},
+                      {"^\"BM_UserStats/repeats:3_mean\",%csv_report$"},
+                      {"^\"BM_UserStats/repeats:3_median\",%csv_report$"},
+                      {"^\"BM_UserStats/repeats:3_stddev\",%csv_report$"},
+                      {"^\"BM_UserStats/repeats:3_\",%csv_report$"}});
+
+// ========================================================================= //
 // --------------------------- TEST CASES END ------------------------------ //
 // ========================================================================= //
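
The BM_UserStats case above deliberately registers its statistic under the
empty string to probe ordering; in normal use the statistic would get a
descriptive name. A hedged sketch of typical usage built on the same
ComputeStatistics API (hypothetical benchmark, not part of this commit):

    #include "benchmark/benchmark.h"

    #include <algorithm>
    #include <vector>

    static void BM_Spin(benchmark::State& state) {
      for (auto _ : state) {
        benchmark::DoNotOptimize(state.iterations());
      }
    }
    // Report the worst repetition as an extra aggregate named "max",
    // alongside the built-in mean/median/stddev rows.
    BENCHMARK(BM_Spin)
        ->Repetitions(5)
        ->ComputeStatistics("max", [](const std::vector<double>& v) {
          return *std::max_element(v.begin(), v.end());
        });

    BENCHMARK_MAIN()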
 

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/skip_with_error_test.cc (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/skip_with_error_test.cc)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/skip_with_error_test.cc?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/skip_with_error_test.cc&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/skip_with_error_test.cc&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/skip_with_error_test.cc (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/skip_with_error_test.cc Thu Nov 16 16:16:06 2017
@@ -70,6 +70,15 @@ void BM_error_before_running(benchmark::
 BENCHMARK(BM_error_before_running);
 ADD_CASES("BM_error_before_running", {{"", true, "error message"}});
 
+void BM_error_before_running_range_for(benchmark::State& state) {
+  state.SkipWithError("error message");
+  for (auto _ : state) {
+    assert(false);
+  }
+}
+BENCHMARK(BM_error_before_running_range_for);
+ADD_CASES("BM_error_before_running_range_for", {{"", true, "error message"}});
+
 void BM_error_during_running(benchmark::State& state) {
   int first_iter = true;
   while (state.KeepRunning()) {
@@ -93,8 +102,31 @@ ADD_CASES("BM_error_during_running", {{"
                                       {"/2/threads:4", false, ""},
                                       {"/2/threads:8", false, ""}});
 
+void BM_error_during_running_ranged_for(benchmark::State& state) {
+  assert(state.max_iterations > 3 && "test requires at least a few iterations");
+  int first_iter = true;
+  // NOTE: Users should not write the for loop explicitly.
+  for (auto It = state.begin(), End = state.end(); It != End; ++It) {
+    if (state.range(0) == 1) {
+      assert(first_iter);
+      first_iter = false;
+      state.SkipWithError("error message");
+      // Test the unfortunate but documented behavior that the ranged-for loop
+      // doesn't automatically terminate when SkipWithError is set.
+      assert(++It != End);
+      break; // Required behavior
+    }
+  }
+}
+BENCHMARK(BM_error_during_running_ranged_for)->Arg(1)->Arg(2)->Iterations(5);
+ADD_CASES("BM_error_during_running_ranged_for",
+          {{"/1/iterations:5", true, "error message"},
+           {"/2/iterations:5", false, ""}});
+
+
+
 void BM_error_after_running(benchmark::State& state) {
-  while (state.KeepRunning()) {
+  for (auto _ : state) {
     benchmark::DoNotOptimize(state.iterations());
   }
   if (state.thread_index <= (state.threads / 2))
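
For reference, the user-facing shape of what the tests in this file pin
down, as a minimal sketch (SetUpResource and DoWorkOnce are hypothetical
stand-ins, not part of this commit): SkipWithError before the loop keeps
the loop body from running at all, while raised inside the loop it does not
stop iteration, so an explicit break is required.

    #include "benchmark/benchmark.h"

    static bool SetUpResource() { return true; }  // hypothetical setup step
    static bool DoWorkOnce() { return true; }     // hypothetical measured work

    void BM_WithError(benchmark::State& state) {
      if (!SetUpResource())
        state.SkipWithError("setup failed");  // the loop below will not run
      for (auto _ : state) {
        if (!DoWorkOnce()) {
          state.SkipWithError("work failed");
          break;  // required: the ranged-for loop does not stop by itself
        }
      }
    }
    BENCHMARK(BM_WithError);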

Added: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/templated_fixture_test.cc
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/templated_fixture_test.cc?rev=318476&view=auto
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/templated_fixture_test.cc (added)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/templated_fixture_test.cc Thu Nov 16 16:16:06 2017
@@ -0,0 +1,28 @@
+
+#include "benchmark/benchmark.h"
+
+#include <cassert>
+#include <memory>
+
+template<typename T>
+class MyFixture : public ::benchmark::Fixture {
+public:
+  MyFixture() : data(0) {}
+
+  T data;
+};
+
+BENCHMARK_TEMPLATE_F(MyFixture, Foo, int)(benchmark::State &st) {
+  for (auto _ : st) {
+    data += 1;
+  }
+}
+
+BENCHMARK_TEMPLATE_DEFINE_F(MyFixture, Bar, double)(benchmark::State& st) {
+  for (auto _ : st) {
+    data += 1.0;
+  }
+}
+BENCHMARK_REGISTER_F(MyFixture, Bar);
+
+BENCHMARK_MAIN()
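
A note on the two idioms in this new file: BENCHMARK_TEMPLATE_F defines and
registers a templated-fixture benchmark in one step, while
BENCHMARK_TEMPLATE_DEFINE_F plus BENCHMARK_REGISTER_F separates the two so
that configuration can be chained onto the registration. A sketch, assuming
the MyFixture template above is in scope (the long instantiation and
Threads(2) are illustrative):

    BENCHMARK_TEMPLATE_DEFINE_F(MyFixture, Baz, long)(benchmark::State& st) {
      for (auto _ : st) {
        data += 1;
      }
    }
    // Separate registration lets run-time configuration be attached.
    BENCHMARK_REGISTER_F(MyFixture, Baz)->Threads(2);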

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/user_counters_tabular_test.cc (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/user_counters_tabular_test.cc)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/user_counters_tabular_test.cc?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/user_counters_tabular_test.cc&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/user_counters_tabular_test.cc&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/user_counters_tabular_test.cc (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/user_counters_tabular_test.cc Thu Nov 16 16:16:06 2017
@@ -54,7 +54,7 @@ ADD_CASES(TC_CSVOut, {{"%csv_header,"
 // ========================================================================= //
 
 void BM_Counters_Tabular(benchmark::State& state) {
-  while (state.KeepRunning()) {
+  for (auto _ : state) {
   }
   namespace bm = benchmark;
   state.counters.insert({
@@ -69,8 +69,8 @@ void BM_Counters_Tabular(benchmark::Stat
 BENCHMARK(BM_Counters_Tabular)->ThreadRange(1, 16);
 ADD_CASES(TC_JSONOut, {{"\"name\": \"BM_Counters_Tabular/threads:%int\",$"},
                        {"\"iterations\": %int,$", MR_Next},
-                       {"\"real_time\": %int,$", MR_Next},
-                       {"\"cpu_time\": %int,$", MR_Next},
+                       {"\"real_time\": %float,$", MR_Next},
+                       {"\"cpu_time\": %float,$", MR_Next},
                        {"\"time_unit\": \"ns\",$", MR_Next},
                        {"\"Bar\": %float,$", MR_Next},
                        {"\"Bat\": %float,$", MR_Next},
@@ -98,7 +98,7 @@ CHECK_BENCHMARK_RESULTS("BM_Counters_Tab
 // ========================================================================= //
 
 void BM_CounterRates_Tabular(benchmark::State& state) {
-  while (state.KeepRunning()) {
+  for (auto _ : state) {
   }
   namespace bm = benchmark;
   state.counters.insert({
@@ -113,8 +113,8 @@ void BM_CounterRates_Tabular(benchmark::
 BENCHMARK(BM_CounterRates_Tabular)->ThreadRange(1, 16);
 ADD_CASES(TC_JSONOut, {{"\"name\": \"BM_CounterRates_Tabular/threads:%int\",$"},
                        {"\"iterations\": %int,$", MR_Next},
-                       {"\"real_time\": %int,$", MR_Next},
-                       {"\"cpu_time\": %int,$", MR_Next},
+                       {"\"real_time\": %float,$", MR_Next},
+                       {"\"cpu_time\": %float,$", MR_Next},
                        {"\"time_unit\": \"ns\",$", MR_Next},
                        {"\"Bar\": %float,$", MR_Next},
                        {"\"Bat\": %float,$", MR_Next},
@@ -145,7 +145,7 @@ CHECK_BENCHMARK_RESULTS("BM_CounterRates
 
 // set only some of the counters
 void BM_CounterSet0_Tabular(benchmark::State& state) {
-  while (state.KeepRunning()) {
+  for (auto _ : state) {
   }
   namespace bm = benchmark;
   state.counters.insert({
@@ -157,8 +157,8 @@ void BM_CounterSet0_Tabular(benchmark::S
 BENCHMARK(BM_CounterSet0_Tabular)->ThreadRange(1, 16);
 ADD_CASES(TC_JSONOut, {{"\"name\": \"BM_CounterSet0_Tabular/threads:%int\",$"},
                        {"\"iterations\": %int,$", MR_Next},
-                       {"\"real_time\": %int,$", MR_Next},
-                       {"\"cpu_time\": %int,$", MR_Next},
+                       {"\"real_time\": %float,$", MR_Next},
+                       {"\"cpu_time\": %float,$", MR_Next},
                        {"\"time_unit\": \"ns\",$", MR_Next},
                        {"\"Bar\": %float,$", MR_Next},
                        {"\"Baz\": %float,$", MR_Next},
@@ -177,7 +177,7 @@ CHECK_BENCHMARK_RESULTS("BM_CounterSet0_
 
 // again.
 void BM_CounterSet1_Tabular(benchmark::State& state) {
-  while (state.KeepRunning()) {
+  for (auto _ : state) {
   }
   namespace bm = benchmark;
   state.counters.insert({
@@ -189,8 +189,8 @@ void BM_CounterSet1_Tabular(benchmark::S
 BENCHMARK(BM_CounterSet1_Tabular)->ThreadRange(1, 16);
 ADD_CASES(TC_JSONOut, {{"\"name\": \"BM_CounterSet1_Tabular/threads:%int\",$"},
                        {"\"iterations\": %int,$", MR_Next},
-                       {"\"real_time\": %int,$", MR_Next},
-                       {"\"cpu_time\": %int,$", MR_Next},
+                       {"\"real_time\": %float,$", MR_Next},
+                       {"\"cpu_time\": %float,$", MR_Next},
                        {"\"time_unit\": \"ns\",$", MR_Next},
                        {"\"Bar\": %float,$", MR_Next},
                        {"\"Baz\": %float,$", MR_Next},
@@ -213,7 +213,7 @@ CHECK_BENCHMARK_RESULTS("BM_CounterSet1_
 
 // set only some of the counters, different set now.
 void BM_CounterSet2_Tabular(benchmark::State& state) {
-  while (state.KeepRunning()) {
+  for (auto _ : state) {
   }
   namespace bm = benchmark;
   state.counters.insert({
@@ -225,8 +225,8 @@ void BM_CounterSet2_Tabular(benchmark::S
 BENCHMARK(BM_CounterSet2_Tabular)->ThreadRange(1, 16);
 ADD_CASES(TC_JSONOut, {{"\"name\": \"BM_CounterSet2_Tabular/threads:%int\",$"},
                        {"\"iterations\": %int,$", MR_Next},
-                       {"\"real_time\": %int,$", MR_Next},
-                       {"\"cpu_time\": %int,$", MR_Next},
+                       {"\"real_time\": %float,$", MR_Next},
+                       {"\"cpu_time\": %float,$", MR_Next},
                        {"\"time_unit\": \"ns\",$", MR_Next},
                        {"\"Bat\": %float,$", MR_Next},
                        {"\"Baz\": %float,$", MR_Next},

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/user_counters_test.cc (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/user_counters_test.cc)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/user_counters_test.cc?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/user_counters_test.cc&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/user_counters_test.cc&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/test/user_counters_test.cc (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/test/user_counters_test.cc Thu Nov 16 16:16:06 2017
@@ -19,7 +19,7 @@ ADD_CASES(TC_CSVOut, {{"%csv_header,\"ba
 // ========================================================================= //
 
 void BM_Counters_Simple(benchmark::State& state) {
-  while (state.KeepRunning()) {
+  for (auto _ : state) {
   }
   state.counters["foo"] = 1;
   state.counters["bar"] = 2 * (double)state.iterations();
@@ -28,8 +28,8 @@ BENCHMARK(BM_Counters_Simple);
 ADD_CASES(TC_ConsoleOut, {{"^BM_Counters_Simple %console_report bar=%hrfloat foo=%hrfloat$"}});
 ADD_CASES(TC_JSONOut, {{"\"name\": \"BM_Counters_Simple\",$"},
                        {"\"iterations\": %int,$", MR_Next},
-                       {"\"real_time\": %int,$", MR_Next},
-                       {"\"cpu_time\": %int,$", MR_Next},
+                       {"\"real_time\": %float,$", MR_Next},
+                       {"\"cpu_time\": %float,$", MR_Next},
                        {"\"time_unit\": \"ns\",$", MR_Next},
                        {"\"bar\": %float,$", MR_Next},
                        {"\"foo\": %float$", MR_Next},
@@ -51,7 +51,7 @@ CHECK_BENCHMARK_RESULTS("BM_Counters_Sim
 
 namespace { int num_calls1 = 0; }
 void BM_Counters_WithBytesAndItemsPSec(benchmark::State& state) {
-  while (state.KeepRunning()) {
+  for (auto _ : state) {
   }
   state.counters["foo"] = 1;
   state.counters["bar"] = ++num_calls1;
@@ -64,11 +64,11 @@ ADD_CASES(TC_ConsoleOut,
             "bar=%hrfloat foo=%hrfloat +%hrfloatB/s +%hrfloat items/s$"}});
 ADD_CASES(TC_JSONOut, {{"\"name\": \"BM_Counters_WithBytesAndItemsPSec\",$"},
                        {"\"iterations\": %int,$", MR_Next},
-                       {"\"real_time\": %int,$", MR_Next},
-                       {"\"cpu_time\": %int,$", MR_Next},
+                       {"\"real_time\": %float,$", MR_Next},
+                       {"\"cpu_time\": %float,$", MR_Next},
                        {"\"time_unit\": \"ns\",$", MR_Next},
-                       {"\"bytes_per_second\": %int,$", MR_Next},
-                       {"\"items_per_second\": %int,$", MR_Next},
+                       {"\"bytes_per_second\": %float,$", MR_Next},
+                       {"\"items_per_second\": %float,$", MR_Next},
                        {"\"bar\": %float,$", MR_Next},
                        {"\"foo\": %float$", MR_Next},
                        {"}", MR_Next}});
@@ -92,7 +92,7 @@ CHECK_BENCHMARK_RESULTS("BM_Counters_Wit
 // ========================================================================= //
 
 void BM_Counters_Rate(benchmark::State& state) {
-  while (state.KeepRunning()) {
+  for (auto _ : state) {
   }
   namespace bm = benchmark;
   state.counters["foo"] = bm::Counter{1, bm::Counter::kIsRate};
@@ -102,8 +102,8 @@ BENCHMARK(BM_Counters_Rate);
 ADD_CASES(TC_ConsoleOut, {{"^BM_Counters_Rate %console_report bar=%hrfloat/s foo=%hrfloat/s$"}});
 ADD_CASES(TC_JSONOut, {{"\"name\": \"BM_Counters_Rate\",$"},
                        {"\"iterations\": %int,$", MR_Next},
-                       {"\"real_time\": %int,$", MR_Next},
-                       {"\"cpu_time\": %int,$", MR_Next},
+                       {"\"real_time\": %float,$", MR_Next},
+                       {"\"cpu_time\": %float,$", MR_Next},
                        {"\"time_unit\": \"ns\",$", MR_Next},
                        {"\"bar\": %float,$", MR_Next},
                        {"\"foo\": %float$", MR_Next},
@@ -124,7 +124,7 @@ CHECK_BENCHMARK_RESULTS("BM_Counters_Rat
 // ========================================================================= //
 
 void BM_Counters_Threads(benchmark::State& state) {
-  while (state.KeepRunning()) {
+  for (auto _ : state) {
   }
   state.counters["foo"] = 1;
   state.counters["bar"] = 2;
@@ -133,8 +133,8 @@ BENCHMARK(BM_Counters_Threads)->ThreadRa
 ADD_CASES(TC_ConsoleOut, {{"^BM_Counters_Threads/threads:%int %console_report bar=%hrfloat foo=%hrfloat$"}});
 ADD_CASES(TC_JSONOut, {{"\"name\": \"BM_Counters_Threads/threads:%int\",$"},
                        {"\"iterations\": %int,$", MR_Next},
-                       {"\"real_time\": %int,$", MR_Next},
-                       {"\"cpu_time\": %int,$", MR_Next},
+                       {"\"real_time\": %float,$", MR_Next},
+                       {"\"cpu_time\": %float,$", MR_Next},
                        {"\"time_unit\": \"ns\",$", MR_Next},
                        {"\"bar\": %float,$", MR_Next},
                        {"\"foo\": %float$", MR_Next},
@@ -153,7 +153,7 @@ CHECK_BENCHMARK_RESULTS("BM_Counters_Thr
 // ========================================================================= //
 
 void BM_Counters_AvgThreads(benchmark::State& state) {
-  while (state.KeepRunning()) {
+  for (auto _ : state) {
   }
   namespace bm = benchmark;
   state.counters["foo"] = bm::Counter{1, bm::Counter::kAvgThreads};
@@ -163,8 +163,8 @@ BENCHMARK(BM_Counters_AvgThreads)->Threa
 ADD_CASES(TC_ConsoleOut, {{"^BM_Counters_AvgThreads/threads:%int %console_report bar=%hrfloat foo=%hrfloat$"}});
 ADD_CASES(TC_JSONOut, {{"\"name\": \"BM_Counters_AvgThreads/threads:%int\",$"},
                        {"\"iterations\": %int,$", MR_Next},
-                       {"\"real_time\": %int,$", MR_Next},
-                       {"\"cpu_time\": %int,$", MR_Next},
+                       {"\"real_time\": %float,$", MR_Next},
+                       {"\"cpu_time\": %float,$", MR_Next},
                        {"\"time_unit\": \"ns\",$", MR_Next},
                        {"\"bar\": %float,$", MR_Next},
                        {"\"foo\": %float$", MR_Next},
@@ -184,7 +184,7 @@ CHECK_BENCHMARK_RESULTS("BM_Counters_Avg
 // ========================================================================= //
 
 void BM_Counters_AvgThreadsRate(benchmark::State& state) {
-  while (state.KeepRunning()) {
+  for (auto _ : state) {
   }
   namespace bm = benchmark;
   state.counters["foo"] = bm::Counter{1, bm::Counter::kAvgThreadsRate};
@@ -194,8 +194,8 @@ BENCHMARK(BM_Counters_AvgThreadsRate)->T
 ADD_CASES(TC_ConsoleOut, {{"^BM_Counters_AvgThreadsRate/threads:%int %console_report bar=%hrfloat/s foo=%hrfloat/s$"}});
 ADD_CASES(TC_JSONOut, {{"\"name\": \"BM_Counters_AvgThreadsRate/threads:%int\",$"},
                        {"\"iterations\": %int,$", MR_Next},
-                       {"\"real_time\": %int,$", MR_Next},
-                       {"\"cpu_time\": %int,$", MR_Next},
+                       {"\"real_time\": %float,$", MR_Next},
+                       {"\"cpu_time\": %float,$", MR_Next},
                        {"\"time_unit\": \"ns\",$", MR_Next},
                        {"\"bar\": %float,$", MR_Next},
                        {"\"foo\": %float$", MR_Next},

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/tools/compare_bench.py (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/tools/compare_bench.py)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/tools/compare_bench.py?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/tools/compare_bench.py&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/tools/compare_bench.py&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/tools/compare_bench.py (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/tools/compare_bench.py Thu Nov 16 16:16:06 2017
@@ -39,21 +39,20 @@ def main():
     parser.add_argument(
         'test2', metavar='test2', type=str, nargs=1,
         help='A benchmark executable or JSON output file')
-    # FIXME this is a dummy argument which will never actually match
-    # any --benchmark flags but it helps generate a better usage message
     parser.add_argument(
-        'benchmark_options', metavar='benchmark_option', nargs='*',
+        'benchmark_options', metavar='benchmark_options', nargs=argparse.REMAINDER,
         help='Arguments to pass when running benchmark executables'
     )
     args, unknown_args = parser.parse_known_args()
     # Parse the command line flags
     test1 = args.test1[0]
     test2 = args.test2[0]
-    if args.benchmark_options:
+    if unknown_args:
+        # should never happen
         print("Unrecognized positional argument arguments: '%s'"
-              % args.benchmark_options)
+              % unknown_args)
         exit(1)
-    benchmark_options = unknown_args
+    benchmark_options = args.benchmark_options
     check_inputs(test1, test2, benchmark_options)
     # Run the benchmarks and report the results
     json1 = gbench.util.run_or_load_benchmark(test1, benchmark_options)
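
With nargs=argparse.REMAINDER, everything after the two positional test
arguments is forwarded to the benchmark binaries untouched, so an
invocation such as the following (paths and filter illustrative)

    compare_bench.py ./old_bench ./new_bench --benchmark_filter=BM_.*

passes the --benchmark_filter flag through while still producing a usable
usage message, which the removed dummy-argument workaround only
approximated.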

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/tools/gbench/Inputs/test1_run1.json (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/tools/gbench/Inputs/test1_run1.json)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/tools/gbench/Inputs/test1_run1.json?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/tools/gbench/Inputs/test1_run1.json&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/tools/gbench/Inputs/test1_run1.json&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/tools/gbench/Inputs/test1_run1.json (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/tools/gbench/Inputs/test1_run1.json Thu Nov 16 16:16:06 2017
@@ -29,6 +29,20 @@
       "time_unit": "ns"
     },
     {
+      "name": "BM_1PercentFaster",
+      "iterations": 1000,
+      "real_time": 100,
+      "cpu_time": 100,
+      "time_unit": "ns"
+    },
+    {
+      "name": "BM_1PercentSlower",
+      "iterations": 1000,
+      "real_time": 100,
+      "cpu_time": 100,
+      "time_unit": "ns"
+    },
+    {
       "name": "BM_10PercentFaster",
       "iterations": 1000,
       "real_time": 100,
@@ -55,6 +69,34 @@
       "real_time": 10000,
       "cpu_time": 10000,
       "time_unit": "ns"
+    },
+    {
+      "name": "BM_10PercentCPUToTime",
+      "iterations": 1000,
+      "real_time": 100,
+      "cpu_time": 100,
+      "time_unit": "ns"
+    },
+    {
+      "name": "BM_ThirdFaster",
+      "iterations": 1000,
+      "real_time": 100,
+      "cpu_time": 100,
+      "time_unit": "ns"
+    },
+    {
+      "name": "BM_BadTimeUnit",
+      "iterations": 1000,
+      "real_time": 0.4,
+      "cpu_time": 0.5,
+      "time_unit": "s"
+    },
+    {
+      "name": "BM_DifferentTimeUnit",
+      "iterations": 1,
+      "real_time": 1,
+      "cpu_time": 1,
+      "time_unit": "s"
     }
   ]
-}
\ No newline at end of file
+}

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/tools/gbench/Inputs/test1_run2.json (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/tools/gbench/Inputs/test1_run2.json)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/tools/gbench/Inputs/test1_run2.json?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/tools/gbench/Inputs/test1_run2.json&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/tools/gbench/Inputs/test1_run2.json&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/tools/gbench/Inputs/test1_run2.json (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/tools/gbench/Inputs/test1_run2.json Thu Nov 16 16:16:06 2017
@@ -29,6 +29,20 @@
       "time_unit": "ns"
     },
     {
+      "name": "BM_1PercentFaster",
+      "iterations": 1000,
+      "real_time": 98.9999999,
+      "cpu_time": 98.9999999,
+      "time_unit": "ns"
+    },
+    {
+      "name": "BM_1PercentSlower",
+      "iterations": 1000,
+      "real_time": 100.9999999,
+      "cpu_time": 100.9999999,
+      "time_unit": "ns"
+    },
+    {
       "name": "BM_10PercentFaster",
       "iterations": 1000,
       "real_time": 90,
@@ -45,8 +59,8 @@
     {
       "name": "BM_100xSlower",
       "iterations": 1000,
-      "real_time": 10000,
-      "cpu_time": 10000,
+      "real_time": 1.0000e+04,
+      "cpu_time": 1.0000e+04,
       "time_unit": "ns"
     },
     {
@@ -55,6 +69,34 @@
       "real_time": 100,
       "cpu_time": 100,
       "time_unit": "ns"
+    },
+    {
+      "name": "BM_10PercentCPUToTime",
+      "iterations": 1000,
+      "real_time": 110,
+      "cpu_time": 90,
+      "time_unit": "ns"
+    },
+    {
+      "name": "BM_ThirdFaster",
+      "iterations": 1000,
+      "real_time": 66.665,
+      "cpu_time": 66.664,
+      "time_unit": "ns"
+    },
+    {
+      "name": "BM_BadTimeUnit",
+      "iterations": 1000,
+      "real_time": 0.04,
+      "cpu_time": 0.6,
+      "time_unit": "s"
+    },
+    {
+      "name": "BM_DifferentTimeUnit",
+      "iterations": 1,
+      "real_time": 1,
+      "cpu_time": 1,
+      "time_unit": "ns"
     }
   ]
-}
\ No newline at end of file
+}
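
Each added pair encodes the change its benchmark name describes. For
example, BM_1PercentFaster runs at 100 ns in run1 and 98.9999999 ns here,
and report.py's calculate_change, (new - old) / |old|, gives
(98.9999999 - 100) / 100 ~= -0.01, matching the '-0.0100' row expected in
the report test below. BM_DifferentTimeUnit switches from seconds to
nanoseconds between the two runs, exercising the new time_unit guard in
report.py, which is why it has no expected row.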

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/tools/gbench/__init__.py (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/tools/gbench/__init__.py)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/tools/gbench/__init__.py?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/tools/gbench/__init__.py&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/tools/gbench/__init__.py&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
    (empty)

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/tools/gbench/report.py (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/tools/gbench/report.py)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/tools/gbench/report.py?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/tools/gbench/report.py&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/tools/gbench/report.py&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
--- test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/tools/gbench/report.py (original)
+++ test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/tools/gbench/report.py Thu Nov 16 16:16:06 2017
@@ -71,13 +71,13 @@ def generate_difference_report(json1, js
     Calculate and report the difference between each test of two benchmarks
     runs specified as 'json1' and 'json2'.
     """
-    first_col_width = find_longest_name(json1['benchmarks']) + 5
+    first_col_width = find_longest_name(json1['benchmarks'])
     def find_test(name):
         for b in json2['benchmarks']:
             if b['name'] == name:
                 return b
         return None
-    first_line = "{:<{}s}     Time           CPU           Old           New".format(
+    first_line = "{:<{}s}            Time             CPU      Time Old      Time New       CPU Old       CPU New".format(
         'Benchmark', first_col_width)
     output_strs = [first_line, '-' * len(first_line)]
 
@@ -87,6 +87,9 @@ def generate_difference_report(json1, js
         if not other_bench:
             continue
 
+        if bn['time_unit'] != other_bench['time_unit']:
+            continue
+
         def get_color(res):
             if res > 0.05:
                 return BC_FAIL
@@ -94,12 +97,13 @@ def generate_difference_report(json1, js
                 return BC_WHITE
             else:
                 return BC_CYAN
-        fmt_str = "{}{:<{}s}{endc}{}{:+9.2f}{endc}{}{:+14.2f}{endc}{:14d}{:14d}"
+        fmt_str = "{}{:<{}s}{endc}{}{:+16.4f}{endc}{}{:+16.4f}{endc}{:14.0f}{:14.0f}{endc}{:14.0f}{:14.0f}"
         tres = calculate_change(bn['real_time'], other_bench['real_time'])
         cpures = calculate_change(bn['cpu_time'], other_bench['cpu_time'])
         output_strs += [color_format(use_color, fmt_str,
             BC_HEADER, bn['name'], first_col_width,
             get_color(tres), tres, get_color(cpures), cpures,
+            bn['real_time'], other_bench['real_time'],
             bn['cpu_time'], other_bench['cpu_time'],
             endc=BC_ENDC)]
     return output_strs
@@ -123,22 +127,27 @@ class TestReportDifference(unittest.Test
 
     def test_basic(self):
         expect_lines = [
-            ['BM_SameTimes', '+0.00', '+0.00', '10', '10'],
-            ['BM_2xFaster', '-0.50', '-0.50', '50', '25'],
-            ['BM_2xSlower', '+1.00', '+1.00', '50', '100'],
-            ['BM_10PercentFaster', '-0.10', '-0.10', '100', '90'],
-            ['BM_10PercentSlower', '+0.10', '+0.10', '100', '110'],
-            ['BM_100xSlower', '+99.00', '+99.00', '100', '10000'],
-            ['BM_100xFaster', '-0.99', '-0.99', '10000', '100'],
+            ['BM_SameTimes', '+0.0000', '+0.0000', '10', '10', '10', '10'],
+            ['BM_2xFaster', '-0.5000', '-0.5000', '50', '25', '50', '25'],
+            ['BM_2xSlower', '+1.0000', '+1.0000', '50', '100', '50', '100'],
+            ['BM_1PercentFaster', '-0.0100', '-0.0100', '100', '99', '100', '99'],
+            ['BM_1PercentSlower', '+0.0100', '+0.0100', '100', '101', '100', '101'],
+            ['BM_10PercentFaster', '-0.1000', '-0.1000', '100', '90', '100', '90'],
+            ['BM_10PercentSlower', '+0.1000', '+0.1000', '100', '110', '100', '110'],
+            ['BM_100xSlower', '+99.0000', '+99.0000', '100', '10000', '100', '10000'],
+            ['BM_100xFaster', '-0.9900', '-0.9900', '10000', '100', '10000', '100'],
+            ['BM_10PercentCPUToTime', '+0.1000', '-0.1000', '100', '110', '100', '90'],
+            ['BM_ThirdFaster', '-0.3333', '-0.3334', '100', '67', '100', '67'],
+            ['BM_BadTimeUnit', '-0.9000', '+0.2000', '0', '0', '0', '1'],
         ]
         json1, json2 = self.load_results()
         output_lines_with_header = generate_difference_report(json1, json2, use_color=False)
         output_lines = output_lines_with_header[2:]
         print("\n".join(output_lines_with_header))
         self.assertEqual(len(output_lines), len(expect_lines))
-        for i in xrange(0, len(output_lines)):
+        for i in range(0, len(output_lines)):
             parts = [x for x in output_lines[i].split(' ') if x]
-            self.assertEqual(len(parts), 5)
+            self.assertEqual(len(parts), 7)
             self.assertEqual(parts, expect_lines[i])
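
With the widened format string, a report row now carries the relative Time
and CPU changes plus the four raw values; using the BM_2xFaster case from
the inputs above, a row looks roughly like this (column spacing
approximate):

    Benchmark            Time             CPU      Time Old      Time New       CPU Old       CPU New
    ----------------------------------------------------------------------------------------------------
    BM_2xFaster       -0.5000         -0.5000            50            25            50            25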
 
 

Copied: test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/tools/gbench/util.py (from r318473, test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/tools/gbench/util.py)
URL: http://llvm.org/viewvc/llvm-project/test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/tools/gbench/util.py?p2=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.3.0/tools/gbench/util.py&p1=test-suite/trunk/MicroBenchmarks/libs/benchmark-1.2.0/tools/gbench/util.py&r1=318473&r2=318476&rev=318476&view=diff
==============================================================================
    (empty)



