[libcxx-commits] [libcxx] 5dda2ef - Re-Reland "[benchmarks] Move libcxx's fork of google/benchmark and llvm/utils'"

Mircea Trofin via libcxx-commits libcxx-commits at lists.llvm.org
Tue Dec 7 17:11:05 PST 2021


Author: Mircea Trofin
Date: 2021-12-07T17:10:41-08:00
New Revision: 5dda2efde574d3a200d04c371f561a77ee9f4aff

URL: https://github.com/llvm/llvm-project/commit/5dda2efde574d3a200d04c371f561a77ee9f4aff
DIFF: https://github.com/llvm/llvm-project/commit/5dda2efde574d3a200d04c371f561a77ee9f4aff.diff

LOG: Re-Reland "[benchmarks] Move libcxx's fork of google/benchmark and llvm/utils'"

This reverts commit b2fbd45d2395f1f6ef39db72b7156724fc101e40. D114922
fixed the cause of the second revert.

This patch also re-applies 39e9f5d3685f3cfca0df072928ad96d973704dff.

Differential Revision: https://reviews.llvm.org/D112012

Added: 
    third-party/benchmark/AUTHORS
    third-party/benchmark/BUILD.bazel
    third-party/benchmark/CMakeLists.txt
    third-party/benchmark/CONTRIBUTING.md
    third-party/benchmark/CONTRIBUTORS
    third-party/benchmark/LICENSE
    third-party/benchmark/README.md
    third-party/benchmark/WORKSPACE
    third-party/benchmark/_config.yml
    third-party/benchmark/appveyor.yml
    third-party/benchmark/bindings/python/BUILD
    third-party/benchmark/bindings/python/build_defs.bzl
    third-party/benchmark/bindings/python/google_benchmark/BUILD
    third-party/benchmark/bindings/python/google_benchmark/__init__.py
    third-party/benchmark/bindings/python/google_benchmark/benchmark.cc
    third-party/benchmark/bindings/python/google_benchmark/example.py
    third-party/benchmark/bindings/python/pybind11.BUILD
    third-party/benchmark/bindings/python/python_headers.BUILD
    third-party/benchmark/bindings/python/requirements.txt
    third-party/benchmark/cmake/AddCXXCompilerFlag.cmake
    third-party/benchmark/cmake/CXXFeatureCheck.cmake
    third-party/benchmark/cmake/Config.cmake.in
    third-party/benchmark/cmake/GetGitVersion.cmake
    third-party/benchmark/cmake/GoogleTest.cmake
    third-party/benchmark/cmake/GoogleTest.cmake.in
    third-party/benchmark/cmake/benchmark.pc.in
    third-party/benchmark/cmake/gnu_posix_regex.cpp
    third-party/benchmark/cmake/llvm-toolchain.cmake
    third-party/benchmark/cmake/posix_regex.cpp
    third-party/benchmark/cmake/split_list.cmake
    third-party/benchmark/cmake/std_regex.cpp
    third-party/benchmark/cmake/steady_clock.cpp
    third-party/benchmark/cmake/thread_safety_attributes.cpp
    third-party/benchmark/dependencies.md
    third-party/benchmark/docs/AssemblyTests.md
    third-party/benchmark/docs/_config.yml
    third-party/benchmark/docs/perf_counters.md
    third-party/benchmark/docs/random_interleaving.md
    third-party/benchmark/docs/releasing.md
    third-party/benchmark/docs/tools.md
    third-party/benchmark/include/benchmark/benchmark.h
    third-party/benchmark/requirements.txt
    third-party/benchmark/setup.py
    third-party/benchmark/src/CMakeLists.txt
    third-party/benchmark/src/arraysize.h
    third-party/benchmark/src/benchmark.cc
    third-party/benchmark/src/benchmark_api_internal.cc
    third-party/benchmark/src/benchmark_api_internal.h
    third-party/benchmark/src/benchmark_main.cc
    third-party/benchmark/src/benchmark_name.cc
    third-party/benchmark/src/benchmark_register.cc
    third-party/benchmark/src/benchmark_register.h
    third-party/benchmark/src/benchmark_runner.cc
    third-party/benchmark/src/benchmark_runner.h
    third-party/benchmark/src/check.h
    third-party/benchmark/src/colorprint.cc
    third-party/benchmark/src/colorprint.h
    third-party/benchmark/src/commandlineflags.cc
    third-party/benchmark/src/commandlineflags.h
    third-party/benchmark/src/complexity.cc
    third-party/benchmark/src/complexity.h
    third-party/benchmark/src/console_reporter.cc
    third-party/benchmark/src/counter.cc
    third-party/benchmark/src/counter.h
    third-party/benchmark/src/csv_reporter.cc
    third-party/benchmark/src/cycleclock.h
    third-party/benchmark/src/internal_macros.h
    third-party/benchmark/src/json_reporter.cc
    third-party/benchmark/src/log.h
    third-party/benchmark/src/mutex.h
    third-party/benchmark/src/perf_counters.cc
    third-party/benchmark/src/perf_counters.h
    third-party/benchmark/src/re.h
    third-party/benchmark/src/reporter.cc
    third-party/benchmark/src/sleep.cc
    third-party/benchmark/src/sleep.h
    third-party/benchmark/src/statistics.cc
    third-party/benchmark/src/statistics.h
    third-party/benchmark/src/string_util.cc
    third-party/benchmark/src/string_util.h
    third-party/benchmark/src/sysinfo.cc
    third-party/benchmark/src/thread_manager.h
    third-party/benchmark/src/thread_timer.h
    third-party/benchmark/src/timers.cc
    third-party/benchmark/src/timers.h
    third-party/benchmark/test/AssemblyTests.cmake
    third-party/benchmark/test/BUILD
    third-party/benchmark/test/CMakeLists.txt
    third-party/benchmark/test/args_product_test.cc
    third-party/benchmark/test/basic_test.cc
    third-party/benchmark/test/benchmark_gtest.cc
    third-party/benchmark/test/benchmark_name_gtest.cc
    third-party/benchmark/test/benchmark_random_interleaving_gtest.cc
    third-party/benchmark/test/benchmark_test.cc
    third-party/benchmark/test/clobber_memory_assembly_test.cc
    third-party/benchmark/test/commandlineflags_gtest.cc
    third-party/benchmark/test/complexity_test.cc
    third-party/benchmark/test/cxx03_test.cc
    third-party/benchmark/test/diagnostics_test.cc
    third-party/benchmark/test/display_aggregates_only_test.cc
    third-party/benchmark/test/donotoptimize_assembly_test.cc
    third-party/benchmark/test/donotoptimize_test.cc
    third-party/benchmark/test/filter_test.cc
    third-party/benchmark/test/fixture_test.cc
    third-party/benchmark/test/internal_threading_test.cc
    third-party/benchmark/test/link_main_test.cc
    third-party/benchmark/test/map_test.cc
    third-party/benchmark/test/memory_manager_test.cc
    third-party/benchmark/test/multiple_ranges_test.cc
    third-party/benchmark/test/options_test.cc
    third-party/benchmark/test/output_test.h
    third-party/benchmark/test/output_test_helper.cc
    third-party/benchmark/test/perf_counters_gtest.cc
    third-party/benchmark/test/perf_counters_test.cc
    third-party/benchmark/test/register_benchmark_test.cc
    third-party/benchmark/test/repetitions_test.cc
    third-party/benchmark/test/report_aggregates_only_test.cc
    third-party/benchmark/test/reporter_output_test.cc
    third-party/benchmark/test/skip_with_error_test.cc
    third-party/benchmark/test/state_assembly_test.cc
    third-party/benchmark/test/statistics_gtest.cc
    third-party/benchmark/test/string_util_gtest.cc
    third-party/benchmark/test/templated_fixture_test.cc
    third-party/benchmark/test/user_counters_tabular_test.cc
    third-party/benchmark/test/user_counters_test.cc
    third-party/benchmark/test/user_counters_thousands_test.cc
    third-party/benchmark/tools/BUILD.bazel
    third-party/benchmark/tools/compare.py
    third-party/benchmark/tools/gbench/Inputs/test1_run1.json
    third-party/benchmark/tools/gbench/Inputs/test1_run2.json
    third-party/benchmark/tools/gbench/Inputs/test2_run.json
    third-party/benchmark/tools/gbench/Inputs/test3_run0.json
    third-party/benchmark/tools/gbench/Inputs/test3_run1.json
    third-party/benchmark/tools/gbench/Inputs/test4_run.json
    third-party/benchmark/tools/gbench/__init__.py
    third-party/benchmark/tools/gbench/report.py
    third-party/benchmark/tools/gbench/util.py
    third-party/benchmark/tools/requirements.txt
    third-party/benchmark/tools/strip_asm.py

Modified: 
    libc/benchmarks/CMakeLists.txt
    libc/benchmarks/LibcBenchmark.cpp
    libcxx/benchmarks/CMakeLists.txt
    llvm/CMakeLists.txt
    runtimes/CMakeLists.txt

Removed: 
    libcxx/utils/google-benchmark/.clang-format
    libcxx/utils/google-benchmark/.github/.libcxx-setup.sh
    libcxx/utils/google-benchmark/.github/ISSUE_TEMPLATE/bug_report.md
    libcxx/utils/google-benchmark/.github/ISSUE_TEMPLATE/feature_request.md
    libcxx/utils/google-benchmark/.github/workflows/bazel.yml
    libcxx/utils/google-benchmark/.github/workflows/build-and-test-perfcounters.yml
    libcxx/utils/google-benchmark/.github/workflows/build-and-test.yml
    libcxx/utils/google-benchmark/.github/workflows/pylint.yml
    libcxx/utils/google-benchmark/.github/workflows/sanitizer.yml
    libcxx/utils/google-benchmark/.github/workflows/test_bindings.yml
    libcxx/utils/google-benchmark/.gitignore
    libcxx/utils/google-benchmark/.travis.yml
    libcxx/utils/google-benchmark/.ycm_extra_conf.py
    libcxx/utils/google-benchmark/AUTHORS
    libcxx/utils/google-benchmark/BUILD.bazel
    libcxx/utils/google-benchmark/CMakeLists.txt
    libcxx/utils/google-benchmark/CONTRIBUTING.md
    libcxx/utils/google-benchmark/CONTRIBUTORS
    libcxx/utils/google-benchmark/LICENSE
    libcxx/utils/google-benchmark/README.md
    libcxx/utils/google-benchmark/WORKSPACE
    libcxx/utils/google-benchmark/_config.yml
    libcxx/utils/google-benchmark/appveyor.yml
    libcxx/utils/google-benchmark/bindings/python/BUILD
    libcxx/utils/google-benchmark/bindings/python/build_defs.bzl
    libcxx/utils/google-benchmark/bindings/python/google_benchmark/BUILD
    libcxx/utils/google-benchmark/bindings/python/google_benchmark/__init__.py
    libcxx/utils/google-benchmark/bindings/python/google_benchmark/benchmark.cc
    libcxx/utils/google-benchmark/bindings/python/google_benchmark/example.py
    libcxx/utils/google-benchmark/bindings/python/pybind11.BUILD
    libcxx/utils/google-benchmark/bindings/python/python_headers.BUILD
    libcxx/utils/google-benchmark/bindings/python/requirements.txt
    libcxx/utils/google-benchmark/cmake/AddCXXCompilerFlag.cmake
    libcxx/utils/google-benchmark/cmake/CXXFeatureCheck.cmake
    libcxx/utils/google-benchmark/cmake/Config.cmake.in
    libcxx/utils/google-benchmark/cmake/GetGitVersion.cmake
    libcxx/utils/google-benchmark/cmake/GoogleTest.cmake
    libcxx/utils/google-benchmark/cmake/GoogleTest.cmake.in
    libcxx/utils/google-benchmark/cmake/benchmark.pc.in
    libcxx/utils/google-benchmark/cmake/gnu_posix_regex.cpp
    libcxx/utils/google-benchmark/cmake/llvm-toolchain.cmake
    libcxx/utils/google-benchmark/cmake/posix_regex.cpp
    libcxx/utils/google-benchmark/cmake/split_list.cmake
    libcxx/utils/google-benchmark/cmake/std_regex.cpp
    libcxx/utils/google-benchmark/cmake/steady_clock.cpp
    libcxx/utils/google-benchmark/cmake/thread_safety_attributes.cpp
    libcxx/utils/google-benchmark/dependencies.md
    libcxx/utils/google-benchmark/docs/AssemblyTests.md
    libcxx/utils/google-benchmark/docs/_config.yml
    libcxx/utils/google-benchmark/docs/perf_counters.md
    libcxx/utils/google-benchmark/docs/random_interleaving.md
    libcxx/utils/google-benchmark/docs/releasing.md
    libcxx/utils/google-benchmark/docs/tools.md
    libcxx/utils/google-benchmark/include/benchmark/benchmark.h
    libcxx/utils/google-benchmark/requirements.txt
    libcxx/utils/google-benchmark/setup.py
    libcxx/utils/google-benchmark/src/CMakeLists.txt
    libcxx/utils/google-benchmark/src/arraysize.h
    libcxx/utils/google-benchmark/src/benchmark.cc
    libcxx/utils/google-benchmark/src/benchmark_api_internal.cc
    libcxx/utils/google-benchmark/src/benchmark_api_internal.h
    libcxx/utils/google-benchmark/src/benchmark_main.cc
    libcxx/utils/google-benchmark/src/benchmark_name.cc
    libcxx/utils/google-benchmark/src/benchmark_register.cc
    libcxx/utils/google-benchmark/src/benchmark_register.h
    libcxx/utils/google-benchmark/src/benchmark_runner.cc
    libcxx/utils/google-benchmark/src/benchmark_runner.h
    libcxx/utils/google-benchmark/src/check.h
    libcxx/utils/google-benchmark/src/colorprint.cc
    libcxx/utils/google-benchmark/src/colorprint.h
    libcxx/utils/google-benchmark/src/commandlineflags.cc
    libcxx/utils/google-benchmark/src/commandlineflags.h
    libcxx/utils/google-benchmark/src/complexity.cc
    libcxx/utils/google-benchmark/src/complexity.h
    libcxx/utils/google-benchmark/src/console_reporter.cc
    libcxx/utils/google-benchmark/src/counter.cc
    libcxx/utils/google-benchmark/src/counter.h
    libcxx/utils/google-benchmark/src/csv_reporter.cc
    libcxx/utils/google-benchmark/src/cycleclock.h
    libcxx/utils/google-benchmark/src/internal_macros.h
    libcxx/utils/google-benchmark/src/json_reporter.cc
    libcxx/utils/google-benchmark/src/log.h
    libcxx/utils/google-benchmark/src/mutex.h
    libcxx/utils/google-benchmark/src/perf_counters.cc
    libcxx/utils/google-benchmark/src/perf_counters.h
    libcxx/utils/google-benchmark/src/re.h
    libcxx/utils/google-benchmark/src/reporter.cc
    libcxx/utils/google-benchmark/src/sleep.cc
    libcxx/utils/google-benchmark/src/sleep.h
    libcxx/utils/google-benchmark/src/statistics.cc
    libcxx/utils/google-benchmark/src/statistics.h
    libcxx/utils/google-benchmark/src/string_util.cc
    libcxx/utils/google-benchmark/src/string_util.h
    libcxx/utils/google-benchmark/src/sysinfo.cc
    libcxx/utils/google-benchmark/src/thread_manager.h
    libcxx/utils/google-benchmark/src/thread_timer.h
    libcxx/utils/google-benchmark/src/timers.cc
    libcxx/utils/google-benchmark/src/timers.h
    libcxx/utils/google-benchmark/test/AssemblyTests.cmake
    libcxx/utils/google-benchmark/test/BUILD
    libcxx/utils/google-benchmark/test/CMakeLists.txt
    libcxx/utils/google-benchmark/test/args_product_test.cc
    libcxx/utils/google-benchmark/test/basic_test.cc
    libcxx/utils/google-benchmark/test/benchmark_gtest.cc
    libcxx/utils/google-benchmark/test/benchmark_name_gtest.cc
    libcxx/utils/google-benchmark/test/benchmark_random_interleaving_gtest.cc
    libcxx/utils/google-benchmark/test/benchmark_test.cc
    libcxx/utils/google-benchmark/test/clobber_memory_assembly_test.cc
    libcxx/utils/google-benchmark/test/commandlineflags_gtest.cc
    libcxx/utils/google-benchmark/test/complexity_test.cc
    libcxx/utils/google-benchmark/test/cxx03_test.cc
    libcxx/utils/google-benchmark/test/diagnostics_test.cc
    libcxx/utils/google-benchmark/test/display_aggregates_only_test.cc
    libcxx/utils/google-benchmark/test/donotoptimize_assembly_test.cc
    libcxx/utils/google-benchmark/test/donotoptimize_test.cc
    libcxx/utils/google-benchmark/test/filter_test.cc
    libcxx/utils/google-benchmark/test/fixture_test.cc
    libcxx/utils/google-benchmark/test/internal_threading_test.cc
    libcxx/utils/google-benchmark/test/link_main_test.cc
    libcxx/utils/google-benchmark/test/map_test.cc
    libcxx/utils/google-benchmark/test/memory_manager_test.cc
    libcxx/utils/google-benchmark/test/multiple_ranges_test.cc
    libcxx/utils/google-benchmark/test/options_test.cc
    libcxx/utils/google-benchmark/test/output_test.h
    libcxx/utils/google-benchmark/test/output_test_helper.cc
    libcxx/utils/google-benchmark/test/perf_counters_gtest.cc
    libcxx/utils/google-benchmark/test/perf_counters_test.cc
    libcxx/utils/google-benchmark/test/register_benchmark_test.cc
    libcxx/utils/google-benchmark/test/repetitions_test.cc
    libcxx/utils/google-benchmark/test/report_aggregates_only_test.cc
    libcxx/utils/google-benchmark/test/reporter_output_test.cc
    libcxx/utils/google-benchmark/test/skip_with_error_test.cc
    libcxx/utils/google-benchmark/test/state_assembly_test.cc
    libcxx/utils/google-benchmark/test/statistics_gtest.cc
    libcxx/utils/google-benchmark/test/string_util_gtest.cc
    libcxx/utils/google-benchmark/test/templated_fixture_test.cc
    libcxx/utils/google-benchmark/test/user_counters_tabular_test.cc
    libcxx/utils/google-benchmark/test/user_counters_test.cc
    libcxx/utils/google-benchmark/test/user_counters_thousands_test.cc
    libcxx/utils/google-benchmark/tools/BUILD.bazel
    libcxx/utils/google-benchmark/tools/compare.py
    libcxx/utils/google-benchmark/tools/gbench/Inputs/test1_run1.json
    libcxx/utils/google-benchmark/tools/gbench/Inputs/test1_run2.json
    libcxx/utils/google-benchmark/tools/gbench/Inputs/test2_run.json
    libcxx/utils/google-benchmark/tools/gbench/Inputs/test3_run0.json
    libcxx/utils/google-benchmark/tools/gbench/Inputs/test3_run1.json
    libcxx/utils/google-benchmark/tools/gbench/Inputs/test4_run.json
    libcxx/utils/google-benchmark/tools/gbench/__init__.py
    libcxx/utils/google-benchmark/tools/gbench/report.py
    libcxx/utils/google-benchmark/tools/gbench/util.py
    libcxx/utils/google-benchmark/tools/requirements.txt
    libcxx/utils/google-benchmark/tools/strip_asm.py
    llvm/utils/benchmark/AUTHORS
    llvm/utils/benchmark/CMakeLists.txt
    llvm/utils/benchmark/CONTRIBUTING.md
    llvm/utils/benchmark/CONTRIBUTORS
    llvm/utils/benchmark/LICENSE
    llvm/utils/benchmark/README.LLVM
    llvm/utils/benchmark/README.md
    llvm/utils/benchmark/WORKSPACE
    llvm/utils/benchmark/appveyor.yml
    llvm/utils/benchmark/cmake/AddCXXCompilerFlag.cmake
    llvm/utils/benchmark/cmake/CXXFeatureCheck.cmake
    llvm/utils/benchmark/cmake/Config.cmake.in
    llvm/utils/benchmark/cmake/GetGitVersion.cmake
    llvm/utils/benchmark/cmake/HandleGTest.cmake
    llvm/utils/benchmark/cmake/Modules/FindLLVMAr.cmake
    llvm/utils/benchmark/cmake/Modules/FindLLVMNm.cmake
    llvm/utils/benchmark/cmake/Modules/FindLLVMRanLib.cmake
    llvm/utils/benchmark/cmake/benchmark.pc.in
    llvm/utils/benchmark/cmake/gnu_posix_regex.cpp
    llvm/utils/benchmark/cmake/llvm-toolchain.cmake
    llvm/utils/benchmark/cmake/posix_regex.cpp
    llvm/utils/benchmark/cmake/split_list.cmake
    llvm/utils/benchmark/cmake/std_regex.cpp
    llvm/utils/benchmark/cmake/steady_clock.cpp
    llvm/utils/benchmark/cmake/thread_safety_attributes.cpp
    llvm/utils/benchmark/docs/AssemblyTests.md
    llvm/utils/benchmark/docs/tools.md
    llvm/utils/benchmark/include/benchmark/benchmark.h
    llvm/utils/benchmark/mingw.py
    llvm/utils/benchmark/releasing.md
    llvm/utils/benchmark/src/CMakeLists.txt
    llvm/utils/benchmark/src/arraysize.h
    llvm/utils/benchmark/src/benchmark.cc
    llvm/utils/benchmark/src/benchmark_api_internal.h
    llvm/utils/benchmark/src/benchmark_main.cc
    llvm/utils/benchmark/src/benchmark_register.cc
    llvm/utils/benchmark/src/benchmark_register.h
    llvm/utils/benchmark/src/check.h
    llvm/utils/benchmark/src/colorprint.cc
    llvm/utils/benchmark/src/colorprint.h
    llvm/utils/benchmark/src/commandlineflags.cc
    llvm/utils/benchmark/src/commandlineflags.h
    llvm/utils/benchmark/src/complexity.cc
    llvm/utils/benchmark/src/complexity.h
    llvm/utils/benchmark/src/console_reporter.cc
    llvm/utils/benchmark/src/counter.cc
    llvm/utils/benchmark/src/counter.h
    llvm/utils/benchmark/src/csv_reporter.cc
    llvm/utils/benchmark/src/cycleclock.h
    llvm/utils/benchmark/src/internal_macros.h
    llvm/utils/benchmark/src/json_reporter.cc
    llvm/utils/benchmark/src/log.h
    llvm/utils/benchmark/src/mutex.h
    llvm/utils/benchmark/src/re.h
    llvm/utils/benchmark/src/reporter.cc
    llvm/utils/benchmark/src/sleep.cc
    llvm/utils/benchmark/src/sleep.h
    llvm/utils/benchmark/src/statistics.cc
    llvm/utils/benchmark/src/statistics.h
    llvm/utils/benchmark/src/string_util.cc
    llvm/utils/benchmark/src/string_util.h
    llvm/utils/benchmark/src/sysinfo.cc
    llvm/utils/benchmark/src/thread_manager.h
    llvm/utils/benchmark/src/thread_timer.h
    llvm/utils/benchmark/src/timers.cc
    llvm/utils/benchmark/src/timers.h
    llvm/utils/benchmark/test/AssemblyTests.cmake
    llvm/utils/benchmark/test/CMakeLists.txt
    llvm/utils/benchmark/test/basic_test.cc
    llvm/utils/benchmark/test/benchmark_gtest.cc
    llvm/utils/benchmark/test/benchmark_test.cc
    llvm/utils/benchmark/test/clobber_memory_assembly_test.cc
    llvm/utils/benchmark/test/complexity_test.cc
    llvm/utils/benchmark/test/cxx03_test.cc
    llvm/utils/benchmark/test/diagnostics_test.cc
    llvm/utils/benchmark/test/donotoptimize_assembly_test.cc
    llvm/utils/benchmark/test/donotoptimize_test.cc
    llvm/utils/benchmark/test/filter_test.cc
    llvm/utils/benchmark/test/fixture_test.cc
    llvm/utils/benchmark/test/link_main_test.cc
    llvm/utils/benchmark/test/map_test.cc
    llvm/utils/benchmark/test/multiple_ranges_test.cc
    llvm/utils/benchmark/test/options_test.cc
    llvm/utils/benchmark/test/output_test.h
    llvm/utils/benchmark/test/output_test_helper.cc
    llvm/utils/benchmark/test/register_benchmark_test.cc
    llvm/utils/benchmark/test/reporter_output_test.cc
    llvm/utils/benchmark/test/skip_with_error_test.cc
    llvm/utils/benchmark/test/state_assembly_test.cc
    llvm/utils/benchmark/test/statistics_gtest.cc
    llvm/utils/benchmark/test/templated_fixture_test.cc
    llvm/utils/benchmark/test/user_counters_tabular_test.cc
    llvm/utils/benchmark/test/user_counters_test.cc
    llvm/utils/benchmark/tools/compare.py
    llvm/utils/benchmark/tools/gbench/Inputs/test1_run1.json
    llvm/utils/benchmark/tools/gbench/Inputs/test1_run2.json
    llvm/utils/benchmark/tools/gbench/Inputs/test2_run.json
    llvm/utils/benchmark/tools/gbench/__init__.py
    llvm/utils/benchmark/tools/gbench/report.py
    llvm/utils/benchmark/tools/gbench/util.py
    llvm/utils/benchmark/tools/strip_asm.py


################################################################################
diff --git a/libc/benchmarks/CMakeLists.txt b/libc/benchmarks/CMakeLists.txt
index 9f01afecef122..7a0170a9c056b 100644
--- a/libc/benchmarks/CMakeLists.txt
+++ b/libc/benchmarks/CMakeLists.txt
@@ -17,7 +17,7 @@ string(REPLACE ";" " " GOOGLE_BENCHMARK_TARGET_FLAGS "${GOOGLE_BENCHMARK_TARGET_
 ExternalProject_Add(google-benchmark
     EXCLUDE_FROM_ALL ON
     PREFIX google-benchmark
-    SOURCE_DIR ${LIBC_SOURCE_DIR}/../llvm/utils/benchmark
+    SOURCE_DIR ${LLVM_THIRD_PARTY_DIR}/benchmark
     INSTALL_DIR ${CMAKE_CURRENT_BINARY_DIR}/google-benchmark
     CMAKE_CACHE_ARGS
         -DBUILD_SHARED_LIBS:BOOL=OFF
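
The same redirection is applied to every standalone benchmark build touched by
this patch (here and twice more in libcxx/benchmarks/CMakeLists.txt below):
each build now points ExternalProject_Add at the single vendored copy under
third-party/. A minimal sketch of the pattern; only SOURCE_DIR is taken
verbatim from this commit, the cache arguments are illustrative (though
BENCHMARK_ENABLE_TESTING is a real upstream option):

    include(ExternalProject)
    # Build the shared google/benchmark copy out of the monorepo's
    # third-party directory instead of a per-project fork.
    ExternalProject_Add(google-benchmark
        EXCLUDE_FROM_ALL ON
        PREFIX google-benchmark
        SOURCE_DIR ${LLVM_THIRD_PARTY_DIR}/benchmark
        INSTALL_DIR ${CMAKE_CURRENT_BINARY_DIR}/google-benchmark
        CMAKE_CACHE_ARGS
            # Illustrative: skip the library's own unit tests.
            -DBENCHMARK_ENABLE_TESTING:BOOL=OFF
            -DCMAKE_BUILD_TYPE:STRING=Release)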

diff --git a/libc/benchmarks/LibcBenchmark.cpp b/libc/benchmarks/LibcBenchmark.cpp
index cef595d75e0d9..621e0468db68d 100644
--- a/libc/benchmarks/LibcBenchmark.cpp
+++ b/libc/benchmarks/LibcBenchmark.cpp
@@ -15,7 +15,7 @@ namespace libc_benchmarks {
 
 void checkRequirements() {
   const auto &CpuInfo = benchmark::CPUInfo::Get();
-  if (CpuInfo.scaling_enabled)
+  if (CpuInfo.scaling == benchmark::CPUInfo::ENABLED)
     report_fatal_error(
         "CPU scaling is enabled, the benchmark real time measurements may be "
         "noisy and will incur extra overhead.");

diff --git a/libcxx/benchmarks/CMakeLists.txt b/libcxx/benchmarks/CMakeLists.txt
index 8758c6b7534b0..8c8c9e4f186e0 100644
--- a/libcxx/benchmarks/CMakeLists.txt
+++ b/libcxx/benchmarks/CMakeLists.txt
@@ -41,7 +41,7 @@ ExternalProject_Add(google-benchmark-libcxx
         EXCLUDE_FROM_ALL ON
         DEPENDS cxx cxx-headers
         PREFIX benchmark-libcxx
-        SOURCE_DIR ${LIBCXX_SOURCE_DIR}/utils/google-benchmark
+        SOURCE_DIR ${LLVM_THIRD_PARTY_DIR}/benchmark
         INSTALL_DIR ${CMAKE_CURRENT_BINARY_DIR}/benchmark-libcxx
         CMAKE_CACHE_ARGS
           -DCMAKE_C_COMPILER:STRING=${CMAKE_C_COMPILER}
@@ -66,7 +66,7 @@ if (LIBCXX_BENCHMARK_NATIVE_STDLIB)
   ExternalProject_Add(google-benchmark-native
         EXCLUDE_FROM_ALL ON
         PREFIX benchmark-native
-        SOURCE_DIR ${LIBCXX_SOURCE_DIR}/utils/google-benchmark
+        SOURCE_DIR ${LLVM_THIRD_PARTY_DIR}/benchmark
         INSTALL_DIR ${CMAKE_CURRENT_BINARY_DIR}/benchmark-native
         CMAKE_CACHE_ARGS
           -DCMAKE_C_COMPILER:STRING=${CMAKE_C_COMPILER}

diff --git a/libcxx/utils/google-benchmark/.clang-format b/libcxx/utils/google-benchmark/.clang-format
deleted file mode 100644
index e7d00feaa08a9..0000000000000
--- a/libcxx/utils/google-benchmark/.clang-format
+++ /dev/null
@@ -1,5 +0,0 @@
----
-Language:        Cpp
-BasedOnStyle:  Google
-PointerAlignment: Left
-...

diff --git a/libcxx/utils/google-benchmark/.github/.libcxx-setup.sh b/libcxx/utils/google-benchmark/.github/.libcxx-setup.sh
deleted file mode 100755
index 56008403ae921..0000000000000
--- a/libcxx/utils/google-benchmark/.github/.libcxx-setup.sh
+++ /dev/null
@@ -1,24 +0,0 @@
-#!/usr/bin/env bash
-
-# Checkout LLVM sources
-git clone --depth=1 https://github.com/llvm/llvm-project.git llvm-project
-
-# Setup libc++ options
-if [ -z "$BUILD_32_BITS" ]; then
-  export BUILD_32_BITS=OFF && echo disabling 32 bit build
-fi
-
-# Build and install libc++ (Use unstable ABI for better sanitizer coverage)
-cd ./llvm-project
-cmake -DCMAKE_C_COMPILER=${C_COMPILER}          \
-      -DCMAKE_CXX_COMPILER=${COMPILER}          \
-      -DCMAKE_BUILD_TYPE=RelWithDebInfo         \
-      -DCMAKE_INSTALL_PREFIX=/usr               \
-      -DLIBCXX_ABI_UNSTABLE=OFF                 \
-      -DLLVM_USE_SANITIZER=${LIBCXX_SANITIZER}  \
-      -DLLVM_BUILD_32_BITS=${BUILD_32_BITS}     \
-      -DLLVM_ENABLE_PROJECTS='libcxx;libcxxabi' \
-      -S llvm -B llvm-build -G "Unix Makefiles"
-make -C llvm-build -j3 cxx cxxabi
-sudo make -C llvm-build install-cxx install-cxxabi
-cd ..

diff --git a/libcxx/utils/google-benchmark/.github/ISSUE_TEMPLATE/bug_report.md b/libcxx/utils/google-benchmark/.github/ISSUE_TEMPLATE/bug_report.md
deleted file mode 100644
index 6c2ced9b2ec5b..0000000000000
--- a/libcxx/utils/google-benchmark/.github/ISSUE_TEMPLATE/bug_report.md
+++ /dev/null
@@ -1,32 +0,0 @@
----
-name: Bug report
-about: Create a report to help us improve
-title: "[BUG]"
-labels: ''
-assignees: ''
-
----
-
-**Describe the bug**
-A clear and concise description of what the bug is.
-
-**System**
-Which OS, compiler, and compiler version are you using:
-  - OS: 
-  - Compiler and version: 
-
-**To reproduce**
-Steps to reproduce the behavior:
-1. sync to commit ...
-2. cmake/bazel...
-3. make ...
-4. See error
-
-**Expected behavior**
-A clear and concise description of what you expected to happen.
-
-**Screenshots**
-If applicable, add screenshots to help explain your problem.
-
-**Additional context**
-Add any other context about the problem here.

diff --git a/libcxx/utils/google-benchmark/.github/ISSUE_TEMPLATE/feature_request.md b/libcxx/utils/google-benchmark/.github/ISSUE_TEMPLATE/feature_request.md
deleted file mode 100644
index 9e8ab6a673f6b..0000000000000
--- a/libcxx/utils/google-benchmark/.github/ISSUE_TEMPLATE/feature_request.md
+++ /dev/null
@@ -1,20 +0,0 @@
----
-name: Feature request
-about: Suggest an idea for this project
-title: "[FR]"
-labels: ''
-assignees: ''
-
----
-
-**Is your feature request related to a problem? Please describe.**
-A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
-
-**Describe the solution you'd like**
-A clear and concise description of what you want to happen.
-
-**Describe alternatives you've considered**
-A clear and concise description of any alternative solutions or features you've considered.
-
-**Additional context**
-Add any other context or screenshots about the feature request here.

diff --git a/libcxx/utils/google-benchmark/.github/workflows/bazel.yml b/libcxx/utils/google-benchmark/.github/workflows/bazel.yml
deleted file mode 100644
index a53661b2f9b11..0000000000000
--- a/libcxx/utils/google-benchmark/.github/workflows/bazel.yml
+++ /dev/null
@@ -1,30 +0,0 @@
-name: bazel
-
-on:
-  push: {}
-  pull_request: {}
-
-jobs:
-  build-and-test:
-    runs-on: ubuntu-latest
-    
-    steps:
-    - uses: actions/checkout@v1
-
-    - name: mount bazel cache
-      uses: actions/cache@v2.0.0
-      env:
-        cache-name: bazel-cache
-      with:
-        path: "~/.cache/bazel"
-        key: ${{ env.cache-name }}-${{ runner.os }}-${{ github.ref }}
-        restore-keys: |
-          ${{ env.cache-name }}-${{ runner.os }}-main
-
-    - name: build
-      run: |
-        bazel build //:benchmark //:benchmark_main //test/...
-
-    - name: test
-      run: |
-        bazel test --test_output=all //test/...

diff --git a/libcxx/utils/google-benchmark/.github/workflows/build-and-test-perfcounters.yml b/libcxx/utils/google-benchmark/.github/workflows/build-and-test-perfcounters.yml
deleted file mode 100644
index b2b541919766f..0000000000000
--- a/libcxx/utils/google-benchmark/.github/workflows/build-and-test-perfcounters.yml
+++ /dev/null
@@ -1,44 +0,0 @@
-name: build-and-test-perfcounters
-
-on:
-  push:
-    branches: [ main ]
-  pull_request:
-    branches: [ main ]
-
-jobs:
-  job:
-    # TODO(dominic): Extend this to include compiler and set through env: CC/CXX.
-    name: ${{ matrix.os }}.${{ matrix.build_type }}
-    runs-on: ${{ matrix.os }}
-    strategy:
-      fail-fast: false
-      matrix:
-        os: [ubuntu-latest, ubuntu-16.04, ubuntu-20.04]
-        build_type: ['Release', 'Debug']
-    steps:
-    - uses: actions/checkout@v2
-
-    - name: install libpfm
-      run: sudo apt install libpfm4-dev
-
-    - name: create build environment
-      run: cmake -E make_directory ${{ runner.workspace }}/_build
-
-    - name: configure cmake
-      shell: bash
-      working-directory: ${{ runner.workspace }}/_build
-      run: cmake  -DBENCHMARK_ENABLE_LIBPFM=1 -DBENCHMARK_DOWNLOAD_DEPENDENCIES=ON $GITHUB_WORKSPACE -DCMAKE_BUILD_TYPE=${{ matrix.build_type }}
-
-    - name: build
-      shell: bash
-      working-directory: ${{ runner.workspace }}/_build
-      run: cmake --build . --config ${{ matrix.build_type }}
-
-    # Skip testing, for now. It seems perf_event_open does not succeed on the
-    # hosting machine, very likely a permissions issue.
-    # TODO(mtrofin): Enable test.
-    # - name: test
-    #   shell: bash
-    #   working-directory: ${{ runner.workspace }}/_build
-    #   run: sudo ctest -C ${{ matrix.build_type }} --rerun-failed --output-on-failure

diff --git a/libcxx/utils/google-benchmark/.github/workflows/build-and-test.yml b/libcxx/utils/google-benchmark/.github/workflows/build-and-test.yml
deleted file mode 100644
index 9e5be3b1dc172..0000000000000
--- a/libcxx/utils/google-benchmark/.github/workflows/build-and-test.yml
+++ /dev/null
@@ -1,110 +0,0 @@
-name: build-and-test
-
-on:
-  push: {}
-  pull_request: {}
-
-jobs:
-  # TODO: add 32-bit builds (g++ and clang++) for ubuntu
-  #   (requires g++-multilib and libc6:i386)
-  # TODO: add coverage build (requires lcov)
-  # TODO: add clang + libc++ builds for ubuntu
-  # TODO: add clang + ubsan/asan/msan + libc++ builds for ubuntu
-  job:
-    name: ${{ matrix.os }}.${{ matrix.build_type }}.${{ matrix.compiler }}
-    runs-on: ${{ matrix.os }}
-    strategy:
-      fail-fast: false
-      matrix:
-        os: [ubuntu-latest, ubuntu-16.04, ubuntu-20.04, macos-latest]
-        build_type: ['Release', 'Debug']
-        compiler: [g++, clang++]
-        include:
-          - displayTargetName: windows-latest-release
-            os: windows-latest
-            build_type: 'Release'
-          - displayTargetName: windows-latest-debug
-            os: windows-latest
-            build_type: 'Debug'
-    steps:
-      - uses: actions/checkout@v2
-
-      - name: create build environment
-        run: cmake -E make_directory ${{ runner.workspace }}/_build
-
-      - name: configure cmake
-        env:
-          CXX: ${{ matrix.compiler }}
-        shell: bash
-        working-directory: ${{ runner.workspace }}/_build
-        run: >
-          cmake $GITHUB_WORKSPACE
-          -DBENCHMARK_DOWNLOAD_DEPENDENCIES=ON
-          -DCMAKE_BUILD_TYPE=${{ matrix.build_type }}
-
-      - name: build
-        shell: bash
-        working-directory: ${{ runner.workspace }}/_build
-        run: cmake --build . --config ${{ matrix.build_type }}
-
-      - name: test
-        shell: bash
-        working-directory: ${{ runner.workspace }}/_build
-        run: ctest -C ${{ matrix.build_type }} -VV
-
-  ubuntu-14_04:
-    name: ubuntu-14.04.${{ matrix.build_type }}.${{ matrix.compiler }}
-    runs-on: [ubuntu-latest]
-    strategy:
-      fail-fast: false
-      matrix:
-        build_type: ['Release', 'Debug']
-        compiler: [g++-4.8, clang++-3.6]
-        include:
-          - compiler: g++-6
-            build_type: 'Debug'
-            run_tests: true
-          - compiler: g++-6
-            build_type: 'Release'
-            run_tests: true
-    container: ubuntu:14.04
-    steps:
-      - uses: actions/checkout@v2
-
-      - name: install required bits
-        run: |
-          sudo apt update
-          sudo apt -y install clang-3.6 cmake3 g++-4.8 git
-
-      - name: install other bits
-        if: ${{ matrix.compiler }} == g++-6
-        run: |
-          sudo apt -y install software-properties-common
-          sudo add-apt-repository -y "ppa:ubuntu-toolchain-r/test"
-          sudo apt update
-          sudo apt -y install g++-6
-
-      - name: create build environment
-        run: cmake -E make_directory $GITHUB_WORKSPACE/_build
-
-      - name: configure cmake
-        env:
-          CXX: ${{ matrix.compiler }}
-        shell: bash
-        working-directory: ${{ github.workspace }}/_build
-        run: >
-          cmake $GITHUB_WORKSPACE
-          -DBENCHMARK_ENABLE_TESTING=${{ matrix.run_tests }}
-          -DCMAKE_BUILD_TYPE=${{ matrix.build_type }}
-          -DBENCHMARK_DOWNLOAD_DEPENDENCIES=${{ matrix.run_tests }}
-
-      - name: build
-        shell: bash
-        working-directory: ${{ github.workspace }}/_build
-        run: cmake --build . --config ${{ matrix.build_type }}
-
-      - name: test
-        if: ${{ matrix.run_tests }}
-        shell: bash
-        working-directory: ${{ github.workspace }}/_build
-        run: ctest -C ${{ matrix.build_type }} -VV

diff --git a/libcxx/utils/google-benchmark/.github/workflows/pylint.yml b/libcxx/utils/google-benchmark/.github/workflows/pylint.yml
deleted file mode 100644
index 0f73a5823206e..0000000000000
--- a/libcxx/utils/google-benchmark/.github/workflows/pylint.yml
+++ /dev/null
@@ -1,26 +0,0 @@
-name: pylint
-
-on:
-  push:
-    branches: [ main ]
-  pull_request:
-    branches: [ main ]
-
-jobs:
-  pylint:
-
-    runs-on: ubuntu-latest
-
-    steps:
-    - uses: actions/checkout@v2
-    - name: Set up Python 3.8
-      uses: actions/setup-python@v1
-      with:
-        python-version: 3.8
-    - name: Install dependencies
-      run: |
-        python -m pip install --upgrade pip
-        pip install pylint pylint-exit conan
-    - name: Run pylint
-      run: |
-        pylint `find . -name '*.py'|xargs` || pylint-exit $?

diff --git a/libcxx/utils/google-benchmark/.github/workflows/sanitizer.yml b/libcxx/utils/google-benchmark/.github/workflows/sanitizer.yml
deleted file mode 100644
index fbc984492df68..0000000000000
--- a/libcxx/utils/google-benchmark/.github/workflows/sanitizer.yml
+++ /dev/null
@@ -1,78 +0,0 @@
-name: sanitizer
-
-on:
-  push: {}
-  pull_request: {}
-
-env:
-  CC: clang
-  CXX: clang++
-  EXTRA_CXX_FLAGS: "-stdlib=libc++"
-  UBSAN_OPTIONS: "print_stacktrace=1"
-
-jobs:
-  job:
-    name: ${{ matrix.sanitizer }}.${{ matrix.build_type }}
-    runs-on: ubuntu-latest
-    strategy:
-      fail-fast: false
-      matrix:
-        build_type: ['Debug', 'RelWithDebInfo']
-        sanitizer: ['asan', 'ubsan', 'tsan']
-        # TODO: add 'msan' above. currently failing and needs investigation.
-    steps:
-    - uses: actions/checkout@v2
-
-    - name: configure msan env
-      if: matrix.sanitizer == 'msan'
-      run: |
-        echo "EXTRA_FLAGS=-g -O2 -fno-omit-frame-pointer -fsanitize=memory -fsanitize-memory-track-origins" >> $GITHUB_ENV
-        echo "LIBCXX_SANITIZER=MemoryWithOrigins" >> $GITHUB_ENV
-
-    - name: configure ubsan env
-      if: matrix.sanitizer == 'ubsan'
-      run: |
-        echo "EXTRA_FLAGS=-g -O2 -fno-omit-frame-pointer -fsanitize=undefined -fno-sanitize-recover=all" >> $GITHUB_ENV
-        echo "LIBCXX_SANITIZER=Undefined" >> $GITHUB_ENV
-
-    - name: configure asan env
-      if: matrix.sanitizer == 'asan'
-      run: |
-        echo "EXTRA_FLAGS=-g -O2 -fno-omit-frame-pointer -fsanitize=address -fno-sanitize-recover=all" >> $GITHUB_ENV
-        echo "LIBCXX_SANITIZER=Address" >> $GITHUB_ENV
-
-    - name: configure tsan env
-      if: matrix.sanitizer == 'tsan'
-      run: |
-        echo "EXTRA_FLAGS=-g -O2 -fno-omit-frame-pointer -fsanitize=thread -fno-sanitize-recover=all" >> $GITHUB_ENV
-        echo "LIBCXX_SANITIZER=Thread" >> $GITHUB_ENV
-
-    - name: install llvm stuff
-      run: "${GITHUB_WORKSPACE}/.github/.libcxx-setup.sh"
-
-    - name: create build environment
-      run: cmake -E make_directory ${{ runner.workspace }}/_build
-
-    - name: configure cmake
-      shell: bash
-      working-directory: ${{ runner.workspace }}/_build
-      run: >
-        cmake $GITHUB_WORKSPACE
-        -DBENCHMARK_ENABLE_ASSEMBLY_TESTS=OFF
-        -DBENCHMARK_ENABLE_LIBPFM=OFF
-        -DBENCHMARK_DOWNLOAD_DEPENDENCIES=ON
-        -DCMAKE_C_COMPILER=${{ env.CC }}
-        -DCMAKE_CXX_COMPILER=${{ env.CXX }}
-        -DCMAKE_C_FLAGS="${{ env.EXTRA_FLAGS }}"
-        -DCMAKE_CXX_FLAGS="${{ env.EXTRA_FLAGS }} ${{ env.EXTRA_CXX_FLAGS }}"
-        -DCMAKE_BUILD_TYPE=${{ matrix.build_type }}
-
-    - name: build
-      shell: bash
-      working-directory: ${{ runner.workspace }}/_build
-      run: cmake --build . --config ${{ matrix.build_type }}
-
-    - name: test
-      shell: bash
-      working-directory: ${{ runner.workspace }}/_build
-      run: ctest -C ${{ matrix.build_type }} -VV

diff --git a/libcxx/utils/google-benchmark/.github/workflows/test_bindings.yml b/libcxx/utils/google-benchmark/.github/workflows/test_bindings.yml
deleted file mode 100644
index 4a580ebe047a4..0000000000000
--- a/libcxx/utils/google-benchmark/.github/workflows/test_bindings.yml
+++ /dev/null
@@ -1,24 +0,0 @@
-name: test-bindings
-
-on:
-  push:
-    branches: [main]
-  pull_request:
-    branches: [main]
-
-jobs:
-  python_bindings:
-    runs-on: ubuntu-latest
-
-    steps:
-      - uses: actions/checkout@v2
-      - name: Set up Python
-        uses: actions/setup-python@v1
-        with:
-          python-version: 3.8
-      - name: Install benchmark
-        run:
-          python setup.py install
-      - name: Run example bindings
-        run:
-          python bindings/python/google_benchmark/example.py

diff --git a/libcxx/utils/google-benchmark/.gitignore b/libcxx/utils/google-benchmark/.gitignore
deleted file mode 100644
index be55d774e21bd..0000000000000
--- a/libcxx/utils/google-benchmark/.gitignore
+++ /dev/null
@@ -1,66 +0,0 @@
-*.a
-*.so
-*.so.?*
-*.dll
-*.exe
-*.dylib
-*.cmake
-!/cmake/*.cmake
-!/test/AssemblyTests.cmake
-*~
-*.swp
-*.pyc
-__pycache__
-
-# lcov
-*.lcov
-/lcov
-
-# cmake files.
-/Testing
-CMakeCache.txt
-CMakeFiles/
-cmake_install.cmake
-
-# makefiles.
-Makefile
-
-# in-source build.
-bin/
-lib/
-/test/*_test
-
-# exuberant ctags.
-tags
-
-# YouCompleteMe configuration.
-.ycm_extra_conf.pyc
-
-# ninja generated files.
-.ninja_deps
-.ninja_log
-build.ninja
-install_manifest.txt
-rules.ninja
-
-# bazel output symlinks.
-bazel-*
-
-# out-of-source build top-level folders.
-build/
-_build/
-build*/
-
-# in-source dependencies
-/googletest/
-
-# Visual Studio 2015/2017 cache/options directory
-.vs/
-CMakeSettings.json
-
-# Visual Studio Code cache/options directory
-.vscode/
-
-# Python build stuff
-dist/
-*.egg-info*

diff --git a/libcxx/utils/google-benchmark/.travis.yml b/libcxx/utils/google-benchmark/.travis.yml
deleted file mode 100644
index 8cfed3d10dab5..0000000000000
--- a/libcxx/utils/google-benchmark/.travis.yml
+++ /dev/null
@@ -1,208 +0,0 @@
-sudo: required
-dist: trusty
-language: cpp
-
-matrix:
-  include:
-    - compiler: gcc
-      addons:
-        apt:
-          packages:
-            - lcov
-      env: COMPILER=g++ C_COMPILER=gcc BUILD_TYPE=Coverage
-    - compiler: gcc
-      addons:
-        apt:
-          packages:
-            - g++-multilib
-            - libc6:i386
-      env:
-        - COMPILER=g++
-        - C_COMPILER=gcc
-        - BUILD_TYPE=Debug
-        - BUILD_32_BITS=ON
-        - EXTRA_FLAGS="-m32"
-    - compiler: gcc
-      addons:
-        apt:
-          packages:
-            - g++-multilib
-            - libc6:i386
-      env:
-        - COMPILER=g++
-        - C_COMPILER=gcc
-        - BUILD_TYPE=Release
-        - BUILD_32_BITS=ON
-        - EXTRA_FLAGS="-m32"
-    - compiler: gcc
-      env:
-        - INSTALL_GCC6_FROM_PPA=1
-        - COMPILER=g++-6 C_COMPILER=gcc-6  BUILD_TYPE=Debug
-        - ENABLE_SANITIZER=1
-        - EXTRA_FLAGS="-fno-omit-frame-pointer -g -O2 -fsanitize=undefined,address -fuse-ld=gold"
-    # Clang w/ libc++
-    - compiler: clang
-      dist: xenial
-      addons:
-        apt:
-          packages:
-            clang-3.8
-      env:
-        - INSTALL_GCC6_FROM_PPA=1
-        - COMPILER=clang++-3.8 C_COMPILER=clang-3.8 BUILD_TYPE=Debug
-        - LIBCXX_BUILD=1
-        - EXTRA_CXX_FLAGS="-stdlib=libc++"
-    - compiler: clang
-      dist: xenial
-      addons:
-        apt:
-          packages:
-            clang-3.8
-      env:
-        - INSTALL_GCC6_FROM_PPA=1
-        - COMPILER=clang++-3.8 C_COMPILER=clang-3.8 BUILD_TYPE=Release
-        - LIBCXX_BUILD=1
-        - EXTRA_CXX_FLAGS="-stdlib=libc++"
-    # Clang w/ 32bit libc++
-    - compiler: clang
-      dist: xenial
-      addons:
-        apt:
-          packages:
-            - clang-3.8
-            - g++-multilib
-            - libc6:i386
-      env:
-        - INSTALL_GCC6_FROM_PPA=1
-        - COMPILER=clang++-3.8 C_COMPILER=clang-3.8 BUILD_TYPE=Debug
-        - LIBCXX_BUILD=1
-        - BUILD_32_BITS=ON
-        - EXTRA_FLAGS="-m32"
-        - EXTRA_CXX_FLAGS="-stdlib=libc++"
-    # Clang w/ 32bit libc++
-    - compiler: clang
-      dist: xenial
-      addons:
-        apt:
-          packages:
-            - clang-3.8
-            - g++-multilib
-            - libc6:i386
-      env:
-        - INSTALL_GCC6_FROM_PPA=1
-        - COMPILER=clang++-3.8 C_COMPILER=clang-3.8 BUILD_TYPE=Release
-        - LIBCXX_BUILD=1
-        - BUILD_32_BITS=ON
-        - EXTRA_FLAGS="-m32"
-        - EXTRA_CXX_FLAGS="-stdlib=libc++"
-    # Clang w/ libc++, ASAN, UBSAN
-    - compiler: clang
-      dist: xenial
-      addons:
-        apt:
-          packages:
-            clang-3.8
-      env:
-        - INSTALL_GCC6_FROM_PPA=1
-        - COMPILER=clang++-3.8 C_COMPILER=clang-3.8 BUILD_TYPE=Debug
-        - LIBCXX_BUILD=1 LIBCXX_SANITIZER="Undefined;Address"
-        - ENABLE_SANITIZER=1
-        - EXTRA_FLAGS="-g -O2 -fno-omit-frame-pointer -fsanitize=undefined,address -fno-sanitize-recover=all"
-        - EXTRA_CXX_FLAGS="-stdlib=libc++"
-        - UBSAN_OPTIONS=print_stacktrace=1
-    # Clang w/ libc++ and MSAN
-    - compiler: clang
-      dist: xenial
-      addons:
-        apt:
-          packages:
-            clang-3.8
-      env:
-        - INSTALL_GCC6_FROM_PPA=1
-        - COMPILER=clang++-3.8 C_COMPILER=clang-3.8 BUILD_TYPE=Debug
-        - LIBCXX_BUILD=1 LIBCXX_SANITIZER=MemoryWithOrigins
-        - ENABLE_SANITIZER=1
-        - EXTRA_FLAGS="-g -O2 -fno-omit-frame-pointer -fsanitize=memory -fsanitize-memory-track-origins"
-        - EXTRA_CXX_FLAGS="-stdlib=libc++"
-    # Clang w/ libc++ and MSAN
-    - compiler: clang
-      dist: xenial
-      addons:
-        apt:
-          packages:
-            clang-3.8
-      env:
-        - INSTALL_GCC6_FROM_PPA=1
-        - COMPILER=clang++-3.8 C_COMPILER=clang-3.8 BUILD_TYPE=RelWithDebInfo
-        - LIBCXX_BUILD=1 LIBCXX_SANITIZER=Thread
-        - ENABLE_SANITIZER=1
-        - EXTRA_FLAGS="-g -O2 -fno-omit-frame-pointer -fsanitize=thread -fno-sanitize-recover=all"
-        - EXTRA_CXX_FLAGS="-stdlib=libc++"
-    - os: osx
-      osx_image: xcode8.3
-      compiler: clang
-      env:
-        - COMPILER=clang++
-        - BUILD_TYPE=Release
-        - BUILD_32_BITS=ON
-        - EXTRA_FLAGS="-m32"
-
-before_script:
-  - if [ -n "${LIBCXX_BUILD}" ]; then
-      source .libcxx-setup.sh;
-    fi
-  - if [ -n "${ENABLE_SANITIZER}" ]; then
-      export EXTRA_OPTIONS="-DBENCHMARK_ENABLE_ASSEMBLY_TESTS=OFF";
-    else
-      export EXTRA_OPTIONS="";
-    fi
-  - mkdir -p build && cd build
-
-before_install:
-  - if [ -z "$BUILD_32_BITS" ]; then
-      export BUILD_32_BITS=OFF && echo disabling 32 bit build;
-    fi
-  - if [ -n "${INSTALL_GCC6_FROM_PPA}" ]; then
-      sudo add-apt-repository -y "ppa:ubuntu-toolchain-r/test";
-      sudo apt-get update --option Acquire::Retries=100 --option Acquire::http::Timeout="60";
-    fi
-
-install:
-  - if [ -n "${INSTALL_GCC6_FROM_PPA}" ]; then
-      travis_wait sudo -E apt-get -yq --no-install-suggests --no-install-recommends install g++-6;
-    fi
-  - if [ "${TRAVIS_OS_NAME}" == "linux" -a "${BUILD_32_BITS}" == "OFF" ]; then
-      travis_wait sudo -E apt-get -y --no-install-suggests --no-install-recommends install llvm-3.9-tools;
-      sudo cp /usr/lib/llvm-3.9/bin/FileCheck /usr/local/bin/;
-    fi
-  - if [ "${BUILD_TYPE}" == "Coverage" -a "${TRAVIS_OS_NAME}" == "linux" ]; then
-      PATH=~/.local/bin:${PATH};
-      pip install --user --upgrade pip;
-      travis_wait pip install --user cpp-coveralls;
-    fi
-  - if [ "${C_COMPILER}" == "gcc-7" -a "${TRAVIS_OS_NAME}" == "osx" ]; then
-      rm -f /usr/local/include/c++;
-      brew update;
-      travis_wait brew install gcc@7;
-    fi
-  - if [ "${TRAVIS_OS_NAME}" == "linux" ]; then
-      sudo apt-get update -qq;
-      sudo apt-get install -qq unzip cmake3;
-      wget https://github.com/bazelbuild/bazel/releases/download/3.2.0/bazel-3.2.0-installer-linux-x86_64.sh --output-document bazel-installer.sh;
-      travis_wait sudo bash bazel-installer.sh;
-    fi
-  - if [ "${TRAVIS_OS_NAME}" == "osx" ]; then
-      curl -L -o bazel-installer.sh https://github.com/bazelbuild/bazel/releases/download/3.2.0/bazel-3.2.0-installer-darwin-x86_64.sh;
-      travis_wait sudo bash bazel-installer.sh;
-    fi
-
-script:
-  - cmake -DCMAKE_C_COMPILER=${C_COMPILER} -DCMAKE_CXX_COMPILER=${COMPILER} -DCMAKE_BUILD_TYPE=${BUILD_TYPE} -DCMAKE_C_FLAGS="${EXTRA_FLAGS}" -DCMAKE_CXX_FLAGS="${EXTRA_FLAGS} ${EXTRA_CXX_FLAGS}" -DBENCHMARK_DOWNLOAD_DEPENDENCIES=ON -DBENCHMARK_BUILD_32_BITS=${BUILD_32_BITS} ${EXTRA_OPTIONS} ..
-  - make
-  - ctest -C ${BUILD_TYPE} --output-on-failure
-  - bazel test -c dbg --define google_benchmark.have_regex=posix --announce_rc --verbose_failures --test_output=errors --keep_going //test/...
-
-after_success:
-  - if [ "${BUILD_TYPE}" == "Coverage" -a "${TRAVIS_OS_NAME}" == "linux" ]; then
-      coveralls --include src --include include --gcov-options '\-lp' --root .. --build-root .;
-    fi

diff --git a/libcxx/utils/google-benchmark/.ycm_extra_conf.py b/libcxx/utils/google-benchmark/.ycm_extra_conf.py
deleted file mode 100644
index 5649ddcc749f0..0000000000000
--- a/libcxx/utils/google-benchmark/.ycm_extra_conf.py
+++ /dev/null
@@ -1,115 +0,0 @@
-import os
-import ycm_core
-
-# These are the compilation flags that will be used in case there's no
-# compilation database set (by default, one is not set).
-# CHANGE THIS LIST OF FLAGS. YES, THIS IS THE DROID YOU HAVE BEEN LOOKING FOR.
-flags = [
-'-Wall',
-'-Werror',
-'-pedantic-errors',
-'-std=c++0x',
-'-fno-strict-aliasing',
-'-O3',
-'-DNDEBUG',
-# ...and the same thing goes for the magic -x option which specifies the
-# language that the files to be compiled are written in. This is mostly
-# relevant for c++ headers.
-# For a C project, you would set this to 'c' instead of 'c++'.
-'-x', 'c++',
-'-I', 'include',
-'-isystem', '/usr/include',
-'-isystem', '/usr/local/include',
-]
-
-
-# Set this to the absolute path to the folder (NOT the file!) containing the
-# compile_commands.json file to use that instead of 'flags'. See here for
-# more details: http://clang.llvm.org/docs/JSONCompilationDatabase.html
-#
-# Most projects will NOT need to set this to anything; you can just change the
-# 'flags' list of compilation flags. Notice that YCM itself uses that approach.
-compilation_database_folder = ''
-
-if os.path.exists( compilation_database_folder ):
-  database = ycm_core.CompilationDatabase( compilation_database_folder )
-else:
-  database = None
-
-SOURCE_EXTENSIONS = [ '.cc' ]
-
-def DirectoryOfThisScript():
-  return os.path.dirname( os.path.abspath( __file__ ) )
-
-
-def MakeRelativePathsInFlagsAbsolute( flags, working_directory ):
-  if not working_directory:
-    return list( flags )
-  new_flags = []
-  make_next_absolute = False
-  path_flags = [ '-isystem', '-I', '-iquote', '--sysroot=' ]
-  for flag in flags:
-    new_flag = flag
-
-    if make_next_absolute:
-      make_next_absolute = False
-      if not flag.startswith( '/' ):
-        new_flag = os.path.join( working_directory, flag )
-
-    for path_flag in path_flags:
-      if flag == path_flag:
-        make_next_absolute = True
-        break
-
-      if flag.startswith( path_flag ):
-        path = flag[ len( path_flag ): ]
-        new_flag = path_flag + os.path.join( working_directory, path )
-        break
-
-    if new_flag:
-      new_flags.append( new_flag )
-  return new_flags
-
-
-def IsHeaderFile( filename ):
-  extension = os.path.splitext( filename )[ 1 ]
-  return extension in [ '.h', '.hxx', '.hpp', '.hh' ]
-
-
-def GetCompilationInfoForFile( filename ):
-  # The compilation_commands.json file generated by CMake does not have entries
-  # for header files. So we do our best by asking the db for flags for a
-  # corresponding source file, if any. If one exists, the flags for that file
-  # should be good enough.
-  if IsHeaderFile( filename ):
-    basename = os.path.splitext( filename )[ 0 ]
-    for extension in SOURCE_EXTENSIONS:
-      replacement_file = basename + extension
-      if os.path.exists( replacement_file ):
-        compilation_info = database.GetCompilationInfoForFile(
-          replacement_file )
-        if compilation_info.compiler_flags_:
-          return compilation_info
-    return None
-  return database.GetCompilationInfoForFile( filename )
-
-
-def FlagsForFile( filename, **kwargs ):
-  if database:
-    # Bear in mind that compilation_info.compiler_flags_ does NOT return a
-    # python list, but a "list-like" StringVec object
-    compilation_info = GetCompilationInfoForFile( filename )
-    if not compilation_info:
-      return None
-
-    final_flags = MakeRelativePathsInFlagsAbsolute(
-      compilation_info.compiler_flags_,
-      compilation_info.compiler_working_dir_ )
-  else:
-    relative_to = DirectoryOfThisScript()
-    final_flags = MakeRelativePathsInFlagsAbsolute( flags, relative_to )
-
-  return {
-    'flags': final_flags,
-    'do_cache': True
-  }

diff --git a/llvm/CMakeLists.txt b/llvm/CMakeLists.txt
index c06143e679bf0..d12befc1a5389 100644
--- a/llvm/CMakeLists.txt
+++ b/llvm/CMakeLists.txt
@@ -307,6 +307,8 @@ set(LLVM_MAIN_SRC_DIR     ${CMAKE_CURRENT_SOURCE_DIR}  ) # --src-root
 set(LLVM_MAIN_INCLUDE_DIR ${LLVM_MAIN_SRC_DIR}/include ) # --includedir
 set(LLVM_BINARY_DIR       ${CMAKE_CURRENT_BINARY_DIR}  ) # --prefix
 
+set(LLVM_THIRD_PARTY_DIR  ${CMAKE_CURRENT_SOURCE_DIR}/../third-party)
+
 # Note: LLVM_CMAKE_DIR does not include generated files
 set(LLVM_CMAKE_DIR ${LLVM_MAIN_SRC_DIR}/cmake/modules)
 set(LLVM_EXAMPLES_BINARY_DIR ${LLVM_BINARY_DIR}/examples)
@@ -1209,8 +1211,8 @@ if (LLVM_INCLUDE_BENCHMARKS)
   set(BENCHMARK_ENABLE_GTEST_TESTS OFF CACHE BOOL "Disable Google Test in benchmark" FORCE)
   # Since LLVM requires C++11 it is safe to assume that std::regex is available.
   set(HAVE_STD_REGEX ON CACHE BOOL "OK" FORCE)
-
-  add_subdirectory(utils/benchmark)
+  add_subdirectory(${LLVM_THIRD_PARTY_DIR}/benchmark 
+    ${CMAKE_CURRENT_BINARY_DIR}/third-party/benchmark)
   add_subdirectory(benchmarks)
 endif()
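
Note the two-argument form in the hunk above: when the source directory passed
to add_subdirectory lies outside the current source tree (third-party/ is a
sibling of llvm/, not a subdirectory), CMake requires an explicit binary
directory as the second argument. A minimal sketch of the pattern, restating
the paths this commit introduces:

    # Out-of-tree subdirectory: the second argument tells CMake where to
    # place the generated build files for it.
    set(LLVM_THIRD_PARTY_DIR ${CMAKE_CURRENT_SOURCE_DIR}/../third-party)
    add_subdirectory(${LLVM_THIRD_PARTY_DIR}/benchmark
                     ${CMAKE_CURRENT_BINARY_DIR}/third-party/benchmark)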
 

diff --git a/llvm/utils/benchmark/AUTHORS b/llvm/utils/benchmark/AUTHORS
deleted file mode 100644
index 052a383f77cdf..0000000000000
--- a/llvm/utils/benchmark/AUTHORS
+++ /dev/null
@@ -1,47 +0,0 @@
-# This is the official list of benchmark authors for copyright purposes.
-# This file is distinct from the CONTRIBUTORS files.
-# See the latter for an explanation.
-#
-# Names should be added to this file as:
-#	Name or Organization <email address>
-# The email address is not required for organizations.
-#
-# Please keep the list sorted.
-
-Albert Pretorius <pretoalb@gmail.com>
-Arne Beer <arne@twobeer.de>
-Carto
-Christopher Seymour <chris.j.seymour@hotmail.com>
-David Coeurjolly <david.coeurjolly@liris.cnrs.fr>
-Deniz Evrenci <denizevrenci@gmail.com>
-Dirac Research
-Dominik Czarnota <dominik.b.czarnota@gmail.com>
-Eric Fiselier <eric@efcs.ca>
-Eugene Zhuk <eugene.zhuk@gmail.com>
-Evgeny Safronov <division494@gmail.com>
-Felix Homann <linuxaudio@showlabor.de>
-Google Inc.
-International Business Machines Corporation
-Ismael Jimenez Martinez <ismael.jimenez.martinez@gmail.com>
-Jern-Kuan Leong <jernkuan@gmail.com>
-JianXiong Zhou <zhoujianxiong2@gmail.com>
-Joao Paulo Magalhaes <joaoppmagalhaes@gmail.com>
-Jussi Knuuttila <jussi.knuuttila@gmail.com>
-Kaito Udagawa <umireon@gmail.com>
-Kishan Kumar <kumar.kishan@outlook.com>
-Lei Xu <eddyxu@gmail.com>
-Matt Clarkson <mattyclarkson@gmail.com>
-Maxim Vafin <maxvafin@gmail.com>
-MongoDB Inc.
-Nick Hutchinson <nshutchinson@gmail.com>
-Oleksandr Sochka <sasha.sochka@gmail.com>
-Paul Redmond <paul.redmond@gmail.com>
-Radoslav Yovchev <radoslav.tm@gmail.com>
-Roman Lebedev <lebedev.ri@gmail.com>
-Shuo Chen <chenshuo@chenshuo.com>
-Steinar H. Gunderson <sgunderson@bigfoot.com>
-Stripe, Inc.
-Yixuan Qiu <yixuanq@gmail.com>
-Yusuke Suzuki <utatane.tea@gmail.com>
-Zbigniew Skowron <zbychs@gmail.com>
-Min-Yih Hsu <yihshyng223@gmail.com>

diff --git a/llvm/utils/benchmark/CMakeLists.txt b/llvm/utils/benchmark/CMakeLists.txt
deleted file mode 100644
index 3a80b906d5cc7..0000000000000
--- a/llvm/utils/benchmark/CMakeLists.txt
+++ /dev/null
@@ -1,266 +0,0 @@
-cmake_minimum_required(VERSION 3.13.4)
-
-# Tell cmake 3.0+ that it's safe to clear the PROJECT_VERSION variable in the
-# call to project() below.
-if(POLICY CMP0048)
-  cmake_policy(SET CMP0048 NEW)
-endif()
-
-project (benchmark)
-
-foreach(p
-    CMP0054 # CMake 3.1
-    CMP0056 # export EXE_LINKER_FLAGS to try_run
-    CMP0057 # Support no if() IN_LIST operator
-    )
-  if(POLICY ${p})
-    cmake_policy(SET ${p} NEW)
-  endif()
-endforeach()
-
-option(BENCHMARK_ENABLE_TESTING "Enable testing of the benchmark library." ON)
-option(BENCHMARK_ENABLE_EXCEPTIONS "Enable the use of exceptions in the benchmark library." ON)
-option(BENCHMARK_ENABLE_LTO "Enable link time optimisation of the benchmark library." OFF)
-option(BENCHMARK_USE_LIBCXX "Build and test using libc++ as the standard library." OFF)
-option(BENCHMARK_BUILD_32_BITS "Build a 32 bit version of the library." OFF)
-option(BENCHMARK_ENABLE_INSTALL "Enable installation of benchmark. (Projects embedding benchmark may want to turn this OFF.)" ON)
-
-# Allow unmet dependencies to be met using CMake's ExternalProject mechanics, which
-# may require downloading the source code.
-option(BENCHMARK_DOWNLOAD_DEPENDENCIES "Allow the downloading and in-tree building of unmet dependencies" OFF)
-
-# This option can be used to disable building and running unit tests which depend on gtest
-# in cases where it is not possible to build or find a valid version of gtest.
-option(BENCHMARK_ENABLE_GTEST_TESTS "Enable building the unit tests which depend on gtest" OFF)
-
-set(ENABLE_ASSEMBLY_TESTS_DEFAULT OFF)
-function(should_enable_assembly_tests)
-  if(CMAKE_BUILD_TYPE)
-    string(TOLOWER ${CMAKE_BUILD_TYPE} CMAKE_BUILD_TYPE_LOWER)
-    if (${CMAKE_BUILD_TYPE_LOWER} MATCHES "coverage")
-      # FIXME: The --coverage flag needs to be removed when building assembly
-      # tests for this to work.
-      return()
-    endif()
-  endif()
-  if (MSVC)
-    return()
-  elseif(NOT CMAKE_SYSTEM_PROCESSOR MATCHES "x86_64")
-    return()
-  elseif(NOT CMAKE_SIZEOF_VOID_P EQUAL 8)
-    # FIXME: Make these work on 32 bit builds
-    return()
-  elseif(BENCHMARK_BUILD_32_BITS)
-     # FIXME: Make these work on 32 bit builds
-    return()
-  endif()
-  find_program(LLVM_FILECHECK_EXE FileCheck)
-  if (LLVM_FILECHECK_EXE)
-    set(LLVM_FILECHECK_EXE "${LLVM_FILECHECK_EXE}" CACHE PATH "llvm filecheck" FORCE)
-    message(STATUS "LLVM FileCheck Found: ${LLVM_FILECHECK_EXE}")
-  else()
-    message(STATUS "Failed to find LLVM FileCheck")
-    return()
-  endif()
-  set(ENABLE_ASSEMBLY_TESTS_DEFAULT ON PARENT_SCOPE)
-endfunction()
-should_enable_assembly_tests()
-
-# This option disables the building and running of the assembly verification tests
-option(BENCHMARK_ENABLE_ASSEMBLY_TESTS "Enable building and running the assembly tests"
-    ${ENABLE_ASSEMBLY_TESTS_DEFAULT})
-
-# Make sure we can import our CMake functions
-list(APPEND CMAKE_MODULE_PATH "${CMAKE_CURRENT_SOURCE_DIR}/cmake/Modules")
-list(APPEND CMAKE_MODULE_PATH "${CMAKE_CURRENT_SOURCE_DIR}/cmake")
-
-
-# Read the git tags to determine the project version
-# WARNING: This is meaningless when the benchmark library is being built in-tree,
-# so disable it and hardcode a null version.
-# include(GetGitVersion)
-# get_git_version(GIT_VERSION)
-set(GIT_VERSION "v0.0.0")
-
-# Tell the user what versions we are using
-string(REGEX MATCH "[0-9]+\\.[0-9]+\\.[0-9]+" VERSION ${GIT_VERSION})
-message("-- Version: ${VERSION}")
-
-# The version of the libraries
-set(GENERIC_LIB_VERSION ${VERSION})
-string(SUBSTRING ${VERSION} 0 1 GENERIC_LIB_SOVERSION)
-
-# Import our CMake modules
-include(CheckCXXCompilerFlag)
-include(AddCXXCompilerFlag)
-include(CXXFeatureCheck)
-
-if (BENCHMARK_BUILD_32_BITS)
-  add_required_cxx_compiler_flag(-m32)
-endif()
-
-if (MSVC)
-  # Turn compiler warnings up to 11
-  string(REGEX REPLACE "[-/]W[1-4]" "" CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS}")
-  set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /W4")
-  add_definitions(-D_CRT_SECURE_NO_WARNINGS)
-
-  if (NOT BENCHMARK_ENABLE_EXCEPTIONS)
-    add_cxx_compiler_flag(-EHs-)
-    add_cxx_compiler_flag(-EHa-)
-    add_definitions(-D_HAS_EXCEPTIONS=0)
-  endif()
-  # Link time optimisation
-  if (BENCHMARK_ENABLE_LTO)
-    set(CMAKE_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_RELEASE} /GL")
-    set(CMAKE_STATIC_LINKER_FLAGS_RELEASE "${CMAKE_STATIC_LINKER_FLAGS_RELEASE} /LTCG")
-    set(CMAKE_SHARED_LINKER_FLAGS_RELEASE "${CMAKE_SHARED_LINKER_FLAGS_RELEASE} /LTCG")
-    set(CMAKE_EXE_LINKER_FLAGS_RELEASE "${CMAKE_EXE_LINKER_FLAGS_RELEASE} /LTCG")
-
-    set(CMAKE_CXX_FLAGS_RELWITHDEBINFO "${CMAKE_CXX_FLAGS_RELWITHDEBINFO} /GL")
-    string(REGEX REPLACE "[-/]INCREMENTAL" "/INCREMENTAL:NO" CMAKE_STATIC_LINKER_FLAGS_RELWITHDEBINFO "${CMAKE_STATIC_LINKER_FLAGS_RELWITHDEBINFO}")
-    set(CMAKE_STATIC_LINKER_FLAGS_RELWITHDEBINFO "${CMAKE_STATIC_LINKER_FLAGS_RELWITHDEBINFO} /LTCG")
-    string(REGEX REPLACE "[-/]INCREMENTAL" "/INCREMENTAL:NO" CMAKE_SHARED_LINKER_FLAGS_RELWITHDEBINFO "${CMAKE_SHARED_LINKER_FLAGS_RELWITHDEBINFO}")
-    set(CMAKE_SHARED_LINKER_FLAGS_RELWITHDEBINFO "${CMAKE_SHARED_LINKER_FLAGS_RELWITHDEBINFO} /LTCG")
-    string(REGEX REPLACE "[-/]INCREMENTAL" "/INCREMENTAL:NO" CMAKE_EXE_LINKER_FLAGS_RELWITHDEBINFO "${CMAKE_EXE_LINKER_FLAGS_RELWITHDEBINFO}")
-    set(CMAKE_EXE_LINKER_FLAGS_RELWITHDEBINFO "${CMAKE_EXE_LINKER_FLAGS_RELWITHDEBINFO} /LTCG")
-
-    set(CMAKE_CXX_FLAGS_MINSIZEREL "${CMAKE_CXX_FLAGS_MINSIZEREL} /GL")
-    set(CMAKE_STATIC_LINKER_FLAGS_MINSIZEREL "${CMAKE_STATIC_LINKER_FLAGS_MINSIZEREL} /LTCG")
-    set(CMAKE_SHARED_LINKER_FLAGS_MINSIZEREL "${CMAKE_SHARED_LINKER_FLAGS_MINSIZEREL} /LTCG")
-    set(CMAKE_EXE_LINKER_FLAGS_MINSIZEREL "${CMAKE_EXE_LINKER_FLAGS_MINSIZEREL} /LTCG")
-  endif()
-else()
-  # Try to enable C++11. Don't use C++14 because it doesn't work in some
-  # configurations.
-  add_cxx_compiler_flag(-std=c++11)
-  if (NOT HAVE_CXX_FLAG_STD_CXX11)
-    add_cxx_compiler_flag(-std=c++0x)
-  endif()
-
-  # Turn compiler warnings up to 11
-  add_cxx_compiler_flag(-Wall)
-
-  add_cxx_compiler_flag(-Wextra)
-  add_cxx_compiler_flag(-Wshadow)
-  # FIXME(kbobyrev): Document this change.
-  # add_cxx_compiler_flag(-Werror RELEASE)
-  # add_cxx_compiler_flag(-Werror RELWITHDEBINFO)
-  # add_cxx_compiler_flag(-Werror MINSIZEREL)
-  add_cxx_compiler_flag(-pedantic)
-  add_cxx_compiler_flag(-pedantic-errors)
-  add_cxx_compiler_flag(-Wshorten-64-to-32)
-  add_cxx_compiler_flag(-Wfloat-equal)
-  add_cxx_compiler_flag(-fstrict-aliasing)
-  if (NOT BENCHMARK_ENABLE_EXCEPTIONS)
-    add_cxx_compiler_flag(-fno-exceptions)
-  endif()
-
-  if (CXX_SUPPORTS_SUGGEST_OVERRIDE_FLAG)
-    add_cxx_compiler_flag(-Wno-suggest-override)
-  endif()
-
-  if (HAVE_CXX_FLAG_FSTRICT_ALIASING)
-    if (NOT CMAKE_CXX_COMPILER_ID STREQUAL "Intel") #ICC17u2: Many false positives for Wstrict-aliasing
-      add_cxx_compiler_flag(-Wstrict-aliasing)
-    endif()
-  endif()
-  # ICC17u2: overloaded virtual function "benchmark::Fixture::SetUp" is only partially overridden
-  # (because of deprecated overload)
-  add_cxx_compiler_flag(-wd654)
-  add_cxx_compiler_flag(-Wthread-safety)
-  if (HAVE_CXX_FLAG_WTHREAD_SAFETY)
-    cxx_feature_check(THREAD_SAFETY_ATTRIBUTES)
-  endif()
-
-  # On most UNIX-like platforms g++ and clang++ define _GNU_SOURCE as a
-  # predefined macro, which turns on all of the wonderful libc extensions.
-  # However, g++ doesn't do this on Cygwin, so we have to define it ourselves
-  # since we depend on GNU/POSIX/BSD extensions.
-  if (CYGWIN)
-    add_definitions(-D_GNU_SOURCE=1)
-  endif()
-
-  # Link time optimisation
-  if (BENCHMARK_ENABLE_LTO)
-    add_cxx_compiler_flag(-flto)
-    if ("${CMAKE_C_COMPILER_ID}" STREQUAL "GNU")
-      find_program(GCC_AR gcc-ar)
-      if (GCC_AR)
-        set(CMAKE_AR ${GCC_AR})
-      endif()
-      find_program(GCC_RANLIB gcc-ranlib)
-      if (GCC_RANLIB)
-        set(CMAKE_RANLIB ${GCC_RANLIB})
-      endif()
-    elseif("${CMAKE_C_COMPILER_ID}" STREQUAL "Clang")
-      include(llvm-toolchain)
-    endif()
-  endif()
-
-  # Coverage build type
-  set(BENCHMARK_CXX_FLAGS_COVERAGE "${CMAKE_CXX_FLAGS_DEBUG}"
-    CACHE STRING "Flags used by the C++ compiler during coverage builds."
-    FORCE)
-  set(BENCHMARK_EXE_LINKER_FLAGS_COVERAGE "${CMAKE_EXE_LINKER_FLAGS_DEBUG}"
-    CACHE STRING "Flags used for linking binaries during coverage builds."
-    FORCE)
-  set(BENCHMARK_SHARED_LINKER_FLAGS_COVERAGE "${CMAKE_SHARED_LINKER_FLAGS_DEBUG}"
-    CACHE STRING "Flags used by the shared libraries linker during coverage builds."
-    FORCE)
-  mark_as_advanced(
-    BENCHMARK_CXX_FLAGS_COVERAGE
-    BENCHMARK_EXE_LINKER_FLAGS_COVERAGE
-    BENCHMARK_SHARED_LINKER_FLAGS_COVERAGE)
-  set(CMAKE_BUILD_TYPE "${CMAKE_BUILD_TYPE}" CACHE STRING
-    "Choose the type of build, options are: None Debug Release RelWithDebInfo MinSizeRel Coverage.")
-  add_cxx_compiler_flag(--coverage COVERAGE)
-endif()
-
-if (BENCHMARK_USE_LIBCXX)
-  if ("${CMAKE_CXX_COMPILER_ID}" STREQUAL "Clang")
-    add_cxx_compiler_flag(-stdlib=libc++)
-  elseif ("${CMAKE_CXX_COMPILER_ID}" STREQUAL "GNU" OR
-          "${CMAKE_CXX_COMPILER_ID}" STREQUAL "Intel")
-    add_cxx_compiler_flag(-nostdinc++)
-    message("libc++ header path must be manually specified using CMAKE_CXX_FLAGS")
-    # Adding -nodefaultlibs directly to CMAKE_<TYPE>_LINKER_FLAGS will break
-    # configuration checks such as 'find_package(Threads)'
-    list(APPEND BENCHMARK_CXX_LINKER_FLAGS -nodefaultlibs)
-    # -lc++ cannot be added directly to CMAKE_<TYPE>_LINKER_FLAGS because
-    # linker flags appear before all linker inputs and -lc++ must appear after.
-    list(APPEND BENCHMARK_CXX_LIBRARIES c++)
-  else()
-    message(FATAL_ERROR "-DBENCHMARK_USE_LIBCXX:BOOL=ON is not supported for this compiler")
-  endif()
-endif(BENCHMARK_USE_LIBCXX)
-
-# C++ feature checks
-# Determine the correct regular expression engine to use
-cxx_feature_check(STD_REGEX)
-cxx_feature_check(GNU_POSIX_REGEX)
-cxx_feature_check(POSIX_REGEX)
-if(NOT HAVE_STD_REGEX AND NOT HAVE_GNU_POSIX_REGEX AND NOT HAVE_POSIX_REGEX)
-  message(FATAL_ERROR "Failed to determine the source files for the regular expression backend")
-endif()
-if (NOT BENCHMARK_ENABLE_EXCEPTIONS AND HAVE_STD_REGEX
-        AND NOT HAVE_GNU_POSIX_REGEX AND NOT HAVE_POSIX_REGEX)
-  message(WARNING "Using std::regex with exceptions disabled is not fully supported")
-endif()
-cxx_feature_check(STEADY_CLOCK)
-# Ensure we have pthreads
-find_package(Threads REQUIRED)
-
-# Set up directories
-include_directories(${PROJECT_SOURCE_DIR}/include)
-
-# Build the targets
-add_subdirectory(src)
-
-if (BENCHMARK_ENABLE_TESTING)
-  enable_testing()
-  if (BENCHMARK_ENABLE_GTEST_TESTS)
-    include(HandleGTest)
-  endif()
-  add_subdirectory(test)
-endif()

diff --git a/llvm/utils/benchmark/CONTRIBUTING.md b/llvm/utils/benchmark/CONTRIBUTING.md
deleted file mode 100644
index 43de4c9d4709a..0000000000000
--- a/llvm/utils/benchmark/CONTRIBUTING.md
+++ /dev/null
@@ -1,58 +0,0 @@
-# How to contribute #
-
-We'd love to accept your patches and contributions to this project.  There are
-just a few small guidelines you need to follow.
-
-
-## Contributor License Agreement ##
-
-Contributions to any Google project must be accompanied by a Contributor
-License Agreement.  This is not a copyright **assignment**, it simply gives
-Google permission to use and redistribute your contributions as part of the
-project.
-
-  * If you are an individual writing original source code and you're sure you
-    own the intellectual property, then you'll need to sign an [individual
-    CLA][].
-
-  * If you work for a company that wants to allow you to contribute your work,
-    then you'll need to sign a [corporate CLA][].
-
-You generally only need to submit a CLA once, so if you've already submitted
-one (even if it was for a different project), you probably don't need to do it
-again.
-
-[individual CLA]: https://developers.google.com/open-source/cla/individual
-[corporate CLA]: https://developers.google.com/open-source/cla/corporate
-
-Once your CLA is submitted (or if you already submitted one for
-another Google project), make a commit adding yourself to the
-[AUTHORS][] and [CONTRIBUTORS][] files. This commit can be part
-of your first [pull request][].
-
-[AUTHORS]: AUTHORS
-[CONTRIBUTORS]: CONTRIBUTORS
-
-
-## Submitting a patch ##
-
-  1. It's generally best to start by opening a new issue describing the bug or
-     feature you're intending to fix.  Even if you think it's relatively minor,
-     it's helpful to know what people are working on.  Mention in the initial
-     issue that you are planning to work on that bug or feature so that it can
-     be assigned to you.
-
-  1. Follow the normal process of [forking][] the project, and set up a new
-     branch to work in.  It's important that each group of changes be done in
-     separate branches in order to ensure that a pull request only includes the
-     commits related to that bug or feature.
-
-  1. Do your best to have [well-formed commit messages][] for each change.
-     This provides consistency throughout the project, and ensures that commit
-     messages are able to be formatted properly by various git tools.
-
-  1. Finally, push the commits to your fork and submit a [pull request][].
-
-[forking]: https://help.github.com/articles/fork-a-repo
-[well-formed commit messages]: http://tbaggery.com/2008/04/19/a-note-about-git-commit-messages.html
-[pull request]: https://help.github.com/articles/creating-a-pull-request

diff --git a/llvm/utils/benchmark/CONTRIBUTORS b/llvm/utils/benchmark/CONTRIBUTORS
deleted file mode 100644
index 073d6767bc487..0000000000000
--- a/llvm/utils/benchmark/CONTRIBUTORS
+++ /dev/null
@@ -1,67 +0,0 @@
-# People who have agreed to one of the CLAs and can contribute patches.
-# The AUTHORS file lists the copyright holders; this file
-# lists people.  For example, Google employees are listed here
-# but not in AUTHORS, because Google holds the copyright.
-#
-# Names should be added to this file only after verifying that
-# the individual or the individual's organization has agreed to
-# the appropriate Contributor License Agreement, found here:
-#
-# https://developers.google.com/open-source/cla/individual
-# https://developers.google.com/open-source/cla/corporate
-#
-# The agreement for individuals can be filled out on the web.
-#
-# When adding J Random Contributor's name to this file,
-# either J's name or J's organization's name should be
-# added to the AUTHORS file, depending on whether the
-# individual or corporate CLA was used.
-#
-# Names should be added to this file as:
-#     Name <email address>
-#
-# Please keep the list sorted.
-
-Albert Pretorius <pretoalb at gmail.com>
-Arne Beer <arne at twobeer.de>
-Billy Robert O'Neal III <billy.oneal at gmail.com> <bion at microsoft.com>
-Chris Kennelly <ckennelly at google.com> <ckennelly at ckennelly.com>
-Christopher Seymour <chris.j.seymour at hotmail.com>
-David Coeurjolly <david.coeurjolly at liris.cnrs.fr>
-Deniz Evrenci <denizevrenci at gmail.com>
-Dominic Hamon <dma at stripysock.com> <dominic at google.com>
-Dominik Czarnota <dominik.b.czarnota at gmail.com>
-Eric Fiselier <eric at efcs.ca>
-Eugene Zhuk <eugene.zhuk at gmail.com>
-Evgeny Safronov <division494 at gmail.com>
-Felix Homann <linuxaudio at showlabor.de>
-Ismael Jimenez Martinez <ismael.jimenez.martinez at gmail.com>
-Jern-Kuan Leong <jernkuan at gmail.com>
-JianXiong Zhou <zhoujianxiong2 at gmail.com>
-Joao Paulo Magalhaes <joaoppmagalhaes at gmail.com>
-John Millikin <jmillikin at stripe.com>
-Jussi Knuuttila <jussi.knuuttila at gmail.com>
-Kai Wolf <kai.wolf at gmail.com>
-Kishan Kumar <kumar.kishan at outlook.com>
-Kaito Udagawa <umireon at gmail.com>
-Lei Xu <eddyxu at gmail.com>
-Matt Clarkson <mattyclarkson at gmail.com>
-Maxim Vafin <maxvafin at gmail.com>
-Nick Hutchinson <nshutchinson at gmail.com>
-Oleksandr Sochka <sasha.sochka at gmail.com>
-Pascal Leroy <phl at google.com>
-Paul Redmond <paul.redmond at gmail.com>
-Pierre Phaneuf <pphaneuf at google.com>
-Radoslav Yovchev <radoslav.tm at gmail.com>
-Raul Marin <rmrodriguez at cartodb.com>
-Ray Glover <ray.glover at uk.ibm.com>
-Robert Guo <robert.guo at mongodb.com>
-Roman Lebedev <lebedev.ri at gmail.com>
-Shuo Chen <chenshuo at chenshuo.com>
-Steven Wan <wan.yu at ibm.com>
-Tobias Ulvgård <tobias.ulvgard at dirac.se>
-Tom Madams <tom.ej.madams at gmail.com> <tmadams at google.com>
-Yixuan Qiu <yixuanq at gmail.com>
-Yusuke Suzuki <utatane.tea at gmail.com>
-Zbigniew Skowron <zbychs at gmail.com>
-Min-Yih Hsu <yihshyng223 at gmail.com>

diff --git a/llvm/utils/benchmark/LICENSE b/llvm/utils/benchmark/LICENSE
deleted file mode 100644
index d645695673349..0000000000000
--- a/llvm/utils/benchmark/LICENSE
+++ /dev/null
@@ -1,202 +0,0 @@
-
-                                 Apache License
-                           Version 2.0, January 2004
-                        http://www.apache.org/licenses/
-
-   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
-
-   1. Definitions.
-
-      "License" shall mean the terms and conditions for use, reproduction,
-      and distribution as defined by Sections 1 through 9 of this document.
-
-      "Licensor" shall mean the copyright owner or entity authorized by
-      the copyright owner that is granting the License.
-
-      "Legal Entity" shall mean the union of the acting entity and all
-      other entities that control, are controlled by, or are under common
-      control with that entity. For the purposes of this definition,
-      "control" means (i) the power, direct or indirect, to cause the
-      direction or management of such entity, whether by contract or
-      otherwise, or (ii) ownership of fifty percent (50%) or more of the
-      outstanding shares, or (iii) beneficial ownership of such entity.
-
-      "You" (or "Your") shall mean an individual or Legal Entity
-      exercising permissions granted by this License.
-
-      "Source" form shall mean the preferred form for making modifications,
-      including but not limited to software source code, documentation
-      source, and configuration files.
-
-      "Object" form shall mean any form resulting from mechanical
-      transformation or translation of a Source form, including but
-      not limited to compiled object code, generated documentation,
-      and conversions to other media types.
-
-      "Work" shall mean the work of authorship, whether in Source or
-      Object form, made available under the License, as indicated by a
-      copyright notice that is included in or attached to the work
-      (an example is provided in the Appendix below).
-
-      "Derivative Works" shall mean any work, whether in Source or Object
-      form, that is based on (or derived from) the Work and for which the
-      editorial revisions, annotations, elaborations, or other modifications
-      represent, as a whole, an original work of authorship. For the purposes
-      of this License, Derivative Works shall not include works that remain
-      separable from, or merely link (or bind by name) to the interfaces of,
-      the Work and Derivative Works thereof.
-
-      "Contribution" shall mean any work of authorship, including
-      the original version of the Work and any modifications or additions
-      to that Work or Derivative Works thereof, that is intentionally
-      submitted to Licensor for inclusion in the Work by the copyright owner
-      or by an individual or Legal Entity authorized to submit on behalf of
-      the copyright owner. For the purposes of this definition, "submitted"
-      means any form of electronic, verbal, or written communication sent
-      to the Licensor or its representatives, including but not limited to
-      communication on electronic mailing lists, source code control systems,
-      and issue tracking systems that are managed by, or on behalf of, the
-      Licensor for the purpose of discussing and improving the Work, but
-      excluding communication that is conspicuously marked or otherwise
-      designated in writing by the copyright owner as "Not a Contribution."
-
-      "Contributor" shall mean Licensor and any individual or Legal Entity
-      on behalf of whom a Contribution has been received by Licensor and
-      subsequently incorporated within the Work.
-
-   2. Grant of Copyright License. Subject to the terms and conditions of
-      this License, each Contributor hereby grants to You a perpetual,
-      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
-      copyright license to reproduce, prepare Derivative Works of,
-      publicly display, publicly perform, sublicense, and distribute the
-      Work and such Derivative Works in Source or Object form.
-
-   3. Grant of Patent License. Subject to the terms and conditions of
-      this License, each Contributor hereby grants to You a perpetual,
-      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
-      (except as stated in this section) patent license to make, have made,
-      use, offer to sell, sell, import, and otherwise transfer the Work,
-      where such license applies only to those patent claims licensable
-      by such Contributor that are necessarily infringed by their
-      Contribution(s) alone or by combination of their Contribution(s)
-      with the Work to which such Contribution(s) was submitted. If You
-      institute patent litigation against any entity (including a
-      cross-claim or counterclaim in a lawsuit) alleging that the Work
-      or a Contribution incorporated within the Work constitutes direct
-      or contributory patent infringement, then any patent licenses
-      granted to You under this License for that Work shall terminate
-      as of the date such litigation is filed.
-
-   4. Redistribution. You may reproduce and distribute copies of the
-      Work or Derivative Works thereof in any medium, with or without
-      modifications, and in Source or Object form, provided that You
-      meet the following conditions:
-
-      (a) You must give any other recipients of the Work or
-          Derivative Works a copy of this License; and
-
-      (b) You must cause any modified files to carry prominent notices
-          stating that You changed the files; and
-
-      (c) You must retain, in the Source form of any Derivative Works
-          that You distribute, all copyright, patent, trademark, and
-          attribution notices from the Source form of the Work,
-          excluding those notices that do not pertain to any part of
-          the Derivative Works; and
-
-      (d) If the Work includes a "NOTICE" text file as part of its
-          distribution, then any Derivative Works that You distribute must
-          include a readable copy of the attribution notices contained
-          within such NOTICE file, excluding those notices that do not
-          pertain to any part of the Derivative Works, in at least one
-          of the following places: within a NOTICE text file distributed
-          as part of the Derivative Works; within the Source form or
-          documentation, if provided along with the Derivative Works; or,
-          within a display generated by the Derivative Works, if and
-          wherever such third-party notices normally appear. The contents
-          of the NOTICE file are for informational purposes only and
-          do not modify the License. You may add Your own attribution
-          notices within Derivative Works that You distribute, alongside
-          or as an addendum to the NOTICE text from the Work, provided
-          that such additional attribution notices cannot be construed
-          as modifying the License.
-
-      You may add Your own copyright statement to Your modifications and
-      may provide additional or different license terms and conditions
-      for use, reproduction, or distribution of Your modifications, or
-      for any such Derivative Works as a whole, provided Your use,
-      reproduction, and distribution of the Work otherwise complies with
-      the conditions stated in this License.
-
-   5. Submission of Contributions. Unless You explicitly state otherwise,
-      any Contribution intentionally submitted for inclusion in the Work
-      by You to the Licensor shall be under the terms and conditions of
-      this License, without any additional terms or conditions.
-      Notwithstanding the above, nothing herein shall supersede or modify
-      the terms of any separate license agreement you may have executed
-      with Licensor regarding such Contributions.
-
-   6. Trademarks. This License does not grant permission to use the trade
-      names, trademarks, service marks, or product names of the Licensor,
-      except as required for reasonable and customary use in describing the
-      origin of the Work and reproducing the content of the NOTICE file.
-
-   7. Disclaimer of Warranty. Unless required by applicable law or
-      agreed to in writing, Licensor provides the Work (and each
-      Contributor provides its Contributions) on an "AS IS" BASIS,
-      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
-      implied, including, without limitation, any warranties or conditions
-      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
-      PARTICULAR PURPOSE. You are solely responsible for determining the
-      appropriateness of using or redistributing the Work and assume any
-      risks associated with Your exercise of permissions under this License.
-
-   8. Limitation of Liability. In no event and under no legal theory,
-      whether in tort (including negligence), contract, or otherwise,
-      unless required by applicable law (such as deliberate and grossly
-      negligent acts) or agreed to in writing, shall any Contributor be
-      liable to You for damages, including any direct, indirect, special,
-      incidental, or consequential damages of any character arising as a
-      result of this License or out of the use or inability to use the
-      Work (including but not limited to damages for loss of goodwill,
-      work stoppage, computer failure or malfunction, or any and all
-      other commercial damages or losses), even if such Contributor
-      has been advised of the possibility of such damages.
-
-   9. Accepting Warranty or Additional Liability. While redistributing
-      the Work or Derivative Works thereof, You may choose to offer,
-      and charge a fee for, acceptance of support, warranty, indemnity,
-      or other liability obligations and/or rights consistent with this
-      License. However, in accepting such obligations, You may act only
-      on Your own behalf and on Your sole responsibility, not on behalf
-      of any other Contributor, and only if You agree to indemnify,
-      defend, and hold each Contributor harmless for any liability
-      incurred by, or claims asserted against, such Contributor by reason
-      of your accepting any such warranty or additional liability.
-
-   END OF TERMS AND CONDITIONS
-
-   APPENDIX: How to apply the Apache License to your work.
-
-      To apply the Apache License to your work, attach the following
-      boilerplate notice, with the fields enclosed by brackets "[]"
-      replaced with your own identifying information. (Don't include
-      the brackets!)  The text should be enclosed in the appropriate
-      comment syntax for the file format. We also recommend that a
-      file or class name and description of purpose be included on the
-      same "printed page" as the copyright notice for easier
-      identification within third-party archives.
-
-   Copyright [yyyy] [name of copyright owner]
-
-   Licensed under the Apache License, Version 2.0 (the "License");
-   you may not use this file except in compliance with the License.
-   You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.

diff --git a/llvm/utils/benchmark/README.LLVM b/llvm/utils/benchmark/README.LLVM
deleted file mode 100644
index d77da3b620010..0000000000000
--- a/llvm/utils/benchmark/README.LLVM
+++ /dev/null
@@ -1,39 +0,0 @@
-LLVM notes
-----------
-
-This directory contains the Google Benchmark source code. Currently, the
-checked-in Benchmark library version is v1.4.1.
-
-This directory is under a different license than LLVM.
-
-Changes:
-
-* Bazel BUILD files are removed from the library
-* https://github.com/google/benchmark/commit/f85304e4e3a0e4e1bf15b91720df4a19e90b589f
-  is applied on top of the v1.4.1 to silence compiler warnings
-* https://github.com/google/benchmark/commit/505be96ab23056580a3a2315abba048f4428b04e
-  is applied to comply with LLVM's required CMake version
-* https://github.com/google/benchmark/commit/f0901417c89d123474e6b91365029cfe32cf89dc
-  is applied to fix 32-bit build failure on macOS
-* https://github.com/google/benchmark/commit/52613079824ac58d06c070aa9fbbb186a5859e2c
-  is applied to fix cross compilation with MinGW headers
-* https://github.com/google/benchmark/commit/439d6b1c2a6da5cb6adc4c4dfc555af235722396
-  is applied to fix building with MinGW headers for ARM
-* https://github.com/google/benchmark/commit/a9b31c51b1ee7ec7b31438c647123c2cbac5d956
-  is applied to disable exceptions in Microsoft STL when exceptions are disabled
-* Disabled CMake get_git_version as it is meaningless for this in-tree build,
-  and hardcoded a null version
-* https://github.com/google/benchmark/commit/4abdfbb802d1b514703223f5f852ce4a507d32d2
-  is applied on top of v1.4.1 to add RISC-V timer support.
-* https://github.com/google/benchmark/commit/8e48105d465c586068dd8e248fe75a8971c6ba3a
-  is applied on top of v1.4.1 to fix cross-build from Linux to Windows via MinGW.
-* https://github.com/google/benchmark/commit/a77d5f70efaebe2b7e8c10134526a23a7ce7ef35
-  and
-  https://github.com/google/benchmark/commit/ecc1685340f58f7fe6b707036bc0bb1fccabb0c1
-  are applied on top of the previous cherrypick to fix timestamp-related inline
-  asm issues and 32-bit RISC-V build failures. The second cherrypicked commit
-  fixes formatting issues introduced by the preceding change.
-* https://github.com/google/benchmark/commit/ffe1342eb2faa7d2e7c35b4db2ccf99fab81ec20
-  is applied to add the CycleTimer implementation for M68k
-* https://github.com/google/benchmark/commit/d9abf017632be4a00b92cf4289539b353fcea5d2
-  is applied to rename 'mftbl' to 'mftb'.

diff --git a/llvm/utils/benchmark/README.md b/llvm/utils/benchmark/README.md
deleted file mode 100644
index bba89afd0c002..0000000000000
--- a/llvm/utils/benchmark/README.md
+++ /dev/null
@@ -1,950 +0,0 @@
-# benchmark
-[![Build Status](https://travis-ci.org/google/benchmark.svg?branch=master)](https://travis-ci.org/google/benchmark)
-[![Build status](https://ci.appveyor.com/api/projects/status/u0qsyp7t1tk7cpxs/branch/master?svg=true)](https://ci.appveyor.com/project/google/benchmark/branch/master)
-[![Coverage Status](https://coveralls.io/repos/google/benchmark/badge.svg)](https://coveralls.io/r/google/benchmark)
-[![slackin](https://slackin-iqtfqnpzxd.now.sh/badge.svg)](https://slackin-iqtfqnpzxd.now.sh/)
-
-A library to support the benchmarking of functions, similar to unit-tests.
-
-Discussion group: https://groups.google.com/d/forum/benchmark-discuss
-
-IRC channel: https://freenode.net #googlebenchmark
-
-[Known issues and common problems](#known-issues)
-
-[Additional Tooling Documentation](docs/tools.md)
-
-[Assembly Testing Documentation](docs/AssemblyTests.md)
-
-
-## Building
-
-The basic steps for configuring and building the library look like this:
-
-```bash
-$ git clone https://github.com/google/benchmark.git
-# Benchmark requires Google Test as a dependency. Add the source tree as a subdirectory.
-$ git clone https://github.com/google/googletest.git benchmark/googletest
-$ mkdir build && cd build
-$ cmake -G <generator> [options] ../benchmark
-# Assuming a makefile generator was used
-$ make
-```
-
-Note that Google Benchmark requires Google Test to build and run the tests. This
-dependency can be provided two ways:
-
-* Check out the Google Test sources into `benchmark/googletest` as above.
-* Otherwise, if `-DBENCHMARK_DOWNLOAD_DEPENDENCIES=ON` is specified during
-  configuration, the library will automatically download and build any required
-  dependencies.
-
-If you do not wish to build and run the tests, add `-DBENCHMARK_ENABLE_GTEST_TESTS=OFF`
-to `CMAKE_ARGS`.
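-
-For example, a configure invocation that skips the gtest-dependent tests might
-look like this (a minimal sketch; the generator name is illustrative):
-
-```bash
-$ cmake -G "Unix Makefiles" -DBENCHMARK_ENABLE_GTEST_TESTS=OFF ../benchmark
-```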
-
-
-## Installation Guide
-
-For Ubuntu- and Debian-based systems:
-
-First, make sure you have git and cmake installed (if not, please install them):
-
-```
-sudo apt-get install git
-sudo apt-get install cmake
-```
-
-Now, let's clone the repository and build it:
-
-```
-git clone https://github.com/google/benchmark.git
-cd benchmark
-git clone https://github.com/google/googletest.git
-mkdir build
-cd build
-cmake .. -DCMAKE_BUILD_TYPE=RELEASE
-make
-```
-
-We need to install the library globally now:
-
-```
-sudo make install
-```
-
-Now you have google/benchmark installed on your machine.
-Note: don't forget to link against the pthread library while building.
-
-## Stable and Experimental Library Versions
-
-The main branch contains the latest stable version of the benchmarking library,
-whose API can be considered largely stable, with source-breaking changes
-made only upon the release of a new major version.
-
-Newer, experimental, features are implemented and tested on the
-[`v2` branch](https://github.com/google/benchmark/tree/v2). Users who wish
-to use, test, and provide feedback on the new features are encouraged to try
-this branch. However, this branch provides no stability guarantees and reserves
-the right to change and break the API at any time.
-
-## Prerequisite knowledge
-
-Before attempting to understand this framework, one should ideally have some familiarity with the structure and format of the Google Test framework, upon which it is based. Documentation for Google Test, including a "Getting Started" (primer) guide, is available here:
-https://github.com/google/googletest/blob/master/googletest/docs/Documentation.md
-
-
-## Example usage
-### Basic usage
-Define a function that executes the code to be measured.
-
-```c++
-#include <benchmark/benchmark.h>
-
-static void BM_StringCreation(benchmark::State& state) {
-  for (auto _ : state)
-    std::string empty_string;
-}
-// Register the function as a benchmark
-BENCHMARK(BM_StringCreation);
-
-// Define another benchmark
-static void BM_StringCopy(benchmark::State& state) {
-  std::string x = "hello";
-  for (auto _ : state)
-    std::string copy(x);
-}
-BENCHMARK(BM_StringCopy);
-
-BENCHMARK_MAIN();
-```
-
-Don't forget to inform your linker to add benchmark library e.g. through
-`-lbenchmark` compilation flag. Alternatively, you may leave out the
-`BENCHMARK_MAIN();` at the end of the source file and link against
-`-lbenchmark_main` to get the same default behavior.
-
-The benchmark library will report the timing for the code within the `for(...)` loop.
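-
-For example, assuming the library and headers were installed to default system
-paths and the code above lives in `mybenchmark.cc` (an illustrative name), a
-build command might look like:
-
-```bash
-$ g++ mybenchmark.cc -std=c++11 -lbenchmark -lpthread -o mybenchmark
-```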
-
-### Passing arguments
-Sometimes a family of benchmarks can be implemented with just one routine that
-takes an extra argument to specify which one of the family of benchmarks to
-run. For example, the following code defines a family of benchmarks for
-measuring the speed of `memcpy()` calls of different lengths:
-
-```c++
-static void BM_memcpy(benchmark::State& state) {
-  char* src = new char[state.range(0)];
-  char* dst = new char[state.range(0)];
-  memset(src, 'x', state.range(0));
-  for (auto _ : state)
-    memcpy(dst, src, state.range(0));
-  state.SetBytesProcessed(int64_t(state.iterations()) *
-                          int64_t(state.range(0)));
-  delete[] src;
-  delete[] dst;
-}
-BENCHMARK(BM_memcpy)->Arg(8)->Arg(64)->Arg(512)->Arg(1<<10)->Arg(8<<10);
-```
-
-The preceding code is quite repetitive, and can be replaced with the following
-short-hand. The following invocation will pick a few appropriate arguments in
-the specified range and will generate a benchmark for each such argument.
-
-```c++
-BENCHMARK(BM_memcpy)->Range(8, 8<<10);
-```
-
-By default the arguments in the range are generated in multiples of eight and
-the command above selects [ 8, 64, 512, 4k, 8k ]. In the following code the
-range multiplier is changed to multiples of two.
-
-```c++
-BENCHMARK(BM_memcpy)->RangeMultiplier(2)->Range(8, 8<<10);
-```
-Now the generated arguments are [ 8, 16, 32, 64, 128, 256, 512, 1024, 2k, 4k, 8k ].
-
-You might have a benchmark that depends on two or more inputs. For example, the
-following code defines a family of benchmarks for measuring the speed of set
-insertion.
-
-```c++
-static void BM_SetInsert(benchmark::State& state) {
-  std::set<int> data;
-  for (auto _ : state) {
-    state.PauseTiming();
-    data = ConstructRandomSet(state.range(0));
-    state.ResumeTiming();
-    for (int j = 0; j < state.range(1); ++j)
-      data.insert(RandomNumber());
-  }
-}
-BENCHMARK(BM_SetInsert)
-    ->Args({1<<10, 128})
-    ->Args({2<<10, 128})
-    ->Args({4<<10, 128})
-    ->Args({8<<10, 128})
-    ->Args({1<<10, 512})
-    ->Args({2<<10, 512})
-    ->Args({4<<10, 512})
-    ->Args({8<<10, 512});
-```
-
-The preceding code is quite repetitive, and can be replaced with the following
-short-hand. The following macro will pick a few appropriate arguments in the
-product of the two specified ranges and will generate a benchmark for each such
-pair.
-
-```c++
-BENCHMARK(BM_SetInsert)->Ranges({{1<<10, 8<<10}, {128, 512}});
-```
-
-For more complex patterns of inputs, passing a custom function to `Apply` allows
-programmatic specification of an arbitrary set of arguments on which to run the
-benchmark. The following example enumerates a dense range on one parameter,
-and a sparse range on the second.
-
-```c++
-static void CustomArguments(benchmark::internal::Benchmark* b) {
-  for (int i = 0; i <= 10; ++i)
-    for (int j = 32; j <= 1024*1024; j *= 8)
-      b->Args({i, j});
-}
-BENCHMARK(BM_SetInsert)->Apply(CustomArguments);
-```
-
-### Calculate asymptotic complexity (Big O)
-Asymptotic complexity might be calculated for a family of benchmarks. The
-following code will calculate the coefficient for the high-order term in the
-running time and the normalized root-mean-square error of string comparison.
-
-```c++
-static void BM_StringCompare(benchmark::State& state) {
-  std::string s1(state.range(0), '-');
-  std::string s2(state.range(0), '-');
-  for (auto _ : state) {
-    benchmark::DoNotOptimize(s1.compare(s2));
-  }
-  state.SetComplexityN(state.range(0));
-}
-BENCHMARK(BM_StringCompare)
-    ->RangeMultiplier(2)->Range(1<<10, 1<<18)->Complexity(benchmark::oN);
-```
-
-As shown in the following invocation, asymptotic complexity might also be
-calculated automatically.
-
-```c++
-BENCHMARK(BM_StringCompare)
-    ->RangeMultiplier(2)->Range(1<<10, 1<<18)->Complexity();
-```
-
-The following code will specify asymptotic complexity with a lambda function
-that can be used to customize the high-order term calculation.
-
-```c++
-BENCHMARK(BM_StringCompare)->RangeMultiplier(2)
-    ->Range(1<<10, 1<<18)->Complexity([](int n)->double{return n; });
-```
-
-### Templated benchmarks
-Templated benchmarks work the same way: This example produces and consumes
-messages of size `sizeof(v)` `range_x` times. It also outputs throughput in the
-absence of multiprogramming.
-
-```c++
-template <class Q> void BM_Sequential(benchmark::State& state) {
-  Q q;
-  typename Q::value_type v;
-  for (auto _ : state) {
-    for (int i = state.range(0); i--; )
-      q.push(v);
-    for (int e = state.range(0); e--; )
-      q.Wait(&v);
-  }
-  // actually messages, not bytes:
-  state.SetBytesProcessed(
-      static_cast<int64_t>(state.iterations())*state.range(0));
-}
-BENCHMARK_TEMPLATE(BM_Sequential, WaitQueue<int>)->Range(1<<0, 1<<10);
-```
-
-Three macros are provided for adding benchmark templates.
-
-```c++
-#ifdef BENCHMARK_HAS_CXX11
-#define BENCHMARK_TEMPLATE(func, ...) // Takes any number of parameters.
-#else // C++ < C++11
-#define BENCHMARK_TEMPLATE(func, arg1)
-#endif
-#define BENCHMARK_TEMPLATE1(func, arg1)
-#define BENCHMARK_TEMPLATE2(func, arg1, arg2)
-```
-
-### A Faster KeepRunning loop
-
-In C++11 mode, a range-based for loop should be used in preference to
-the `KeepRunning` loop for running the benchmarks. For example:
-
-```c++
-static void BM_Fast(benchmark::State &state) {
-  for (auto _ : state) {
-    FastOperation();
-  }
-}
-BENCHMARK(BM_Fast);
-```
-
-The ranged-for loop is faster than using `KeepRunning` because `KeepRunning`
-requires a memory load and store of the iteration count on every iteration,
-whereas the ranged-for variant is able to keep the iteration count in a
-register.
-
-For example, an empty inner loop using the range-based for method looks like:
-
-```asm
-# Loop Init
-  mov rbx, qword ptr [r14 + 104]
-  call benchmark::State::StartKeepRunning()
-  test rbx, rbx
-  je .LoopEnd
-.LoopHeader: # =>This Inner Loop Header: Depth=1
-  add rbx, -1
-  jne .LoopHeader
-.LoopEnd:
-```
-
-Compared to an empty `KeepRunning` loop, which looks like:
-
-```asm
-.LoopHeader: # in Loop: Header=BB0_3 Depth=1
-  cmp byte ptr [rbx], 1
-  jne .LoopInit
-.LoopBody: # =>This Inner Loop Header: Depth=1
-  mov rax, qword ptr [rbx + 8]
-  lea rcx, [rax + 1]
-  mov qword ptr [rbx + 8], rcx
-  cmp rax, qword ptr [rbx + 104]
-  jb .LoopHeader
-  jmp .LoopEnd
-.LoopInit:
-  mov rdi, rbx
-  call benchmark::State::StartKeepRunning()
-  jmp .LoopBody
-.LoopEnd:
-```
-
-Unless C++03 compatibility is required, the ranged-for variant of writing
-the benchmark loop should be preferred.
-
-## Passing arbitrary arguments to a benchmark
-In C++11 it is possible to define a benchmark that takes an arbitrary number
-of extra arguments. The `BENCHMARK_CAPTURE(func, test_case_name, ...args)`
-macro creates a benchmark that invokes `func` with the `benchmark::State` as
-the first argument followed by the specified `args...`.
-The `test_case_name` is appended to the name of the benchmark and
-should describe the values passed.
-
-```c++
-template <class ...ExtraArgs>
-void BM_takes_args(benchmark::State& state, ExtraArgs&&... extra_args) {
-  [...]
-}
-// Registers a benchmark named "BM_takes_args/int_string_test" that passes
-// the specified values to `extra_args`.
-BENCHMARK_CAPTURE(BM_takes_args, int_string_test, 42, std::string("abc"));
-```
-Note that elements of `...args` may refer to global variables. Users should
-avoid modifying global state inside of a benchmark.
-
-## Using RegisterBenchmark(name, fn, args...)
-
-The `RegisterBenchmark(name, func, args...)` function provides an alternative
-way to create and register benchmarks.
-`RegisterBenchmark(name, func, args...)` creates, registers, and returns a
-pointer to a new benchmark with the specified `name` that invokes
-`func(st, args...)` where `st` is a `benchmark::State` object.
-
-Unlike the `BENCHMARK` registration macros, which can only be used at the global
-scope, `RegisterBenchmark` can be called anywhere. This allows for
-benchmark tests to be registered programmatically.
-
-Additionally, `RegisterBenchmark` allows any callable object to be registered
-as a benchmark, including capturing lambdas and function objects.
-
-For example:
-```c++
-auto BM_test = [](benchmark::State& st, auto Inputs) { /* ... */ };
-
-int main(int argc, char** argv) {
-  for (auto& test_input : { /* ... */ })
-      benchmark::RegisterBenchmark(test_input.name(), BM_test, test_input);
-  benchmark::Initialize(&argc, argv);
-  benchmark::RunSpecifiedBenchmarks();
-}
-```
-
-### Multithreaded benchmarks
-In a multithreaded test (benchmark invoked by multiple threads simultaneously),
-it is guaranteed that none of the threads will start until all have reached
-the start of the benchmark loop, and all will have finished before any thread
-exits the benchmark loop. (This behavior is also provided by the `KeepRunning()`
-API.) As such, any global setup or teardown can be wrapped in a check against the thread
-index:
-
-```c++
-static void BM_MultiThreaded(benchmark::State& state) {
-  if (state.thread_index == 0) {
-    // Setup code here.
-  }
-  for (auto _ : state) {
-    // Run the test as normal.
-  }
-  if (state.thread_index == 0) {
-    // Teardown code here.
-  }
-}
-BENCHMARK(BM_MultiThreaded)->Threads(2);
-```
-
-If the benchmarked code itself uses threads and you want to compare it to
-single-threaded code, you may want to use real-time ("wallclock") measurements
-for latency comparisons:
-
-```c++
-BENCHMARK(BM_test)->Range(8, 8<<10)->UseRealTime();
-```
-
-Without `UseRealTime`, CPU time is used by default.
-
-
-## Manual timing
-For benchmarking something for which neither CPU time nor real time is
-correct or accurate enough, completely manual timing is supported using
-the `UseManualTime` function.
-
-When `UseManualTime` is used, the benchmarked code must call
-`SetIterationTime` once per iteration of the benchmark loop to
-report the manually measured time.
-
-An example use case for this is benchmarking GPU execution (e.g. OpenCL
-or CUDA kernels, OpenGL or Vulkan or Direct3D draw calls), which cannot
-be accurately measured using CPU time or real-time. Instead, they can be
-measured accurately using a dedicated API, and these measurement results
-can be reported back with `SetIterationTime`.
-
-```c++
-static void BM_ManualTiming(benchmark::State& state) {
-  int microseconds = state.range(0);
-  std::chrono::duration<double, std::micro> sleep_duration {
-    static_cast<double>(microseconds)
-  };
-
-  for (auto _ : state) {
-    auto start = std::chrono::high_resolution_clock::now();
-    // Simulate some useful workload with a sleep
-    std::this_thread::sleep_for(sleep_duration);
-    auto end   = std::chrono::high_resolution_clock::now();
-
-    auto elapsed_seconds =
-      std::chrono::duration_cast<std::chrono::duration<double>>(
-        end - start);
-
-    state.SetIterationTime(elapsed_seconds.count());
-  }
-}
-BENCHMARK(BM_ManualTiming)->Range(1, 1<<17)->UseManualTime();
-```
-
-### Preventing optimisation
-To prevent a value or expression from being optimized away by the compiler
-the `benchmark::DoNotOptimize(...)` and `benchmark::ClobberMemory()`
-functions can be used.
-
-```c++
-static void BM_test(benchmark::State& state) {
-  for (auto _ : state) {
-    int x = 0;
-    for (int i = 0; i < 64; ++i) {
-      benchmark::DoNotOptimize(x += i);
-    }
-  }
-}
-```
-
-`DoNotOptimize(<expr>)` forces the *result* of `<expr>` to be stored in either
-memory or a register. For GNU-based compilers it acts as a read/write barrier
-for global memory. More specifically it forces the compiler to flush pending
-writes to memory and reload any other values as necessary.
-
-Note that `DoNotOptimize(<expr>)` does not prevent optimizations on `<expr>`
-in any way. `<expr>` may even be removed entirely when the result is already
-known. For example:
-
-```c++
-  /* Example 1: `<expr>` is removed entirely. */
-  int foo(int x) { return x + 42; }
-  while (...) DoNotOptimize(foo(0)); // Optimized to DoNotOptimize(42);
-
-  /*  Example 2: Result of '<expr>' is only reused */
-  int bar(int) __attribute__((const));
-  while (...) DoNotOptimize(bar(0)); // Optimized to:
-  // int __result__ = bar(0);
-  // while (...) DoNotOptimize(__result__);
-```
-
-The second tool for preventing optimizations is `ClobberMemory()`. In essence
-`ClobberMemory()` forces the compiler to perform all pending writes to global
-memory. Memory managed by block scope objects must be "escaped" using
-`DoNotOptimize(...)` before it can be clobbered. In the below example
-`ClobberMemory()` prevents the call to `v.push_back(42)` from being optimized
-away.
-
-```c++
-static void BM_vector_push_back(benchmark::State& state) {
-  for (auto _ : state) {
-    std::vector<int> v;
-    v.reserve(1);
-    benchmark::DoNotOptimize(v.data()); // Allow v.data() to be clobbered.
-    v.push_back(42);
-    benchmark::ClobberMemory(); // Force 42 to be written to memory.
-  }
-}
-```
-
-Note that `ClobberMemory()` is only available for GNU or MSVC based compilers.
-
-### Set time unit manually
-If a benchmark runs for a few milliseconds, it may be hard to visually compare
-the measured times, since the output data is given in nanoseconds by default.
-To set the time unit manually, specify it on the benchmark:
-
-```c++
-BENCHMARK(BM_test)->Unit(benchmark::kMillisecond);
-```
-
-## Controlling number of iterations
-In all cases, the number of iterations for which the benchmark is run is
-governed by the amount of time the benchmark takes. Concretely, the benchmark
-runs at least one and at most 1e9 iterations, increasing the count until the
-CPU time exceeds the minimum time or the wallclock time exceeds 5x the minimum
-time. The minimum time is set as a flag `--benchmark_min_time` or per benchmark
-by calling `MinTime` on the registered benchmark object.
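-
-For example, the following sketch asks for at least two seconds of benchmarking
-time for one benchmark (`BM_test` is illustrative):
-
-```c++
-// Run BM_test until at least 2 seconds of CPU time have accumulated,
-// instead of the default minimum time.
-BENCHMARK(BM_test)->MinTime(2.0);
-```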
-
-## Reporting the mean, median and standard deviation by repeated benchmarks
-By default each benchmark is run once and that single result is reported.
-However, benchmarks are often noisy and a single result may not be representative
-of the overall behavior. For this reason, it's possible to repeatedly rerun the
-benchmark.
-
-The number of runs of each benchmark is specified globally by the
-`--benchmark_repetitions` flag or on a per benchmark basis by calling
-`Repetitions` on the registered benchmark object. When a benchmark is run more
-than once the mean, median and standard deviation of the runs will be reported.
-
-Additionally the `--benchmark_report_aggregates_only={true|false}` flag or
-`ReportAggregatesOnly(bool)` function can be used to change how repeated tests
-are reported. By default the result of each repeated run is reported. When this
-option is `true`, only the mean, median and standard deviation of the runs are reported.
-Calling `ReportAggregatesOnly(bool)` on a registered benchmark object overrides
-the value of the flag for that benchmark.
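-
-For example, a minimal sketch combining both knobs on one benchmark
-(`BM_test` is illustrative):
-
-```c++
-// Run BM_test 10 times and report only the mean, median and standard
-// deviation of those runs.
-BENCHMARK(BM_test)->Repetitions(10)->ReportAggregatesOnly(true);
-```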
-
-## User-defined statistics for repeated benchmarks
-While having the mean, median and standard deviation is nice, this may not be
-enough for everyone. For example, you may want to know the largest
-observation, e.g. because you have some real-time constraints. This is easy.
-The following code will specify a custom statistic to be calculated, defined
-by a lambda function.
-
-```c++
-void BM_spin_empty(benchmark::State& state) {
-  for (auto _ : state) {
-    for (int x = 0; x < state.range(0); ++x) {
-      benchmark::DoNotOptimize(x);
-    }
-  }
-}
-
-BENCHMARK(BM_spin_empty)
-  ->ComputeStatistics("max", [](const std::vector<double>& v) -> double {
-    return *(std::max_element(std::begin(v), std::end(v)));
-  })
-  ->Arg(512);
-```
-
-## Fixtures
-Fixture tests are created by
-first defining a type that derives from `::benchmark::Fixture` and then
-creating/registering the tests using the following macros:
-
-* `BENCHMARK_F(ClassName, Method)`
-* `BENCHMARK_DEFINE_F(ClassName, Method)`
-* `BENCHMARK_REGISTER_F(ClassName, Method)`
-
-For example:
-
-```c++
-class MyFixture : public benchmark::Fixture {};
-
-BENCHMARK_F(MyFixture, FooTest)(benchmark::State& st) {
-  for (auto _ : st) {
-    ...
-  }
-}
-
-BENCHMARK_DEFINE_F(MyFixture, BarTest)(benchmark::State& st) {
-  for (auto _ : st) {
-    ...
-  }
-}
-/* BarTest is NOT registered */
-BENCHMARK_REGISTER_F(MyFixture, BarTest)->Threads(2);
-/* BarTest is now registered */
-```
-
-### Templated fixtures
-You can also create a templated fixture by using the following macros:
-
-* `BENCHMARK_TEMPLATE_F(ClassName, Method, ...)`
-* `BENCHMARK_TEMPLATE_DEFINE_F(ClassName, Method, ...)`
-
-For example:
-```c++
-template<typename T>
-class MyFixture : public benchmark::Fixture {};
-
-BENCHMARK_TEMPLATE_F(MyFixture, IntTest, int)(benchmark::State& st) {
-  for (auto _ : st) {
-    ...
-  }
-}
-
-BENCHMARK_TEMPLATE_DEFINE_F(MyFixture, DoubleTest, double)(benchmark::State& st) {
-  for (auto _ : st) {
-    ...
-  }
-}
-
-BENCHMARK_REGISTER_F(MyFixture, DoubleTest)->Threads(2);
-```
-
-## User-defined counters
-
-You can add your own counters with user-defined names. The example below
-will add columns "Foo", "Bar" and "Baz" to its output:
-
-```c++
-static void UserCountersExample1(benchmark::State& state) {
-  double numFoos = 0, numBars = 0, numBazs = 0;
-  for (auto _ : state) {
-    // ... count Foo,Bar,Baz events
-  }
-  state.counters["Foo"] = numFoos;
-  state.counters["Bar"] = numBars;
-  state.counters["Baz"] = numBazs;
-}
-```
-
-The `state.counters` object is a `std::map` with `std::string` keys
-and `Counter` values. The latter is a `double`-like class, via an implicit
-conversion to `double&`. Thus you can use all of the standard arithmetic
-assignment operators (`=,+=,-=,*=,/=`) to change the value of each counter.
-
-In multithreaded benchmarks, each counter is set on the calling thread only.
-When the benchmark finishes, the counters from each thread will be summed;
-the resulting sum is the value which will be shown for the benchmark.
-
-The `Counter` constructor accepts two parameters: the value as a `double`
-and a bit flag which allows you to show counters as rates and/or as
-per-thread averages:
-
-```c++
-  // sets a simple counter
-  state.counters["Foo"] = numFoos;
-
-  // Set the counter as a rate. It will be presented divided
-  // by the duration of the benchmark.
-  state.counters["FooRate"] = Counter(numFoos, benchmark::Counter::kIsRate);
-
-  // Set the counter as a thread-average quantity. It will
-  // be presented divided by the number of threads.
-  state.counters["FooAvg"] = Counter(numFoos, benchmark::Counter::kAvgThreads);
-
-  // There's also a combined flag:
-  state.counters["FooAvgRate"] = Counter(numFoos,benchmark::Counter::kAvgThreadsRate);
-```
-
-When you're compiling in C++11 mode or later you can use `insert()` with
-`std::initializer_list`:
-
-```c++
-  // With C++11, this can be done:
-  state.counters.insert({{"Foo", numFoos}, {"Bar", numBars}, {"Baz", numBazs}});
-  // ... instead of:
-  state.counters["Foo"] = numFoos;
-  state.counters["Bar"] = numBars;
-  state.counters["Baz"] = numBazs;
-```
-
-### Counter reporting
-
-When using the console reporter, by default, user counters are printed at
-the end after the table, the same way as ``bytes_processed`` and
-``items_processed``. This is best for cases in which there are few counters,
-or where there are only a couple of lines per benchmark. Here's an example of
-the default output:
-
-```
-------------------------------------------------------------------------------
-Benchmark                        Time           CPU Iterations UserCounters...
-------------------------------------------------------------------------------
-BM_UserCounter/threads:8      2248 ns      10277 ns      68808 Bar=16 Bat=40 Baz=24 Foo=8
-BM_UserCounter/threads:1      9797 ns       9788 ns      71523 Bar=2 Bat=5 Baz=3 Foo=1024m
-BM_UserCounter/threads:2      4924 ns       9842 ns      71036 Bar=4 Bat=10 Baz=6 Foo=2
-BM_UserCounter/threads:4      2589 ns      10284 ns      68012 Bar=8 Bat=20 Baz=12 Foo=4
-BM_UserCounter/threads:8      2212 ns      10287 ns      68040 Bar=16 Bat=40 Baz=24 Foo=8
-BM_UserCounter/threads:16     1782 ns      10278 ns      68144 Bar=32 Bat=80 Baz=48 Foo=16
-BM_UserCounter/threads:32     1291 ns      10296 ns      68256 Bar=64 Bat=160 Baz=96 Foo=32
-BM_UserCounter/threads:4      2615 ns      10307 ns      68040 Bar=8 Bat=20 Baz=12 Foo=4
-BM_Factorial                    26 ns         26 ns   26608979 40320
-BM_Factorial/real_time          26 ns         26 ns   26587936 40320
-BM_CalculatePiRange/1           16 ns         16 ns   45704255 0
-BM_CalculatePiRange/8           73 ns         73 ns    9520927 3.28374
-BM_CalculatePiRange/64         609 ns        609 ns    1140647 3.15746
-BM_CalculatePiRange/512       4900 ns       4901 ns     142696 3.14355
-```
-
-If this doesn't suit you, you can print each counter as a table column by
-passing the flag `--benchmark_counters_tabular=true` to the benchmark
-application. This is best for cases in which there are a lot of counters, or
-a lot of lines per individual benchmark. Note that this will trigger a
-reprinting of the table header any time the counter set changes between
-individual benchmarks. Here's an example of corresponding output when
-`--benchmark_counters_tabular=true` is passed:
-
-```
----------------------------------------------------------------------------------------
-Benchmark                        Time           CPU Iterations    Bar   Bat   Baz   Foo
----------------------------------------------------------------------------------------
-BM_UserCounter/threads:8      2198 ns       9953 ns      70688     16    40    24     8
-BM_UserCounter/threads:1      9504 ns       9504 ns      73787      2     5     3     1
-BM_UserCounter/threads:2      4775 ns       9550 ns      72606      4    10     6     2
-BM_UserCounter/threads:4      2508 ns       9951 ns      70332      8    20    12     4
-BM_UserCounter/threads:8      2055 ns       9933 ns      70344     16    40    24     8
-BM_UserCounter/threads:16     1610 ns       9946 ns      70720     32    80    48    16
-BM_UserCounter/threads:32     1192 ns       9948 ns      70496     64   160    96    32
-BM_UserCounter/threads:4      2506 ns       9949 ns      70332      8    20    12     4
---------------------------------------------------------------
-Benchmark                        Time           CPU Iterations
---------------------------------------------------------------
-BM_Factorial                    26 ns         26 ns   26392245 40320
-BM_Factorial/real_time          26 ns         26 ns   26494107 40320
-BM_CalculatePiRange/1           15 ns         15 ns   45571597 0
-BM_CalculatePiRange/8           74 ns         74 ns    9450212 3.28374
-BM_CalculatePiRange/64         595 ns        595 ns    1173901 3.15746
-BM_CalculatePiRange/512       4752 ns       4752 ns     147380 3.14355
-BM_CalculatePiRange/4k       37970 ns      37972 ns      18453 3.14184
-BM_CalculatePiRange/32k     303733 ns     303744 ns       2305 3.14162
-BM_CalculatePiRange/256k   2434095 ns    2434186 ns        288 3.1416
-BM_CalculatePiRange/1024k  9721140 ns    9721413 ns         71 3.14159
-BM_CalculatePi/threads:8      2255 ns       9943 ns      70936
-```
-Note above the additional header printed when the benchmark changes from
-``BM_UserCounter`` to ``BM_Factorial``. This is because ``BM_Factorial`` does
-not have the same counter set as ``BM_UserCounter``.
-
-## Exiting Benchmarks in Error
-
-When errors caused by external influences, such as file I/O and network
-communication, occur within a benchmark, the
-`State::SkipWithError(const char* msg)` function can be used to skip that run
-of the benchmark and report the error. Note that only future iterations of
-`KeepRunning()` are skipped. For the ranged-for version of the benchmark loop,
-users must explicitly exit the loop, otherwise all iterations will be performed.
-Users may explicitly `return` to exit the benchmark immediately.
-
-The `SkipWithError(...)` function may be used at any point within the benchmark,
-including before and after the benchmark loop.
-
-For example:
-
-```c++
-static void BM_test(benchmark::State& state) {
-  auto resource = GetResource();
-  if (!resource.good()) {
-      state.SkipWithError("Resource is not good!");
-      // KeepRunning() loop will not be entered.
-  }
-  while (state.KeepRunning()) {
-      auto data = resource.read_data();
-      if (!resource.good()) {
-        state.SkipWithError("Failed to read data!");
-        break; // Needed to skip the rest of the iteration.
-      }
-      do_stuff(data);
-  }
-}
-
-static void BM_test_ranged_for(benchmark::State& state) {
-  state.SkipWithError("test will not be entered");
-  for (auto _ : state) {
-    state.SkipWithError("Failed!");
-    break; // REQUIRED to prevent all further iterations.
-  }
-}
-```
-
-## Running a subset of the benchmarks
-
-The `--benchmark_filter=<regex>` option can be used to only run the benchmarks
-which match the specified `<regex>`. For example:
-
-```bash
-$ ./run_benchmarks.x --benchmark_filter=BM_memcpy/32
-Run on (1 X 2300 MHz CPU )
-2016-06-25 19:34:24
-Benchmark              Time           CPU Iterations
-----------------------------------------------------
-BM_memcpy/32          11 ns         11 ns   79545455
-BM_memcpy/32k       2181 ns       2185 ns     324074
-BM_memcpy/32          12 ns         12 ns   54687500
-BM_memcpy/32k       1834 ns       1837 ns     357143
-```
-
-
-## Output Formats
-The library supports multiple output formats. Use the
-`--benchmark_format=<console|json|csv>` flag to set the format type. `console`
-is the default format.
-
-The Console format is intended to be a human-readable format. By default
-the format generates color output. Context is output on stderr and the
-tabular data on stdout. Example tabular output looks like:
-```
-Benchmark                               Time(ns)    CPU(ns) Iterations
-----------------------------------------------------------------------
-BM_SetInsert/1024/1                        28928      29349      23853  133.097kB/s   33.2742k items/s
-BM_SetInsert/1024/8                        32065      32913      21375  949.487kB/s   237.372k items/s
-BM_SetInsert/1024/10                       33157      33648      21431  1.13369MB/s   290.225k items/s
-```
-
-The JSON format outputs human-readable JSON split into two top-level attributes.
-The `context` attribute contains information about the run in general, including
-information about the CPU and the date.
-The `benchmarks` attribute contains a list of every benchmark run. Example JSON
-output looks like:
-```json
-{
-  "context": {
-    "date": "2015/03/17-18:40:25",
-    "num_cpus": 40,
-    "mhz_per_cpu": 2801,
-    "cpu_scaling_enabled": false,
-    "build_type": "debug"
-  },
-  "benchmarks": [
-    {
-      "name": "BM_SetInsert/1024/1",
-      "iterations": 94877,
-      "real_time": 29275,
-      "cpu_time": 29836,
-      "bytes_per_second": 134066,
-      "items_per_second": 33516
-    },
-    {
-      "name": "BM_SetInsert/1024/8",
-      "iterations": 21609,
-      "real_time": 32317,
-      "cpu_time": 32429,
-      "bytes_per_second": 986770,
-      "items_per_second": 246693
-    },
-    {
-      "name": "BM_SetInsert/1024/10",
-      "iterations": 21393,
-      "real_time": 32724,
-      "cpu_time": 33355,
-      "bytes_per_second": 1199226,
-      "items_per_second": 299807
-    }
-  ]
-}
-```
-
-The CSV format outputs comma-separated values. The `context` is output on stderr
-and the CSV itself on stdout. Example CSV output looks like:
-```
-name,iterations,real_time,cpu_time,bytes_per_second,items_per_second,label
-"BM_SetInsert/1024/1",65465,17890.7,8407.45,475768,118942,
-"BM_SetInsert/1024/8",116606,18810.1,9766.64,3.27646e+06,819115,
-"BM_SetInsert/1024/10",106365,17238.4,8421.53,4.74973e+06,1.18743e+06,
-```
-
-## Output Files
-The library supports writing the output of the benchmark to a file specified
-by `--benchmark_out=<filename>`. The format of the output can be specified
-using `--benchmark_out_format={json|console|csv}`. Specifying
-`--benchmark_out` does not suppress the console output.
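-
-For example, a minimal sketch (assuming a benchmark binary named `mybench`;
-both flags are described above):
-
-```bash
-# Write JSON results to results.json; the console output still appears.
-./mybench --benchmark_out=results.json --benchmark_out_format=json
-```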
-
-## Debug vs Release
-By default, benchmark builds as a debug library. You will see a warning in the output when this is the case. To build it as a release library instead, use:
-
-```
-cmake -DCMAKE_BUILD_TYPE=Release
-```
-
-To enable link-time optimisation, use:
-
-```
-cmake -DCMAKE_BUILD_TYPE=Release -DBENCHMARK_ENABLE_LTO=true
-```
-
-If you are using gcc, you might need to set the `GCC_AR` and `GCC_RANLIB`
-cmake cache variables if autodetection fails.
-If you are using clang, you may need to set the `LLVMAR_EXECUTABLE`,
-`LLVMNM_EXECUTABLE` and `LLVMRANLIB_EXECUTABLE` cmake cache variables.
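-
-For example, a sketch of a clang LTO configuration, assuming `llvm-ar`,
-`llvm-nm` and `llvm-ranlib` are on the `PATH`:
-
-```bash
-cmake -DCMAKE_BUILD_TYPE=Release -DBENCHMARK_ENABLE_LTO=true \
-      -DLLVMAR_EXECUTABLE=$(command -v llvm-ar) \
-      -DLLVMNM_EXECUTABLE=$(command -v llvm-nm) \
-      -DLLVMRANLIB_EXECUTABLE=$(command -v llvm-ranlib)
-```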
-
-## Linking against the library
-
-When the library is built using GCC, it is necessary to link with `-pthread`,
-due to how GCC implements `std::thread`.
-
-For GCC 4.x, failing to link to pthreads will lead to runtime exceptions, not linker errors.
-See [issue #67](https://github.com/google/benchmark/issues/67) for more details.
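-
-For example, a minimal sketch of a manual link line (the source file name and
-install prefix are assumptions):
-
-```bash
-g++ mybench.cc -std=c++11 -isystem /usr/local/include \
-    -L/usr/local/lib -lbenchmark -pthread -o mybench
-```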
-
-## Compiler Support
-
-Google Benchmark uses C++11 when building the library. As such we require
-a modern C++ toolchain, both compiler and standard library.
-
-The following minimum versions are strongly recommended to build the library:
-
-* GCC 4.8
-* Clang 3.4
-* Visual Studio 2013
-* Intel 2015 Update 1
-
-Anything older *may* work.
-
-Note: Using the library and its headers in C++03 is supported. C++11 is only
-required to build the library.
-
-## Disable CPU frequency scaling
-If you see this error:
-```
-***WARNING*** CPU scaling is enabled, the benchmark real time measurements may be noisy and will incur extra overhead.
-```
-you might want to disable the CPU frequency scaling while running the benchmark:
-```bash
-sudo cpupower frequency-set --governor performance
-./mybench
-sudo cpupower frequency-set --governor powersave
-```
-
-# Known Issues
-
-### Windows with CMake
-
-* Users must manually link `shlwapi.lib`. Failure to do so may result
-in unresolved symbols.
-
-### Solaris
-
-* Users must explicitly link with the kstat library (`-lkstat` link flag), as
-in the sketch below.
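-
-For example, a sketch of a Solaris link line (file names are assumptions):
-
-```bash
-g++ mybench.cc -std=c++11 -lbenchmark -lkstat -pthread -o mybench
-```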

diff  --git a/llvm/utils/benchmark/WORKSPACE b/llvm/utils/benchmark/WORKSPACE
deleted file mode 100644
index 54734f1ea55e7..0000000000000
--- a/llvm/utils/benchmark/WORKSPACE
+++ /dev/null
@@ -1,7 +0,0 @@
-workspace(name = "com_github_google_benchmark")
-
-http_archive(
-     name = "com_google_googletest",
-     urls = ["https://github.com/google/googletest/archive/3f0cf6b62ad1eb50d8736538363d3580dd640c3e.zip"],
-     strip_prefix = "googletest-3f0cf6b62ad1eb50d8736538363d3580dd640c3e",
-)

diff  --git a/llvm/utils/benchmark/appveyor.yml b/llvm/utils/benchmark/appveyor.yml
deleted file mode 100644
index e99c6e77f006e..0000000000000
--- a/llvm/utils/benchmark/appveyor.yml
+++ /dev/null
@@ -1,56 +0,0 @@
-version: '{build}'
-
-image: Visual Studio 2017
-
-configuration:
-  - Debug
-  - Release
-
-environment:
-  matrix:
-    - compiler: msvc-15-seh
-      generator: "Visual Studio 15 2017"
-
-    - compiler: msvc-15-seh
-      generator: "Visual Studio 15 2017 Win64"
-
-    - compiler: msvc-14-seh
-      generator: "Visual Studio 14 2015"
-
-    - compiler: msvc-14-seh
-      generator: "Visual Studio 14 2015 Win64"
-
-    - compiler: msvc-12-seh
-      generator: "Visual Studio 12 2013"
-
-    - compiler: msvc-12-seh
-      generator: "Visual Studio 12 2013 Win64"
-
-    - compiler: gcc-5.3.0-posix
-      generator: "MinGW Makefiles"
-      cxx_path: 'C:\mingw-w64\i686-5.3.0-posix-dwarf-rt_v4-rev0\mingw32\bin'
-      APPVEYOR_BUILD_WORKER_IMAGE: Visual Studio 2015
-
-matrix:
-  fast_finish: true
-
-install:
-  # git bash conflicts with MinGW makefiles
-  - if "%generator%"=="MinGW Makefiles" (set "PATH=%PATH:C:\Program Files\Git\usr\bin;=%")
-  - if not "%cxx_path%"=="" (set "PATH=%PATH%;%cxx_path%")
-
-build_script:
-  - md _build -Force
-  - cd _build
-  - echo %configuration%
-  - cmake -G "%generator%" "-DCMAKE_BUILD_TYPE=%configuration%" -DBENCHMARK_DOWNLOAD_DEPENDENCIES=ON ..
-  - cmake --build . --config %configuration%
-
-test_script:
-  - ctest -c %configuration% --timeout 300 --output-on-failure
-
-artifacts:
-  - path: '_build/CMakeFiles/*.log'
-    name: logs
-  - path: '_build/Testing/**/*.xml'
-    name: test_results

diff  --git a/llvm/utils/benchmark/cmake/AddCXXCompilerFlag.cmake b/llvm/utils/benchmark/cmake/AddCXXCompilerFlag.cmake
deleted file mode 100644
index d0d2099814402..0000000000000
--- a/llvm/utils/benchmark/cmake/AddCXXCompilerFlag.cmake
+++ /dev/null
@@ -1,74 +0,0 @@
-# - Adds a compiler flag if it is supported by the compiler
-#
-# This function checks that the supplied compiler flag is supported and then
-# adds it to the corresponding compiler flags
-#
-#  add_cxx_compiler_flag(<FLAG> [<VARIANT>])
-#
-# - Example
-#
-# include(AddCXXCompilerFlag)
-# add_cxx_compiler_flag(-Wall)
-# add_cxx_compiler_flag(-no-strict-aliasing RELEASE)
-# Requires CMake 2.6+
-
-if(__add_cxx_compiler_flag)
-  return()
-endif()
-set(__add_cxx_compiler_flag INCLUDED)
-
-include(CheckCXXCompilerFlag)
-
-function(mangle_compiler_flag FLAG OUTPUT)
-  string(TOUPPER "HAVE_CXX_FLAG_${FLAG}" SANITIZED_FLAG)
-  string(REPLACE "+" "X" SANITIZED_FLAG ${SANITIZED_FLAG})
-  string(REGEX REPLACE "[^A-Za-z_0-9]" "_" SANITIZED_FLAG ${SANITIZED_FLAG})
-  string(REGEX REPLACE "_+" "_" SANITIZED_FLAG ${SANITIZED_FLAG})
-  set(${OUTPUT} "${SANITIZED_FLAG}" PARENT_SCOPE)
-endfunction(mangle_compiler_flag)
-
-function(add_cxx_compiler_flag FLAG)
-  mangle_compiler_flag("${FLAG}" MANGLED_FLAG)
-  set(OLD_CMAKE_REQUIRED_FLAGS "${CMAKE_REQUIRED_FLAGS}")
-  set(CMAKE_REQUIRED_FLAGS "${CMAKE_REQUIRED_FLAGS} ${FLAG}")
-  check_cxx_compiler_flag("${FLAG}" ${MANGLED_FLAG})
-  set(CMAKE_REQUIRED_FLAGS "${OLD_CMAKE_REQUIRED_FLAGS}")
-  if(${MANGLED_FLAG})
-    set(VARIANT ${ARGV1})
-    if(ARGV1)
-      string(TOUPPER "_${VARIANT}" VARIANT)
-    endif()
-    set(CMAKE_CXX_FLAGS${VARIANT} "${CMAKE_CXX_FLAGS${VARIANT}} ${BENCHMARK_CXX_FLAGS${VARIANT}} ${FLAG}" PARENT_SCOPE)
-  endif()
-endfunction()
-
-function(add_required_cxx_compiler_flag FLAG)
-  mangle_compiler_flag("${FLAG}" MANGLED_FLAG)
-  set(OLD_CMAKE_REQUIRED_FLAGS "${CMAKE_REQUIRED_FLAGS}")
-  set(CMAKE_REQUIRED_FLAGS "${CMAKE_REQUIRED_FLAGS} ${FLAG}")
-  check_cxx_compiler_flag("${FLAG}" ${MANGLED_FLAG})
-  set(CMAKE_REQUIRED_FLAGS "${OLD_CMAKE_REQUIRED_FLAGS}")
-  if(${MANGLED_FLAG})
-    set(VARIANT ${ARGV1})
-    if(ARGV1)
-      string(TOUPPER "_${VARIANT}" VARIANT)
-    endif()
-    set(CMAKE_CXX_FLAGS${VARIANT} "${CMAKE_CXX_FLAGS${VARIANT}} ${FLAG}" PARENT_SCOPE)
-    set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} ${FLAG}" PARENT_SCOPE)
-    set(CMAKE_SHARED_LINKER_FLAGS "${CMAKE_SHARED_LINKER_FLAGS} ${FLAG}" PARENT_SCOPE)
-    set(CMAKE_MODULE_LINKER_FLAGS "${CMAKE_MODULE_LINKER_FLAGS} ${FLAG}" PARENT_SCOPE)
-    set(CMAKE_REQUIRED_FLAGS "${CMAKE_REQUIRED_FLAGS} ${FLAG}" PARENT_SCOPE)
-  else()
-    message(FATAL_ERROR "Required flag '${FLAG}' is not supported by the compiler")
-  endif()
-endfunction()
-
-function(check_cxx_warning_flag FLAG)
-  mangle_compiler_flag("${FLAG}" MANGLED_FLAG)
-  set(OLD_CMAKE_REQUIRED_FLAGS "${CMAKE_REQUIRED_FLAGS}")
-  # Add -Werror to ensure the compiler generates an error if the warning flag
-  # doesn't exist.
-  set(CMAKE_REQUIRED_FLAGS "${CMAKE_REQUIRED_FLAGS} -Werror ${FLAG}")
-  check_cxx_compiler_flag("${FLAG}" ${MANGLED_FLAG})
-  set(CMAKE_REQUIRED_FLAGS "${OLD_CMAKE_REQUIRED_FLAGS}")
-endfunction()

diff  --git a/llvm/utils/benchmark/cmake/CXXFeatureCheck.cmake b/llvm/utils/benchmark/cmake/CXXFeatureCheck.cmake
deleted file mode 100644
index c4c4d660f1eba..0000000000000
--- a/llvm/utils/benchmark/cmake/CXXFeatureCheck.cmake
+++ /dev/null
@@ -1,64 +0,0 @@
-# - Compile and run code to check for C++ features
-#
-# This function compiles a source file under the `cmake` folder
-# and adds the corresponding `HAVE_[FILENAME]` flag to the CMake
-# environment
-#
-#  cxx_feature_check(<FLAG> [<VARIANT>])
-#
-# - Example
-#
-# include(CXXFeatureCheck)
-# cxx_feature_check(STD_REGEX)
-# Requires CMake 2.8.12+
-
-if(__cxx_feature_check)
-  return()
-endif()
-set(__cxx_feature_check INCLUDED)
-
-function(cxx_feature_check FILE)
-  string(TOLOWER ${FILE} FILE)
-  string(TOUPPER ${FILE} VAR)
-  string(TOUPPER "HAVE_${VAR}" FEATURE)
-  if (DEFINED HAVE_${VAR})
-    set(HAVE_${VAR} 1 PARENT_SCOPE)
-    add_definitions(-DHAVE_${VAR})
-    return()
-  endif()
-
-  if (NOT DEFINED COMPILE_${FEATURE})
-    message("-- Performing Test ${FEATURE}")
-    if(CMAKE_CROSSCOMPILING)
-      try_compile(COMPILE_${FEATURE}
-              ${CMAKE_BINARY_DIR} ${CMAKE_CURRENT_SOURCE_DIR}/cmake/${FILE}.cpp
-              CMAKE_FLAGS ${BENCHMARK_CXX_LINKER_FLAGS}
-              LINK_LIBRARIES ${BENCHMARK_CXX_LIBRARIES})
-      if(COMPILE_${FEATURE})
-        message(WARNING
-              "If you see build failures due to cross compilation, try setting HAVE_${VAR} to 0")
-        set(RUN_${FEATURE} 0)
-      else()
-        set(RUN_${FEATURE} 1)
-      endif()
-    else()
-      message("-- Performing Test ${FEATURE}")
-      try_run(RUN_${FEATURE} COMPILE_${FEATURE}
-              ${CMAKE_BINARY_DIR} ${CMAKE_CURRENT_SOURCE_DIR}/cmake/${FILE}.cpp
-              CMAKE_FLAGS ${BENCHMARK_CXX_LINKER_FLAGS}
-              LINK_LIBRARIES ${BENCHMARK_CXX_LIBRARIES})
-    endif()
-  endif()
-
-  if(RUN_${FEATURE} EQUAL 0)
-    message("-- Performing Test ${FEATURE} -- success")
-    set(HAVE_${VAR} 1 PARENT_SCOPE)
-    add_definitions(-DHAVE_${VAR})
-  else()
-    if(NOT COMPILE_${FEATURE})
-      message("-- Performing Test ${FEATURE} -- failed to compile")
-    else()
-      message("-- Performing Test ${FEATURE} -- compiled but failed to run")
-    endif()
-  endif()
-endfunction()

diff  --git a/llvm/utils/benchmark/cmake/Config.cmake.in b/llvm/utils/benchmark/cmake/Config.cmake.in
deleted file mode 100644
index 6e9256eea8a2d..0000000000000
--- a/llvm/utils/benchmark/cmake/Config.cmake.in
+++ /dev/null
@@ -1 +0,0 @@
-include("${CMAKE_CURRENT_LIST_DIR}/@targets_export_name at .cmake")

diff  --git a/llvm/utils/benchmark/cmake/GetGitVersion.cmake b/llvm/utils/benchmark/cmake/GetGitVersion.cmake
deleted file mode 100644
index 88cebe3a1caac..0000000000000
--- a/llvm/utils/benchmark/cmake/GetGitVersion.cmake
+++ /dev/null
@@ -1,54 +0,0 @@
-# - Returns a version string from Git tags
-#
-# This function inspects the annotated git tags for the project and stores a
-# version string in a CMake variable
-#
-#  get_git_version(<var>)
-#
-# - Example
-#
-# include(GetGitVersion)
-# get_git_version(GIT_VERSION)
-#
-# Requires CMake 2.8.11+
-find_package(Git)
-
-if(__get_git_version)
-  return()
-endif()
-set(__get_git_version INCLUDED)
-
-function(get_git_version var)
-  if(GIT_EXECUTABLE)
-      execute_process(COMMAND ${GIT_EXECUTABLE} describe --match "v[0-9]*.[0-9]*.[0-9]*" --abbrev=8
-          WORKING_DIRECTORY ${PROJECT_SOURCE_DIR}
-          RESULT_VARIABLE status
-          OUTPUT_VARIABLE GIT_VERSION
-          ERROR_QUIET)
-      if(${status})
-          set(GIT_VERSION "v0.0.0")
-      else()
-          string(STRIP ${GIT_VERSION} GIT_VERSION)
-          string(REGEX REPLACE "-[0-9]+-g" "-" GIT_VERSION ${GIT_VERSION})
-      endif()
-
-      # Work out if the repository is dirty
-      execute_process(COMMAND ${GIT_EXECUTABLE} update-index -q --refresh
-          WORKING_DIRECTORY ${PROJECT_SOURCE_DIR}
-          OUTPUT_QUIET
-          ERROR_QUIET)
-      execute_process(COMMAND ${GIT_EXECUTABLE} diff-index --name-only HEAD --
-          WORKING_DIRECTORY ${PROJECT_SOURCE_DIR}
-          OUTPUT_VARIABLE GIT_DIFF_INDEX
-          ERROR_QUIET)
-      string(COMPARE NOTEQUAL "${GIT_DIFF_INDEX}" "" GIT_DIRTY)
-      if (${GIT_DIRTY})
-          set(GIT_VERSION "${GIT_VERSION}-dirty")
-      endif()
-  else()
-      set(GIT_VERSION "v0.0.0")
-  endif()
-
-  message("-- git Version: ${GIT_VERSION}")
-  set(${var} ${GIT_VERSION} PARENT_SCOPE)
-endfunction()

diff  --git a/llvm/utils/benchmark/cmake/HandleGTest.cmake b/llvm/utils/benchmark/cmake/HandleGTest.cmake
deleted file mode 100644
index 7ce1a633d65a2..0000000000000
--- a/llvm/utils/benchmark/cmake/HandleGTest.cmake
+++ /dev/null
@@ -1,113 +0,0 @@
-
-include(split_list)
-
-macro(build_external_gtest)
-  include(ExternalProject)
-  set(GTEST_FLAGS "")
-  if (BENCHMARK_USE_LIBCXX)
-    if ("${CMAKE_CXX_COMPILER_ID}" STREQUAL "Clang")
-      list(APPEND GTEST_FLAGS -stdlib=libc++)
-    else()
-      message(WARNING "Unsupported compiler (${CMAKE_CXX_COMPILER}) when using libc++")
-    endif()
-  endif()
-  if (BENCHMARK_BUILD_32_BITS)
-    list(APPEND GTEST_FLAGS -m32)
-  endif()
-  if (NOT "${CMAKE_CXX_FLAGS}" STREQUAL "")
-    list(APPEND GTEST_FLAGS ${CMAKE_CXX_FLAGS})
-  endif()
-  string(TOUPPER "${CMAKE_BUILD_TYPE}" GTEST_BUILD_TYPE)
-  if ("${GTEST_BUILD_TYPE}" STREQUAL "COVERAGE")
-    set(GTEST_BUILD_TYPE "DEBUG")
-  endif()
-  # FIXME: Since 10/Feb/2017 the googletest trunk has had a bug where
-  # -Werror=unused-function fires during the build on OS X. This is a temporary
-  # workaround to keep our travis bots from failing. It should be removed
-  # once gtest is fixed.
-  if (NOT "${CMAKE_CXX_COMPILER_ID}" STREQUAL "MSVC")
-    list(APPEND GTEST_FLAGS "-Wno-unused-function")
-  endif()
-  split_list(GTEST_FLAGS)
-  set(EXCLUDE_FROM_ALL_OPT "")
-  set(EXCLUDE_FROM_ALL_VALUE "")
-  if (${CMAKE_VERSION} VERSION_GREATER "3.0.99")
-      set(EXCLUDE_FROM_ALL_OPT "EXCLUDE_FROM_ALL")
-      set(EXCLUDE_FROM_ALL_VALUE "ON")
-  endif()
-  ExternalProject_Add(googletest
-      ${EXCLUDE_FROM_ALL_OPT} ${EXCLUDE_FROM_ALL_VALUE}
-      GIT_REPOSITORY https://github.com/google/googletest.git
-      GIT_TAG master
-      PREFIX "${CMAKE_BINARY_DIR}/googletest"
-      INSTALL_DIR "${CMAKE_BINARY_DIR}/googletest"
-      CMAKE_CACHE_ARGS
-        -DCMAKE_BUILD_TYPE:STRING=${GTEST_BUILD_TYPE}
-        -DCMAKE_C_COMPILER:STRING=${CMAKE_C_COMPILER}
-        -DCMAKE_CXX_COMPILER:STRING=${CMAKE_CXX_COMPILER}
-        -DCMAKE_INSTALL_PREFIX:PATH=<INSTALL_DIR>
-        -DCMAKE_INSTALL_LIBDIR:PATH=<INSTALL_DIR>/lib
-        -DCMAKE_CXX_FLAGS:STRING=${GTEST_FLAGS}
-        -Dgtest_force_shared_crt:BOOL=ON
-      )
-
-  ExternalProject_Get_Property(googletest install_dir)
-  set(GTEST_INCLUDE_DIRS ${install_dir}/include)
-  file(MAKE_DIRECTORY ${GTEST_INCLUDE_DIRS})
-
-  set(LIB_SUFFIX "${CMAKE_STATIC_LIBRARY_SUFFIX}")
-  set(LIB_PREFIX "${CMAKE_STATIC_LIBRARY_PREFIX}")
-  if("${GTEST_BUILD_TYPE}" STREQUAL "DEBUG")
-    set(LIB_SUFFIX "d${CMAKE_STATIC_LIBRARY_SUFFIX}")
-  endif()
-
-  # Use gmock_main instead of gtest_main because it initializes gtest as well.
-  # Note: The libraries are listed in reverse order of their dependencies.
-  foreach(LIB gtest gmock gmock_main)
-    add_library(${LIB} UNKNOWN IMPORTED)
-    set_target_properties(${LIB} PROPERTIES
-      IMPORTED_LOCATION ${install_dir}/lib/${LIB_PREFIX}${LIB}${LIB_SUFFIX}
-      INTERFACE_INCLUDE_DIRECTORIES ${GTEST_INCLUDE_DIRS}
-      INTERFACE_LINK_LIBRARIES "${GTEST_BOTH_LIBRARIES}"
-    )
-    add_dependencies(${LIB} googletest)
-    list(APPEND GTEST_BOTH_LIBRARIES ${LIB})
-  endforeach()
-endmacro(build_external_gtest)
-
-if (BENCHMARK_ENABLE_GTEST_TESTS)
-  if (IS_DIRECTORY ${CMAKE_SOURCE_DIR}/googletest)
-    set(GTEST_ROOT "${CMAKE_SOURCE_DIR}/googletest")
-    set(INSTALL_GTEST OFF CACHE INTERNAL "")
-    set(INSTALL_GMOCK OFF CACHE INTERNAL "")
-    add_subdirectory(${CMAKE_SOURCE_DIR}/googletest)
-    set(GTEST_BOTH_LIBRARIES gtest gmock gmock_main)
-    foreach(HEADER test mock)
-      # CMake 2.8 and older don't respect INTERFACE_INCLUDE_DIRECTORIES, so we
-      # have to add the paths ourselves.
-      set(HFILE g${HEADER}/g${HEADER}.h)
-      set(HPATH ${GTEST_ROOT}/google${HEADER}/include)
-      find_path(HEADER_PATH_${HEADER} ${HFILE}
-          NO_DEFAULT_PATH
-          HINTS ${HPATH}
-      )
-      if (NOT HEADER_PATH_${HEADER})
-        message(FATAL_ERROR "Failed to find header ${HFILE} in ${HPATH}")
-      endif()
-      list(APPEND GTEST_INCLUDE_DIRS ${HEADER_PATH_${HEADER}})
-    endforeach()
-  elseif(BENCHMARK_DOWNLOAD_DEPENDENCIES)
-    build_external_gtest()
-  else()
-    find_package(GTest REQUIRED)
-    find_path(GMOCK_INCLUDE_DIRS gmock/gmock.h
-        HINTS ${GTEST_INCLUDE_DIRS})
-    if (NOT GMOCK_INCLUDE_DIRS)
-      message(FATAL_ERROR "Failed to find header gmock/gmock.h with hint ${GTEST_INCLUDE_DIRS}")
-    endif()
-    set(GTEST_INCLUDE_DIRS ${GTEST_INCLUDE_DIRS} ${GMOCK_INCLUDE_DIRS})
-    # FIXME: We don't currently require the gmock library to build the tests,
-    # and it's likely we won't find it, so we don't try. As long as we've
-    # found the gmock/gmock.h header and gtest_main that should be good enough.
-  endif()
-endif()

diff  --git a/llvm/utils/benchmark/cmake/Modules/FindLLVMAr.cmake b/llvm/utils/benchmark/cmake/Modules/FindLLVMAr.cmake
deleted file mode 100644
index 23469813cfab5..0000000000000
--- a/llvm/utils/benchmark/cmake/Modules/FindLLVMAr.cmake
+++ /dev/null
@@ -1,16 +0,0 @@
-include(FeatureSummary)
-
-find_program(LLVMAR_EXECUTABLE
-  NAMES llvm-ar
-  DOC "The llvm-ar executable"
-  )
-
-include(FindPackageHandleStandardArgs)
-find_package_handle_standard_args(LLVMAr
-  DEFAULT_MSG
-  LLVMAR_EXECUTABLE)
-
-SET_PACKAGE_PROPERTIES(LLVMAr PROPERTIES
-  URL https://llvm.org/docs/CommandGuide/llvm-ar.html
-  DESCRIPTION "create, modify, and extract from archives"
-)

diff  --git a/llvm/utils/benchmark/cmake/Modules/FindLLVMNm.cmake b/llvm/utils/benchmark/cmake/Modules/FindLLVMNm.cmake
deleted file mode 100644
index e56430a04f6e8..0000000000000
--- a/llvm/utils/benchmark/cmake/Modules/FindLLVMNm.cmake
+++ /dev/null
@@ -1,16 +0,0 @@
-include(FeatureSummary)
-
-find_program(LLVMNM_EXECUTABLE
-  NAMES llvm-nm
-  DOC "The llvm-nm executable"
-  )
-
-include(FindPackageHandleStandardArgs)
-find_package_handle_standard_args(LLVMNm
-  DEFAULT_MSG
-  LLVMNM_EXECUTABLE)
-
-SET_PACKAGE_PROPERTIES(LLVMNm PROPERTIES
-  URL https://llvm.org/docs/CommandGuide/llvm-nm.html
-  DESCRIPTION "list LLVM bitcode and object file’s symbol table"
-)

diff  --git a/llvm/utils/benchmark/cmake/Modules/FindLLVMRanLib.cmake b/llvm/utils/benchmark/cmake/Modules/FindLLVMRanLib.cmake
deleted file mode 100644
index 7b53e1a790590..0000000000000
--- a/llvm/utils/benchmark/cmake/Modules/FindLLVMRanLib.cmake
+++ /dev/null
@@ -1,15 +0,0 @@
-include(FeatureSummary)
-
-find_program(LLVMRANLIB_EXECUTABLE
-  NAMES llvm-ranlib
-  DOC "The llvm-ranlib executable"
-  )
-
-include(FindPackageHandleStandardArgs)
-find_package_handle_standard_args(LLVMRanLib
-  DEFAULT_MSG
-  LLVMRANLIB_EXECUTABLE)
-
-SET_PACKAGE_PROPERTIES(LLVMRanLib PROPERTIES
-  DESCRIPTION "generate index for LLVM archive"
-)

diff  --git a/llvm/utils/benchmark/cmake/benchmark.pc.in b/llvm/utils/benchmark/cmake/benchmark.pc.in
deleted file mode 100644
index 1e84bff68d811..0000000000000
--- a/llvm/utils/benchmark/cmake/benchmark.pc.in
+++ /dev/null
@@ -1,11 +0,0 @@
-prefix=@CMAKE_INSTALL_PREFIX@
-exec_prefix=${prefix}
-libdir=${prefix}/lib
-includedir=${prefix}/include
-
-Name: @PROJECT_NAME@
-Description: Google microbenchmark framework
-Version: @VERSION@
-
-Libs: -L${libdir} -lbenchmark
-Cflags: -I${includedir}

diff  --git a/llvm/utils/benchmark/cmake/gnu_posix_regex.cpp b/llvm/utils/benchmark/cmake/gnu_posix_regex.cpp
deleted file mode 100644
index 105189f02ee6f..0000000000000
--- a/llvm/utils/benchmark/cmake/gnu_posix_regex.cpp
+++ /dev/null
@@ -1,11 +0,0 @@
-#include <gnuregex.h>
-#include <string>
-int main() {
-  std::string str = "test0159";
-  regex_t re;
-  int ec = regcomp(&re, "^[a-z]+[0-9]+$", REG_EXTENDED | REG_NOSUB);
-  if (ec != 0) {
-    return ec;
-  }
-  return regexec(&re, str.c_str(), 0, nullptr, 0) ? -1 : 0;
-}

diff  --git a/llvm/utils/benchmark/cmake/llvm-toolchain.cmake b/llvm/utils/benchmark/cmake/llvm-toolchain.cmake
deleted file mode 100644
index fc119e52fd26a..0000000000000
--- a/llvm/utils/benchmark/cmake/llvm-toolchain.cmake
+++ /dev/null
@@ -1,8 +0,0 @@
-find_package(LLVMAr REQUIRED)
-set(CMAKE_AR "${LLVMAR_EXECUTABLE}" CACHE FILEPATH "" FORCE)
-
-find_package(LLVMNm REQUIRED)
-set(CMAKE_NM "${LLVMNM_EXECUTABLE}" CACHE FILEPATH "" FORCE)
-
-find_package(LLVMRanLib REQUIRED)
-set(CMAKE_RANLIB "${LLVMRANLIB_EXECUTABLE}" CACHE FILEPATH "" FORCE)

diff  --git a/llvm/utils/benchmark/cmake/posix_regex.cpp b/llvm/utils/benchmark/cmake/posix_regex.cpp
deleted file mode 100644
index 02f6dfc278a7c..0000000000000
--- a/llvm/utils/benchmark/cmake/posix_regex.cpp
+++ /dev/null
@@ -1,13 +0,0 @@
-#include <regex.h>
-#include <string>
-int main() {
-  std::string str = "test0159";
-  regex_t re;
-  int ec = regcomp(&re, "^[a-z]+[0-9]+$", REG_EXTENDED | REG_NOSUB);
-  if (ec != 0) {
-    return ec;
-  }
-  int ret = regexec(&re, str.c_str(), 0, nullptr, 0) ? -1 : 0;
-  regfree(&re);
-  return ret;
-}

diff  --git a/llvm/utils/benchmark/cmake/split_list.cmake b/llvm/utils/benchmark/cmake/split_list.cmake
deleted file mode 100644
index 67aed3fdc8579..0000000000000
--- a/llvm/utils/benchmark/cmake/split_list.cmake
+++ /dev/null
@@ -1,3 +0,0 @@
-macro(split_list listname)
-  string(REPLACE ";" " " ${listname} "${${listname}}")
-endmacro()

diff  --git a/llvm/utils/benchmark/cmake/std_regex.cpp b/llvm/utils/benchmark/cmake/std_regex.cpp
deleted file mode 100644
index 8177c482e838b..0000000000000
--- a/llvm/utils/benchmark/cmake/std_regex.cpp
+++ /dev/null
@@ -1,9 +0,0 @@
-#include <regex>
-#include <string>
-int main() {
-  const std::string str = "test0159";
-  std::regex re;
-  re = std::regex("^[a-z]+[0-9]+$",
-       std::regex_constants::extended | std::regex_constants::nosubs);
-  return std::regex_search(str, re) ? 0 : -1;
-}

diff  --git a/llvm/utils/benchmark/cmake/steady_clock.cpp b/llvm/utils/benchmark/cmake/steady_clock.cpp
deleted file mode 100644
index 66d50d17e9e61..0000000000000
--- a/llvm/utils/benchmark/cmake/steady_clock.cpp
+++ /dev/null
@@ -1,7 +0,0 @@
-#include <chrono>
-
-int main() {
-    typedef std::chrono::steady_clock Clock;
-    Clock::time_point tp = Clock::now();
-    ((void)tp);
-}

diff  --git a/llvm/utils/benchmark/cmake/thread_safety_attributes.cpp b/llvm/utils/benchmark/cmake/thread_safety_attributes.cpp
deleted file mode 100644
index 46161babdb100..0000000000000
--- a/llvm/utils/benchmark/cmake/thread_safety_attributes.cpp
+++ /dev/null
@@ -1,4 +0,0 @@
-#define HAVE_THREAD_SAFETY_ATTRIBUTES
-#include "../src/mutex.h"
-
-int main() {}

diff  --git a/llvm/utils/benchmark/docs/AssemblyTests.md b/llvm/utils/benchmark/docs/AssemblyTests.md
deleted file mode 100644
index 0d06f50ac652d..0000000000000
--- a/llvm/utils/benchmark/docs/AssemblyTests.md
+++ /dev/null
@@ -1,146 +0,0 @@
-# Assembly Tests
-
-The Benchmark library provides a number of functions whose primary
-purpose is to affect assembly generation, including `DoNotOptimize`
-and `ClobberMemory`. In addition there are other functions,
-such as `KeepRunning`, for which generating good assembly is paramount.
-
-For these functions it's important to have tests that verify the
-correctness and quality of the implementation. This requires testing
-the code generated by the compiler.
-
-This document describes how the Benchmark library tests compiler output,
-as well as how to properly write new tests.
-
-
-## Anatomy of a Test
-
-Writing a test has two steps:
-
-* Write the code you want to generate assembly for.
-* Add `// CHECK` lines to match against the verified assembly.
-
-Example:
-```c++
-
-// CHECK-LABEL: test_add:
-extern "C" int test_add() {
-    extern int ExternInt;
-    return ExternInt + 1;
-
-    // CHECK: movl ExternInt(%rip), %eax
-    // CHECK: addl %eax
-    // CHECK: ret
-}
-
-```
-
-#### LLVM Filecheck
-
-[LLVM's Filecheck](https://llvm.org/docs/CommandGuide/FileCheck.html)
-is used to test the generated assembly against the `// CHECK` lines
-specified in the tests source file. Please see the documentation
-linked above for information on how to write `CHECK` directives.
-
-#### Tips and Tricks:
-
-* Tests should match the minimal amount of output required to establish
-correctness. `CHECK` directives don't have to match on the exact next line
-after the previous match, so tests should omit checks for unimportant
-bits of assembly. ([`CHECK-NEXT`](https://llvm.org/docs/CommandGuide/FileCheck.html#the-check-next-directive)
-can be used to ensure a match occurs exactly after the previous match).
-
-* The tests are compiled with `-O3 -g0`. So we're only testing the
-optimized output.
-
-* The assembly output is further cleaned up using `tools/strip_asm.py`.
-This removes comments, assembler directives, and unused labels before
-the test is run.
-
-* The generated and stripped assembly file for a test is output under
-`<build-directory>/test/<test-name>.s`
-
-* Filecheck supports using [`CHECK` prefixes](https://llvm.org/docs/CommandGuide/FileCheck.html#cmdoption-check-prefixes)
-to specify lines that should only match in certain situations.
-The Benchmark tests use `CHECK-CLANG` and `CHECK-GNU` for lines that
-are only expected to match Clang or GCC's output respectively. Normal
-`CHECK` lines match against all compilers. (Note: `CHECK-NOT` and
-`CHECK-LABEL` are NOT prefixes. They are versions of non-prefixed
-`CHECK` lines)
-
-* Use `extern "C"` to disable name mangling for specific functions. This
-makes them easier to name in the `CHECK` lines.
-
-
-## Problems Writing Portable Tests
-
-Tests which check the code generated by a compiler are
-inherently non-portable. Different compilers and even different compiler
-versions may generate entirely different code. The Benchmark tests
-must tolerate this.
-
-LLVM Filecheck provides a number of mechanisms to help write
-"more portable" tests; including [matching using regular expressions](https://llvm.org/docs/CommandGuide/FileCheck.html#filecheck-pattern-matching-syntax),
-allowing the creation of [named variables](https://llvm.org/docs/CommandGuide/FileCheck.html#filecheck-variables)
-for later matching, and [checking non-sequential matches](https://llvm.org/docs/CommandGuide/FileCheck.html#the-check-dag-directive).
-
-#### Capturing Variables
-
-For example, say GCC stores a variable in a register but Clang stores
-it in memory. To write a test that tolerates both cases we "capture"
-the destination of the store, and then use the captured expression
-to write the remainder of the test.
-
-```c++
-// CHECK-LABEL: test_div_no_op_into_shr:
-extern "C" void test_div_no_op_into_shr(int value) {
-    int divisor = 2;
-    benchmark::DoNotOptimize(divisor); // hide the value from the optimizer
-    return value / divisor;
-
-    // CHECK: movl $2, [[DEST:.*]]
-    // CHECK: idivl [[DEST]]
-    // CHECK: ret
-}
-```
-
-#### Using Regular Expressions to Match Differing Output
-
-Often tests require testing assembly lines which may subtly differ
-between compilers or compiler versions. A common example of this
-is matching stack frame addresses. In this case regular expressions
-can be used to match the differing bits of output. For example:
-
-```c++
-int ExternInt;
-struct Point { int x, y, z; };
-
-// CHECK-LABEL: test_store_point:
-extern "C" void test_store_point() {
-    Point p{ExternInt, ExternInt, ExternInt};
-    benchmark::DoNotOptimize(p);
-
-    // CHECK: movl ExternInt(%rip), %eax
-    // CHECK: movl %eax, -{{[0-9]+}}(%rsp)
-    // CHECK: movl %eax, -{{[0-9]+}}(%rsp)
-    // CHECK: movl %eax, -{{[0-9]+}}(%rsp)
-    // CHECK: ret
-}
-```
-
-## Current Requirements and Limitations
-
-The tests require Filecheck to be installed along the `PATH` of the
-build machine. Otherwise the tests will be disabled.
-
-Additionally, as mentioned in the previous section, codegen tests are
-inherently non-portable. Currently the tests are limited to:
-
-* x86_64 targets.
-* Compiled with GCC or Clang
-
-Further work could be done, at least on a limited basis, to extend the
-tests to other architectures and compilers (using `CHECK` prefixes).
-
-Furthermore, the tests fail for builds which specify additional flags
-that modify code generation, including `--coverage` or `-fsanitize=`.

diff  --git a/llvm/utils/benchmark/docs/tools.md b/llvm/utils/benchmark/docs/tools.md
deleted file mode 100644
index 70500bd3223ce..0000000000000
--- a/llvm/utils/benchmark/docs/tools.md
+++ /dev/null
@@ -1,242 +0,0 @@
-# Benchmark Tools
-
-## compare_bench.py
-
-The `compare_bench.py` utility can be used to compare the results of two benchmarks.
-The program is invoked like:
-
-``` bash
-$ compare_bench.py <old-benchmark> <new-benchmark> [benchmark options]...
-```
-
-Where `<old-benchmark>` and `<new-benchmark>` either specify a benchmark executable file, or a JSON output file. The type of the input file is automatically detected. If a benchmark executable is specified then the benchmark is run to obtain the results. Otherwise the results are simply loaded from the output file.
-
-`[benchmark options]` will be passed to the benchmark invocations. They can be anything the binary accepts, be it normal `--benchmark_*` parameters or custom parameters your binary takes.
-
-The sample output using the JSON test files under `Inputs/` gives:
-
-``` bash
-$ ./compare_bench.py ./gbench/Inputs/test1_run1.json ./gbench/Inputs/test1_run2.json
-Comparing ./gbench/Inputs/test1_run1.json to ./gbench/Inputs/test1_run2.json
-Benchmark                        Time             CPU      Time Old      Time New       CPU Old       CPU New
--------------------------------------------------------------------------------------------------------------
-BM_SameTimes                  +0.0000         +0.0000            10            10            10            10
-BM_2xFaster                   -0.5000         -0.5000            50            25            50            25
-BM_2xSlower                   +1.0000         +1.0000            50           100            50           100
-BM_1PercentFaster             -0.0100         -0.0100           100            99           100            99
-BM_1PercentSlower             +0.0100         +0.0100           100           101           100           101
-BM_10PercentFaster            -0.1000         -0.1000           100            90           100            90
-BM_10PercentSlower            +0.1000         +0.1000           100           110           100           110
-BM_100xSlower                +99.0000        +99.0000           100         10000           100         10000
-BM_100xFaster                 -0.9900         -0.9900         10000           100         10000           100
-BM_10PercentCPUToTime         +0.1000         -0.1000           100           110           100            90
-BM_ThirdFaster                -0.3333         -0.3334           100            67           100            67
-BM_BadTimeUnit                -0.9000         +0.2000             0             0             0             1
-```
-
-As you can note, the values in `Time` and `CPU` columns are calculated as `(new - old) / |old|`.
-
-When a benchmark executable is run, the raw output from the benchmark is printed in real time to stdout. The sample output using `benchmark/basic_test` for both arguments looks like:
-
-```
-./compare_bench.py  test/basic_test test/basic_test  --benchmark_filter=BM_empty.*
-RUNNING: test/basic_test --benchmark_filter=BM_empty.* --benchmark_out=/tmp/tmpN7LF3a
-Run on (8 X 4000 MHz CPU s)
-2017-11-07 23:28:36
----------------------------------------------------------------------
-Benchmark                              Time           CPU Iterations
----------------------------------------------------------------------
-BM_empty                               4 ns          4 ns  170178757
-BM_empty/threads:8                     1 ns          7 ns  103868920
-BM_empty_stop_start                    0 ns          0 ns 1000000000
-BM_empty_stop_start/threads:8          0 ns          0 ns 1403031720
-RUNNING: /test/basic_test --benchmark_filter=BM_empty.* --benchmark_out=/tmp/tmplvrIp8
-Run on (8 X 4000 MHz CPU s)
-2017-11-07 23:28:38
----------------------------------------------------------------------
-Benchmark                              Time           CPU Iterations
----------------------------------------------------------------------
-BM_empty                               4 ns          4 ns  169534855
-BM_empty/threads:8                     1 ns          7 ns  104188776
-BM_empty_stop_start                    0 ns          0 ns 1000000000
-BM_empty_stop_start/threads:8          0 ns          0 ns 1404159424
-Comparing ../build/test/basic_test to ../build/test/basic_test
-Benchmark                                Time             CPU      Time Old      Time New       CPU Old       CPU New
----------------------------------------------------------------------------------------------------------------------
-BM_empty                              -0.0048         -0.0049             4             4             4             4
-BM_empty/threads:8                    -0.0123         -0.0054             1             1             7             7
-BM_empty_stop_start                   -0.0000         -0.0000             0             0             0             0
-BM_empty_stop_start/threads:8         -0.0029         +0.0001             0             0             0             0
-
-```
-
-As you can note, the values in `Time` and `CPU` columns are calculated as `(new - old) / |old|`.
-Obviously this example doesn't give any useful output, but it's intended to show the output format when `compare_bench.py` needs to run benchmarks.
-
-## compare.py
-
-The `compare.py` script can be used to compare the results of benchmarks.
-There are three modes of operation:
-
-1. Just compare two benchmarks, which is what `compare_bench.py` does.
-The program is invoked like:
-
-``` bash
-$ compare.py benchmarks <benchmark_baseline> <benchmark_contender> [benchmark options]...
-```
-Where `<benchmark_baseline>` and `<benchmark_contender>` either specify a benchmark executable file, or a JSON output file. The type of the input file is automatically detected. If a benchmark executable is specified then the benchmark is run to obtain the results. Otherwise the results are simply loaded from the output file.
-
-`[benchmark options]` will be passed to the benchmark invocations. They can be anything the binary accepts, be it normal `--benchmark_*` parameters or custom parameters your binary takes.
-
-Example output:
-```
-$ ./compare.py benchmarks ./a.out ./a.out
-RUNNING: ./a.out --benchmark_out=/tmp/tmprBT5nW
-Run on (8 X 4000 MHz CPU s)
-2017-11-07 21:16:44
-------------------------------------------------------
-Benchmark               Time           CPU Iterations
-------------------------------------------------------
-BM_memcpy/8            36 ns         36 ns   19101577   211.669MB/s
-BM_memcpy/64           76 ns         76 ns    9412571   800.199MB/s
-BM_memcpy/512          84 ns         84 ns    8249070   5.64771GB/s
-BM_memcpy/1024        116 ns        116 ns    6181763   8.19505GB/s
-BM_memcpy/8192        643 ns        643 ns    1062855   11.8636GB/s
-BM_copy/8             222 ns        222 ns    3137987   34.3772MB/s
-BM_copy/64           1608 ns       1608 ns     432758   37.9501MB/s
-BM_copy/512         12589 ns      12589 ns      54806   38.7867MB/s
-BM_copy/1024        25169 ns      25169 ns      27713   38.8003MB/s
-BM_copy/8192       201165 ns     201112 ns       3486   38.8466MB/s
-RUNNING: ./a.out --benchmark_out=/tmp/tmpt1wwG_
-Run on (8 X 4000 MHz CPU s)
-2017-11-07 21:16:53
-------------------------------------------------------
-Benchmark               Time           CPU Iterations
-------------------------------------------------------
-BM_memcpy/8            36 ns         36 ns   19397903   211.255MB/s
-BM_memcpy/64           73 ns         73 ns    9691174   839.635MB/s
-BM_memcpy/512          85 ns         85 ns    8312329   5.60101GB/s
-BM_memcpy/1024        118 ns        118 ns    6438774   8.11608GB/s
-BM_memcpy/8192        656 ns        656 ns    1068644   11.6277GB/s
-BM_copy/8             223 ns        223 ns    3146977   34.2338MB/s
-BM_copy/64           1611 ns       1611 ns     435340   37.8751MB/s
-BM_copy/512         12622 ns      12622 ns      54818   38.6844MB/s
-BM_copy/1024        25257 ns      25239 ns      27779   38.6927MB/s
-BM_copy/8192       205013 ns     205010 ns       3479    38.108MB/s
-Comparing ./a.out to ./a.out
-Benchmark                 Time             CPU      Time Old      Time New       CPU Old       CPU New
-------------------------------------------------------------------------------------------------------
-BM_memcpy/8            +0.0020         +0.0020            36            36            36            36
-BM_memcpy/64           -0.0468         -0.0470            76            73            76            73
-BM_memcpy/512          +0.0081         +0.0083            84            85            84            85
-BM_memcpy/1024         +0.0098         +0.0097           116           118           116           118
-BM_memcpy/8192         +0.0200         +0.0203           643           656           643           656
-BM_copy/8              +0.0046         +0.0042           222           223           222           223
-BM_copy/64             +0.0020         +0.0020          1608          1611          1608          1611
-BM_copy/512            +0.0027         +0.0026         12589         12622         12589         12622
-BM_copy/1024           +0.0035         +0.0028         25169         25257         25169         25239
-BM_copy/8192           +0.0191         +0.0194        201165        205013        201112        205010
-```
-
-For every benchmark from the first run, it looks for the benchmark with exactly the same name in the second run, and then compares the results. If the names differ, the benchmark is omitted from the diff.
-As you can note, the values in `Time` and `CPU` columns are calculated as `(new - old) / |old|`.
-
-2. Compare two different filters of one benchmark.
-The program is invoked like:
-
-``` bash
-$ compare.py filters <benchmark> <filter_baseline> <filter_contender> [benchmark options]...
-```
-Where `<benchmark>` either specify a benchmark executable file, or a JSON output file. The type of the input file is automatically detected. If a benchmark executable is specified then the benchmark is run to obtain the results. Otherwise the results are simply loaded from the output file.
-
-Where `<filter_baseline>` and `<filter_contender>` are the same regex filters that you would pass to the `[--benchmark_filter=<regex>]` parameter of the benchmark binary.
-
-`[benchmark options]` will be passed to the benchmark invocations. They can be anything the binary accepts, be it normal `--benchmark_*` parameters or custom parameters your binary takes.
-
-Example output:
-```
-$ ./compare.py filters ./a.out BM_memcpy BM_copy
-RUNNING: ./a.out --benchmark_filter=BM_memcpy --benchmark_out=/tmp/tmpBWKk0k
-Run on (8 X 4000 MHz CPU s)
-2017-11-07 21:37:28
-------------------------------------------------------
-Benchmark               Time           CPU Iterations
-------------------------------------------------------
-BM_memcpy/8            36 ns         36 ns   17891491   211.215MB/s
-BM_memcpy/64           74 ns         74 ns    9400999   825.646MB/s
-BM_memcpy/512          87 ns         87 ns    8027453   5.46126GB/s
-BM_memcpy/1024        111 ns        111 ns    6116853    8.5648GB/s
-BM_memcpy/8192        657 ns        656 ns    1064679   11.6247GB/s
-RUNNING: ./a.out --benchmark_filter=BM_copy --benchmark_out=/tmp/tmpAvWcOM
-Run on (8 X 4000 MHz CPU s)
-2017-11-07 21:37:33
-----------------------------------------------------
-Benchmark             Time           CPU Iterations
-----------------------------------------------------
-BM_copy/8           227 ns        227 ns    3038700   33.6264MB/s
-BM_copy/64         1640 ns       1640 ns     426893   37.2154MB/s
-BM_copy/512       12804 ns      12801 ns      55417   38.1444MB/s
-BM_copy/1024      25409 ns      25407 ns      27516   38.4365MB/s
-BM_copy/8192     202986 ns     202990 ns       3454   38.4871MB/s
-Comparing BM_memcpy to BM_copy (from ./a.out)
-Benchmark                               Time             CPU      Time Old      Time New       CPU Old       CPU New
---------------------------------------------------------------------------------------------------------------------
-[BM_memcpy vs. BM_copy]/8            +5.2829         +5.2812            36           227            36           227
-[BM_memcpy vs. BM_copy]/64          +21.1719        +21.1856            74          1640            74          1640
-[BM_memcpy vs. BM_copy]/512        +145.6487       +145.6097            87         12804            87         12801
-[BM_memcpy vs. BM_copy]/1024       +227.1860       +227.1776           111         25409           111         25407
-[BM_memcpy vs. BM_copy]/8192       +308.1664       +308.2898           657        202986           656        202990
-```
-
-As you can see, it applies the filter to the benchmarks, both when running the benchmark, and before doing the diff. And to make the diff work, the matches are replaced with some common string. Thus, you can compare two different benchmark families within one benchmark binary.
-As you can note, the values in `Time` and `CPU` columns are calculated as `(new - old) / |old|`.
-
-3. Compare filter one from benchmark one to filter two from benchmark two:
-The program is invoked like:
-
-``` bash
-$ compare.py filters <benchmark_baseline> <filter_baseline> <benchmark_contender> <filter_contender> [benchmark options]...
-```
-
-Where `<benchmark_baseline>` and `<benchmark_contender>` either specify a benchmark executable file, or a JSON output file. The type of the input file is automatically detected. If a benchmark executable is specified then the benchmark is run to obtain the results. Otherwise the results are simply loaded from the output file.
-
-Where `<filter_baseline>` and `<filter_contender>` are the same regex filters that you would pass to the `[--benchmark_filter=<regex>]` parameter of the benchmark binary.
-
-`[benchmark options]` will be passed to the benchmark invocations. They can be anything the binary accepts, be it normal `--benchmark_*` parameters or custom parameters your binary takes.
-
-Example output:
-```
-$ ./compare.py benchmarksfiltered ./a.out BM_memcpy ./a.out BM_copy
-RUNNING: ./a.out --benchmark_filter=BM_memcpy --benchmark_out=/tmp/tmp_FvbYg
-Run on (8 X 4000 MHz CPU s)
-2017-11-07 21:38:27
-------------------------------------------------------
-Benchmark               Time           CPU Iterations
-------------------------------------------------------
-BM_memcpy/8            37 ns         37 ns   18953482   204.118MB/s
-BM_memcpy/64           74 ns         74 ns    9206578   828.245MB/s
-BM_memcpy/512          91 ns         91 ns    8086195   5.25476GB/s
-BM_memcpy/1024        120 ns        120 ns    5804513   7.95662GB/s
-BM_memcpy/8192        664 ns        664 ns    1028363   11.4948GB/s
-RUNNING: ./a.out --benchmark_filter=BM_copy --benchmark_out=/tmp/tmpDfL5iE
-Run on (8 X 4000 MHz CPU s)
-2017-11-07 21:38:32
-----------------------------------------------------
-Benchmark             Time           CPU Iterations
-----------------------------------------------------
-BM_copy/8           230 ns        230 ns    2985909   33.1161MB/s
-BM_copy/64         1654 ns       1653 ns     419408   36.9137MB/s
-BM_copy/512       13122 ns      13120 ns      53403   37.2156MB/s
-BM_copy/1024      26679 ns      26666 ns      26575   36.6218MB/s
-BM_copy/8192     215068 ns     215053 ns       3221   36.3283MB/s
-Comparing BM_memcpy (from ./a.out) to BM_copy (from ./a.out)
-Benchmark                               Time             CPU      Time Old      Time New       CPU Old       CPU New
---------------------------------------------------------------------------------------------------------------------
-[BM_memcpy vs. BM_copy]/8            +5.1649         +5.1637            37           230            37           230
-[BM_memcpy vs. BM_copy]/64          +21.4352        +21.4374            74          1654            74          1653
-[BM_memcpy vs. BM_copy]/512        +143.6022       +143.5865            91         13122            91         13120
-[BM_memcpy vs. BM_copy]/1024       +221.5903       +221.4790           120         26679           120         26666
-[BM_memcpy vs. BM_copy]/8192       +322.9059       +323.0096           664        215068           664        215053
-```
-This is a mix of the previous two modes: two (potentially different) benchmark binaries are run, and a different filter is applied to each one.
-As you can note, the values in `Time` and `CPU` columns are calculated as `(new - old) / |old|`.

diff  --git a/llvm/utils/benchmark/include/benchmark/benchmark.h b/llvm/utils/benchmark/include/benchmark/benchmark.h
deleted file mode 100644
index 528aa7f9c8bb8..0000000000000
--- a/llvm/utils/benchmark/include/benchmark/benchmark.h
+++ /dev/null
@@ -1,1467 +0,0 @@
-// Copyright 2015 Google Inc. All rights reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-//     http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-// Support for registering benchmarks for functions.
-
-/* Example usage:
-// Define a function that executes the code to be measured a
-// specified number of times:
-static void BM_StringCreation(benchmark::State& state) {
-  for (auto _ : state)
-    std::string empty_string;
-}
-
-// Register the function as a benchmark
-BENCHMARK(BM_StringCreation);
-
-// Define another benchmark
-static void BM_StringCopy(benchmark::State& state) {
-  std::string x = "hello";
-  for (auto _ : state)
-    std::string copy(x);
-}
-BENCHMARK(BM_StringCopy);
-
-// Augment the main() program to invoke benchmarks if specified
-// via the --benchmarks command line flag.  E.g.,
-//       my_unittest --benchmark_filter=all
-//       my_unittest --benchmark_filter=BM_StringCreation
-//       my_unittest --benchmark_filter=String
-//       my_unittest --benchmark_filter='Copy|Creation'
-int main(int argc, char** argv) {
-  benchmark::Initialize(&argc, argv);
-  benchmark::RunSpecifiedBenchmarks();
-  return 0;
-}
-
-// Sometimes a family of microbenchmarks can be implemented with
-// just one routine that takes an extra argument to specify which
-// one of the family of benchmarks to run.  For example, the following
-// code defines a family of microbenchmarks for measuring the speed
-// of memcpy() calls of different lengths:
-
-static void BM_memcpy(benchmark::State& state) {
-  char* src = new char[state.range(0)]; char* dst = new char[state.range(0)];
-  memset(src, 'x', state.range(0));
-  for (auto _ : state)
-    memcpy(dst, src, state.range(0));
-  state.SetBytesProcessed(int64_t(state.iterations()) *
-                          int64_t(state.range(0)));
-  delete[] src; delete[] dst;
-}
-BENCHMARK(BM_memcpy)->Arg(8)->Arg(64)->Arg(512)->Arg(1<<10)->Arg(8<<10);
-
-// The preceding code is quite repetitive, and can be replaced with the
-// following short-hand.  The following invocation will pick a few
-// appropriate arguments in the specified range and will generate a
-// microbenchmark for each such argument.
-BENCHMARK(BM_memcpy)->Range(8, 8<<10);
-
-// You might have a microbenchmark that depends on two inputs.  For
-// example, the following code defines a family of microbenchmarks for
-// measuring the speed of set insertion.
-static void BM_SetInsert(benchmark::State& state) {
-  set<int> data;
-  for (auto _ : state) {
-    state.PauseTiming();
-    data = ConstructRandomSet(state.range(0));
-    state.ResumeTiming();
-    for (int j = 0; j < state.range(1); ++j)
-      data.insert(RandomNumber());
-  }
-}
-BENCHMARK(BM_SetInsert)
-   ->Args({1<<10, 128})
-   ->Args({2<<10, 128})
-   ->Args({4<<10, 128})
-   ->Args({8<<10, 128})
-   ->Args({1<<10, 512})
-   ->Args({2<<10, 512})
-   ->Args({4<<10, 512})
-   ->Args({8<<10, 512});
-
-// The preceding code is quite repetitive, and can be replaced with
-// the following short-hand.  The following macro will pick a few
-// appropriate arguments in the product of the two specified ranges
-// and will generate a microbenchmark for each such pair.
-BENCHMARK(BM_SetInsert)->Ranges({{1<<10, 8<<10}, {128, 512}});
-
-// For more complex patterns of inputs, passing a custom function
-// to Apply allows programmatic specification of an
-// arbitrary set of arguments to run the microbenchmark on.
-// The following example enumerates a dense range on
-// one parameter, and a sparse range on the second.
-static void CustomArguments(benchmark::internal::Benchmark* b) {
-  for (int i = 0; i <= 10; ++i)
-    for (int j = 32; j <= 1024*1024; j *= 8)
-      b->Args({i, j});
-}
-BENCHMARK(BM_SetInsert)->Apply(CustomArguments);
-
-// Templated microbenchmarks work the same way:
-// Produce then consume 'size' messages 'iters' times
-// Measures throughput in the absence of multiprogramming.
-template <class Q> int BM_Sequential(benchmark::State& state) {
-  Q q;
-  typename Q::value_type v;
-  for (auto _ : state) {
-    for (int i = state.range(0); i--; )
-      q.push(v);
-    for (int e = state.range(0); e--; )
-      q.Wait(&v);
-  }
-  // actually messages, not bytes:
-  state.SetBytesProcessed(
-      static_cast<int64_t>(state.iterations())*state.range(0));
-}
-BENCHMARK_TEMPLATE(BM_Sequential, WaitQueue<int>)->Range(1<<0, 1<<10);
-
-Use `Benchmark::MinTime(double t)` to set the minimum time used to run the
-benchmark. This option overrides the `benchmark_min_time` flag.
-
-void BM_test(benchmark::State& state) {
- ... body ...
-}
-BENCHMARK(BM_test)->MinTime(2.0); // Run for at least 2 seconds.
-
-In a multithreaded test, it is guaranteed that none of the threads will start
-until all have reached the loop start, and all will have finished before any
-thread exits the loop body. As such, any global setup or teardown you want to
-do can be wrapped in a check against the thread index:
-
-static void BM_MultiThreaded(benchmark::State& state) {
-  if (state.thread_index == 0) {
-    // Setup code here.
-  }
-  for (auto _ : state) {
-    // Run the test as normal.
-  }
-  if (state.thread_index == 0) {
-    // Teardown code here.
-  }
-}
-BENCHMARK(BM_MultiThreaded)->Threads(4);
-
-
-If a benchmark runs for a few milliseconds it may be hard to visually compare
-the measured times, since the output data is given in nanoseconds by default.
-To set the time unit manually, you can specify it as follows:
-
-BENCHMARK(BM_test)->Unit(benchmark::kMillisecond);
-*/
-
-#ifndef BENCHMARK_BENCHMARK_H_
-#define BENCHMARK_BENCHMARK_H_
-
-
-// The _MSVC_LANG check should detect Visual Studio 2015 Update 3 and newer.
-#if __cplusplus >= 201103L || (defined(_MSVC_LANG) && _MSVC_LANG >= 201103L)
-#define BENCHMARK_HAS_CXX11
-#endif
-
-#include <stdint.h>
-
-#include <algorithm>
-#include <cassert>
-#include <cstddef>
-#include <iosfwd>
-#include <string>
-#include <vector>
-#include <map>
-#include <set>
-
-#if defined(BENCHMARK_HAS_CXX11)
-#include <type_traits>
-#include <initializer_list>
-#include <utility>
-#endif
-
-#if defined(_MSC_VER)
-#include <intrin.h> // for _ReadWriteBarrier
-#endif
-
-#ifndef BENCHMARK_HAS_CXX11
-#define BENCHMARK_DISALLOW_COPY_AND_ASSIGN(TypeName) \
-  TypeName(const TypeName&);                         \
-  TypeName& operator=(const TypeName&)
-#else
-#define BENCHMARK_DISALLOW_COPY_AND_ASSIGN(TypeName) \
-  TypeName(const TypeName&) = delete;                \
-  TypeName& operator=(const TypeName&) = delete
-#endif
-
-#if defined(__GNUC__)
-#define BENCHMARK_UNUSED __attribute__((unused))
-#define BENCHMARK_ALWAYS_INLINE __attribute__((always_inline))
-#define BENCHMARK_NOEXCEPT noexcept
-#define BENCHMARK_NOEXCEPT_OP(x) noexcept(x)
-#elif defined(_MSC_VER) && !defined(__clang__)
-#define BENCHMARK_UNUSED
-#define BENCHMARK_ALWAYS_INLINE __forceinline
-#if _MSC_VER >= 1900
-#define BENCHMARK_NOEXCEPT noexcept
-#define BENCHMARK_NOEXCEPT_OP(x) noexcept(x)
-#else
-#define BENCHMARK_NOEXCEPT
-#define BENCHMARK_NOEXCEPT_OP(x)
-#endif
-#define __func__ __FUNCTION__
-#else
-#define BENCHMARK_UNUSED
-#define BENCHMARK_ALWAYS_INLINE
-#define BENCHMARK_NOEXCEPT
-#define BENCHMARK_NOEXCEPT_OP(x)
-#endif
-
-#define BENCHMARK_INTERNAL_TOSTRING2(x) #x
-#define BENCHMARK_INTERNAL_TOSTRING(x) BENCHMARK_INTERNAL_TOSTRING2(x)
-
-#if defined(__GNUC__)
-#define BENCHMARK_BUILTIN_EXPECT(x, y) __builtin_expect(x, y)
-#define BENCHMARK_DEPRECATED_MSG(msg) __attribute__((deprecated(msg)))
-#else
-#define BENCHMARK_BUILTIN_EXPECT(x, y) x
-#define BENCHMARK_DEPRECATED_MSG(msg)
-#define BENCHMARK_WARNING_MSG(msg) __pragma(message(__FILE__ "(" BENCHMARK_INTERNAL_TOSTRING(__LINE__) ") : warning note: " msg))
-#endif
-
-#if defined(__GNUC__) && !defined(__clang__)
-#define BENCHMARK_GCC_VERSION (__GNUC__ * 100 + __GNUC_MINOR__)
-#endif
-
-#ifndef __has_builtin
-#define __has_builtin(x) 0
-#endif
-
-#if defined(__GNUC__) || __has_builtin(__builtin_unreachable)
-  #define BENCHMARK_UNREACHABLE() __builtin_unreachable()
-#elif defined(_MSC_VER)
-  #define BENCHMARK_UNREACHABLE() __assume(false)
-#else
-  #define BENCHMARK_UNREACHABLE() ((void)0)
-#endif
-
-namespace benchmark {
-class BenchmarkReporter;
-
-void Initialize(int* argc, char** argv);
-
-// Report to stdout all arguments in 'argv' as unrecognized except the first.
-// Returns true if there is at least one unrecognized argument
-// (i.e. 'argc' > 1).
-bool ReportUnrecognizedArguments(int argc, char** argv);
-
-// Generate a list of benchmarks matching the specified --benchmark_filter flag
-// and if --benchmark_list_tests is specified return after printing the name
-// of each matching benchmark. Otherwise run each matching benchmark and
-// report the results.
-//
-// The second and third overloads use the specified 'console_reporter' and
-// 'file_reporter' respectively. 'file_reporter' will write to the file
-// specified by '--benchmark_output'. If '--benchmark_output' is not given,
-// the 'file_reporter' is ignored.
-//
-// RETURNS: The number of matching benchmarks.
-size_t RunSpecifiedBenchmarks();
-size_t RunSpecifiedBenchmarks(BenchmarkReporter* console_reporter);
-size_t RunSpecifiedBenchmarks(BenchmarkReporter* console_reporter,
-                              BenchmarkReporter* file_reporter);
-
-// If this routine is called, peak memory allocation past this point in the
-// benchmark is reported at the end of the benchmark report line. (It is
-// computed by running the benchmark once with a single iteration and a memory
-// tracer.)
-// TODO(dominic)
-// void MemoryUsage();
-
-namespace internal {
-class Benchmark;
-class BenchmarkImp;
-class BenchmarkFamilies;
-
-void UseCharPointer(char const volatile*);
-
-// Take ownership of the pointer and register the benchmark. Return the
-// registered benchmark.
-Benchmark* RegisterBenchmarkInternal(Benchmark*);
-
-// Ensure that the standard streams are properly initialized in every TU.
-int InitializeStreams();
-BENCHMARK_UNUSED static int stream_init_anchor = InitializeStreams();
-
-}  // namespace internal
-
-
-#if (!defined(__GNUC__) && !defined(__clang__)) || defined(__pnacl__) || \
-    defined(__EMSCRIPTEN__)
-# define BENCHMARK_HAS_NO_INLINE_ASSEMBLY
-#endif
-
-
-// The DoNotOptimize(...) function can be used to prevent a value or
-// expression from being optimized away by the compiler. This function is
-// intended to add little to no overhead.
-// See: https://youtu.be/nXaxk27zwlk?t=2441
-#ifndef BENCHMARK_HAS_NO_INLINE_ASSEMBLY
-template <class Tp>
-inline BENCHMARK_ALWAYS_INLINE
-void DoNotOptimize(Tp const& value) {
-    asm volatile("" : : "r,m"(value) : "memory");
-}
-
-template <class Tp>
-inline BENCHMARK_ALWAYS_INLINE void DoNotOptimize(Tp& value) {
-#if defined(__clang__)
-  asm volatile("" : "+r,m"(value) : : "memory");
-#else
-  asm volatile("" : "+m,r"(value) : : "memory");
-#endif
-}
-
-// Force the compiler to flush pending writes to global memory. Acts as an
-// effective read/write barrier
-inline BENCHMARK_ALWAYS_INLINE void ClobberMemory() {
-  asm volatile("" : : : "memory");
-}
-#elif defined(_MSC_VER)
-template <class Tp>
-inline BENCHMARK_ALWAYS_INLINE void DoNotOptimize(Tp const& value) {
-  internal::UseCharPointer(&reinterpret_cast<char const volatile&>(value));
-  _ReadWriteBarrier();
-}
-
-inline BENCHMARK_ALWAYS_INLINE void ClobberMemory() {
-  _ReadWriteBarrier();
-}
-#else
-template <class Tp>
-inline BENCHMARK_ALWAYS_INLINE void DoNotOptimize(Tp const& value) {
-  internal::UseCharPointer(&reinterpret_cast<char const volatile&>(value));
-}
-// FIXME Add ClobberMemory() for non-gnu and non-msvc compilers
-#endif
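-
-// A minimal usage sketch; the 'Compute()' function under test is
-// illustrative:
-//
-//   static void BM_Compute(benchmark::State& state) {
-//     for (auto _ : state) {
-//       int result = Compute();
-//       benchmark::DoNotOptimize(result);  // keep 'result' observable
-//       benchmark::ClobberMemory();        // flush pending writes
-//     }
-//   }
-//   BENCHMARK(BM_Compute);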
-
-
-
-// This class is used for user-defined counters.
-class Counter {
-public:
-
-  enum Flags {
-    kDefaults   = 0,
-    // Mark the counter as a rate. It will be presented divided
-    // by the duration of the benchmark.
-    kIsRate     = 1,
-    // Mark the counter as a thread-average quantity. It will be
-    // presented divided by the number of threads.
-    kAvgThreads = 2,
-    // Mark the counter as a thread-average rate. See above.
-    kAvgThreadsRate = kIsRate|kAvgThreads
-  };
-
-  double value;
-  Flags  flags;
-
-  BENCHMARK_ALWAYS_INLINE
-  Counter(double v = 0., Flags f = kDefaults) : value(v), flags(f) {}
-
-  BENCHMARK_ALWAYS_INLINE operator double const& () const { return value; }
-  BENCHMARK_ALWAYS_INLINE operator double      & ()       { return value; }
-
-};
-
-// This is the container for the user-defined counters.
-typedef std::map<std::string, Counter> UserCounters;
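-
-// A sketch of setting a counter from within a benchmark; the counter name
-// and the 'TouchBuffer()' helper are illustrative:
-//
-//   static void BM_Touch(benchmark::State& state) {
-//     int64_t bytes = 0;
-//     for (auto _ : state) bytes += TouchBuffer();
-//     state.counters["Bytes"] = benchmark::Counter(
-//         static_cast<double>(bytes), benchmark::Counter::kIsRate);
-//   }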
-
-
-// TimeUnit is passed to a benchmark in order to specify the order of magnitude
-// for the measured time.
-enum TimeUnit { kNanosecond, kMicrosecond, kMillisecond };
-
-// BigO is passed to a benchmark in order to specify the asymptotic
-// computational complexity for the benchmark. If oAuto is selected, the
-// complexity will be calculated automatically to the best fit.
-enum BigO { oNone, o1, oN, oNSquared, oNCubed, oLogN, oNLogN, oAuto, oLambda };
-
-// BigOFunc is passed to a benchmark in order to specify the asymptotic
-// computational complexity for the benchmark.
-typedef double(BigOFunc)(int64_t);
-
-// StatisticsFunc is passed to a benchmark in order to compute some descriptive
-// statistics over all the measurements of some type
-typedef double(StatisticsFunc)(const std::vector<double>&);
-
-struct Statistics {
-  std::string name_;
-  StatisticsFunc* compute_;
-
-  Statistics(std::string name, StatisticsFunc* compute)
-    : name_(name), compute_(compute) {}
-};
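-
-// A sketch of requesting a complexity fit and a custom statistic at
-// registration time; 'BM_Sort' is illustrative and its body is expected to
-// call state.SetComplexityN(state.range(0)):
-//
-//   BENCHMARK(BM_Sort)
-//       ->RangeMultiplier(2)
-//       ->Range(1 << 10, 1 << 18)
-//       ->Complexity(benchmark::oNLogN)
-//       ->ComputeStatistics("max", [](const std::vector<double>& v) {
-//         return *std::max_element(v.begin(), v.end());
-//       });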
-
-namespace internal {
-class ThreadTimer;
-class ThreadManager;
-
-enum ReportMode
-#if defined(BENCHMARK_HAS_CXX11)
-  : unsigned
-#else
-#endif
-  {
-  RM_Unspecified,  // The mode has not been manually specified
-  RM_Default,      // The mode is user-specified as default.
-  RM_ReportAggregatesOnly
-};
-}  // namespace internal
-
-// State is passed to a running Benchmark and contains state for the
-// benchmark to use.
-class State {
- public:
-  struct StateIterator;
-  friend struct StateIterator;
-
-  // Returns iterators used to run each iteration of a benchmark using a
-  // C++11 range-based for loop. These functions should not be called directly.
-  //
-  // REQUIRES: The benchmark has not started running yet. Neither begin nor end
-  // have been called previously.
-  //
-  // NOTE: KeepRunning may not be used after calling either of these functions.
-  BENCHMARK_ALWAYS_INLINE StateIterator begin();
-  BENCHMARK_ALWAYS_INLINE StateIterator end();
-
-  // Returns true if the benchmark should continue through another iteration.
-  // NOTE: A benchmark must not return from the test until KeepRunning() has
-  // returned false.
-  bool KeepRunning();
-
-  // Returns true iff the benchmark should run n more iterations.
-  // REQUIRES: 'n' > 0.
-  // NOTE: A benchmark must not return from the test until KeepRunningBatch()
-  // has returned false.
-  // NOTE: KeepRunningBatch() may overshoot by up to 'n' iterations.
-  //
-  // Intended usage:
-  //   while (state.KeepRunningBatch(1000)) {
-  //     // process 1000 elements
-  //   }
-  bool KeepRunningBatch(size_t n);
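-
-  // For reference, the two loop styles measure the same thing; 'Work()'
-  // stands in for the code under test:
-  //
-  //   while (state.KeepRunning()) Work();  // legacy C++03 style
-  //   for (auto _ : state) Work();         // preferred C++11 style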
-
-  // REQUIRES: timer is running and 'SkipWithError(...)' has not been called
-  //           by the current thread.
-  // Stop the benchmark timer.  If not called, the timer will be
-  // automatically stopped after the last iteration of the benchmark loop.
-  //
-  // For threaded benchmarks the PauseTiming() function only pauses the timing
-  // for the current thread.
-  //
-  // NOTE: The "real time" measurement is per-thread. If different threads
-  // report different measurements the largest one is reported.
-  //
-  // NOTE: PauseTiming()/ResumeTiming() are relatively
-  // heavyweight, and so their use should generally be avoided
-  // within each benchmark iteration, if possible.
-  void PauseTiming();
-
-  // REQUIRES: timer is not running and 'SkipWithError(...)' has not been called
-  //           by the current thread.
-  // Start the benchmark timer.  The timer is NOT running on entrance to the
-  // benchmark function. It begins running after control flow enters the
-  // benchmark loop.
-  //
-  // NOTE: PauseTiming()/ResumeTiming() are relatively
-  // heavyweight, and so their use should generally be avoided
-  // within each benchmark iteration, if possible.
-  void ResumeTiming();
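-
-  // A sketch of excluding per-iteration setup from the measurement;
-  // 'MakeInput()' and 'Process()' are illustrative:
-  //
-  //   for (auto _ : state) {
-  //     state.PauseTiming();
-  //     auto input = MakeInput();  // not timed
-  //     state.ResumeTiming();
-  //     Process(input);            // timed
-  //   }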
-
-  // REQUIRES: 'SkipWithError(...)' has not been called previously by the
-  //            current thread.
-  // Report the benchmark as resulting in an error with the specified 'msg'.
-  // After this call the user may explicitly 'return' from the benchmark.
-  //
-  // If the ranged-for style of benchmark loop is used, the user must explicitly
-  // break from the loop, otherwise all future iterations will be run.
-  // If the 'KeepRunning()' loop is used the current thread will automatically
-  // exit the loop at the end of the current iteration.
-  //
-  // For threaded benchmarks only the current thread stops executing and future
-  // calls to `KeepRunning()` will block until all threads have completed
-  // the `KeepRunning()` loop. If multiple threads report an error only the
-  // first error message is used.
-  //
-  // NOTE: Calling 'SkipWithError(...)' does not cause the benchmark to exit
-  // the current scope immediately. If the function is called from within
-  // the 'KeepRunning()' loop the current iteration will finish. It is the
-  // user's responsibility to exit the scope as needed.
-  void SkipWithError(const char* msg);
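-
-  // A sketch of error handling in the ranged-for style; 'ReadOnce()' is
-  // illustrative:
-  //
-  //   static void BM_Read(benchmark::State& state) {
-  //     for (auto _ : state) {
-  //       if (!ReadOnce()) {
-  //         state.SkipWithError("read failed");
-  //         break;  // the ranged-for loop must be exited explicitly
-  //       }
-  //     }
-  //   }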
-
-  // REQUIRES: called exactly once per iteration of the benchmarking loop.
-  // Set the manually measured time for this benchmark iteration, which
-  // is used instead of automatically measured time if UseManualTime() was
-  // specified.
-  //
-  // For threaded benchmarks the final value will be set to the largest
-  // reported value.
-  void SetIterationTime(double seconds);
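-
-  // A sketch of manual timing, paired with 'UseManualTime()' at registration;
-  // 'LaunchAndWaitSeconds()' is an illustrative external measurement:
-  //
-  //   static void BM_Gpu(benchmark::State& state) {
-  //     for (auto _ : state) {
-  //       state.SetIterationTime(LaunchAndWaitSeconds());
-  //     }
-  //   }
-  //   BENCHMARK(BM_Gpu)->UseManualTime();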
-
-  // Set the number of bytes processed by the current benchmark
-  // execution.  This routine is typically called once at the end of a
-  // throughput oriented benchmark.  If this routine is called with a
-  // value > 0, the report is printed in MB/sec instead of nanoseconds
-  // per iteration.
-  //
-  // REQUIRES: a benchmark has exited its benchmarking loop.
-  BENCHMARK_ALWAYS_INLINE
-  void SetBytesProcessed(int64_t bytes) { bytes_processed_ = bytes; }
-
-  BENCHMARK_ALWAYS_INLINE
-  int64_t bytes_processed() const { return bytes_processed_; }
-
-  // If this routine is called with complexity_n > 0 and a complexity report
-  // is requested for the benchmark family, then the current benchmark will be
-  // part of the computation and complexity_n will represent the length of N.
-  BENCHMARK_ALWAYS_INLINE
-  void SetComplexityN(int64_t complexity_n) { complexity_n_ = complexity_n; }
-
-  BENCHMARK_ALWAYS_INLINE
-  int64_t complexity_length_n() { return complexity_n_; }
-
-  // If this routine is called with items > 0, then an items/s
-  // label is printed on the benchmark report line for the currently
-  // executing benchmark. It is typically called at the end of a processing
-  // benchmark where a processing items/second output is desired.
-  //
-  // REQUIRES: a benchmark has exited its benchmarking loop.
-  BENCHMARK_ALWAYS_INLINE
-  void SetItemsProcessed(int64_t items) { items_processed_ = items; }
-
-  BENCHMARK_ALWAYS_INLINE
-  int64_t items_processed() const { return items_processed_; }
-
-  // If this routine is called, the specified label is printed at the
-  // end of the benchmark report line for the currently executing
-  // benchmark.  Example:
-  //  static void BM_Compress(benchmark::State& state) {
-  //    ...
-  //    double compression = input_size / output_size;
-  //    state.SetLabel(StrFormat("compress:%.1f%%", 100.0 * compression));
-  //  }
-  // Produces output that looks like:
-  //  BM_Compress   50         50   14115038  compress:27.3%
-  //
-  // REQUIRES: a benchmark has exited its benchmarking loop.
-  void SetLabel(const char* label);
-
-  void BENCHMARK_ALWAYS_INLINE SetLabel(const std::string& str) {
-    this->SetLabel(str.c_str());
-  }
-
-  // Range arguments for this run. CHECKs if the argument has been set.
-  BENCHMARK_ALWAYS_INLINE
-  int64_t range(std::size_t pos = 0) const {
-    assert(range_.size() > pos);
-    return range_[pos];
-  }
-
-  BENCHMARK_DEPRECATED_MSG("use 'range(0)' instead")
-  int64_t range_x() const { return range(0); }
-
-  BENCHMARK_DEPRECATED_MSG("use 'range(1)' instead")
-  int64_t range_y() const { return range(1); }
-
-  BENCHMARK_ALWAYS_INLINE
-  size_t iterations() const {
-    if (BENCHMARK_BUILTIN_EXPECT(!started_, false)) {
-      return 0;
-    }
-    return max_iterations - total_iterations_ + batch_leftover_;
-  }
-
-private: // items we expect on the first cache line (ie 64 bytes of the struct)
-
-  // When total_iterations_ is 0, KeepRunning() and friends will return false.
-  // May be larger than max_iterations.
-  size_t total_iterations_;
-
-  // When using KeepRunningBatch(), batch_leftover_ holds the number of
-  // iterations beyond max_iterations that were run. Used to track
-  // completed_iterations_ accurately.
-  size_t batch_leftover_;
-
-public:
-  const size_t max_iterations;
-
-private:
-  bool started_;
-  bool finished_;
-  bool error_occurred_;
-
-private: // items we don't need on the first cache line
-  std::vector<int64_t> range_;
-
-  int64_t bytes_processed_;
-  int64_t items_processed_;
-
-  int64_t complexity_n_;
-
- public:
-  // Container for user-defined counters.
-  UserCounters counters;
-  // Index of the executing thread. Values from [0, threads).
-  const int thread_index;
-  // Number of threads concurrently executing the benchmark.
-  const int threads;
-
-
-  // TODO(EricWF) make me private
-  State(size_t max_iters, const std::vector<int64_t>& ranges, int thread_i,
-        int n_threads, internal::ThreadTimer* timer,
-        internal::ThreadManager* manager);
-
- private:
-  void StartKeepRunning();
-  // Implementation of KeepRunning() and KeepRunningBatch().
-  // is_batch must be true unless n is 1.
-  bool KeepRunningInternal(size_t n, bool is_batch);
-  void FinishKeepRunning();
-  internal::ThreadTimer* timer_;
-  internal::ThreadManager* manager_;
-  BENCHMARK_DISALLOW_COPY_AND_ASSIGN(State);
-};
-
-inline BENCHMARK_ALWAYS_INLINE
-bool State::KeepRunning() {
-  return KeepRunningInternal(1, /*is_batch=*/ false);
-}
-
-inline BENCHMARK_ALWAYS_INLINE
-bool State::KeepRunningBatch(size_t n) {
-  return KeepRunningInternal(n, /*is_batch=*/ true);
-}
-
-inline BENCHMARK_ALWAYS_INLINE
-bool State::KeepRunningInternal(size_t n, bool is_batch) {
-  // total_iterations_ is set to 0 by the constructor, and always set to a
-  // nonzero value by StartKeepRunning().
-  assert(n > 0);
-  // n must be 1 unless is_batch is true.
-  assert(is_batch || n == 1);
-  if (BENCHMARK_BUILTIN_EXPECT(total_iterations_ >= n, true)) {
-    total_iterations_ -= n;
-    return true;
-  }
-  if (!started_) {
-    StartKeepRunning();
-    if (!error_occurred_ && total_iterations_ >= n) {
-      total_iterations_-= n;
-      return true;
-    }
-  }
-  // For non-batch runs, total_iterations_ must be 0 by now.
-  if (is_batch && total_iterations_ != 0) {
-    batch_leftover_  = n - total_iterations_;
-    total_iterations_ = 0;
-    return true;
-  }
-  FinishKeepRunning();
-  return false;
-}
-
-struct State::StateIterator {
-  struct BENCHMARK_UNUSED Value {};
-  typedef std::forward_iterator_tag iterator_category;
-  typedef Value value_type;
-  typedef Value reference;
-  typedef Value pointer;
-  typedef std::ptrdiff_t difference_type;
-
- private:
-  friend class State;
-  BENCHMARK_ALWAYS_INLINE
-  StateIterator() : cached_(0), parent_() {}
-
-  BENCHMARK_ALWAYS_INLINE
-  explicit StateIterator(State* st)
-      : cached_(st->error_occurred_ ? 0 : st->max_iterations), parent_(st) {}
-
- public:
-  BENCHMARK_ALWAYS_INLINE
-  Value operator*() const { return Value(); }
-
-  BENCHMARK_ALWAYS_INLINE
-  StateIterator& operator++() {
-    assert(cached_ > 0);
-    --cached_;
-    return *this;
-  }
-
-  BENCHMARK_ALWAYS_INLINE
-  bool operator!=(StateIterator const&) const {
-    if (BENCHMARK_BUILTIN_EXPECT(cached_ != 0, true)) return true;
-    parent_->FinishKeepRunning();
-    return false;
-  }
-
- private:
-  size_t cached_;
-  State* const parent_;
-};
-
-inline BENCHMARK_ALWAYS_INLINE State::StateIterator State::begin() {
-  return StateIterator(this);
-}
-inline BENCHMARK_ALWAYS_INLINE State::StateIterator State::end() {
-  StartKeepRunning();
-  return StateIterator();
-}
-
-namespace internal {
-
-typedef void(Function)(State&);
-
-// ------------------------------------------------------
-// Benchmark registration object.  The BENCHMARK() macro expands
-// into an internal::Benchmark* object.  Various methods can
-// be called on this object to change the properties of the benchmark.
-// Each method returns "this" so that multiple method calls can be
-// chained into one expression.
-class Benchmark {
- public:
-  virtual ~Benchmark();
-
-  // Note: the following methods all return "this" so that multiple
-  // method calls can be chained together in one expression.
-
-  // Run this benchmark once with "x" as the extra argument passed
-  // to the function.
-  // REQUIRES: The function passed to the constructor must accept an arg1.
-  Benchmark* Arg(int64_t x);
-
-  // Run this benchmark with the given time unit for the generated output report
-  Benchmark* Unit(TimeUnit unit);
-
-  // Run this benchmark once for a number of values picked from the
-  // range [start..limit].  (start and limit are always picked.)
-  // REQUIRES: The function passed to the constructor must accept an arg1.
-  Benchmark* Range(int64_t start, int64_t limit);
-
-  // Run this benchmark once for all values in the range [start..limit] with
-  // the specified step.
-  // REQUIRES: The function passed to the constructor must accept an arg1.
-  Benchmark* DenseRange(int64_t start, int64_t limit, int step = 1);
-
-  // Run this benchmark once with "args" as the extra arguments passed
-  // to the function.
-  // REQUIRES: The function passed to the constructor must accept arg1, arg2 ...
-  Benchmark* Args(const std::vector<int64_t>& args);
-
-  // Equivalent to Args({x, y})
-  // NOTE: This is a legacy C++03 interface provided for compatibility only.
-  //   New code should use 'Args'.
-  Benchmark* ArgPair(int64_t x, int64_t y) {
-    std::vector<int64_t> args;
-    args.push_back(x);
-    args.push_back(y);
-    return Args(args);
-  }
-
-  // Run this benchmark once for a number of values picked from the
-  // ranges [start..limit].  (starts and limits are always picked.)
-  // REQUIRES: The function passed to the constructor must accept arg1, arg2 ...
-  Benchmark* Ranges(const std::vector<std::pair<int64_t, int64_t> >& ranges);
-
-  // Equivalent to ArgNames({name})
-  Benchmark* ArgName(const std::string& name);
-
-  // Set the argument names to display in the benchmark name. If not called,
-  // only argument values will be shown.
-  Benchmark* ArgNames(const std::vector<std::string>& names);
-
-  // Equivalent to Ranges({{lo1, hi1}, {lo2, hi2}}).
-  // NOTE: This is a legacy C++03 interface provided for compatibility only.
-  //   New code should use 'Ranges'.
-  Benchmark* RangePair(int64_t lo1, int64_t hi1, int64_t lo2, int64_t hi2) {
-    std::vector<std::pair<int64_t, int64_t> > ranges;
-    ranges.push_back(std::make_pair(lo1, hi1));
-    ranges.push_back(std::make_pair(lo2, hi2));
-    return Ranges(ranges);
-  }
-
-  // Pass this benchmark object to *func, which can customize
-  // the benchmark by calling various methods like Arg, Args,
-  // Threads, etc.
-  Benchmark* Apply(void (*func)(Benchmark* benchmark));
-
-  // Set the range multiplier for non-dense ranges. If not called, the range
-  // multiplier kRangeMultiplier will be used.
-  Benchmark* RangeMultiplier(int multiplier);
-
-  // Set the minimum amount of time to use when running this benchmark. This
-  // option overrides the `benchmark_min_time` flag.
-  // REQUIRES: `t > 0` and `Iterations` has not been called on this benchmark.
-  Benchmark* MinTime(double t);
-
-  // Specify the number of iterations that should be run by this benchmark.
-  // REQUIRES: 'n > 0' and `MinTime` has not been called on this benchmark.
-  //
-  // NOTE: This function should only be used when *exact* iteration control is
-  // needed and never to control or limit how long a benchmark runs, where
-  // `--benchmark_min_time=N` or `MinTime(...)` should be used instead.
-  Benchmark* Iterations(size_t n);
-
-  // Specify the number of times to repeat this benchmark. This option
-  // overrides the `benchmark_repetitions` flag.
-  // REQUIRES: `n > 0`
-  Benchmark* Repetitions(int n);
-
-  // Specify if each repetition of the benchmark should be reported separately
-  // or if only the final statistics should be reported. If the benchmark
-  // is not repeated then the single result is always reported.
-  Benchmark* ReportAggregatesOnly(bool value = true);
-
-  // If a particular benchmark is I/O bound, runs multiple threads internally,
-  // or if for some reason CPU timings are not representative, call this
-  // method. If
-  // called, the elapsed time will be used to control how many iterations are
-  // run, and in the printing of items/second or MB/seconds values.  If not
-  // called, the cpu time used by the benchmark will be used.
-  Benchmark* UseRealTime();
-
-  // If a benchmark must measure time manually (e.g. if GPU execution time is
-  // being measured), call this method. If called, each benchmark iteration
-  // should call SetIterationTime(seconds) to report the measured time, which
-  // will be used to control how many iterations are run, and in the printing
-  // of items/second or MB/second values.
-  Benchmark* UseManualTime();
-
-  // Set the asymptotic computational complexity for the benchmark. If called
-  // the asymptotic computational complexity will be shown on the output.
-  Benchmark* Complexity(BigO complexity = benchmark::oAuto);
-
-  // Set the asymptotic computational complexity for the benchmark. If called
-  // the asymptotic computational complexity will be shown on the output.
-  Benchmark* Complexity(BigOFunc* complexity);
-
-  // Add a statistic to be computed over all the measurements of the
-  // benchmark run.
-  Benchmark* ComputeStatistics(std::string name, StatisticsFunc* statistics);
-
-  // Support for running multiple copies of the same benchmark concurrently
-  // in multiple threads.  This may be useful when measuring the scaling
-  // of some piece of code.
-
-  // Run one instance of this benchmark concurrently in t threads.
-  Benchmark* Threads(int t);
-
-  // Pick a set of values T from [min_threads,max_threads].
-  // min_threads and max_threads are always included in T.  Run this
-  // benchmark once for each value in T.  The benchmark run for a
-  // particular value t consists of t threads running the benchmark
-  // function concurrently.  For example, consider:
-  //    BENCHMARK(Foo)->ThreadRange(1,16);
-  // This will run the following benchmarks:
-  //    Foo in 1 thread
-  //    Foo in 2 threads
-  //    Foo in 4 threads
-  //    Foo in 8 threads
-  //    Foo in 16 threads
-  Benchmark* ThreadRange(int min_threads, int max_threads);
-
-  // For each value n in the range, run this benchmark once using n threads.
-  // min_threads and max_threads are always included in the range.
-  // stride specifies the increment. E.g. DenseThreadRange(1, 8, 3) starts
-  // a benchmark with 1, 4, 7 and 8 threads.
-  Benchmark* DenseThreadRange(int min_threads, int max_threads, int stride = 1);
-
-  // Equivalent to ThreadRange(NumCPUs(), NumCPUs())
-  Benchmark* ThreadPerCpu();
-
-  virtual void Run(State& state) = 0;
-
-  // Used inside the benchmark implementation
-  struct Instance;
-
- protected:
-  explicit Benchmark(const char* name);
-  Benchmark(Benchmark const&);
-  void SetName(const char* name);
-
-  int ArgsCnt() const;
-
- private:
-  friend class BenchmarkFamilies;
-
-  std::string name_;
-  ReportMode report_mode_;
-  std::vector<std::string> arg_names_;   // Args for all benchmark runs
-  std::vector<std::vector<int64_t> > args_;  // Args for all benchmark runs
-  TimeUnit time_unit_;
-  int range_multiplier_;
-  double min_time_;
-  size_t iterations_;
-  int repetitions_;
-  bool use_real_time_;
-  bool use_manual_time_;
-  BigO complexity_;
-  BigOFunc* complexity_lambda_;
-  std::vector<Statistics> statistics_;
-  std::vector<int> thread_counts_;
-
-  Benchmark& operator=(Benchmark const&);
-};
-
-}  // namespace internal
-
-// Create and register a benchmark with the specified 'name' that invokes
-// the specified functor 'fn'.
-//
-// RETURNS: A pointer to the registered benchmark.
-internal::Benchmark* RegisterBenchmark(const char* name,
-                                       internal::Function* fn);
-
-#if defined(BENCHMARK_HAS_CXX11)
-template <class Lambda>
-internal::Benchmark* RegisterBenchmark(const char* name, Lambda&& fn);
-#endif
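-
-// A sketch of runtime registration with a lambda; the names and sizes are
-// illustrative:
-//
-//   int main(int argc, char** argv) {
-//     for (int size : {8, 64, 512}) {
-//       benchmark::RegisterBenchmark(
-//           ("BM_Copy/" + std::to_string(size)).c_str(),
-//           [size](benchmark::State& st) {
-//             std::vector<char> src(size), dst(size);
-//             for (auto _ : st)
-//               std::copy(src.begin(), src.end(), dst.begin());
-//           });
-//     }
-//     benchmark::Initialize(&argc, argv);
-//     benchmark::RunSpecifiedBenchmarks();
-//   }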
-
-// Remove all registered benchmarks. All pointers to previously registered
-// benchmarks are invalidated.
-void ClearRegisteredBenchmarks();
-
-namespace internal {
-// The class used to hold all Benchmarks created from static functions
-// (i.e. those created using the BENCHMARK(...) macros).
-class FunctionBenchmark : public Benchmark {
- public:
-  FunctionBenchmark(const char* name, Function* func)
-      : Benchmark(name), func_(func) {}
-
-  void Run(State& st) override;
-
- private:
-  Function* func_;
-};
-
-#ifdef BENCHMARK_HAS_CXX11
-template <class Lambda>
-class LambdaBenchmark : public Benchmark {
- public:
-  void Run(State& st) override { lambda_(st); }
-
- private:
-  template <class OLambda>
-  LambdaBenchmark(const char* name, OLambda&& lam)
-      : Benchmark(name), lambda_(std::forward<OLambda>(lam)) {}
-
-  LambdaBenchmark(LambdaBenchmark const&) = delete;
-
- private:
-  template <class Lam>
-  friend Benchmark* ::benchmark::RegisterBenchmark(const char*, Lam&&);
-
-  Lambda lambda_;
-};
-#endif
-
-}  // namespace internal
-
-inline internal::Benchmark* RegisterBenchmark(const char* name,
-                                              internal::Function* fn) {
-  return internal::RegisterBenchmarkInternal(
-      ::new internal::FunctionBenchmark(name, fn));
-}
-
-#ifdef BENCHMARK_HAS_CXX11
-template <class Lambda>
-internal::Benchmark* RegisterBenchmark(const char* name, Lambda&& fn) {
-  using BenchType = internal::LambdaBenchmark<typename std::decay<Lambda>::type>;
-  return internal::RegisterBenchmarkInternal(
-      ::new BenchType(name, std::forward<Lambda>(fn)));
-}
-#endif
-
-#if defined(BENCHMARK_HAS_CXX11) && \
-    (!defined(BENCHMARK_GCC_VERSION) || BENCHMARK_GCC_VERSION >= 409)
-template <class Lambda, class... Args>
-internal::Benchmark* RegisterBenchmark(const char* name, Lambda&& fn,
-                                       Args&&... args) {
-  return benchmark::RegisterBenchmark(
-      name, [=](benchmark::State& st) { fn(st, args...); });
-}
-#else
-#define BENCHMARK_HAS_NO_VARIADIC_REGISTER_BENCHMARK
-#endif
-
-// The base class for all fixture tests.
-class Fixture : public internal::Benchmark {
- public:
-  Fixture() : internal::Benchmark("") {}
-
-  void Run(State& st) override {
-    this->SetUp(st);
-    this->BenchmarkCase(st);
-    this->TearDown(st);
-  }
-
-  // These will be deprecated ...
-  virtual void SetUp(const State&) {}
-  virtual void TearDown(const State&) {}
-  // ... In favor of these.
-  virtual void SetUp(State& st) { SetUp(const_cast<const State&>(st)); }
-  virtual void TearDown(State& st) { TearDown(const_cast<const State&>(st)); }
-
- protected:
-  virtual void BenchmarkCase(State&) = 0;
-};
-
-}  // namespace benchmark
-
-// ------------------------------------------------------
-// Macro to register benchmarks
-
-// Check that __COUNTER__ is defined and that __COUNTER__ increases by 1
-// every time it is expanded. X + 1 == X + 0 is used in case X is defined to be
-// empty. If X is empty the expression becomes (+1 == +0).
-#if defined(__COUNTER__) && (__COUNTER__ + 1 == __COUNTER__ + 0)
-#define BENCHMARK_PRIVATE_UNIQUE_ID __COUNTER__
-#else
-#define BENCHMARK_PRIVATE_UNIQUE_ID __LINE__
-#endif
-
-// Helpers for generating unique variable names
-#define BENCHMARK_PRIVATE_NAME(n) \
-  BENCHMARK_PRIVATE_CONCAT(_benchmark_, BENCHMARK_PRIVATE_UNIQUE_ID, n)
-#define BENCHMARK_PRIVATE_CONCAT(a, b, c) BENCHMARK_PRIVATE_CONCAT2(a, b, c)
-#define BENCHMARK_PRIVATE_CONCAT2(a, b, c) a##b##c
-
-#define BENCHMARK_PRIVATE_DECLARE(n)                                 \
-  static ::benchmark::internal::Benchmark* BENCHMARK_PRIVATE_NAME(n) \
-      BENCHMARK_UNUSED
-
-#define BENCHMARK(n)                                     \
-  BENCHMARK_PRIVATE_DECLARE(n) =                         \
-      (::benchmark::internal::RegisterBenchmarkInternal( \
-          new ::benchmark::internal::FunctionBenchmark(#n, n)))
-
-// Old-style macros
-#define BENCHMARK_WITH_ARG(n, a) BENCHMARK(n)->Arg((a))
-#define BENCHMARK_WITH_ARG2(n, a1, a2) BENCHMARK(n)->Args({(a1), (a2)})
-#define BENCHMARK_WITH_UNIT(n, t) BENCHMARK(n)->Unit((t))
-#define BENCHMARK_RANGE(n, lo, hi) BENCHMARK(n)->Range((lo), (hi))
-#define BENCHMARK_RANGE2(n, l1, h1, l2, h2) \
-  BENCHMARK(n)->RangePair({{(l1), (h1)}, {(l2), (h2)}})
-
-#ifdef BENCHMARK_HAS_CXX11
-
-// Register a benchmark which invokes the function specified by `func`
-// with the additional arguments specified by `...`.
-//
-// For example:
-//
-// template <class ...ExtraArgs>`
-// void BM_takes_args(benchmark::State& state, ExtraArgs&&... extra_args) {
-//  [...]
-//}
-// /* Registers a benchmark named "BM_takes_args/int_string_test` */
-// BENCHMARK_CAPTURE(BM_takes_args, int_string_test, 42, std::string("abc"));
-#define BENCHMARK_CAPTURE(func, test_case_name, ...)     \
-  BENCHMARK_PRIVATE_DECLARE(func) =                      \
-      (::benchmark::internal::RegisterBenchmarkInternal( \
-          new ::benchmark::internal::FunctionBenchmark(  \
-              #func "/" #test_case_name,                 \
-              [](::benchmark::State& st) { func(st, __VA_ARGS__); })))
-
-#endif  // BENCHMARK_HAS_CXX11
-
-// This will register a benchmark for a templatized function.  For example:
-//
-// template<int arg>
-// void BM_Foo(int iters);
-//
-// BENCHMARK_TEMPLATE(BM_Foo, 1);
-//
-// will register BM_Foo<1> as a benchmark.
-#define BENCHMARK_TEMPLATE1(n, a)                        \
-  BENCHMARK_PRIVATE_DECLARE(n) =                         \
-      (::benchmark::internal::RegisterBenchmarkInternal( \
-          new ::benchmark::internal::FunctionBenchmark(#n "<" #a ">", n<a>)))
-
-#define BENCHMARK_TEMPLATE2(n, a, b)                                         \
-  BENCHMARK_PRIVATE_DECLARE(n) =                                             \
-      (::benchmark::internal::RegisterBenchmarkInternal(                     \
-          new ::benchmark::internal::FunctionBenchmark(#n "<" #a "," #b ">", \
-                                                       n<a, b>)))
-
-#ifdef BENCHMARK_HAS_CXX11
-#define BENCHMARK_TEMPLATE(n, ...)                       \
-  BENCHMARK_PRIVATE_DECLARE(n) =                         \
-      (::benchmark::internal::RegisterBenchmarkInternal( \
-          new ::benchmark::internal::FunctionBenchmark(  \
-              #n "<" #__VA_ARGS__ ">", n<__VA_ARGS__>)))
-#else
-#define BENCHMARK_TEMPLATE(n, a) BENCHMARK_TEMPLATE1(n, a)
-#endif
-
-#define BENCHMARK_PRIVATE_DECLARE_F(BaseClass, Method)        \
-  class BaseClass##_##Method##_Benchmark : public BaseClass { \
-   public:                                                    \
-    BaseClass##_##Method##_Benchmark() : BaseClass() {        \
-      this->SetName(#BaseClass "/" #Method);                  \
-    }                                                         \
-                                                              \
-   protected:                                                 \
-    virtual void BenchmarkCase(::benchmark::State&);          \
-  };
-
-#define BENCHMARK_TEMPLATE1_PRIVATE_DECLARE_F(BaseClass, Method, a) \
-  class BaseClass##_##Method##_Benchmark : public BaseClass<a> {    \
-   public:                                                          \
-    BaseClass##_##Method##_Benchmark() : BaseClass<a>() {           \
-      this->SetName(#BaseClass"<" #a ">/" #Method);                 \
-    }                                                               \
-                                                                    \
-   protected:                                                       \
-    virtual void BenchmarkCase(::benchmark::State&);                \
-  };
-
-#define BENCHMARK_TEMPLATE2_PRIVATE_DECLARE_F(BaseClass, Method, a, b) \
-  class BaseClass##_##Method##_Benchmark : public BaseClass<a, b> {    \
-   public:                                                             \
-    BaseClass##_##Method##_Benchmark() : BaseClass<a, b>() {           \
-      this->SetName(#BaseClass"<" #a "," #b ">/" #Method);             \
-    }                                                                  \
-                                                                       \
-   protected:                                                          \
-    virtual void BenchmarkCase(::benchmark::State&);                   \
-  };
-
-#ifdef BENCHMARK_HAS_CXX11
-#define BENCHMARK_TEMPLATE_PRIVATE_DECLARE_F(BaseClass, Method, ...)       \
-  class BaseClass##_##Method##_Benchmark : public BaseClass<__VA_ARGS__> { \
-   public:                                                                 \
-    BaseClass##_##Method##_Benchmark() : BaseClass<__VA_ARGS__>() {        \
-      this->SetName(#BaseClass"<" #__VA_ARGS__ ">/" #Method);              \
-    }                                                                      \
-                                                                           \
-   protected:                                                              \
-    virtual void BenchmarkCase(::benchmark::State&);                       \
-  };
-#else
-#define BENCHMARK_TEMPLATE_PRIVATE_DECLARE_F(n, a) BENCHMARK_TEMPLATE1_PRIVATE_DECLARE_F(n, a)
-#endif
-
-#define BENCHMARK_DEFINE_F(BaseClass, Method)    \
-  BENCHMARK_PRIVATE_DECLARE_F(BaseClass, Method) \
-  void BaseClass##_##Method##_Benchmark::BenchmarkCase
-
-#define BENCHMARK_TEMPLATE1_DEFINE_F(BaseClass, Method, a)    \
-  BENCHMARK_TEMPLATE1_PRIVATE_DECLARE_F(BaseClass, Method, a) \
-  void BaseClass##_##Method##_Benchmark::BenchmarkCase
-
-#define BENCHMARK_TEMPLATE2_DEFINE_F(BaseClass, Method, a, b)    \
-  BENCHMARK_TEMPLATE2_PRIVATE_DECLARE_F(BaseClass, Method, a, b) \
-  void BaseClass##_##Method##_Benchmark::BenchmarkCase
-
-#ifdef BENCHMARK_HAS_CXX11
-#define BENCHMARK_TEMPLATE_DEFINE_F(BaseClass, Method, ...)            \
-  BENCHMARK_TEMPLATE_PRIVATE_DECLARE_F(BaseClass, Method, __VA_ARGS__) \
-  void BaseClass##_##Method##_Benchmark::BenchmarkCase
-#else
-#define BENCHMARK_TEMPLATE_DEFINE_F(BaseClass, Method, a) BENCHMARK_TEMPLATE1_DEFINE_F(BaseClass, Method, a)
-#endif
-
-#define BENCHMARK_REGISTER_F(BaseClass, Method) \
-  BENCHMARK_PRIVATE_REGISTER_F(BaseClass##_##Method##_Benchmark)
-
-#define BENCHMARK_PRIVATE_REGISTER_F(TestName) \
-  BENCHMARK_PRIVATE_DECLARE(TestName) =        \
-      (::benchmark::internal::RegisterBenchmarkInternal(new TestName()))
-
-// This macro will define and register a benchmark within a fixture class.
-#define BENCHMARK_F(BaseClass, Method)           \
-  BENCHMARK_PRIVATE_DECLARE_F(BaseClass, Method) \
-  BENCHMARK_REGISTER_F(BaseClass, Method);       \
-  void BaseClass##_##Method##_Benchmark::BenchmarkCase
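-
-// A sketch of a fixture benchmark using this macro; the fixture body is
-// illustrative:
-//
-//   class MyFixture : public benchmark::Fixture {
-//    public:
-//     void SetUp(benchmark::State&) override { /* acquire resources */ }
-//     void TearDown(benchmark::State&) override { /* release resources */ }
-//   };
-//
-//   BENCHMARK_F(MyFixture, Foo)(benchmark::State& st) {
-//     for (auto _ : st) {
-//       // measured work
-//     }
-//   }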
-
-#define BENCHMARK_TEMPLATE1_F(BaseClass, Method, a)           \
-  BENCHMARK_TEMPLATE1_PRIVATE_DECLARE_F(BaseClass, Method, a) \
-  BENCHMARK_REGISTER_F(BaseClass, Method);                    \
-  void BaseClass##_##Method##_Benchmark::BenchmarkCase
-
-#define BENCHMARK_TEMPLATE2_F(BaseClass, Method, a, b)           \
-  BENCHMARK_TEMPLATE2_PRIVATE_DECLARE_F(BaseClass, Method, a, b) \
-  BENCHMARK_REGISTER_F(BaseClass, Method);                       \
-  void BaseClass##_##Method##_Benchmark::BenchmarkCase
-
-#ifdef BENCHMARK_HAS_CXX11
-#define BENCHMARK_TEMPLATE_F(BaseClass, Method, ...)           \
-  BENCHMARK_TEMPLATE_PRIVATE_DECLARE_F(BaseClass, Method, __VA_ARGS__) \
-  BENCHMARK_REGISTER_F(BaseClass, Method);                     \
-  void BaseClass##_##Method##_Benchmark::BenchmarkCase
-#else
-#define BENCHMARK_TEMPLATE_F(BaseClass, Method, a) BENCHMARK_TEMPLATE1_F(BaseClass, Method, a)
-#endif
-
-// Helper macro to create a main routine in a test that runs the benchmarks
-#define BENCHMARK_MAIN()                   \
-  int main(int argc, char** argv) {        \
-    ::benchmark::Initialize(&argc, argv);  \
-    if (::benchmark::ReportUnrecognizedArguments(argc, argv)) return 1; \
-    ::benchmark::RunSpecifiedBenchmarks(); \
-  }                                        \
-  int main(int, char**)
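-
-// A complete minimal benchmark file would then look like this; 'BM_Noop' is
-// illustrative:
-//
-//   #include "benchmark/benchmark.h"
-//
-//   static void BM_Noop(benchmark::State& state) {
-//     for (auto _ : state) {
-//       benchmark::ClobberMemory();
-//     }
-//   }
-//   BENCHMARK(BM_Noop);
-//
-//   BENCHMARK_MAIN();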
-
-
-// ------------------------------------------------------
-// Benchmark Reporters
-
-namespace benchmark {
-
-struct CPUInfo {
-  struct CacheInfo {
-    std::string type;
-    int level;
-    int size;
-    int num_sharing;
-  };
-
-  int num_cpus;
-  double cycles_per_second;
-  std::vector<CacheInfo> caches;
-  bool scaling_enabled;
-
-  static const CPUInfo& Get();
-
- private:
-  CPUInfo();
-  BENCHMARK_DISALLOW_COPY_AND_ASSIGN(CPUInfo);
-};
-
-// Interface for custom benchmark result printers.
-// By default, benchmark reports are printed to stdout. However, an application
-// can control the destination of the reports by calling
-// RunSpecifiedBenchmarks and passing it a custom reporter object.
-// The reporter object must implement the following interface.
-class BenchmarkReporter {
- public:
-  struct Context {
-    CPUInfo const& cpu_info;
-    // The number of chars in the longest benchmark name.
-    size_t name_field_width;
-    static const char *executable_name;
-    Context();
-  };
-
-  struct Run {
-    Run()
-        : error_occurred(false),
-          iterations(1),
-          time_unit(kNanosecond),
-          real_accumulated_time(0),
-          cpu_accumulated_time(0),
-          bytes_per_second(0),
-          items_per_second(0),
-          max_heapbytes_used(0),
-          complexity(oNone),
-          complexity_lambda(),
-          complexity_n(0),
-          report_big_o(false),
-          report_rms(false),
-          counters() {}
-
-    std::string benchmark_name;
-    std::string report_label;  // Empty if not set by benchmark.
-    bool error_occurred;
-    std::string error_message;
-
-    int64_t iterations;
-    TimeUnit time_unit;
-    double real_accumulated_time;
-    double cpu_accumulated_time;
-
-    // Return a value representing the real time per iteration in the unit
-    // specified by 'time_unit'.
-    // NOTE: If 'iterations' is zero the returned value represents the
-    // accumulated time.
-    double GetAdjustedRealTime() const;
-
-    // Return a value representing the cpu time per iteration in the unit
-    // specified by 'time_unit'.
-    // NOTE: If 'iterations' is zero the returned value represents the
-    // accumulated time.
-    double GetAdjustedCPUTime() const;
-
-    // Zero if not set by benchmark.
-    double bytes_per_second;
-    double items_per_second;
-
-    // This is set to 0.0 if memory tracing is not enabled.
-    double max_heapbytes_used;
-
-    // Keep track of arguments to compute asymptotic complexity
-    BigO complexity;
-    BigOFunc* complexity_lambda;
-    int64_t complexity_n;
-
-    // what statistics to compute from the measurements
-    const std::vector<Statistics>* statistics;
-
-    // Inform print function whether the current run is a complexity report
-    bool report_big_o;
-    bool report_rms;
-
-    UserCounters counters;
-  };
-
-  // Construct a BenchmarkReporter with the output stream set to 'std::cout'
-  // and the error stream set to 'std::cerr'
-  BenchmarkReporter();
-
-  // Called once for every suite of benchmarks run.
-  // The parameter "context" contains information that the
-  // reporter may wish to use when generating its report, for example the
-  // platform under which the benchmarks are running. The benchmark run is
-  // never started if this function returns false, allowing the reporter
-  // to skip runs based on the context information.
-  virtual bool ReportContext(const Context& context) = 0;
-
-  // Called once for each group of benchmark runs, gives information about
-  // cpu-time and heap memory usage during the benchmark run. If the group
-  // of runs contained more than two entries then 'report' contains additional
-  // elements representing the mean and standard deviation of those runs.
-  // Additionally if this group of runs was the last in a family of benchmarks
-  // 'reports' contains additional entries representing the asymptotic
-  // complexity and RMS of that benchmark family.
-  virtual void ReportRuns(const std::vector<Run>& report) = 0;
-
-  // Called once and only once after every group of benchmarks is run and
-  // reported.
-  virtual void Finalize() {}
-
-  // REQUIRES: The object referenced by 'out' is valid for the lifetime
-  // of the reporter.
-  void SetOutputStream(std::ostream* out) {
-    assert(out);
-    output_stream_ = out;
-  }
-
-  // REQUIRES: The object referenced by 'err' is valid for the lifetime
-  // of the reporter.
-  void SetErrorStream(std::ostream* err) {
-    assert(err);
-    error_stream_ = err;
-  }
-
-  std::ostream& GetOutputStream() const { return *output_stream_; }
-
-  std::ostream& GetErrorStream() const { return *error_stream_; }
-
-  virtual ~BenchmarkReporter();
-
-  // Write a human readable string to 'out' representing the specified
-  // 'context'.
-  // REQUIRES: 'out' is non-null.
-  static void PrintBasicContext(std::ostream* out, Context const& context);
-
- private:
-  std::ostream* output_stream_;
-  std::ostream* error_stream_;
-};
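-
-// A sketch of supplying a custom reporter; 'QuietReporter' is an illustrative
-// subclass of the ConsoleReporter defined below that skips context output:
-//
-//   class QuietReporter : public benchmark::ConsoleReporter {
-//     bool ReportContext(const Context&) override { return true; }
-//   };
-//
-//   int main(int argc, char** argv) {
-//     benchmark::Initialize(&argc, argv);
-//     QuietReporter reporter;
-//     benchmark::RunSpecifiedBenchmarks(&reporter);
-//   }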
-
-// Simple reporter that outputs benchmark data to the console. This is the
-// default reporter used by RunSpecifiedBenchmarks().
-class ConsoleReporter : public BenchmarkReporter {
-public:
-  enum OutputOptions {
-    OO_None = 0,
-    OO_Color = 1,
-    OO_Tabular = 2,
-    OO_ColorTabular = OO_Color|OO_Tabular,
-    OO_Defaults = OO_ColorTabular
-  };
-  explicit ConsoleReporter(OutputOptions opts_ = OO_Defaults)
-      : output_options_(opts_), name_field_width_(0),
-        prev_counters_(), printed_header_(false) {}
-
-  bool ReportContext(const Context& context) override;
-  void ReportRuns(const std::vector<Run>& reports) override;
-
- protected:
-  virtual void PrintRunData(const Run& report);
-  virtual void PrintHeader(const Run& report);
-
-  OutputOptions output_options_;
-  size_t name_field_width_;
-  UserCounters prev_counters_;
-  bool printed_header_;
-};
-
-class JSONReporter : public BenchmarkReporter {
- public:
-  JSONReporter() : first_report_(true) {}
-  bool ReportContext(const Context& context) override;
-  void ReportRuns(const std::vector<Run>& reports) override;
-  void Finalize() override;
-
- private:
-  void PrintRunData(const Run& report);
-
-  bool first_report_;
-};
-
-class CSVReporter : public BenchmarkReporter {
- public:
-  CSVReporter() : printed_header_(false) {}
-  bool ReportContext(const Context& context) override;
-  void ReportRuns(const std::vector<Run>& reports) override;
-
- private:
-  void PrintRunData(const Run& report);
-
-  bool printed_header_;
-  std::set< std::string > user_counter_names_;
-};
-
-inline const char* GetTimeUnitString(TimeUnit unit) {
-  switch (unit) {
-    case kMillisecond:
-      return "ms";
-    case kMicrosecond:
-      return "us";
-    case kNanosecond:
-      return "ns";
-  }
-  BENCHMARK_UNREACHABLE();
-}
-
-inline double GetTimeUnitMultiplier(TimeUnit unit) {
-  switch (unit) {
-    case kMillisecond:
-      return 1e3;
-    case kMicrosecond:
-      return 1e6;
-    case kNanosecond:
-      return 1e9;
-  }
-  BENCHMARK_UNREACHABLE();
-}
-
-} // namespace benchmark
-
-#endif  // BENCHMARK_BENCHMARK_H_

diff --git a/llvm/utils/benchmark/mingw.py b/llvm/utils/benchmark/mingw.py
deleted file mode 100644
index 0b69692ca2a40..0000000000000
--- a/llvm/utils/benchmark/mingw.py
+++ /dev/null
@@ -1,320 +0,0 @@
-#!/usr/bin/env python
-# encoding: utf-8
-
-import argparse
-import errno
-import logging
-import os
-import platform
-import re
-import sys
-import subprocess
-import tempfile
-
-try:
-    import winreg
-except ImportError:
-    import _winreg as winreg
-try:
-    import urllib.request as request
-except ImportError:
-    import urllib as request
-try:
-    import urllib.parse as parse
-except ImportError:
-    import urlparse as parse
-
-class EmptyLogger(object):
-    '''
-    Provides an implementation that performs no logging
-    '''
-    def debug(self, *k, **kw):
-        pass
-    def info(self, *k, **kw):
-        pass
-    def warn(self, *k, **kw):
-        pass
-    def error(self, *k, **kw):
-        pass
-    def critical(self, *k, **kw):
-        pass
-    def setLevel(self, *k, **kw):
-        pass
-
-urls = (
-    'http://downloads.sourceforge.net/project/mingw-w64/Toolchains%20'
-        'targetting%20Win32/Personal%20Builds/mingw-builds/installer/'
-        'repository.txt',
-    'http://downloads.sourceforge.net/project/mingwbuilds/host-windows/'
-        'repository.txt'
-)
-'''
-A list of mingw-build repositories
-'''
-
-def repository(urls = urls, log = EmptyLogger()):
-    '''
-    Downloads and parses mingw-build repository files
-    '''
-    log.info('getting mingw-builds repository')
-    versions = {}
-    re_sourceforge = re.compile(r'http://sourceforge.net/projects/([^/]+)/files')
-    re_sub = r'http://downloads.sourceforge.net/project/\1'
-    for url in urls:
-        log.debug(' - requesting: %s', url)
-        socket = request.urlopen(url)
-        repo = socket.read()
-        if not isinstance(repo, str):
-            repo = repo.decode()
-        socket.close()
-        for entry in repo.split('\n')[:-1]:
-            value = entry.split('|')
-            version = tuple([int(n) for n in value[0].strip().split('.')])
-            version = versions.setdefault(version, {})
-            arch = value[1].strip()
-            if arch == 'x32':
-                arch = 'i686'
-            elif arch == 'x64':
-                arch = 'x86_64'
-            arch = version.setdefault(arch, {})
-            threading = arch.setdefault(value[2].strip(), {})
-            exceptions = threading.setdefault(value[3].strip(), {})
-            revision = exceptions.setdefault(int(value[4].strip()[3:]),
-                re_sourceforge.sub(re_sub, value[5].strip()))
-    return versions
-
-def find_in_path(file, path=None):
-    '''
-    Attempts to find an executable in the path
-    '''
-    if platform.system() == 'Windows':
-        file += '.exe'
-    if path is None:
-        path = os.environ.get('PATH', '')
-    if type(path) is type(''):
-        path = path.split(os.pathsep)
-    return list(filter(os.path.exists,
-        map(lambda dir, file=file: os.path.join(dir, file), path)))
-
-def find_7zip(log = EmptyLogger()):
-    '''
-    Attempts to find 7zip for unpacking the mingw-build archives
-    '''
-    log.info('finding 7zip')
-    path = find_in_path('7z')
-    if not path:
-        key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, r'SOFTWARE\7-Zip')
-        path, _ = winreg.QueryValueEx(key, 'Path')
-        path = [os.path.join(path, '7z.exe')]
-    log.debug('found \'%s\'', path[0])
-    return path[0]
-
-def unpack(archive, location, log = EmptyLogger()):
-    '''
-    Unpacks a mingw-builds archive
-    '''
-    sevenzip = find_7zip(log)
-    log.info('unpacking %s', os.path.basename(archive))
-    cmd = [sevenzip, 'x', archive, '-o' + location, '-y']
-    log.debug(' - %r', cmd)
-    with open(os.devnull, 'w') as devnull:
-        subprocess.check_call(cmd, stdout = devnull)
-
-def download(url, location, log = EmptyLogger()):
-    '''
-    Downloads and unpacks a mingw-builds archive
-    '''
-    log.info('downloading MinGW')
-    log.debug(' - url: %s', url)
-    log.debug(' - location: %s', location)
-
-    re_content = re.compile(r'attachment;[ \t]*filename=(")?([^"]*)(")?[\r\n]*')
-
-    stream = request.urlopen(url)
-    try:
-        content = stream.getheader('Content-Disposition') or ''
-    except AttributeError:
-        content = stream.headers.getheader('Content-Disposition') or ''
-    matches = re_content.match(content)
-    if matches:
-        filename = matches.group(2)
-    else:
-        parsed = parse.urlparse(stream.geturl())
-        filename = os.path.basename(parsed.path)
-
-    try:
-        os.makedirs(location)
-    except OSError as e:
-        if e.errno == errno.EEXIST and os.path.isdir(location):
-            pass
-        else:
-            raise
-
-    archive = os.path.join(location, filename)
-    with open(archive, 'wb') as out:
-        while True:
-            buf = stream.read(1024)
-            if not buf:
-                break
-            out.write(buf)
-    unpack(archive, location, log = log)
-    os.remove(archive)
-
-    possible = os.path.join(location, 'mingw64')
-    if not os.path.exists(possible):
-        possible = os.path.join(location, 'mingw32')
-        if not os.path.exists(possible):
-            raise ValueError('Failed to find unpacked MinGW: ' + possible)
-    return possible
-
-def root(location = None, arch = None, version = None, threading = None,
-        exceptions = None, revision = None, log = EmptyLogger()):
-    '''
-    Returns the root folder of a specific version of the mingw-builds variant
-    of gcc. Will download the compiler if needed
-    '''
-
-    # Get the repository if we don't have all the information
-    if not (arch and version and threading and exceptions and revision):
-        versions = repository(log = log)
-
-    # Determine some defaults
-    version = version or max(versions.keys())
-    if not arch:
-        arch = platform.machine().lower()
-        if arch == 'x86':
-            arch = 'i686'
-        elif arch == 'amd64':
-            arch = 'x86_64'
-    if not threading:
-        keys = versions[version][arch].keys()
-        if 'posix' in keys:
-            threading = 'posix'
-        elif 'win32' in keys:
-            threading = 'win32'
-        else:
-            threading = keys[0]
-    if not exceptions:
-        keys = versions[version][arch][threading].keys()
-        if 'seh' in keys:
-            exceptions = 'seh'
-        elif 'sjlj' in keys:
-            exceptions = 'sjlj'
-        else:
-            exceptions = keys[0]
-    if revision is None:
-        revision = max(versions[version][arch][threading][exceptions].keys())
-    if not location:
-        location = os.path.join(tempfile.gettempdir(), 'mingw-builds')
-
-    # Get the download url
-    url = versions[version][arch][threading][exceptions][revision]
-
-    # Tell the user whatzzup
-    log.info('finding MinGW %s', '.'.join(str(v) for v in version))
-    log.debug(' - arch: %s', arch)
-    log.debug(' - threading: %s', threading)
-    log.debug(' - exceptions: %s', exceptions)
-    log.debug(' - revision: %s', revision)
-    log.debug(' - url: %s', url)
-
-    # Store each specific revision differently
-    slug = '{version}-{arch}-{threading}-{exceptions}-rev{revision}'
-    slug = slug.format(
-        version = '.'.join(str(v) for v in version),
-        arch = arch,
-        threading = threading,
-        exceptions = exceptions,
-        revision = revision
-    )
-    if arch == 'x86_64':
-        root_dir = os.path.join(location, slug, 'mingw64')
-    elif arch == 'i686':
-        root_dir = os.path.join(location, slug, 'mingw32')
-    else:
-        raise ValueError('Unknown MinGW arch: ' + arch)
-
-    # Download if needed
-    if not os.path.exists(root_dir):
-        downloaded = download(url, os.path.join(location, slug), log = log)
-        if downloaded != root_dir:
-            raise ValueError('The location of mingw did not match\n%s\n%s'
-                % (downloaded, root_dir))
-
-    return root_dir
-
-def str2ver(string):
-    '''
-    Converts a version string into a tuple
-    '''
-    try:
-        version = tuple(int(v) for v in string.split('.'))
-        if len(version) != 3:
-            raise ValueError()
-    except ValueError:
-        raise argparse.ArgumentTypeError(
-            'please provide a three digit version string')
-    return version
-
-def main():
-    '''
-    Invoked when the script is run directly by the python interpreter
-    '''
-    parser = argparse.ArgumentParser(
-        description = 'Downloads a specific version of MinGW',
-        formatter_class = argparse.ArgumentDefaultsHelpFormatter
-    )
-    parser.add_argument('--location',
-        help = 'the location to download the compiler to',
-        default = os.path.join(tempfile.gettempdir(), 'mingw-builds'))
-    parser.add_argument('--arch', required = True, choices = ['i686', 'x86_64'],
-        help = 'the target MinGW architecture string')
-    parser.add_argument('--version', type = str2ver,
-        help = 'the version of GCC to download')
-    parser.add_argument('--threading', choices = ['posix', 'win32'],
-        help = 'the threading type of the compiler')
-    parser.add_argument('--exceptions', choices = ['sjlj', 'seh', 'dwarf'],
-        help = 'the method to throw exceptions')
-    parser.add_argument('--revision', type=int,
-        help = 'the revision of the MinGW release')
-    group = parser.add_mutually_exclusive_group()
-    group.add_argument('-v', '--verbose', action='store_true',
-        help='increase the script output verbosity')
-    group.add_argument('-q', '--quiet', action='store_true',
-        help='only print errors and warnings')
-    args = parser.parse_args()
-
-    # Create the logger
-    logger = logging.getLogger('mingw')
-    handler = logging.StreamHandler()
-    formatter = logging.Formatter('%(message)s')
-    handler.setFormatter(formatter)
-    logger.addHandler(handler)
-    logger.setLevel(logging.INFO)
-    if args.quiet:
-        logger.setLevel(logging.WARN)
-    if args.verbose:
-        logger.setLevel(logging.DEBUG)
-
-    # Get MinGW
-    root_dir = root(location = args.location, arch = args.arch,
-        version = args.version, threading = args.threading,
-        exceptions = args.exceptions, revision = args.revision,
-        log = logger)
-
-    sys.stdout.write('%s\n' % os.path.join(root_dir, 'bin'))
-
-if __name__ == '__main__':
-    try:
-        main()
-    except IOError as e:
-        sys.stderr.write('IO error: %s\n' % e)
-        sys.exit(1)
-    except OSError as e:
-        sys.stderr.write('OS error: %s\n' % e)
-        sys.exit(1)
-    except KeyboardInterrupt as e:
-        sys.stderr.write('Killed\n')
-        sys.exit(1)

diff --git a/llvm/utils/benchmark/releasing.md b/llvm/utils/benchmark/releasing.md
deleted file mode 100644
index e74a176cc7312..0000000000000
--- a/llvm/utils/benchmark/releasing.md
+++ /dev/null
@@ -1,16 +0,0 @@
-# How to release
-
-* Make sure you're on main and synced to HEAD
-* Ensure the project builds and tests run (sanity check only, obviously)
-    * `parallel -j0 exec ::: test/*_test` can help ensure everything at least
-      passes
-* Prepare release notes
-    * `git log $(git describe --abbrev=0 --tags)..HEAD` gives you the list of
-      commits between the last annotated tag and HEAD
-    * Pick the most interesting.
-* Create a release through github's interface
-    * Note this will create a lightweight tag.
-    * Update this to an annotated tag:
-      * `git pull --tags`
-      * `git tag -a -f <tag> <tag>`
-      * `git push --force origin`

diff  --git a/llvm/utils/benchmark/src/CMakeLists.txt b/llvm/utils/benchmark/src/CMakeLists.txt
deleted file mode 100644
index 0abfe3cfd53c2..0000000000000
--- a/llvm/utils/benchmark/src/CMakeLists.txt
+++ /dev/null
@@ -1,110 +0,0 @@
-# Allow the source files to find headers in src/
-include_directories(${PROJECT_SOURCE_DIR}/src)
-
-if (DEFINED BENCHMARK_CXX_LINKER_FLAGS)
-  list(APPEND CMAKE_SHARED_LINKER_FLAGS ${BENCHMARK_CXX_LINKER_FLAGS})
-  list(APPEND CMAKE_MODULE_LINKER_FLAGS ${BENCHMARK_CXX_LINKER_FLAGS})
-endif()
-
-file(GLOB
-  SOURCE_FILES
-    *.cc
-    ${PROJECT_SOURCE_DIR}/include/benchmark/*.h
-    ${CMAKE_CURRENT_SOURCE_DIR}/*.h)
-file(GLOB BENCHMARK_MAIN "benchmark_main.cc")
-foreach(item ${BENCHMARK_MAIN})
-  list(REMOVE_ITEM SOURCE_FILES "${item}")
-endforeach()
-
-add_library(benchmark ${SOURCE_FILES})
-set_target_properties(benchmark PROPERTIES
-  OUTPUT_NAME "benchmark"
-  VERSION ${GENERIC_LIB_VERSION}
-  SOVERSION ${GENERIC_LIB_SOVERSION}
-  FOLDER "Utils"
-)
-target_include_directories(benchmark PUBLIC
-    $<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/../include>
-    )
-
-# Link threads.
-target_link_libraries(benchmark  ${BENCHMARK_CXX_LIBRARIES} ${CMAKE_THREAD_LIBS_INIT})
-find_library(LIBRT rt)
-if(LIBRT)
-  target_link_libraries(benchmark ${LIBRT})
-endif()
-
-# We need extra libraries on Windows
-if(${CMAKE_SYSTEM_NAME} MATCHES "Windows")
-  target_link_libraries(benchmark shlwapi)
-endif()
-
-# We need extra libraries on Solaris
-if(${CMAKE_SYSTEM_NAME} MATCHES "SunOS")
-  target_link_libraries(benchmark kstat)
-endif()
-
-# Benchmark main library
-add_library(benchmark_main "benchmark_main.cc")
-set_target_properties(benchmark_main PROPERTIES
-  OUTPUT_NAME "benchmark_main"
-  VERSION ${GENERIC_LIB_VERSION}
-  SOVERSION ${GENERIC_LIB_SOVERSION}
-  FOLDER "Utils"
-)
-target_include_directories(benchmark_main PUBLIC
-    $<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/../include>
-    )
-target_link_libraries(benchmark_main benchmark)
-
-set(include_install_dir "include")
-set(lib_install_dir "lib/")
-set(bin_install_dir "bin/")
-set(config_install_dir "lib/cmake/${PROJECT_NAME}")
-set(pkgconfig_install_dir "lib/pkgconfig")
-
-set(generated_dir "${CMAKE_CURRENT_BINARY_DIR}/generated")
-
-set(version_config "${generated_dir}/${PROJECT_NAME}ConfigVersion.cmake")
-set(project_config "${generated_dir}/${PROJECT_NAME}Config.cmake")
-set(pkg_config "${generated_dir}/${PROJECT_NAME}.pc")
-set(targets_export_name "${PROJECT_NAME}Targets")
-
-set(namespace "${PROJECT_NAME}::")
-
-include(CMakePackageConfigHelpers)
-write_basic_package_version_file(
-    "${version_config}" VERSION ${GIT_VERSION} COMPATIBILITY SameMajorVersion
-)
-
-configure_file("${PROJECT_SOURCE_DIR}/cmake/Config.cmake.in" "${project_config}" @ONLY)
-configure_file("${PROJECT_SOURCE_DIR}/cmake/benchmark.pc.in" "${pkg_config}" @ONLY)
-
-if (BENCHMARK_ENABLE_INSTALL)
-  # Install target (will install the library to specified CMAKE_INSTALL_PREFIX variable)
-  install(
-    TARGETS benchmark benchmark_main
-    EXPORT ${targets_export_name}
-    ARCHIVE DESTINATION ${lib_install_dir}
-    LIBRARY DESTINATION ${lib_install_dir}
-    RUNTIME DESTINATION ${bin_install_dir}
-    INCLUDES DESTINATION ${include_install_dir})
-
-  install(
-    DIRECTORY "${PROJECT_SOURCE_DIR}/include/benchmark"
-    DESTINATION ${include_install_dir}
-    FILES_MATCHING PATTERN "*.*h")
-
-  install(
-      FILES "${project_config}" "${version_config}"
-      DESTINATION "${config_install_dir}")
-
-  install(
-      FILES "${pkg_config}"
-      DESTINATION "${pkgconfig_install_dir}")
-
-  install(
-      EXPORT "${targets_export_name}"
-      NAMESPACE "${namespace}"
-      DESTINATION "${config_install_dir}")
-endif()

diff  --git a/llvm/utils/benchmark/src/arraysize.h b/llvm/utils/benchmark/src/arraysize.h
deleted file mode 100644
index 51a50f2dff271..0000000000000
--- a/llvm/utils/benchmark/src/arraysize.h
+++ /dev/null
@@ -1,33 +0,0 @@
-#ifndef BENCHMARK_ARRAYSIZE_H_
-#define BENCHMARK_ARRAYSIZE_H_
-
-#include "internal_macros.h"
-
-namespace benchmark {
-namespace internal {
-// The arraysize(arr) macro returns the # of elements in an array arr.
-// The expression is a compile-time constant, and therefore can be
-// used in defining new arrays, for example.  If you use arraysize on
-// a pointer by mistake, you will get a compile-time error.
-//
-
-// This template function declaration is used in defining arraysize.
-// Note that the function doesn't need an implementation, as we only
-// use its type.
-template <typename T, size_t N>
-char (&ArraySizeHelper(T (&array)[N]))[N];
-
-// That gcc wants both of these prototypes seems mysterious. VC, for
-// its part, can't decide which to use (another mystery). Matching of
-// template overloads: the final frontier.
-#ifndef COMPILER_MSVC
-template <typename T, size_t N>
-char (&ArraySizeHelper(const T (&array)[N]))[N];
-#endif
-
-#define arraysize(array) (sizeof(::benchmark::internal::ArraySizeHelper(array)))
-
-}  // end namespace internal
-}  // end namespace benchmark
-
-#endif  // BENCHMARK_ARRAYSIZE_H_
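
The deleted header's trick bears a quick illustration: ArraySizeHelper returns a reference to a char array with the same extent as its argument, so sizeof recovers the element count at compile time. A minimal usage sketch, assuming arraysize.h is on the include path (illustrative only, not part of this patch):

    #include <cstdio>

    #include "arraysize.h"

    int main() {
      int fib[] = {1, 1, 2, 3, 5, 8};
      // arraysize(fib) is a compile-time constant, so it can size new arrays.
      static_assert(arraysize(fib) == 6, "six elements");
      char scratch[arraysize(fib)];
      (void)scratch;
      std::printf("%zu\n", arraysize(fib));
      // arraysize(some_pointer) would fail to compile, by design.
    }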

diff  --git a/llvm/utils/benchmark/src/benchmark.cc b/llvm/utils/benchmark/src/benchmark.cc
deleted file mode 100644
index 82b15ac7090cf..0000000000000
--- a/llvm/utils/benchmark/src/benchmark.cc
+++ /dev/null
@@ -1,630 +0,0 @@
-// Copyright 2015 Google Inc. All rights reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-//     http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-#include "benchmark/benchmark.h"
-#include "benchmark_api_internal.h"
-#include "internal_macros.h"
-
-#ifndef BENCHMARK_OS_WINDOWS
-#ifndef BENCHMARK_OS_FUCHSIA
-#include <sys/resource.h>
-#endif
-#include <sys/time.h>
-#include <unistd.h>
-#endif
-
-#include <algorithm>
-#include <atomic>
-#include <condition_variable>
-#include <cstdio>
-#include <cstdlib>
-#include <fstream>
-#include <iostream>
-#include <memory>
-#include <string>
-#include <thread>
-
-#include "check.h"
-#include "colorprint.h"
-#include "commandlineflags.h"
-#include "complexity.h"
-#include "counter.h"
-#include "internal_macros.h"
-#include "log.h"
-#include "mutex.h"
-#include "re.h"
-#include "statistics.h"
-#include "string_util.h"
-#include "thread_manager.h"
-#include "thread_timer.h"
-
-DEFINE_bool(benchmark_list_tests, false,
-            "Print a list of benchmarks. This option overrides all other "
-            "options.");
-
-DEFINE_string(benchmark_filter, ".",
-              "A regular expression that specifies the set of benchmarks "
-              "to execute.  If this flag is empty, no benchmarks are run.  "
-              "If this flag is the string \"all\", all benchmarks linked "
-              "into the process are run.");
-
-DEFINE_double(benchmark_min_time, 0.5,
-              "Minimum number of seconds we should run benchmark before "
-              "results are considered significant.  For cpu-time based "
-              "tests, this is the lower bound on the total cpu time "
-              "used by all threads that make up the test.  For real-time "
-              "based tests, this is the lower bound on the elapsed time "
-              "of the benchmark execution, regardless of number of "
-              "threads.");
-
-DEFINE_int32(benchmark_repetitions, 1,
-             "The number of runs of each benchmark. If greater than 1, the "
-             "mean and standard deviation of the runs will be reported.");
-
-DEFINE_bool(benchmark_report_aggregates_only, false,
-            "Report the result of each benchmark repetitions. When 'true' is "
-            "specified only the mean, standard deviation, and other statistics "
-            "are reported for repeated benchmarks.");
-
-DEFINE_string(benchmark_format, "console",
-              "The format to use for console output. Valid values are "
-              "'console', 'json', or 'csv'.");
-
-DEFINE_string(benchmark_out_format, "json",
-              "The format to use for file output. Valid values are "
-              "'console', 'json', or 'csv'.");
-
-DEFINE_string(benchmark_out, "", "The file to write additional output to");
-
-DEFINE_string(benchmark_color, "auto",
-              "Whether to use colors in the output.  Valid values: "
-              "'true'/'yes'/1, 'false'/'no'/0, and 'auto'. 'auto' means to use "
-              "colors if the output is being sent to a terminal and the TERM "
-              "environment variable is set to a terminal type that supports "
-              "colors.");
-
-DEFINE_bool(benchmark_counters_tabular, false,
-            "Whether to use tabular format when printing user counters to "
-            "the console.  Valid values: 'true'/'yes'/1, 'false'/'no'/0."
-            "Defaults to false.");
-
-DEFINE_int32(v, 0, "The level of verbose logging to output");
-
-namespace benchmark {
-
-namespace {
-static const size_t kMaxIterations = 1000000000;
-}  // end namespace
-
-namespace internal {
-
-void UseCharPointer(char const volatile*) {}
-
-namespace {
-
-BenchmarkReporter::Run CreateRunReport(
-    const benchmark::internal::Benchmark::Instance& b,
-    const internal::ThreadManager::Result& results,
-    double seconds) {
-  // Create report about this benchmark run.
-  BenchmarkReporter::Run report;
-
-  report.benchmark_name = b.name;
-  report.error_occurred = results.has_error_;
-  report.error_message = results.error_message_;
-  report.report_label = results.report_label_;
-  // This is the total iterations across all threads.
-  report.iterations = results.iterations;
-  report.time_unit = b.time_unit;
-
-  if (!report.error_occurred) {
-    double bytes_per_second = 0;
-    if (results.bytes_processed > 0 && seconds > 0.0) {
-      bytes_per_second = (results.bytes_processed / seconds);
-    }
-    double items_per_second = 0;
-    if (results.items_processed > 0 && seconds > 0.0) {
-      items_per_second = (results.items_processed / seconds);
-    }
-
-    if (b.use_manual_time) {
-      report.real_accumulated_time = results.manual_time_used;
-    } else {
-      report.real_accumulated_time = results.real_time_used;
-    }
-    report.cpu_accumulated_time = results.cpu_time_used;
-    report.bytes_per_second = bytes_per_second;
-    report.items_per_second = items_per_second;
-    report.complexity_n = results.complexity_n;
-    report.complexity = b.complexity;
-    report.complexity_lambda = b.complexity_lambda;
-    report.statistics = b.statistics;
-    report.counters = results.counters;
-    internal::Finish(&report.counters, seconds, b.threads);
-  }
-  return report;
-}
-
-// Execute one thread of benchmark b for the specified number of iterations.
-// Adds the stats collected for the thread into manager->results.
-void RunInThread(const benchmark::internal::Benchmark::Instance* b,
-                 size_t iters, int thread_id,
-                 internal::ThreadManager* manager) {
-  internal::ThreadTimer timer;
-  State st(iters, b->arg, thread_id, b->threads, &timer, manager);
-  b->benchmark->Run(st);
-  CHECK(st.iterations() >= st.max_iterations)
-      << "Benchmark returned before State::KeepRunning() returned false!";
-  {
-    MutexLock l(manager->GetBenchmarkMutex());
-    internal::ThreadManager::Result& results = manager->results;
-    results.iterations += st.iterations();
-    results.cpu_time_used += timer.cpu_time_used();
-    results.real_time_used += timer.real_time_used();
-    results.manual_time_used += timer.manual_time_used();
-    results.bytes_processed += st.bytes_processed();
-    results.items_processed += st.items_processed();
-    results.complexity_n += st.complexity_length_n();
-    internal::Increment(&results.counters, st.counters);
-  }
-  manager->NotifyThreadComplete();
-}
-
-std::vector<BenchmarkReporter::Run> RunBenchmark(
-    const benchmark::internal::Benchmark::Instance& b,
-    std::vector<BenchmarkReporter::Run>* complexity_reports) {
-  std::vector<BenchmarkReporter::Run> reports;  // return value
-
-  const bool has_explicit_iteration_count = b.iterations != 0;
-  size_t iters = has_explicit_iteration_count ? b.iterations : 1;
-  std::unique_ptr<internal::ThreadManager> manager;
-  std::vector<std::thread> pool(b.threads - 1);
-  const int repeats =
-      b.repetitions != 0 ? b.repetitions : FLAGS_benchmark_repetitions;
-  const bool report_aggregates_only =
-      repeats != 1 &&
-      (b.report_mode == internal::RM_Unspecified
-           ? FLAGS_benchmark_report_aggregates_only
-           : b.report_mode == internal::RM_ReportAggregatesOnly);
-  for (int repetition_num = 0; repetition_num < repeats; repetition_num++) {
-    for (;;) {
-      // Try benchmark
-      VLOG(2) << "Running " << b.name << " for " << iters << "\n";
-
-      manager.reset(new internal::ThreadManager(b.threads));
-      for (std::size_t ti = 0; ti < pool.size(); ++ti) {
-        pool[ti] = std::thread(&RunInThread, &b, iters,
-                               static_cast<int>(ti + 1), manager.get());
-      }
-      RunInThread(&b, iters, 0, manager.get());
-      manager->WaitForAllThreads();
-      for (std::thread& thread : pool) thread.join();
-      internal::ThreadManager::Result results;
-      {
-        MutexLock l(manager->GetBenchmarkMutex());
-        results = manager->results;
-      }
-      manager.reset();
-      // Adjust real/manual time stats since they were reported per thread.
-      results.real_time_used /= b.threads;
-      results.manual_time_used /= b.threads;
-
-      VLOG(2) << "Ran in " << results.cpu_time_used << "/"
-              << results.real_time_used << "\n";
-
-      // Base decisions off of real time if requested by this benchmark.
-      double seconds = results.cpu_time_used;
-      if (b.use_manual_time) {
-        seconds = results.manual_time_used;
-      } else if (b.use_real_time) {
-        seconds = results.real_time_used;
-      }
-
-      const double min_time =
-          !IsZero(b.min_time) ? b.min_time : FLAGS_benchmark_min_time;
-
-      // Determine if this run should be reported: either it has run for a
-      // sufficient amount of time, or an error was reported.
-      const bool should_report =  repetition_num > 0
-        || has_explicit_iteration_count  // An exact iteration count was requested
-        || results.has_error_
-        || iters >= kMaxIterations  // No chance to try again, we hit the limit.
-        || seconds >= min_time  // the elapsed time is large enough
-        // CPU time is specified but the elapsed real time greatly exceeds the
-        // minimum time. Note that user-provided timers are exempt from this
-        // sanity check.
-        || ((results.real_time_used >= 5 * min_time) && !b.use_manual_time);
-
-      if (should_report) {
-        BenchmarkReporter::Run report = CreateRunReport(b, results, seconds);
-        if (!report.error_occurred && b.complexity != oNone)
-          complexity_reports->push_back(report);
-        reports.push_back(report);
-        break;
-      }
-
-      // See by how much the iteration count should be increased.
-      // Note: Avoid division by zero with max(seconds, 1ns).
-      double multiplier = min_time * 1.4 / std::max(seconds, 1e-9);
-      // If our last run was at least 10% of FLAGS_benchmark_min_time then we
-      // use the multiplier directly. Otherwise we use at most 10 times
-      // expansion.
-      // NOTE: When the last run was at least 10% of the min time the max
-      // expansion should be 14x.
-      bool is_significant = (seconds / min_time) > 0.1;
-      multiplier = is_significant ? multiplier : std::min(10.0, multiplier);
-      if (multiplier <= 1.0) multiplier = 2.0;
-      double next_iters = std::max(multiplier * iters, iters + 1.0);
-      if (next_iters > kMaxIterations) {
-        next_iters = kMaxIterations;
-      }
-      VLOG(3) << "Next iters: " << next_iters << ", " << multiplier << "\n";
-      iters = static_cast<int>(next_iters + 0.5);
-    }
-  }
-  // Calculate additional statistics
-  auto stat_reports = ComputeStats(reports);
-  if ((b.complexity != oNone) && b.last_benchmark_instance) {
-    auto additional_run_stats = ComputeBigO(*complexity_reports);
-    stat_reports.insert(stat_reports.end(), additional_run_stats.begin(),
-                        additional_run_stats.end());
-    complexity_reports->clear();
-  }
-
-  if (report_aggregates_only) reports.clear();
-  reports.insert(reports.end(), stat_reports.begin(), stat_reports.end());
-  return reports;
-}
-
-}  // namespace
-}  // namespace internal
-
-State::State(size_t max_iters, const std::vector<int64_t>& ranges, int thread_i,
-             int n_threads, internal::ThreadTimer* timer,
-             internal::ThreadManager* manager)
-    : total_iterations_(0),
-      batch_leftover_(0),
-      max_iterations(max_iters),
-      started_(false),
-      finished_(false),
-      error_occurred_(false),
-      range_(ranges),
-      bytes_processed_(0),
-      items_processed_(0),
-      complexity_n_(0),
-      counters(),
-      thread_index(thread_i),
-      threads(n_threads),
-      timer_(timer),
-      manager_(manager) {
-  CHECK(max_iterations != 0) << "At least one iteration must be run";
-  CHECK_LT(thread_index, threads) << "thread_index must be less than threads";
-
-  // Note: The use of offsetof below is technically undefined until C++17
-  // because State is not a standard layout type. However, all compilers
-  // currently provide well-defined behavior as an extension (which is
-  // demonstrated since constexpr evaluation must diagnose all undefined
-  // behavior). However, GCC and Clang also warn about this use of offsetof,
-  // which must be suppressed.
-#ifdef __GNUC__
-#pragma GCC diagnostic push
-#pragma GCC diagnostic ignored "-Winvalid-offsetof"
-#endif
-  // Offset tests to ensure commonly accessed data is on the first cache line.
-  const int cache_line_size = 64;
-  static_assert(offsetof(State, error_occurred_) <=
-                (cache_line_size - sizeof(error_occurred_)), "");
-#ifdef __GNUC__
-#pragma GCC diagnostic pop
-#endif
-}
-
-void State::PauseTiming() {
-  // Add in time accumulated so far
-  CHECK(started_ && !finished_ && !error_occurred_);
-  timer_->StopTimer();
-}
-
-void State::ResumeTiming() {
-  CHECK(started_ && !finished_ && !error_occurred_);
-  timer_->StartTimer();
-}
-
-void State::SkipWithError(const char* msg) {
-  CHECK(msg);
-  error_occurred_ = true;
-  {
-    MutexLock l(manager_->GetBenchmarkMutex());
-    if (manager_->results.has_error_ == false) {
-      manager_->results.error_message_ = msg;
-      manager_->results.has_error_ = true;
-    }
-  }
-  total_iterations_ = 0;
-  if (timer_->running()) timer_->StopTimer();
-}
-
-void State::SetIterationTime(double seconds) {
-  timer_->SetIterationTime(seconds);
-}
-
-void State::SetLabel(const char* label) {
-  MutexLock l(manager_->GetBenchmarkMutex());
-  manager_->results.report_label_ = label;
-}
-
-void State::StartKeepRunning() {
-  CHECK(!started_ && !finished_);
-  started_ = true;
-  total_iterations_ = error_occurred_ ? 0 : max_iterations;
-  manager_->StartStopBarrier();
-  if (!error_occurred_) ResumeTiming();
-}
-
-void State::FinishKeepRunning() {
-  CHECK(started_ && (!finished_ || error_occurred_));
-  if (!error_occurred_) {
-    PauseTiming();
-  }
-  // The total iteration count has now wrapped around past 0; reset it.
-  total_iterations_ = 0;
-  finished_ = true;
-  manager_->StartStopBarrier();
-}
-
-namespace internal {
-namespace {
-
-void RunBenchmarks(const std::vector<Benchmark::Instance>& benchmarks,
-                           BenchmarkReporter* console_reporter,
-                           BenchmarkReporter* file_reporter) {
-  // Note the file_reporter can be null.
-  CHECK(console_reporter != nullptr);
-
-  // Determine the width of the name field using a minimum width of 10.
-  bool has_repetitions = FLAGS_benchmark_repetitions > 1;
-  size_t name_field_width = 10;
-  size_t stat_field_width = 0;
-  for (const Benchmark::Instance& benchmark : benchmarks) {
-    name_field_width =
-        std::max<size_t>(name_field_width, benchmark.name.size());
-    has_repetitions |= benchmark.repetitions > 1;
-
-    for(const auto& Stat : *benchmark.statistics)
-      stat_field_width = std::max<size_t>(stat_field_width, Stat.name_.size());
-  }
-  if (has_repetitions) name_field_width += 1 + stat_field_width;
-
-  // Print header here
-  BenchmarkReporter::Context context;
-  context.name_field_width = name_field_width;
-
-  // Keep track of running times of all instances of current benchmark
-  std::vector<BenchmarkReporter::Run> complexity_reports;
-
-  // We flush streams after invoking reporter methods that write to them. This
-  // ensures users get timely updates even when streams are not line-buffered.
-  auto flushStreams = [](BenchmarkReporter* reporter) {
-    if (!reporter) return;
-    std::flush(reporter->GetOutputStream());
-    std::flush(reporter->GetErrorStream());
-  };
-
-  if (console_reporter->ReportContext(context) &&
-      (!file_reporter || file_reporter->ReportContext(context))) {
-    flushStreams(console_reporter);
-    flushStreams(file_reporter);
-    for (const auto& benchmark : benchmarks) {
-      std::vector<BenchmarkReporter::Run> reports =
-          RunBenchmark(benchmark, &complexity_reports);
-      console_reporter->ReportRuns(reports);
-      if (file_reporter) file_reporter->ReportRuns(reports);
-      flushStreams(console_reporter);
-      flushStreams(file_reporter);
-    }
-  }
-  console_reporter->Finalize();
-  if (file_reporter) file_reporter->Finalize();
-  flushStreams(console_reporter);
-  flushStreams(file_reporter);
-}
-
-std::unique_ptr<BenchmarkReporter> CreateReporter(
-    std::string const& name, ConsoleReporter::OutputOptions output_opts) {
-  typedef std::unique_ptr<BenchmarkReporter> PtrType;
-  if (name == "console") {
-    return PtrType(new ConsoleReporter(output_opts));
-  } else if (name == "json") {
-    return PtrType(new JSONReporter);
-  } else if (name == "csv") {
-    return PtrType(new CSVReporter);
-  } else {
-    std::cerr << "Unexpected format: '" << name << "'\n";
-    std::exit(1);
-  }
-}
-
-}  // end namespace
-
-bool IsZero(double n) {
-  return std::abs(n) < std::numeric_limits<double>::epsilon();
-}
-
-ConsoleReporter::OutputOptions GetOutputOptions(bool force_no_color) {
-  int output_opts = ConsoleReporter::OO_Defaults;
-  if ((FLAGS_benchmark_color == "auto" && IsColorTerminal()) ||
-      IsTruthyFlagValue(FLAGS_benchmark_color)) {
-    output_opts |= ConsoleReporter::OO_Color;
-  } else {
-    output_opts &= ~ConsoleReporter::OO_Color;
-  }
-  if(force_no_color) {
-    output_opts &= ~ConsoleReporter::OO_Color;
-  }
-  if(FLAGS_benchmark_counters_tabular) {
-    output_opts |= ConsoleReporter::OO_Tabular;
-  } else {
-    output_opts &= ~ConsoleReporter::OO_Tabular;
-  }
-  return static_cast< ConsoleReporter::OutputOptions >(output_opts);
-}
-
-}  // end namespace internal
-
-size_t RunSpecifiedBenchmarks() {
-  return RunSpecifiedBenchmarks(nullptr, nullptr);
-}
-
-size_t RunSpecifiedBenchmarks(BenchmarkReporter* console_reporter) {
-  return RunSpecifiedBenchmarks(console_reporter, nullptr);
-}
-
-size_t RunSpecifiedBenchmarks(BenchmarkReporter* console_reporter,
-                              BenchmarkReporter* file_reporter) {
-  std::string spec = FLAGS_benchmark_filter;
-  if (spec.empty() || spec == "all")
-    spec = ".";  // Regexp that matches all benchmarks
-
-  // Setup the reporters
-  std::ofstream output_file;
-  std::unique_ptr<BenchmarkReporter> default_console_reporter;
-  std::unique_ptr<BenchmarkReporter> default_file_reporter;
-  if (!console_reporter) {
-    default_console_reporter = internal::CreateReporter(
-          FLAGS_benchmark_format, internal::GetOutputOptions());
-    console_reporter = default_console_reporter.get();
-  }
-  auto& Out = console_reporter->GetOutputStream();
-  auto& Err = console_reporter->GetErrorStream();
-
-  std::string const& fname = FLAGS_benchmark_out;
-  if (fname.empty() && file_reporter) {
-    Err << "A custom file reporter was provided but "
-           "--benchmark_out=<file> was not specified."
-        << std::endl;
-    std::exit(1);
-  }
-  if (!fname.empty()) {
-    output_file.open(fname);
-    if (!output_file.is_open()) {
-      Err << "invalid file name: '" << fname << std::endl;
-      std::exit(1);
-    }
-    if (!file_reporter) {
-      default_file_reporter = internal::CreateReporter(
-          FLAGS_benchmark_out_format, ConsoleReporter::OO_None);
-      file_reporter = default_file_reporter.get();
-    }
-    file_reporter->SetOutputStream(&output_file);
-    file_reporter->SetErrorStream(&output_file);
-  }
-
-  std::vector<internal::Benchmark::Instance> benchmarks;
-  if (!FindBenchmarksInternal(spec, &benchmarks, &Err)) return 0;
-
-  if (benchmarks.empty()) {
-    Err << "Failed to match any benchmarks against regex: " << spec << "\n";
-    return 0;
-  }
-
-  if (FLAGS_benchmark_list_tests) {
-    for (auto const& benchmark : benchmarks) Out << benchmark.name << "\n";
-  } else {
-    internal::RunBenchmarks(benchmarks, console_reporter, file_reporter);
-  }
-
-  return benchmarks.size();
-}
-
-namespace internal {
-
-void PrintUsageAndExit() {
-  fprintf(stdout,
-          "benchmark"
-          " [--benchmark_list_tests={true|false}]\n"
-          "          [--benchmark_filter=<regex>]\n"
-          "          [--benchmark_min_time=<min_time>]\n"
-          "          [--benchmark_repetitions=<num_repetitions>]\n"
-          "          [--benchmark_report_aggregates_only={true|false}\n"
-          "          [--benchmark_format=<console|json|csv>]\n"
-          "          [--benchmark_out=<filename>]\n"
-          "          [--benchmark_out_format=<json|console|csv>]\n"
-          "          [--benchmark_color={auto|true|false}]\n"
-          "          [--benchmark_counters_tabular={true|false}]\n"
-          "          [--v=<verbosity>]\n");
-  exit(0);
-}
-
-void ParseCommandLineFlags(int* argc, char** argv) {
-  using namespace benchmark;
-  BenchmarkReporter::Context::executable_name = argv[0];
-  for (int i = 1; i < *argc; ++i) {
-    if (ParseBoolFlag(argv[i], "benchmark_list_tests",
-                      &FLAGS_benchmark_list_tests) ||
-        ParseStringFlag(argv[i], "benchmark_filter", &FLAGS_benchmark_filter) ||
-        ParseDoubleFlag(argv[i], "benchmark_min_time",
-                        &FLAGS_benchmark_min_time) ||
-        ParseInt32Flag(argv[i], "benchmark_repetitions",
-                       &FLAGS_benchmark_repetitions) ||
-        ParseBoolFlag(argv[i], "benchmark_report_aggregates_only",
-                      &FLAGS_benchmark_report_aggregates_only) ||
-        ParseStringFlag(argv[i], "benchmark_format", &FLAGS_benchmark_format) ||
-        ParseStringFlag(argv[i], "benchmark_out", &FLAGS_benchmark_out) ||
-        ParseStringFlag(argv[i], "benchmark_out_format",
-                        &FLAGS_benchmark_out_format) ||
-        ParseStringFlag(argv[i], "benchmark_color", &FLAGS_benchmark_color) ||
-        // "color_print" is the deprecated name for "benchmark_color".
-        // TODO: Remove this.
-        ParseStringFlag(argv[i], "color_print", &FLAGS_benchmark_color) ||
-        ParseBoolFlag(argv[i], "benchmark_counters_tabular",
-                        &FLAGS_benchmark_counters_tabular) ||
-        ParseInt32Flag(argv[i], "v", &FLAGS_v)) {
-      for (int j = i; j != *argc - 1; ++j) argv[j] = argv[j + 1];
-
-      --(*argc);
-      --i;
-    } else if (IsFlag(argv[i], "help")) {
-      PrintUsageAndExit();
-    }
-  }
-  for (auto const* flag :
-       {&FLAGS_benchmark_format, &FLAGS_benchmark_out_format})
-    if (*flag != "console" && *flag != "json" && *flag != "csv") {
-      PrintUsageAndExit();
-    }
-  if (FLAGS_benchmark_color.empty()) {
-    PrintUsageAndExit();
-  }
-}
-
-int InitializeStreams() {
-  static std::ios_base::Init init;
-  return 0;
-}
-
-}  // end namespace internal
-
-void Initialize(int* argc, char** argv) {
-  internal::ParseCommandLineFlags(argc, argv);
-  internal::LogLevel() = FLAGS_v;
-}
-
-bool ReportUnrecognizedArguments(int argc, char** argv) {
-  for (int i = 1; i < argc; ++i) {
-    fprintf(stderr, "%s: error: unrecognized command-line flag: %s\n", argv[0], argv[i]);
-  }
-  return argc > 1;
-}
-
-}  // end namespace benchmark
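
For orientation, the public entry points defined above compose as follows: Initialize() strips the --benchmark_* flags from argv, and RunSpecifiedBenchmarks() drives the reporters. This is essentially what BENCHMARK_MAIN() (see benchmark_main.cc below) expands to. A minimal sketch, not part of this patch:

    #include <string>

    #include "benchmark/benchmark.h"

    static void BM_StringCreation(benchmark::State& state) {
      while (state.KeepRunning()) {
        std::string created("hello");
        benchmark::DoNotOptimize(created);
      }
    }
    BENCHMARK(BM_StringCreation);

    int main(int argc, char** argv) {
      benchmark::Initialize(&argc, argv);  // consumes --benchmark_* flags
      if (benchmark::ReportUnrecognizedArguments(argc, argv)) return 1;
      benchmark::RunSpecifiedBenchmarks();  // honors --benchmark_filter etc.
    }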

diff  --git a/llvm/utils/benchmark/src/benchmark_api_internal.h b/llvm/utils/benchmark/src/benchmark_api_internal.h
deleted file mode 100644
index dd7a3ffe8cbb2..0000000000000
--- a/llvm/utils/benchmark/src/benchmark_api_internal.h
+++ /dev/null
@@ -1,47 +0,0 @@
-#ifndef BENCHMARK_API_INTERNAL_H
-#define BENCHMARK_API_INTERNAL_H
-
-#include "benchmark/benchmark.h"
-
-#include <cmath>
-#include <iosfwd>
-#include <limits>
-#include <string>
-#include <vector>
-
-namespace benchmark {
-namespace internal {
-
-// Information kept per benchmark we may want to run
-struct Benchmark::Instance {
-  std::string name;
-  Benchmark* benchmark;
-  ReportMode report_mode;
-  std::vector<int64_t> arg;
-  TimeUnit time_unit;
-  int range_multiplier;
-  bool use_real_time;
-  bool use_manual_time;
-  BigO complexity;
-  BigOFunc* complexity_lambda;
-  UserCounters counters;
-  const std::vector<Statistics>* statistics;
-  bool last_benchmark_instance;
-  int repetitions;
-  double min_time;
-  size_t iterations;
-  int threads;  // Number of concurrent threads to use
-};
-
-bool FindBenchmarksInternal(const std::string& re,
-                            std::vector<Benchmark::Instance>* benchmarks,
-                            std::ostream* Err);
-
-bool IsZero(double n);
-
-ConsoleReporter::OutputOptions GetOutputOptions(bool force_no_color = false);
-
-}  // end namespace internal
-}  // end namespace benchmark
-
-#endif  // BENCHMARK_API_INTERNAL_H

diff  --git a/llvm/utils/benchmark/src/benchmark_main.cc b/llvm/utils/benchmark/src/benchmark_main.cc
deleted file mode 100644
index b3b247831496f..0000000000000
--- a/llvm/utils/benchmark/src/benchmark_main.cc
+++ /dev/null
@@ -1,17 +0,0 @@
-// Copyright 2018 Google Inc. All rights reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-//     http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-#include "benchmark/benchmark.h"
-
-BENCHMARK_MAIN();

diff  --git a/llvm/utils/benchmark/src/benchmark_register.cc b/llvm/utils/benchmark/src/benchmark_register.cc
deleted file mode 100644
index dc6f93568539b..0000000000000
--- a/llvm/utils/benchmark/src/benchmark_register.cc
+++ /dev/null
@@ -1,461 +0,0 @@
-// Copyright 2015 Google Inc. All rights reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-//     http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-#include "benchmark_register.h"
-
-#ifndef BENCHMARK_OS_WINDOWS
-#ifndef BENCHMARK_OS_FUCHSIA
-#include <sys/resource.h>
-#endif
-#include <sys/time.h>
-#include <unistd.h>
-#endif
-
-#include <algorithm>
-#include <atomic>
-#include <condition_variable>
-#include <cstdio>
-#include <cstdlib>
-#include <cstring>
-#include <fstream>
-#include <iostream>
-#include <memory>
-#include <sstream>
-#include <thread>
-
-#include "benchmark/benchmark.h"
-#include "benchmark_api_internal.h"
-#include "check.h"
-#include "commandlineflags.h"
-#include "complexity.h"
-#include "internal_macros.h"
-#include "log.h"
-#include "mutex.h"
-#include "re.h"
-#include "statistics.h"
-#include "string_util.h"
-#include "timers.h"
-
-namespace benchmark {
-
-namespace {
-// For non-dense Range, intermediate values are powers of kRangeMultiplier.
-static const int kRangeMultiplier = 8;
-// The size of a benchmark family is the number of inputs on which the
-// benchmark is repeated. If this is "large", warn the user during configuration.
-static const size_t kMaxFamilySize = 100;
-}  // end namespace
-
-namespace internal {
-
-//=============================================================================//
-//                         BenchmarkFamilies
-//=============================================================================//
-
-// Class for managing registered benchmarks.  Note that each registered
-// benchmark identifies a family of related benchmarks to run.
-class BenchmarkFamilies {
- public:
-  static BenchmarkFamilies* GetInstance();
-
-  // Registers a benchmark family and returns the index assigned to it.
-  size_t AddBenchmark(std::unique_ptr<Benchmark> family);
-
-  // Clear all registered benchmark families.
-  void ClearBenchmarks();
-
-  // Extract the list of benchmark instances that match the specified
-  // regular expression.
-  bool FindBenchmarks(std::string re,
-                      std::vector<Benchmark::Instance>* benchmarks,
-                      std::ostream* Err);
-
- private:
-  BenchmarkFamilies() {}
-
-  std::vector<std::unique_ptr<Benchmark>> families_;
-  Mutex mutex_;
-};
-
-BenchmarkFamilies* BenchmarkFamilies::GetInstance() {
-  static BenchmarkFamilies instance;
-  return &instance;
-}
-
-size_t BenchmarkFamilies::AddBenchmark(std::unique_ptr<Benchmark> family) {
-  MutexLock l(mutex_);
-  size_t index = families_.size();
-  families_.push_back(std::move(family));
-  return index;
-}
-
-void BenchmarkFamilies::ClearBenchmarks() {
-  MutexLock l(mutex_);
-  families_.clear();
-  families_.shrink_to_fit();
-}
-
-bool BenchmarkFamilies::FindBenchmarks(
-    std::string spec, std::vector<Benchmark::Instance>* benchmarks,
-    std::ostream* ErrStream) {
-  CHECK(ErrStream);
-  auto& Err = *ErrStream;
-  // Make regular expression out of command-line flag
-  std::string error_msg;
-  Regex re;
-  bool isNegativeFilter = false;
-  if(spec[0] == '-') {
-      spec.replace(0, 1, "");
-      isNegativeFilter = true;
-  }
-  if (!re.Init(spec, &error_msg)) {
-    Err << "Could not compile benchmark re: " << error_msg << std::endl;
-    return false;
-  }
-
-  // Special list of thread counts to use when none are specified
-  const std::vector<int> one_thread = {1};
-
-  MutexLock l(mutex_);
-  for (std::unique_ptr<Benchmark>& family : families_) {
-    // Family was deleted or benchmark doesn't match
-    if (!family) continue;
-
-    if (family->ArgsCnt() == -1) {
-      family->Args({});
-    }
-    const std::vector<int>* thread_counts =
-        (family->thread_counts_.empty()
-             ? &one_thread
-             : &static_cast<const std::vector<int>&>(family->thread_counts_));
-    const size_t family_size = family->args_.size() * thread_counts->size();
-    // The benchmark will be run on at least 'family_size' different inputs.
-    // If 'family_size' is very large warn the user.
-    if (family_size > kMaxFamilySize) {
-      Err << "The number of inputs is very large. " << family->name_
-          << " will be repeated at least " << family_size << " times.\n";
-    }
-    // Reserve in the special case of the regex ".", since we know the final
-    // family size.
-    if (spec == ".") benchmarks->reserve(family_size);
-
-    for (auto const& args : family->args_) {
-      for (int num_threads : *thread_counts) {
-        Benchmark::Instance instance;
-        instance.name = family->name_;
-        instance.benchmark = family.get();
-        instance.report_mode = family->report_mode_;
-        instance.arg = args;
-        instance.time_unit = family->time_unit_;
-        instance.range_multiplier = family->range_multiplier_;
-        instance.min_time = family->min_time_;
-        instance.iterations = family->iterations_;
-        instance.repetitions = family->repetitions_;
-        instance.use_real_time = family->use_real_time_;
-        instance.use_manual_time = family->use_manual_time_;
-        instance.complexity = family->complexity_;
-        instance.complexity_lambda = family->complexity_lambda_;
-        instance.statistics = &family->statistics_;
-        instance.threads = num_threads;
-
-        // Add arguments to instance name
-        size_t arg_i = 0;
-        for (auto const& arg : args) {
-          instance.name += "/";
-
-          if (arg_i < family->arg_names_.size()) {
-            const auto& arg_name = family->arg_names_[arg_i];
-            if (!arg_name.empty()) {
-              instance.name +=
-                  StrFormat("%s:", family->arg_names_[arg_i].c_str());
-            }
-          }
-
-          instance.name += StrFormat("%d", arg);
-          ++arg_i;
-        }
-
-        if (!IsZero(family->min_time_))
-          instance.name += StrFormat("/min_time:%0.3f", family->min_time_);
-        if (family->iterations_ != 0)
-          instance.name += StrFormat("/iterations:%d", family->iterations_);
-        if (family->repetitions_ != 0)
-          instance.name += StrFormat("/repeats:%d", family->repetitions_);
-
-        if (family->use_manual_time_) {
-          instance.name += "/manual_time";
-        } else if (family->use_real_time_) {
-          instance.name += "/real_time";
-        }
-
-        // Add the number of threads used to the name
-        if (!family->thread_counts_.empty()) {
-          instance.name += StrFormat("/threads:%d", instance.threads);
-        }
-
-        if ((re.Match(instance.name) && !isNegativeFilter) ||
-            (!re.Match(instance.name) && isNegativeFilter)) {
-          instance.last_benchmark_instance = (&args == &family->args_.back());
-          benchmarks->push_back(std::move(instance));
-        }
-      }
-    }
-  }
-  return true;
-}
-
-Benchmark* RegisterBenchmarkInternal(Benchmark* bench) {
-  std::unique_ptr<Benchmark> bench_ptr(bench);
-  BenchmarkFamilies* families = BenchmarkFamilies::GetInstance();
-  families->AddBenchmark(std::move(bench_ptr));
-  return bench;
-}
-
-// FIXME: This function is a hack so that benchmark.cc can access
-// `BenchmarkFamilies`
-bool FindBenchmarksInternal(const std::string& re,
-                            std::vector<Benchmark::Instance>* benchmarks,
-                            std::ostream* Err) {
-  return BenchmarkFamilies::GetInstance()->FindBenchmarks(re, benchmarks, Err);
-}
-
-//=============================================================================//
-//                               Benchmark
-//=============================================================================//
-
-Benchmark::Benchmark(const char* name)
-    : name_(name),
-      report_mode_(RM_Unspecified),
-      time_unit_(kNanosecond),
-      range_multiplier_(kRangeMultiplier),
-      min_time_(0),
-      iterations_(0),
-      repetitions_(0),
-      use_real_time_(false),
-      use_manual_time_(false),
-      complexity_(oNone),
-      complexity_lambda_(nullptr) {
-  ComputeStatistics("mean", StatisticsMean);
-  ComputeStatistics("median", StatisticsMedian);
-  ComputeStatistics("stddev", StatisticsStdDev);
-}
-
-Benchmark::~Benchmark() {}
-
-Benchmark* Benchmark::Arg(int64_t x) {
-  CHECK(ArgsCnt() == -1 || ArgsCnt() == 1);
-  args_.push_back({x});
-  return this;
-}
-
-Benchmark* Benchmark::Unit(TimeUnit unit) {
-  time_unit_ = unit;
-  return this;
-}
-
-Benchmark* Benchmark::Range(int64_t start, int64_t limit) {
-  CHECK(ArgsCnt() == -1 || ArgsCnt() == 1);
-  std::vector<int64_t> arglist;
-  AddRange(&arglist, start, limit, range_multiplier_);
-
-  for (int64_t i : arglist) {
-    args_.push_back({i});
-  }
-  return this;
-}
-
-Benchmark* Benchmark::Ranges(
-    const std::vector<std::pair<int64_t, int64_t>>& ranges) {
-  CHECK(ArgsCnt() == -1 || ArgsCnt() == static_cast<int>(ranges.size()));
-  std::vector<std::vector<int64_t>> arglists(ranges.size());
-  std::size_t total = 1;
-  for (std::size_t i = 0; i < ranges.size(); i++) {
-    AddRange(&arglists[i], ranges[i].first, ranges[i].second,
-             range_multiplier_);
-    total *= arglists[i].size();
-  }
-
-  std::vector<std::size_t> ctr(arglists.size(), 0);
-
-  for (std::size_t i = 0; i < total; i++) {
-    std::vector<int64_t> tmp;
-    tmp.reserve(arglists.size());
-
-    for (std::size_t j = 0; j < arglists.size(); j++) {
-      tmp.push_back(arglists[j].at(ctr[j]));
-    }
-
-    args_.push_back(std::move(tmp));
-
-    for (std::size_t j = 0; j < arglists.size(); j++) {
-      if (ctr[j] + 1 < arglists[j].size()) {
-        ++ctr[j];
-        break;
-      }
-      ctr[j] = 0;
-    }
-  }
-  return this;
-}
-
-Benchmark* Benchmark::ArgName(const std::string& name) {
-  CHECK(ArgsCnt() == -1 || ArgsCnt() == 1);
-  arg_names_ = {name};
-  return this;
-}
-
-Benchmark* Benchmark::ArgNames(const std::vector<std::string>& names) {
-  CHECK(ArgsCnt() == -1 || ArgsCnt() == static_cast<int>(names.size()));
-  arg_names_ = names;
-  return this;
-}
-
-Benchmark* Benchmark::DenseRange(int64_t start, int64_t limit, int step) {
-  CHECK(ArgsCnt() == -1 || ArgsCnt() == 1);
-  CHECK_GE(start, 0);
-  CHECK_LE(start, limit);
-  for (int64_t arg = start; arg <= limit; arg += step) {
-    args_.push_back({arg});
-  }
-  return this;
-}
-
-Benchmark* Benchmark::Args(const std::vector<int64_t>& args) {
-  CHECK(ArgsCnt() == -1 || ArgsCnt() == static_cast<int>(args.size()));
-  args_.push_back(args);
-  return this;
-}
-
-Benchmark* Benchmark::Apply(void (*custom_arguments)(Benchmark* benchmark)) {
-  custom_arguments(this);
-  return this;
-}
-
-Benchmark* Benchmark::RangeMultiplier(int multiplier) {
-  CHECK(multiplier > 1);
-  range_multiplier_ = multiplier;
-  return this;
-}
-
-Benchmark* Benchmark::MinTime(double t) {
-  CHECK(t > 0.0);
-  CHECK(iterations_ == 0);
-  min_time_ = t;
-  return this;
-}
-
-Benchmark* Benchmark::Iterations(size_t n) {
-  CHECK(n > 0);
-  CHECK(IsZero(min_time_));
-  iterations_ = n;
-  return this;
-}
-
-Benchmark* Benchmark::Repetitions(int n) {
-  CHECK(n > 0);
-  repetitions_ = n;
-  return this;
-}
-
-Benchmark* Benchmark::ReportAggregatesOnly(bool value) {
-  report_mode_ = value ? RM_ReportAggregatesOnly : RM_Default;
-  return this;
-}
-
-Benchmark* Benchmark::UseRealTime() {
-  CHECK(!use_manual_time_)
-      << "Cannot set UseRealTime and UseManualTime simultaneously.";
-  use_real_time_ = true;
-  return this;
-}
-
-Benchmark* Benchmark::UseManualTime() {
-  CHECK(!use_real_time_)
-      << "Cannot set UseRealTime and UseManualTime simultaneously.";
-  use_manual_time_ = true;
-  return this;
-}
-
-Benchmark* Benchmark::Complexity(BigO complexity) {
-  complexity_ = complexity;
-  return this;
-}
-
-Benchmark* Benchmark::Complexity(BigOFunc* complexity) {
-  complexity_lambda_ = complexity;
-  complexity_ = oLambda;
-  return this;
-}
-
-Benchmark* Benchmark::ComputeStatistics(std::string name,
-                                        StatisticsFunc* statistics) {
-  statistics_.emplace_back(name, statistics);
-  return this;
-}
-
-Benchmark* Benchmark::Threads(int t) {
-  CHECK_GT(t, 0);
-  thread_counts_.push_back(t);
-  return this;
-}
-
-Benchmark* Benchmark::ThreadRange(int min_threads, int max_threads) {
-  CHECK_GT(min_threads, 0);
-  CHECK_GE(max_threads, min_threads);
-
-  AddRange(&thread_counts_, min_threads, max_threads, 2);
-  return this;
-}
-
-Benchmark* Benchmark::DenseThreadRange(int min_threads, int max_threads,
-                                       int stride) {
-  CHECK_GT(min_threads, 0);
-  CHECK_GE(max_threads, min_threads);
-  CHECK_GE(stride, 1);
-
-  for (auto i = min_threads; i < max_threads; i += stride) {
-    thread_counts_.push_back(i);
-  }
-  thread_counts_.push_back(max_threads);
-  return this;
-}
-
-Benchmark* Benchmark::ThreadPerCpu() {
-  thread_counts_.push_back(CPUInfo::Get().num_cpus);
-  return this;
-}
-
-void Benchmark::SetName(const char* name) { name_ = name; }
-
-int Benchmark::ArgsCnt() const {
-  if (args_.empty()) {
-    if (arg_names_.empty()) return -1;
-    return static_cast<int>(arg_names_.size());
-  }
-  return static_cast<int>(args_.front().size());
-}
-
-//=============================================================================//
-//                            FunctionBenchmark
-//=============================================================================//
-
-void FunctionBenchmark::Run(State& st) { func_(st); }
-
-}  // end namespace internal
-
-void ClearRegisteredBenchmarks() {
-  internal::BenchmarkFamilies::GetInstance()->ClearBenchmarks();
-}
-
-}  // end namespace benchmark
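
The Ranges() expansion above is easiest to see on a concrete case. A sketch assuming the default kRangeMultiplier of 8 (illustrative, not part of this patch):

    #include "benchmark/benchmark.h"

    static void BM_TwoArgs(benchmark::State& state) {
      while (state.KeepRunning()) {
        benchmark::DoNotOptimize(state.range(0) * state.range(1));
      }
    }
    // AddRange(1, 64, 8) yields {1, 8, 64}; AddRange(1, 2, 8) yields {1, 2}.
    // The cross product registers six instances, named BM_TwoArgs/1/1,
    // BM_TwoArgs/8/1, BM_TwoArgs/64/1, BM_TwoArgs/1/2, BM_TwoArgs/8/2,
    // and BM_TwoArgs/64/2.
    BENCHMARK(BM_TwoArgs)->Ranges({{1, 64}, {1, 2}});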

diff  --git a/llvm/utils/benchmark/src/benchmark_register.h b/llvm/utils/benchmark/src/benchmark_register.h
deleted file mode 100644
index 4caa5ad4da079..0000000000000
--- a/llvm/utils/benchmark/src/benchmark_register.h
+++ /dev/null
@@ -1,34 +0,0 @@
-#ifndef BENCHMARK_REGISTER_H
-#define BENCHMARK_REGISTER_H
-
-#include <limits>
-#include <vector>
-
-#include "check.h"
-
-template <typename T>
-void AddRange(std::vector<T>* dst, T lo, T hi, int mult) {
-  CHECK_GE(lo, 0);
-  CHECK_GE(hi, lo);
-  CHECK_GE(mult, 2);
-
-  // Add "lo"
-  dst->push_back(lo);
-
-  static const T kmax = std::numeric_limits<T>::max();
-
-  // Now space out the benchmarks in multiples of "mult"
-  for (T i = 1; i < kmax / mult; i *= mult) {
-    if (i >= hi) break;
-    if (i > lo) {
-      dst->push_back(i);
-    }
-  }
-
-  // Add "hi" (if 
diff erent from "lo")
-  if (hi != lo) {
-    dst->push_back(hi);
-  }
-}
-
-#endif  // BENCHMARK_REGISTER_H
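
Concretely, AddRange emits 'lo', then the powers of 'mult' strictly between the bounds, then 'hi'. A small worked sketch, assuming the src/ headers are on the include path (illustrative, not part of this patch):

    #include <cstdio>
    #include <vector>

    #include "benchmark_register.h"

    int main() {
      std::vector<int> v;
      AddRange(&v, 1, 100, 4);
      // v == {1, 4, 16, 64, 100}: lo, then 4^k while below hi, then hi.
      for (int x : v) std::printf("%d ", x);
      std::printf("\n");
    }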

diff  --git a/llvm/utils/benchmark/src/check.h b/llvm/utils/benchmark/src/check.h
deleted file mode 100644
index 73bead2fb555b..0000000000000
--- a/llvm/utils/benchmark/src/check.h
+++ /dev/null
@@ -1,79 +0,0 @@
-#ifndef CHECK_H_
-#define CHECK_H_
-
-#include <cstdlib>
-#include <ostream>
-#include <cmath>
-
-#include "internal_macros.h"
-#include "log.h"
-
-namespace benchmark {
-namespace internal {
-
-typedef void(AbortHandlerT)();
-
-inline AbortHandlerT*& GetAbortHandler() {
-  static AbortHandlerT* handler = &std::abort;
-  return handler;
-}
-
-BENCHMARK_NORETURN inline void CallAbortHandler() {
-  GetAbortHandler()();
-  std::abort();  // fallback to enforce noreturn
-}
-
-// CheckHandler is the class constructed by failing CHECK macros. CheckHandler
-// will log information about the failures and abort when it is destructed.
-class CheckHandler {
- public:
-  CheckHandler(const char* check, const char* file, const char* func, int line)
-      : log_(GetErrorLogInstance()) {
-    log_ << file << ":" << line << ": " << func << ": Check `" << check
-         << "' failed. ";
-  }
-
-  LogType& GetLog() { return log_; }
-
-  BENCHMARK_NORETURN ~CheckHandler() BENCHMARK_NOEXCEPT_OP(false) {
-    log_ << std::endl;
-    CallAbortHandler();
-  }
-
-  CheckHandler& operator=(const CheckHandler&) = delete;
-  CheckHandler(const CheckHandler&) = delete;
-  CheckHandler() = delete;
-
- private:
-  LogType& log_;
-};
-
-}  // end namespace internal
-}  // end namespace benchmark
-
-// The CHECK macro returns a std::ostream object that can have extra information
-// written to it.
-#ifndef NDEBUG
-#define CHECK(b)                                                             \
-  (b ? ::benchmark::internal::GetNullLogInstance()                           \
-     : ::benchmark::internal::CheckHandler(#b, __FILE__, __func__, __LINE__) \
-           .GetLog())
-#else
-#define CHECK(b) ::benchmark::internal::GetNullLogInstance()
-#endif
-
-#define CHECK_EQ(a, b) CHECK((a) == (b))
-#define CHECK_NE(a, b) CHECK((a) != (b))
-#define CHECK_GE(a, b) CHECK((a) >= (b))
-#define CHECK_LE(a, b) CHECK((a) <= (b))
-#define CHECK_GT(a, b) CHECK((a) > (b))
-#define CHECK_LT(a, b) CHECK((a) < (b))
-
-#define CHECK_FLOAT_EQ(a, b, eps) CHECK(std::fabs((a) - (b)) <  (eps))
-#define CHECK_FLOAT_NE(a, b, eps) CHECK(std::fabs((a) - (b)) >= (eps))
-#define CHECK_FLOAT_GE(a, b, eps) CHECK((a) - (b) > -(eps))
-#define CHECK_FLOAT_LE(a, b, eps) CHECK((b) - (a) > -(eps))
-#define CHECK_FLOAT_GT(a, b, eps) CHECK((a) - (b) >  (eps))
-#define CHECK_FLOAT_LT(a, b, eps) CHECK((b) - (a) >  (eps))
-
-#endif  // CHECK_H_
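
In debug builds a failing CHECK streams its message into the CheckHandler's log and aborts through the installed handler; with NDEBUG the condition is not evaluated and the stream goes to the null log. A usage sketch, assuming the src/ headers are on the include path (illustrative, not part of this patch):

    #include "check.h"

    int Divide(int num, int den) {
      CHECK_NE(den, 0) << "denominator must be non-zero, got " << den;
      return num / den;
    }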

diff  --git a/llvm/utils/benchmark/src/colorprint.cc b/llvm/utils/benchmark/src/colorprint.cc
deleted file mode 100644
index fff6a98818b84..0000000000000
--- a/llvm/utils/benchmark/src/colorprint.cc
+++ /dev/null
@@ -1,188 +0,0 @@
-// Copyright 2015 Google Inc. All rights reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-//     http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-#include "colorprint.h"
-
-#include <cstdarg>
-#include <cstdio>
-#include <cstdlib>
-#include <cstring>
-#include <memory>
-#include <string>
-
-#include "check.h"
-#include "internal_macros.h"
-
-#ifdef BENCHMARK_OS_WINDOWS
-#include <windows.h>
-#include <io.h>
-#else
-#include <unistd.h>
-#endif  // BENCHMARK_OS_WINDOWS
-
-namespace benchmark {
-namespace {
-#ifdef BENCHMARK_OS_WINDOWS
-typedef WORD PlatformColorCode;
-#else
-typedef const char* PlatformColorCode;
-#endif
-
-PlatformColorCode GetPlatformColorCode(LogColor color) {
-#ifdef BENCHMARK_OS_WINDOWS
-  switch (color) {
-    case COLOR_RED:
-      return FOREGROUND_RED;
-    case COLOR_GREEN:
-      return FOREGROUND_GREEN;
-    case COLOR_YELLOW:
-      return FOREGROUND_RED | FOREGROUND_GREEN;
-    case COLOR_BLUE:
-      return FOREGROUND_BLUE;
-    case COLOR_MAGENTA:
-      return FOREGROUND_BLUE | FOREGROUND_RED;
-    case COLOR_CYAN:
-      return FOREGROUND_BLUE | FOREGROUND_GREEN;
-    case COLOR_WHITE:  // fall through to default
-    default:
-      return 0;
-  }
-#else
-  switch (color) {
-    case COLOR_RED:
-      return "1";
-    case COLOR_GREEN:
-      return "2";
-    case COLOR_YELLOW:
-      return "3";
-    case COLOR_BLUE:
-      return "4";
-    case COLOR_MAGENTA:
-      return "5";
-    case COLOR_CYAN:
-      return "6";
-    case COLOR_WHITE:
-      return "7";
-    default:
-      return nullptr;
-  };
-#endif
-}
-
-}  // end namespace
-
-std::string FormatString(const char* msg, va_list args) {
-  // we might need a second shot at this, so pre-emptively make a copy
-  va_list args_cp;
-  va_copy(args_cp, args);
-
-  std::size_t size = 256;
-  char local_buff[256];
-  auto ret = vsnprintf(local_buff, size, msg, args_cp);
-
-  va_end(args_cp);
-
-  // currently there is no error handling for failure, so this is a hack.
-  CHECK(ret >= 0);
-
-  if (ret == 0)  // handle empty expansion
-    return {};
-  else if (static_cast<size_t>(ret) < size)
-    return local_buff;
-  else {
-    // we did not provide a long enough buffer on our first attempt.
-    size = (size_t)ret + 1;  // + 1 for the null byte
-    std::unique_ptr<char[]> buff(new char[size]);
-    ret = vsnprintf(buff.get(), size, msg, args);
-    CHECK(ret > 0 && ((size_t)ret) < size);
-    return buff.get();
-  }
-}
-
-std::string FormatString(const char* msg, ...) {
-  va_list args;
-  va_start(args, msg);
-  auto tmp = FormatString(msg, args);
-  va_end(args);
-  return tmp;
-}
-
-void ColorPrintf(std::ostream& out, LogColor color, const char* fmt, ...) {
-  va_list args;
-  va_start(args, fmt);
-  ColorPrintf(out, color, fmt, args);
-  va_end(args);
-}
-
-void ColorPrintf(std::ostream& out, LogColor color, const char* fmt,
-                 va_list args) {
-#ifdef BENCHMARK_OS_WINDOWS
-  ((void)out);  // suppress unused warning
-
-  const HANDLE stdout_handle = GetStdHandle(STD_OUTPUT_HANDLE);
-
-  // Gets the current text color.
-  CONSOLE_SCREEN_BUFFER_INFO buffer_info;
-  GetConsoleScreenBufferInfo(stdout_handle, &buffer_info);
-  const WORD old_color_attrs = buffer_info.wAttributes;
-
-  // We need to flush the stream buffers into the console before each
-  // SetConsoleTextAttribute call lest it affect the text that is already
-  // printed but has not yet reached the console.
-  fflush(stdout);
-  SetConsoleTextAttribute(stdout_handle,
-                          GetPlatformColorCode(color) | FOREGROUND_INTENSITY);
-  vprintf(fmt, args);
-
-  fflush(stdout);
-  // Restores the text color.
-  SetConsoleTextAttribute(stdout_handle, old_color_attrs);
-#else
-  const char* color_code = GetPlatformColorCode(color);
-  if (color_code) out << FormatString("\033[0;3%sm", color_code);
-  out << FormatString(fmt, args) << "\033[m";
-#endif
-}
-
-bool IsColorTerminal() {
-#if BENCHMARK_OS_WINDOWS
-  // On Windows the TERM variable is usually not set, but the
-  // console there does support colors.
-  return 0 != _isatty(_fileno(stdout));
-#else
-  // On non-Windows platforms, we rely on the TERM variable. This list of
-  // supported TERM values is copied from Google Test:
-  // <https://github.com/google/googletest/blob/master/googletest/src/gtest.cc#L2925>.
-  const char* const SUPPORTED_TERM_VALUES[] = {
-      "xterm",         "xterm-color",     "xterm-256color",
-      "screen",        "screen-256color", "tmux",
-      "tmux-256color", "rxvt-unicode",    "rxvt-unicode-256color",
-      "linux",         "cygwin",
-  };
-
-  const char* const term = getenv("TERM");
-
-  bool term_supports_color = false;
-  for (const char* candidate : SUPPORTED_TERM_VALUES) {
-    if (term && 0 == strcmp(term, candidate)) {
-      term_supports_color = true;
-      break;
-    }
-  }
-
-  return 0 != isatty(fileno(stdout)) && term_supports_color;
-#endif  // BENCHMARK_OS_WINDOWS
-}
-
-}  // end namespace benchmark
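
These printf-style helpers back the console reporter's colored output. A sketch of direct use, assuming colorprint.h is on the include path (illustrative, not part of this patch):

    #include <iostream>

    #include "colorprint.h"

    int main() {
      if (benchmark::IsColorTerminal()) {
        benchmark::ColorPrintf(std::cout, benchmark::COLOR_GREEN,
                               "%-10s %s\n", "[ OK ]", "colors supported");
      } else {
        std::cout << "[ OK ] plain output\n";
      }
    }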

diff  --git a/llvm/utils/benchmark/src/colorprint.h b/llvm/utils/benchmark/src/colorprint.h
deleted file mode 100644
index 9f6fab9b34226..0000000000000
--- a/llvm/utils/benchmark/src/colorprint.h
+++ /dev/null
@@ -1,33 +0,0 @@
-#ifndef BENCHMARK_COLORPRINT_H_
-#define BENCHMARK_COLORPRINT_H_
-
-#include <cstdarg>
-#include <iostream>
-#include <string>
-
-namespace benchmark {
-enum LogColor {
-  COLOR_DEFAULT,
-  COLOR_RED,
-  COLOR_GREEN,
-  COLOR_YELLOW,
-  COLOR_BLUE,
-  COLOR_MAGENTA,
-  COLOR_CYAN,
-  COLOR_WHITE
-};
-
-std::string FormatString(const char* msg, va_list args);
-std::string FormatString(const char* msg, ...);
-
-void ColorPrintf(std::ostream& out, LogColor color, const char* fmt,
-                 va_list args);
-void ColorPrintf(std::ostream& out, LogColor color, const char* fmt, ...);
-
-// Returns true if stdout appears to be a terminal that supports colored
-// output, false otherwise.
-bool IsColorTerminal();
-
-}  // end namespace benchmark
-
-#endif  // BENCHMARK_COLORPRINT_H_

diff  --git a/llvm/utils/benchmark/src/commandlineflags.cc b/llvm/utils/benchmark/src/commandlineflags.cc
deleted file mode 100644
index 2fc92517a32c7..0000000000000
--- a/llvm/utils/benchmark/src/commandlineflags.cc
+++ /dev/null
@@ -1,218 +0,0 @@
-// Copyright 2015 Google Inc. All rights reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-//     http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-#include "commandlineflags.h"
-
-#include <cctype>
-#include <cstdlib>
-#include <cstring>
-#include <iostream>
-#include <limits>
-
-namespace benchmark {
-// Parses 'str' for a 32-bit signed integer.  If successful, writes
-// the result to *value and returns true; otherwise leaves *value
-// unchanged and returns false.
-bool ParseInt32(const std::string& src_text, const char* str, int32_t* value) {
-  // Parses the environment variable as a decimal integer.
-  char* end = nullptr;
-  const long long_value = strtol(str, &end, 10);  // NOLINT
-
-  // Has strtol() consumed all characters in the string?
-  if (*end != '\0') {
-    // No - an invalid character was encountered.
-    std::cerr << src_text << " is expected to be a 32-bit integer, "
-              << "but actually has value \"" << str << "\".\n";
-    return false;
-  }
-
-  // Is the parsed value in the range of an Int32?
-  const int32_t result = static_cast<int32_t>(long_value);
-  if (long_value == std::numeric_limits<long>::max() ||
-      long_value == std::numeric_limits<long>::min() ||
-      // The parsed value overflows as a long.  (strtol() returns
-      // LONG_MAX or LONG_MIN when the input overflows.)
-      result != long_value
-      // The parsed value overflows as an Int32.
-      ) {
-    std::cerr << src_text << " is expected to be a 32-bit integer, "
-              << "but actually has value \"" << str << "\", "
-              << "which overflows.\n";
-    return false;
-  }
-
-  *value = result;
-  return true;
-}
-
-// Parses 'str' for a double.  If successful, writes the result to *value and
-// returns true; otherwise leaves *value unchanged and returns false.
-bool ParseDouble(const std::string& src_text, const char* str, double* value) {
-  // Parses the environment variable as a double.
-  char* end = nullptr;
-  const double double_value = strtod(str, &end);  // NOLINT
-
-  // Has strtod() consumed all characters in the string?
-  if (*end != '\0') {
-    // No - an invalid character was encountered.
-    std::cerr << src_text << " is expected to be a double, "
-              << "but actually has value \"" << str << "\".\n";
-    return false;
-  }
-
-  *value = double_value;
-  return true;
-}
-
-// Returns the name of the environment variable corresponding to the
-// given flag.  For example, FlagToEnvVar("foo") will return
-// "BENCHMARK_FOO" in the open-source version.
-static std::string FlagToEnvVar(const char* flag) {
-  const std::string flag_str(flag);
-
-  std::string env_var;
-  for (size_t i = 0; i != flag_str.length(); ++i)
-    env_var += static_cast<char>(::toupper(flag_str.c_str()[i]));
-
-  return "BENCHMARK_" + env_var;
-}
-
-// Reads and returns the Boolean environment variable corresponding to
-// the given flag; if it's not set, returns default_value.
-//
-// The value is considered true iff it's not "0".
-bool BoolFromEnv(const char* flag, bool default_value) {
-  const std::string env_var = FlagToEnvVar(flag);
-  const char* const string_value = getenv(env_var.c_str());
-  return string_value == nullptr ? default_value
-                                 : strcmp(string_value, "0") != 0;
-}
-
-// Reads and returns a 32-bit integer stored in the environment
-// variable corresponding to the given flag; if it isn't set or
-// doesn't represent a valid 32-bit integer, returns default_value.
-int32_t Int32FromEnv(const char* flag, int32_t default_value) {
-  const std::string env_var = FlagToEnvVar(flag);
-  const char* const string_value = getenv(env_var.c_str());
-  if (string_value == nullptr) {
-    // The environment variable is not set.
-    return default_value;
-  }
-
-  int32_t result = default_value;
-  if (!ParseInt32(std::string("Environment variable ") + env_var, string_value,
-                  &result)) {
-    std::cout << "The default value " << default_value << " is used.\n";
-    return default_value;
-  }
-
-  return result;
-}
-
-// Reads and returns the string environment variable corresponding to
-// the given flag; if it's not set, returns default_value.
-const char* StringFromEnv(const char* flag, const char* default_value) {
-  const std::string env_var = FlagToEnvVar(flag);
-  const char* const value = getenv(env_var.c_str());
-  return value == nullptr ? default_value : value;
-}
-
-// Parses a string as a command line flag.  The string should have
-// the format "--flag=value".  When def_optional is true, the "=value"
-// part can be omitted.
-//
-// Returns the value of the flag, or nullptr if the parsing failed.
-const char* ParseFlagValue(const char* str, const char* flag,
-                           bool def_optional) {
-  // str and flag must not be nullptr.
-  if (str == nullptr || flag == nullptr) return nullptr;
-
-  // The flag must start with "--".
-  const std::string flag_str = std::string("--") + std::string(flag);
-  const size_t flag_len = flag_str.length();
-  if (strncmp(str, flag_str.c_str(), flag_len) != 0) return nullptr;
-
-  // Skips the flag name.
-  const char* flag_end = str + flag_len;
-
-  // When def_optional is true, it's OK to not have a "=value" part.
-  if (def_optional && (flag_end[0] == '\0')) return flag_end;
-
-  // If def_optional is true and there are more characters after the
-  // flag name, or if def_optional is false, there must be a '=' after
-  // the flag name.
-  if (flag_end[0] != '=') return nullptr;
-
-  // Returns the string after "=".
-  return flag_end + 1;
-}
-
-bool ParseBoolFlag(const char* str, const char* flag, bool* value) {
-  // Gets the value of the flag as a string.
-  const char* const value_str = ParseFlagValue(str, flag, true);
-
-  // Aborts if the parsing failed.
-  if (value_str == nullptr) return false;
-
-  // Converts the string value to a bool.
-  *value = IsTruthyFlagValue(value_str);
-  return true;
-}
-
-bool ParseInt32Flag(const char* str, const char* flag, int32_t* value) {
-  // Gets the value of the flag as a string.
-  const char* const value_str = ParseFlagValue(str, flag, false);
-
-  // Aborts if the parsing failed.
-  if (value_str == nullptr) return false;
-
-  // Sets *value to the value of the flag.
-  return ParseInt32(std::string("The value of flag --") + flag, value_str,
-                    value);
-}
-
-bool ParseDoubleFlag(const char* str, const char* flag, double* value) {
-  // Gets the value of the flag as a string.
-  const char* const value_str = ParseFlagValue(str, flag, false);
-
-  // Aborts if the parsing failed.
-  if (value_str == nullptr) return false;
-
-  // Sets *value to the value of the flag.
-  return ParseDouble(std::string("The value of flag --") + flag, value_str,
-                     value);
-}
-
-bool ParseStringFlag(const char* str, const char* flag, std::string* value) {
-  // Gets the value of the flag as a string.
-  const char* const value_str = ParseFlagValue(str, flag, false);
-
-  // Aborts if the parsing failed.
-  if (value_str == nullptr) return false;
-
-  *value = value_str;
-  return true;
-}
-
-bool IsFlag(const char* str, const char* flag) {
-  return (ParseFlagValue(str, flag, true) != nullptr);
-}
-
-bool IsTruthyFlagValue(const std::string& value) {
-  if (value.empty()) return true;
-  char ch = value[0];
-  return isalnum(ch) &&
-         !(ch == '0' || ch == 'f' || ch == 'F' || ch == 'n' || ch == 'N');
-}
-}  // end namespace benchmark
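
The environment fallback above is worth spelling out: flag "foo" is read from BENCHMARK_FOO, and a boolean is true unless the variable is literally "0". A self-contained sketch of that mapping (a reimplementation for illustration, not the removed functions themselves):

    #include <cctype>
    #include <cstdlib>
    #include <cstring>
    #include <iostream>
    #include <string>

    // flag "color" -> environment variable "BENCHMARK_COLOR".
    static std::string FlagToEnvVar(const char* flag) {
      std::string env_var = "BENCHMARK_";
      for (const char* p = flag; *p; ++p)
        env_var += static_cast<char>(std::toupper(static_cast<unsigned char>(*p)));
      return env_var;
    }

    static bool BoolFromEnv(const char* flag, bool default_value) {
      const char* v = std::getenv(FlagToEnvVar(flag).c_str());
      return v == nullptr ? default_value : std::strcmp(v, "0") != 0;
    }

    int main() {
      // BENCHMARK_COLOR=0 ./a.out prints 0; unset or any other value prints 1.
      std::cout << BoolFromEnv("color", true) << "\n";
    }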

diff  --git a/llvm/utils/benchmark/src/commandlineflags.h b/llvm/utils/benchmark/src/commandlineflags.h
deleted file mode 100644
index 945c9a9fc4af3..0000000000000
--- a/llvm/utils/benchmark/src/commandlineflags.h
+++ /dev/null
@@ -1,79 +0,0 @@
-#ifndef BENCHMARK_COMMANDLINEFLAGS_H_
-#define BENCHMARK_COMMANDLINEFLAGS_H_
-
-#include <cstdint>
-#include <string>
-
-// Macro for referencing flags.
-#define FLAG(name) FLAGS_##name
-
-// Macros for declaring flags.
-#define DECLARE_bool(name) extern bool FLAG(name)
-#define DECLARE_int32(name) extern int32_t FLAG(name)
-#define DECLARE_int64(name) extern int64_t FLAG(name)
-#define DECLARE_double(name) extern double FLAG(name)
-#define DECLARE_string(name) extern std::string FLAG(name)
-
-// Macros for defining flags.
-#define DEFINE_bool(name, default_val, doc) bool FLAG(name) = (default_val)
-#define DEFINE_int32(name, default_val, doc) int32_t FLAG(name) = (default_val)
-#define DEFINE_int64(name, default_val, doc) int64_t FLAG(name) = (default_val)
-#define DEFINE_double(name, default_val, doc) double FLAG(name) = (default_val)
-#define DEFINE_string(name, default_val, doc) \
-  std::string FLAG(name) = (default_val)
-
-namespace benchmark {
-// Parses 'str' for a 32-bit signed integer.  If successful, writes the result
-// to *value and returns true; otherwise leaves *value unchanged and returns
-// false.
-bool ParseInt32(const std::string& src_text, const char* str, int32_t* value);
-
-// Parses a bool/Int32/string from the environment variable
-// corresponding to the given flag.
-bool BoolFromEnv(const char* flag, bool default_val);
-int32_t Int32FromEnv(const char* flag, int32_t default_val);
-double DoubleFromEnv(const char* flag, double default_val);
-const char* StringFromEnv(const char* flag, const char* default_val);
-
-// Parses a string for a bool flag, in the form of either
-// "--flag=value" or "--flag".
-//
-// In the former case, the value is taken as true if it passes IsTruthyFlagValue().
-//
-// In the latter case, the value is taken as true.
-//
-// On success, stores the value of the flag in *value, and returns
-// true.  On failure, returns false without changing *value.
-bool ParseBoolFlag(const char* str, const char* flag, bool* value);
-
-// Parses a string for an Int32 flag, in the form of
-// "--flag=value".
-//
-// On success, stores the value of the flag in *value, and returns
-// true.  On failure, returns false without changing *value.
-bool ParseInt32Flag(const char* str, const char* flag, int32_t* value);
-
-// Parses a string for a Double flag, in the form of
-// "--flag=value".
-//
-// On success, stores the value of the flag in *value, and returns
-// true.  On failure, returns false without changing *value.
-bool ParseDoubleFlag(const char* str, const char* flag, double* value);
-
-// Parses a string for a string flag, in the form of
-// "--flag=value".
-//
-// On success, stores the value of the flag in *value, and returns
-// true.  On failure, returns false without changing *value.
-bool ParseStringFlag(const char* str, const char* flag, std::string* value);
-
-// Returns true if the string matches the flag.
-bool IsFlag(const char* str, const char* flag);
-
-// Returns true unless value starts with one of: '0', 'f', 'F', 'n' or 'N', or
-// some non-alphanumeric character. As a special case, also returns true if
-// value is the empty string.
-bool IsTruthyFlagValue(const std::string& value);
-}  // end namespace benchmark
-
-#endif  // BENCHMARK_COMMANDLINEFLAGS_H_
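
To make the "--flag=value" contract above concrete, here is a minimal stand-alone version of the prefix-and-equals parsing (illustrative only; "benchmark_filter" is just an example flag name):

    #include <cstring>
    #include <iostream>
    #include <string>

    static const char* ParseFlagValue(const char* str, const char* flag,
                                      bool def_optional) {
      if (!str || !flag) return nullptr;
      std::string prefix = std::string("--") + flag;
      if (std::strncmp(str, prefix.c_str(), prefix.size()) != 0) return nullptr;
      const char* rest = str + prefix.size();
      if (def_optional && rest[0] == '\0') return rest;  // bare "--flag"
      if (rest[0] != '=') return nullptr;                // require "=value"
      return rest + 1;                                   // text after '='
    }

    int main(int argc, char** argv) {
      for (int i = 1; i < argc; ++i)
        if (const char* v = ParseFlagValue(argv[i], "benchmark_filter", false))
          std::cout << "filter = " << v << "\n";
    }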

diff  --git a/llvm/utils/benchmark/src/complexity.cc b/llvm/utils/benchmark/src/complexity.cc
deleted file mode 100644
index 97cb0a88271d5..0000000000000
--- a/llvm/utils/benchmark/src/complexity.cc
+++ /dev/null
@@ -1,218 +0,0 @@
-// Copyright 2016 Ismael Jimenez Martinez. All rights reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-//     http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-// Source project : https://github.com/ismaelJimenez/cpp.leastsq
-// Adapted to be used with google benchmark
-
-#include "benchmark/benchmark.h"
-
-#include <algorithm>
-#include <cmath>
-#include "check.h"
-#include "complexity.h"
-
-namespace benchmark {
-
-// Internal function to calculate the different scalability forms
-BigOFunc* FittingCurve(BigO complexity) {
-  switch (complexity) {
-    case oN:
-      return [](int64_t n) -> double { return static_cast<double>(n); };
-    case oNSquared:
-      return [](int64_t n) -> double { return std::pow(n, 2); };
-    case oNCubed:
-      return [](int64_t n) -> double { return std::pow(n, 3); };
-    case oLogN:
-      return [](int64_t n) { return log2(n); };
-    case oNLogN:
-      return [](int64_t n) { return n * log2(n); };
-    case o1:
-    default:
-      return [](int64_t) { return 1.0; };
-  }
-}
-
-// Function to return a string for the calculated complexity
-std::string GetBigOString(BigO complexity) {
-  switch (complexity) {
-    case oN:
-      return "N";
-    case oNSquared:
-      return "N^2";
-    case oNCubed:
-      return "N^3";
-    case oLogN:
-      return "lgN";
-    case oNLogN:
-      return "NlgN";
-    case o1:
-      return "(1)";
-    default:
-      return "f(N)";
-  }
-}
-
-// Find the coefficient for the high-order term in the running time, by
-// minimizing the sum of squares of relative error, for the fitting curve
-// given by the lambda expression.
-//   - n             : Vector containing the size of the benchmark tests.
-//   - time          : Vector containing the times for the benchmark tests.
-//   - fitting_curve : lambda expression (e.g. [](int64_t n) {return n; };).
-
-// For a deeper explanation of the algorithm logic, see the README file at
-// http://github.com/ismaelJimenez/Minimal-Cpp-Least-Squared-Fit
-
-LeastSq MinimalLeastSq(const std::vector<int64_t>& n,
-                       const std::vector<double>& time,
-                       BigOFunc* fitting_curve) {
-  double sigma_gn_squared = 0.0;
-  double sigma_time = 0.0;
-  double sigma_time_gn = 0.0;
-
-  // Calculate least square fitting parameter
-  for (size_t i = 0; i < n.size(); ++i) {
-    double gn_i = fitting_curve(n[i]);
-    sigma_gn_squared += gn_i * gn_i;
-    sigma_time += time[i];
-    sigma_time_gn += time[i] * gn_i;
-  }
-
-  LeastSq result;
-  result.complexity = oLambda;
-
-  // Calculate complexity.
-  result.coef = sigma_time_gn / sigma_gn_squared;
-
-  // Calculate RMS
-  double rms = 0.0;
-  for (size_t i = 0; i < n.size(); ++i) {
-    double fit = result.coef * fitting_curve(n[i]);
-    rms += pow((time[i] - fit), 2);
-  }
-
-  // Normalized RMS by the mean of the observed values
-  double mean = sigma_time / n.size();
-  result.rms = sqrt(rms / n.size()) / mean;
-
-  return result;
-}
-
-// Find the coefficient for the high-order term in the running time, by
-// minimizing the sum of squares of relative error.
-//   - n          : Vector containing the size of the benchmark tests.
-//   - time       : Vector containing the times for the benchmark tests.
-//   - complexity : If different from oAuto, the fitting curve will stick to
-//                  this one. If it is oAuto, the best fitting curve will be
-//                  calculated.
-LeastSq MinimalLeastSq(const std::vector<int64_t>& n,
-                       const std::vector<double>& time, const BigO complexity) {
-  CHECK_EQ(n.size(), time.size());
-  CHECK_GE(n.size(), 2);  // Do not compute fitting curve if less than two
-                          // benchmark runs are given
-  CHECK_NE(complexity, oNone);
-
-  LeastSq best_fit;
-
-  if (complexity == oAuto) {
-    std::vector<BigO> fit_curves = {oLogN, oN, oNLogN, oNSquared, oNCubed};
-
-    // Take o1 as default best fitting curve
-    best_fit = MinimalLeastSq(n, time, FittingCurve(o1));
-    best_fit.complexity = o1;
-
-    // Compute all possible fitting curves and stick to the best one
-    for (const auto& fit : fit_curves) {
-      LeastSq current_fit = MinimalLeastSq(n, time, FittingCurve(fit));
-      if (current_fit.rms < best_fit.rms) {
-        best_fit = current_fit;
-        best_fit.complexity = fit;
-      }
-    }
-  } else {
-    best_fit = MinimalLeastSq(n, time, FittingCurve(complexity));
-    best_fit.complexity = complexity;
-  }
-
-  return best_fit;
-}
-
-std::vector<BenchmarkReporter::Run> ComputeBigO(
-    const std::vector<BenchmarkReporter::Run>& reports) {
-  typedef BenchmarkReporter::Run Run;
-  std::vector<Run> results;
-
-  if (reports.size() < 2) return results;
-
-  // Accumulators.
-  std::vector<int64_t> n;
-  std::vector<double> real_time;
-  std::vector<double> cpu_time;
-
-  // Populate the accumulators.
-  for (const Run& run : reports) {
-    CHECK_GT(run.complexity_n, 0) << "Did you forget to call SetComplexityN?";
-    n.push_back(run.complexity_n);
-    real_time.push_back(run.real_accumulated_time / run.iterations);
-    cpu_time.push_back(run.cpu_accumulated_time / run.iterations);
-  }
-
-  LeastSq result_cpu;
-  LeastSq result_real;
-
-  if (reports[0].complexity == oLambda) {
-    result_cpu = MinimalLeastSq(n, cpu_time, reports[0].complexity_lambda);
-    result_real = MinimalLeastSq(n, real_time, reports[0].complexity_lambda);
-  } else {
-    result_cpu = MinimalLeastSq(n, cpu_time, reports[0].complexity);
-    result_real = MinimalLeastSq(n, real_time, result_cpu.complexity);
-  }
-  std::string benchmark_name =
-      reports[0].benchmark_name.substr(0, reports[0].benchmark_name.find('/'));
-
-  // Get the data from the accumulator to BenchmarkReporter::Run's.
-  Run big_o;
-  big_o.benchmark_name = benchmark_name + "_BigO";
-  big_o.iterations = 0;
-  big_o.real_accumulated_time = result_real.coef;
-  big_o.cpu_accumulated_time = result_cpu.coef;
-  big_o.report_big_o = true;
-  big_o.complexity = result_cpu.complexity;
-
-  // All the time results are reported after being multiplied by the
-  // time unit multiplier. But since RMS is a relative quantity it
-  // should not be multiplied at all. So, here, we _divide_ it by the
-  // multiplier so that when it is multiplied later the result is the
-  // correct one.
-  double multiplier = GetTimeUnitMultiplier(reports[0].time_unit);
-
-  // Only add the label if it is the same for all runs
-  Run rms;
-  big_o.report_label = reports[0].report_label;
-  rms.benchmark_name = benchmark_name + "_RMS";
-  rms.report_label = big_o.report_label;
-  rms.iterations = 0;
-  rms.real_accumulated_time = result_real.rms / multiplier;
-  rms.cpu_accumulated_time = result_cpu.rms / multiplier;
-  rms.report_rms = true;
-  rms.complexity = result_cpu.complexity;
-  // don't forget to keep the time unit, or we won't be able to
-  // recover the correct value.
-  rms.time_unit = reports[0].time_unit;
-
-  results.push_back(big_o);
-  results.push_back(rms);
-  return results;
-}
-
-}  // end namespace benchmark
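
The fit above has a closed form: coef = sum(time[i] * g(n[i])) / sum(g(n[i])^2), and the reported RMS is the root-mean-squared residual normalized by the mean observed time. A toy run on perfectly linear data, assuming g(n) = n:

    #include <cmath>
    #include <cstdio>
    #include <vector>

    // Toy version of the fit above with g(n) = n, i.e. testing O(N).
    int main() {
      std::vector<double> n = {1, 2, 4};
      std::vector<double> time = {2.0, 4.0, 8.0};  // perfectly linear data

      double sigma_gn_squared = 0, sigma_time = 0, sigma_time_gn = 0;
      for (size_t i = 0; i < n.size(); ++i) {
        sigma_gn_squared += n[i] * n[i];
        sigma_time += time[i];
        sigma_time_gn += time[i] * n[i];
      }
      double coef = sigma_time_gn / sigma_gn_squared;  // 42 / 21 = 2

      double rms = 0;
      for (size_t i = 0; i < n.size(); ++i) {
        double fit = coef * n[i];
        rms += (time[i] - fit) * (time[i] - fit);
      }
      double mean = sigma_time / n.size();
      std::printf("coef=%g rms=%g\n", coef,
                  std::sqrt(rms / n.size()) / mean);  // coef=2 rms=0
    }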

diff  --git a/llvm/utils/benchmark/src/complexity.h b/llvm/utils/benchmark/src/complexity.h
deleted file mode 100644
index df29b48d29b4e..0000000000000
--- a/llvm/utils/benchmark/src/complexity.h
+++ /dev/null
@@ -1,55 +0,0 @@
-// Copyright 2016 Ismael Jimenez Martinez. All rights reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-//     http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-// Source project : https://github.com/ismaelJimenez/cpp.leastsq
-// Adapted to be used with google benchmark
-
-#ifndef COMPLEXITY_H_
-#define COMPLEXITY_H_
-
-#include <string>
-#include <vector>
-
-#include "benchmark/benchmark.h"
-
-namespace benchmark {
-
-// Return a vector containing the bigO and RMS information for the specified
-// list of reports. If 'reports.size() < 2' an empty vector is returned.
-std::vector<BenchmarkReporter::Run> ComputeBigO(
-    const std::vector<BenchmarkReporter::Run>& reports);
-
-// This data structure will contain the result returned by MinimalLeastSq
-//   - coef        : Estimated coefficient for the high-order term as
-//                   interpolated from data.
-//   - rms         : Normalized Root Mean Squared Error.
-//   - complexity  : Scalability form (e.g. oN, oNLogN). In case a scalability
-//                   form has been provided to MinimalLeastSq this will return
-//                   the same value. In case BigO::oAuto has been selected, this
-//                   parameter will return the best fitting curve detected.
-
-struct LeastSq {
-  LeastSq() : coef(0.0), rms(0.0), complexity(oNone) {}
-
-  double coef;
-  double rms;
-  BigO complexity;
-};
-
-// Function to return a string for the calculated complexity
-std::string GetBigOString(BigO complexity);
-
-}  // end namespace benchmark
-
-#endif  // COMPLEXITY_H_

diff  --git a/llvm/utils/benchmark/src/console_reporter.cc b/llvm/utils/benchmark/src/console_reporter.cc
deleted file mode 100644
index 48920ca782956..0000000000000
--- a/llvm/utils/benchmark/src/console_reporter.cc
+++ /dev/null
@@ -1,182 +0,0 @@
-// Copyright 2015 Google Inc. All rights reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-//     http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-#include "benchmark/benchmark.h"
-#include "complexity.h"
-#include "counter.h"
-
-#include <algorithm>
-#include <cstdint>
-#include <cstdio>
-#include <iostream>
-#include <string>
-#include <tuple>
-#include <vector>
-
-#include "check.h"
-#include "colorprint.h"
-#include "commandlineflags.h"
-#include "internal_macros.h"
-#include "string_util.h"
-#include "timers.h"
-
-namespace benchmark {
-
-bool ConsoleReporter::ReportContext(const Context& context) {
-  name_field_width_ = context.name_field_width;
-  printed_header_ = false;
-  prev_counters_.clear();
-
-  PrintBasicContext(&GetErrorStream(), context);
-
-#ifdef BENCHMARK_OS_WINDOWS
-  if ((output_options_ & OO_Color) && &std::cout != &GetOutputStream()) {
-    GetErrorStream()
-        << "Color printing is only supported for stdout on windows."
-           " Disabling color printing\n";
-    output_options_ = static_cast< OutputOptions >(output_options_ & ~OO_Color);
-  }
-#endif
-
-  return true;
-}
-
-void ConsoleReporter::PrintHeader(const Run& run) {
-  std::string str = FormatString("%-*s %13s %13s %10s", static_cast<int>(name_field_width_),
-                                 "Benchmark", "Time", "CPU", "Iterations");
-  if(!run.counters.empty()) {
-    if(output_options_ & OO_Tabular) {
-      for(auto const& c : run.counters) {
-        str += FormatString(" %10s", c.first.c_str());
-      }
-    } else {
-      str += " UserCounters...";
-    }
-  }
-  str += "\n";
-  std::string line = std::string(str.length(), '-');
-  GetOutputStream() << line << "\n" << str << line << "\n";
-}
-
-void ConsoleReporter::ReportRuns(const std::vector<Run>& reports) {
-  for (const auto& run : reports) {
-    // print the header:
-    // --- if none was printed yet
-    bool print_header = !printed_header_;
-    // --- or if the format is tabular and this run
-    //     has different fields from the prev header
-    print_header |= (output_options_ & OO_Tabular) &&
-                    (!internal::SameNames(run.counters, prev_counters_));
-    if (print_header) {
-      printed_header_ = true;
-      prev_counters_ = run.counters;
-      PrintHeader(run);
-    }
-    // As an alternative to printing the headers like this, we could sort
-    // the benchmarks by header and then print. But this would require
-    // waiting for the full results before printing, or printing twice.
-    PrintRunData(run);
-  }
-}
-
-static void IgnoreColorPrint(std::ostream& out, LogColor, const char* fmt,
-                             ...) {
-  va_list args;
-  va_start(args, fmt);
-  out << FormatString(fmt, args);
-  va_end(args);
-}
-
-void ConsoleReporter::PrintRunData(const Run& result) {
-  typedef void(PrinterFn)(std::ostream&, LogColor, const char*, ...);
-  auto& Out = GetOutputStream();
-  PrinterFn* printer = (output_options_ & OO_Color) ?
-                         (PrinterFn*)ColorPrintf : IgnoreColorPrint;
-  auto name_color =
-      (result.report_big_o || result.report_rms) ? COLOR_BLUE : COLOR_GREEN;
-  printer(Out, name_color, "%-*s ", name_field_width_,
-          result.benchmark_name.c_str());
-
-  if (result.error_occurred) {
-    printer(Out, COLOR_RED, "ERROR OCCURRED: \'%s\'",
-            result.error_message.c_str());
-    printer(Out, COLOR_DEFAULT, "\n");
-    return;
-  }
-  // Format bytes per second
-  std::string rate;
-  if (result.bytes_per_second > 0) {
-    rate = StrCat(" ", HumanReadableNumber(result.bytes_per_second), "B/s");
-  }
-
-  // Format items per second
-  std::string items;
-  if (result.items_per_second > 0) {
-    items =
-        StrCat(" ", HumanReadableNumber(result.items_per_second), " items/s");
-  }
-
-  const double real_time = result.GetAdjustedRealTime();
-  const double cpu_time = result.GetAdjustedCPUTime();
-
-  if (result.report_big_o) {
-    std::string big_o = GetBigOString(result.complexity);
-    printer(Out, COLOR_YELLOW, "%10.2f %s %10.2f %s ", real_time, big_o.c_str(),
-            cpu_time, big_o.c_str());
-  } else if (result.report_rms) {
-    printer(Out, COLOR_YELLOW, "%10.0f %% %10.0f %% ", real_time * 100,
-            cpu_time * 100);
-  } else {
-    const char* timeLabel = GetTimeUnitString(result.time_unit);
-    printer(Out, COLOR_YELLOW, "%10.0f %s %10.0f %s ", real_time, timeLabel,
-            cpu_time, timeLabel);
-  }
-
-  if (!result.report_big_o && !result.report_rms) {
-    printer(Out, COLOR_CYAN, "%10lld", result.iterations);
-  }
-
-  for (auto& c : result.counters) {
-    const std::size_t cNameLen = std::max(std::string::size_type(10),
-                                          c.first.length());
-    auto const& s = HumanReadableNumber(c.second.value, 1000);
-    if (output_options_ & OO_Tabular) {
-      if (c.second.flags & Counter::kIsRate) {
-        printer(Out, COLOR_DEFAULT, " %*s/s", cNameLen - 2, s.c_str());
-      } else {
-        printer(Out, COLOR_DEFAULT, " %*s", cNameLen, s.c_str());
-      }
-    } else {
-      const char* unit = (c.second.flags & Counter::kIsRate) ? "/s" : "";
-      printer(Out, COLOR_DEFAULT, " %s=%s%s", c.first.c_str(), s.c_str(),
-              unit);
-    }
-  }
-
-  if (!rate.empty()) {
-    printer(Out, COLOR_DEFAULT, " %*s", 13, rate.c_str());
-  }
-
-  if (!items.empty()) {
-    printer(Out, COLOR_DEFAULT, " %*s", 18, items.c_str());
-  }
-
-  if (!result.report_label.empty()) {
-    printer(Out, COLOR_DEFAULT, " %s", result.report_label.c_str());
-  }
-
-  printer(Out, COLOR_DEFAULT, "\n");
-}
-
-}  // end namespace benchmark
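
One design point above deserves a note: rather than branching on OO_Color at every call site, PrintRunData picks a printf-style function pointer once and then calls it unconditionally. The same dispatch in miniature (ShoutyPrintf is a hypothetical stand-in for ColorPrintf):

    #include <cstdarg>
    #include <cstdio>

    typedef void(PrinterFn)(const char* fmt, ...);

    static void PlainPrintf(const char* fmt, ...) {
      va_list args;
      va_start(args, fmt);
      vprintf(fmt, args);
      va_end(args);
    }

    // Hypothetical stand-in for ColorPrintf: same signature, different effect.
    static void ShoutyPrintf(const char* fmt, ...) {
      va_list args;
      va_start(args, fmt);
      printf(">> ");
      vprintf(fmt, args);
      va_end(args);
    }

    int main() {
      bool use_color = true;  // would be (output_options_ & OO_Color)
      PrinterFn* printer = use_color ? ShoutyPrintf : PlainPrintf;
      printer("%-20s %10d\n", "BM_example", 1000);
    }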

diff  --git a/llvm/utils/benchmark/src/counter.cc b/llvm/utils/benchmark/src/counter.cc
deleted file mode 100644
index ed1aa044ee79f..0000000000000
--- a/llvm/utils/benchmark/src/counter.cc
+++ /dev/null
@@ -1,68 +0,0 @@
-// Copyright 2015 Google Inc. All rights reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-//     http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-#include "counter.h"
-
-namespace benchmark {
-namespace internal {
-
-double Finish(Counter const& c, double cpu_time, double num_threads) {
-  double v = c.value;
-  if (c.flags & Counter::kIsRate) {
-    v /= cpu_time;
-  }
-  if (c.flags & Counter::kAvgThreads) {
-    v /= num_threads;
-  }
-  return v;
-}
-
-void Finish(UserCounters *l, double cpu_time, double num_threads) {
-  for (auto &c : *l) {
-    c.second.value = Finish(c.second, cpu_time, num_threads);
-  }
-}
-
-void Increment(UserCounters *l, UserCounters const& r) {
-  // add counters present in both or just in *l
-  for (auto &c : *l) {
-    auto it = r.find(c.first);
-    if (it != r.end()) {
-      c.second.value = c.second + it->second;
-    }
-  }
-  // add counters present in r, but not in *l
-  for (auto const &tc : r) {
-    auto it = l->find(tc.first);
-    if (it == l->end()) {
-      (*l)[tc.first] = tc.second;
-    }
-  }
-}
-
-bool SameNames(UserCounters const& l, UserCounters const& r) {
-  if (&l == &r) return true;
-  if (l.size() != r.size()) {
-    return false;
-  }
-  for (auto const& c : l) {
-    if (r.find(c.first) == r.end()) {
-      return false;
-    }
-  }
-  return true;
-}
-
-} // end namespace internal
-} // end namespace benchmark
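
Concretely, Finish() above turns raw counter values into rates and per-thread averages. A worked example with both flags set:

    #include <cstdio>

    int main() {
      // A counter published as 1000 with flags kIsRate | kAvgThreads, after a
      // 2-second run on 4 threads:
      double v = 1000.0;
      v /= 2.0;  // kIsRate:     1000 / 2s = 500 per second
      v /= 4.0;  // kAvgThreads:  500 / 4  = 125 per second per thread
      std::printf("%g\n", v);  // 125
    }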

diff  --git a/llvm/utils/benchmark/src/counter.h b/llvm/utils/benchmark/src/counter.h
deleted file mode 100644
index dd6865a31d766..0000000000000
--- a/llvm/utils/benchmark/src/counter.h
+++ /dev/null
@@ -1,26 +0,0 @@
-// Copyright 2015 Google Inc. All rights reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-//     http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-#include "benchmark/benchmark.h"
-
-namespace benchmark {
-
-// These counter-related functions are hidden to reduce the API surface.
-namespace internal {
-void Finish(UserCounters *l, double time, double num_threads);
-void Increment(UserCounters *l, UserCounters const& r);
-bool SameNames(UserCounters const& l, UserCounters const& r);
-} // end namespace internal
-
-} //end namespace benchmark

diff  --git a/llvm/utils/benchmark/src/csv_reporter.cc b/llvm/utils/benchmark/src/csv_reporter.cc
deleted file mode 100644
index 35510645b084e..0000000000000
--- a/llvm/utils/benchmark/src/csv_reporter.cc
+++ /dev/null
@@ -1,149 +0,0 @@
-// Copyright 2015 Google Inc. All rights reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-//     http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-#include "benchmark/benchmark.h"
-#include "complexity.h"
-
-#include <algorithm>
-#include <cstdint>
-#include <iostream>
-#include <string>
-#include <tuple>
-#include <vector>
-
-#include "string_util.h"
-#include "timers.h"
-#include "check.h"
-
-// File format reference: http://edoceo.com/utilitas/csv-file-format.
-
-namespace benchmark {
-
-namespace {
-std::vector<std::string> elements = {
-    "name",           "iterations",       "real_time",        "cpu_time",
-    "time_unit",      "bytes_per_second", "items_per_second", "label",
-    "error_occurred", "error_message"};
-}  // namespace
-
-bool CSVReporter::ReportContext(const Context& context) {
-  PrintBasicContext(&GetErrorStream(), context);
-  return true;
-}
-
-void CSVReporter::ReportRuns(const std::vector<Run> & reports) {
-  std::ostream& Out = GetOutputStream();
-
-  if (!printed_header_) {
-    // save the names of all the user counters
-    for (const auto& run : reports) {
-      for (const auto& cnt : run.counters) {
-        user_counter_names_.insert(cnt.first);
-      }
-    }
-
-    // print the header
-    for (auto B = elements.begin(); B != elements.end();) {
-      Out << *B++;
-      if (B != elements.end()) Out << ",";
-    }
-    for (auto B = user_counter_names_.begin(); B != user_counter_names_.end();) {
-      Out << ",\"" << *B++ << "\"";
-    }
-    Out << "\n";
-
-    printed_header_ = true;
-  } else {
-    // check that all the current counters are saved in the name set
-    for (const auto& run : reports) {
-      for (const auto& cnt : run.counters) {
-        CHECK(user_counter_names_.find(cnt.first) != user_counter_names_.end())
-              << "All counters must be present in each run. "
-              << "Counter named \"" << cnt.first
-              << "\" was not in a run after being added to the header";
-      }
-    }
-  }
-
-  // print results for each run
-  for (const auto& run : reports) {
-    PrintRunData(run);
-  }
-
-}
-
-void CSVReporter::PrintRunData(const Run & run) {
-  std::ostream& Out = GetOutputStream();
-
-  // Fields with embedded double-quote characters must have those characters
-  // doubled, and the field must be delimited with double-quotes.
-  std::string name = run.benchmark_name;
-  ReplaceAll(&name, "\"", "\"\"");
-  Out << '"' << name << "\",";
-  if (run.error_occurred) {
-    Out << std::string(elements.size() - 3, ',');
-    Out << "true,";
-    std::string msg = run.error_message;
-    ReplaceAll(&msg, "\"", "\"\"");
-    Out << '"' << msg << "\"\n";
-    return;
-  }
-
-  // Do not print iteration on bigO and RMS report
-  if (!run.report_big_o && !run.report_rms) {
-    Out << run.iterations;
-  }
-  Out << ",";
-
-  Out << run.GetAdjustedRealTime() << ",";
-  Out << run.GetAdjustedCPUTime() << ",";
-
-  // Do not print timeLabel on bigO and RMS report
-  if (run.report_big_o) {
-    Out << GetBigOString(run.complexity);
-  } else if (!run.report_rms) {
-    Out << GetTimeUnitString(run.time_unit);
-  }
-  Out << ",";
-
-  if (run.bytes_per_second > 0.0) {
-    Out << run.bytes_per_second;
-  }
-  Out << ",";
-  if (run.items_per_second > 0.0) {
-    Out << run.items_per_second;
-  }
-  Out << ",";
-  if (!run.report_label.empty()) {
-    // Fields with embedded double-quote characters must have those characters
-    // doubled, and the field must be delimited with double-quotes.
-    std::string label = run.report_label;
-    ReplaceAll(&label, "\"", "\"\"");
-    Out << "\"" << label << "\"";
-  }
-  Out << ",,";  // for error_occurred and error_message
-
-  // Print user counters
-  for (const auto &ucn : user_counter_names_) {
-    auto it = run.counters.find(ucn);
-    if(it == run.counters.end()) {
-      Out << ",";
-    } else {
-      Out << "," << it->second;
-    }
-  }
-  Out << '\n';
-}
-
-}  // end namespace benchmark
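
The quoting rule above (double every embedded double-quote, then wrap the field) can be illustrated stand-alone; CsvField below is a hypothetical helper, not part of the removed reporter:

    #include <iostream>
    #include <string>

    // Double each embedded '"' and wrap the whole field in double-quotes.
    static std::string CsvField(std::string s) {
      for (std::string::size_type pos = 0;
           (pos = s.find('"', pos)) != std::string::npos; pos += 2)
        s.replace(pos, 1, "\"\"");
      return "\"" + s + "\"";
    }

    int main() {
      // Prints: "BM_parse/""fast"""
      std::cout << CsvField("BM_parse/\"fast\"") << "\n";
    }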

diff  --git a/llvm/utils/benchmark/src/cycleclock.h b/llvm/utils/benchmark/src/cycleclock.h
deleted file mode 100644
index 040ec22c20a96..0000000000000
--- a/llvm/utils/benchmark/src/cycleclock.h
+++ /dev/null
@@ -1,206 +0,0 @@
-// ----------------------------------------------------------------------
-// CycleClock
-//    A CycleClock tells you the current time in Cycles.  The "time"
-//    is actually time since power-on.  This is like time() but doesn't
-//    involve a system call and is much more precise.
-//
-// NOTE: Not all cpu/platform/kernel combinations guarantee that this
-// clock increments at a constant rate or is synchronized across all logical
-// cpus in a system.
-//
-// If you need the above guarantees, please consider using a different
-// API. There are efforts to provide an interface with millisecond
-// granularity, implemented as a memory read. A memory read is generally
-// cheaper than the CycleClock for many architectures.
-//
-// Also, in some out of order CPU implementations, the CycleClock is not
-// serializing. So if you're trying to count at cycles granularity, your
-// data might be inaccurate due to out of order instruction execution.
-// ----------------------------------------------------------------------
-
-#ifndef BENCHMARK_CYCLECLOCK_H_
-#define BENCHMARK_CYCLECLOCK_H_
-
-#include <cstdint>
-
-#include "benchmark/benchmark.h"
-#include "internal_macros.h"
-
-#if defined(BENCHMARK_OS_MACOSX)
-#include <mach/mach_time.h>
-#endif
-// For MSVC, we want to use '_asm rdtsc' when possible (since it works
-// with even ancient MSVC compilers), and when not possible the
-// __rdtsc intrinsic, declared in <intrin.h>.  Unfortunately, in some
-// environments, <windows.h> and <intrin.h> have conflicting
-// declarations of some other intrinsics, breaking compilation.
-// Therefore, we simply declare __rdtsc ourselves. See also
-// http://connect.microsoft.com/VisualStudio/feedback/details/262047
-#if defined(COMPILER_MSVC) && !defined(_M_IX86)
-extern "C" uint64_t __rdtsc();
-#pragma intrinsic(__rdtsc)
-#endif
-
-#if !defined(BENCHMARK_OS_WINDOWS) || defined(BENCHMARK_OS_MINGW)
-#include <sys/time.h>
-#include <time.h>
-#endif
-
-#ifdef BENCHMARK_OS_EMSCRIPTEN
-#include <emscripten.h>
-#endif
-
-namespace benchmark {
-// NOTE: only i386 and x86_64 have been well tested.
-// PPC, sparc, alpha, and ia64 are based on
-//    http://peter.kuscsik.com/wordpress/?p=14
-// with modifications by m3b.  See also
-//    https://setisvn.ssl.berkeley.edu/svn/lib/fftw-3.0.1/kernel/cycle.h
-namespace cycleclock {
-// This should return the number of cycles since power-on.  Thread-safe.
-inline BENCHMARK_ALWAYS_INLINE int64_t Now() {
-#if defined(BENCHMARK_OS_MACOSX)
-  // this goes at the top because we need ALL Macs, regardless of
-  // architecture, to return the number of "mach time units" that
-  // have passed since startup.  See sysinfo.cc where
-  // InitializeSystemInfo() sets the supposed cpu clock frequency of
-  // macs to the number of mach time units per second, not actual
-  // CPU clock frequency (which can change in the face of CPU
-  // frequency scaling).  Also note that when the Mac sleeps, this
-  // counter pauses; it does not continue counting, nor does it
-  // reset to zero.
-  return mach_absolute_time();
-#elif defined(BENCHMARK_OS_EMSCRIPTEN)
-  // this goes above x86-specific code because old versions of Emscripten
-  // define __x86_64__, although they have nothing to do with it.
-  return static_cast<int64_t>(emscripten_get_now() * 1e+6);
-#elif defined(__i386__)
-  int64_t ret;
-  __asm__ volatile("rdtsc" : "=A"(ret));
-  return ret;
-#elif defined(__x86_64__) || defined(__amd64__)
-  uint64_t low, high;
-  __asm__ volatile("rdtsc" : "=a"(low), "=d"(high));
-  return (high << 32) | low;
-#elif defined(__powerpc__) || defined(__ppc__)
-  // This returns a time-base, which is not always precisely a cycle-count.
-#if defined(__powerpc64__) || defined(__ppc64__)
-  int64_t tb;
-  asm volatile("mfspr %0, 268" : "=r"(tb));
-  return tb;
-#else
-  uint32_t tbl, tbu0, tbu1;
-  asm volatile(
-      "mftbu %0\n"
-      "mftb %1\n"
-      "mftbu %2"
-      : "=r"(tbu0), "=r"(tbl), "=r"(tbu1));
-  tbl &= -static_cast<int32_t>(tbu0 == tbu1);
-  // high 32 bits in tbu1; low 32 bits in tbl  (tbu0 is no longer needed)
-  return (static_cast<uint64_t>(tbu1) << 32) | tbl;
-#endif
-#elif defined(__sparc__)
-  int64_t tick;
-  asm(".byte 0x83, 0x41, 0x00, 0x00");
-  asm("mov   %%g1, %0" : "=r"(tick));
-  return tick;
-#elif defined(__ia64__)
-  int64_t itc;
-  asm("mov %0 = ar.itc" : "=r"(itc));
-  return itc;
-#elif defined(COMPILER_MSVC) && defined(_M_IX86)
-  // Older MSVC compilers (like 7.x) don't seem to support the
-  // __rdtsc intrinsic properly, so I prefer to use _asm instead
-  // when I know it will work.  Otherwise, I'll use __rdtsc and hope
-  // the code is being compiled with a non-ancient compiler.
-  _asm rdtsc
-#elif defined(COMPILER_MSVC)
-  return __rdtsc();
-#elif defined(BENCHMARK_OS_NACL)
-  // Native Client validator on x86/x86-64 allows RDTSC instructions,
-  // and this case is handled above. Native Client validator on ARM
-  // rejects MRC instructions (used in the ARM-specific sequence below),
-  // so we handle it here. Portable Native Client compiles to
-  // architecture-agnostic bytecode, which doesn't provide any
-  // cycle counter access mnemonics.
-
-  // Native Client does not provide any API to access cycle counter.
-  // Use clock_gettime(CLOCK_MONOTONIC, ...) instead of gettimeofday
-  // because it provides nanosecond resolution (which is noticeable at
-  // least for PNaCl modules running on x86 Mac & Linux).
-  // Initialize to always return 0 if clock_gettime fails.
-  struct timespec ts = { 0, 0 };
-  clock_gettime(CLOCK_MONOTONIC, &ts);
-  return static_cast<int64_t>(ts.tv_sec) * 1000000000 + ts.tv_nsec;
-#elif defined(__aarch64__)
-  // System timer of ARMv8 runs at a different frequency than the CPU's.
-  // The frequency is fixed, typically in the range 1-50MHz.  It can be
-  // read at CNTFRQ special register.  We assume the OS has set up
-  // the virtual timer properly.
-  int64_t virtual_timer_value;
-  asm volatile("mrs %0, cntvct_el0" : "=r"(virtual_timer_value));
-  return virtual_timer_value;
-#elif defined(__ARM_ARCH)
-  // V6 is the earliest arch that has a standard cyclecount
-  // Native Client validator doesn't allow MRC instructions.
-#if (__ARM_ARCH >= 6)
-  uint32_t pmccntr;
-  uint32_t pmuseren;
-  uint32_t pmcntenset;
-  // Read the user mode perf monitor counter access permissions.
-  asm volatile("mrc p15, 0, %0, c9, c14, 0" : "=r"(pmuseren));
-  if (pmuseren & 1) {  // Allows reading perfmon counters for user mode code.
-    asm volatile("mrc p15, 0, %0, c9, c12, 1" : "=r"(pmcntenset));
-    if (pmcntenset & 0x80000000ul) {  // Is it counting?
-      asm volatile("mrc p15, 0, %0, c9, c13, 0" : "=r"(pmccntr));
-      // The counter is set up to count every 64th cycle
-      return static_cast<int64_t>(pmccntr) * 64;  // Should optimize to << 6
-    }
-  }
-#endif
-  struct timeval tv;
-  gettimeofday(&tv, nullptr);
-  return static_cast<int64_t>(tv.tv_sec) * 1000000 + tv.tv_usec;
-#elif defined(__mips__) || defined(__m68k__)
-  // mips apparently only allows rdtsc for superusers, so we fall
-  // back to gettimeofday.  It's possible clock_gettime would be better.
-  struct timeval tv;
-  gettimeofday(&tv, nullptr);
-  return static_cast<int64_t>(tv.tv_sec) * 1000000 + tv.tv_usec;
-#elif defined(__s390__) // Covers both s390 and s390x.
-  // Return the CPU clock.
-  uint64_t tsc;
-  asm("stck %0" : "=Q" (tsc) : : "cc");
-  return tsc;
-#elif defined(__riscv) // RISC-V
-  // Use RDCYCLE (and RDCYCLEH on riscv32)
-#if __riscv_xlen == 32
-  uint32_t cycles_lo, cycles_hi0, cycles_hi1;
-  // This asm also includes the PowerPC overflow handling strategy, as above.
-  // Implemented in assembly because Clang insisted on branching.
-  asm volatile(
-      "rdcycleh %0\n"
-      "rdcycle %1\n"
-      "rdcycleh %2\n"
-      "sub %0, %0, %2\n"
-      "seqz %0, %0\n"
-      "sub %0, zero, %0\n"
-      "and %1, %1, %0\n"
-      : "=r"(cycles_hi0), "=r"(cycles_lo), "=r"(cycles_hi1));
-  return (static_cast<uint64_t>(cycles_hi1) << 32) | cycles_lo;
-#else
-  uint64_t cycles;
-  asm volatile("rdcycle %0" : "=r"(cycles));
-  return cycles;
-#endif
-#else
-// The soft failover to a generic implementation is automatic only for ARM.
-// For other platforms the developer is expected to make an attempt to create
-// a fast implementation and use a generic version if nothing better is available.
-#error You need to define CycleTimer for your OS and CPU
-#endif
-}
-}  // end namespace cycleclock
-}  // end namespace benchmark
-
-#endif  // BENCHMARK_CYCLECLOCK_H_
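
As a usage illustration only (x86-64 with GCC or Clang assumed; other targets need the per-architecture sequences above), here is the kind of delta measurement this header enabled. Deltas are TSC ticks, not wall time, and the header's caveats about frequency scaling and out-of-order execution apply:

    #include <cstdint>
    #include <cstdio>

    static inline int64_t Now() {  // same sequence as the x86-64 branch above
      uint64_t low, high;
      __asm__ volatile("rdtsc" : "=a"(low), "=d"(high));
      return static_cast<int64_t>((high << 32) | low);
    }

    int main() {
      int64_t start = Now();
      volatile double x = 0;
      for (int i = 0; i < 1000000; ++i) x += i;  // the region being timed
      std::printf("~%lld ticks\n", static_cast<long long>(Now() - start));
    }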

diff  --git a/llvm/utils/benchmark/src/internal_macros.h b/llvm/utils/benchmark/src/internal_macros.h
deleted file mode 100644
index f2d54bfcbd9dd..0000000000000
--- a/llvm/utils/benchmark/src/internal_macros.h
+++ /dev/null
@@ -1,81 +0,0 @@
-#ifndef BENCHMARK_INTERNAL_MACROS_H_
-#define BENCHMARK_INTERNAL_MACROS_H_
-
-#include "benchmark/benchmark.h"
-
-#ifndef __has_feature
-#define __has_feature(x) 0
-#endif
-
-#if defined(__clang__)
-  #if !defined(COMPILER_CLANG)
-    #define COMPILER_CLANG
-  #endif
-#elif defined(_MSC_VER)
-  #if !defined(COMPILER_MSVC)
-    #define COMPILER_MSVC
-  #endif
-#elif defined(__GNUC__)
-  #if !defined(COMPILER_GCC)
-    #define COMPILER_GCC
-  #endif
-#endif
-
-#if __has_feature(cxx_attributes)
-  #define BENCHMARK_NORETURN [[noreturn]]
-#elif defined(__GNUC__)
-  #define BENCHMARK_NORETURN __attribute__((noreturn))
-#elif defined(COMPILER_MSVC)
-  #define BENCHMARK_NORETURN __declspec(noreturn)
-#else
-  #define BENCHMARK_NORETURN
-#endif
-
-#if defined(__CYGWIN__)
-  #define BENCHMARK_OS_CYGWIN 1
-#elif defined(_WIN32)
-  #define BENCHMARK_OS_WINDOWS 1
-  #if defined(__MINGW32__)
-    #define BENCHMARK_OS_MINGW 1
-  #endif
-#elif defined(__APPLE__)
-  #define BENCHMARK_OS_APPLE 1
-  #include "TargetConditionals.h"
-  #if defined(TARGET_OS_MAC)
-    #define BENCHMARK_OS_MACOSX 1
-    #if defined(TARGET_OS_IPHONE)
-      #define BENCHMARK_OS_IOS 1
-    #endif
-  #endif
-#elif defined(__FreeBSD__)
-  #define BENCHMARK_OS_FREEBSD 1
-#elif defined(__NetBSD__)
-  #define BENCHMARK_OS_NETBSD 1
-#elif defined(__OpenBSD__)
-  #define BENCHMARK_OS_OPENBSD 1
-#elif defined(__linux__)
-  #define BENCHMARK_OS_LINUX 1
-#elif defined(__native_client__)
-  #define BENCHMARK_OS_NACL 1
-#elif defined(__EMSCRIPTEN__)
-  #define BENCHMARK_OS_EMSCRIPTEN 1
-#elif defined(__rtems__)
-  #define BENCHMARK_OS_RTEMS 1
-#elif defined(__Fuchsia__)
-#define BENCHMARK_OS_FUCHSIA 1
-#elif defined (__SVR4) && defined (__sun)
-#define BENCHMARK_OS_SOLARIS 1
-#endif
-
-#if !__has_feature(cxx_exceptions) && !defined(__cpp_exceptions) \
-     && !defined(__EXCEPTIONS)
-  #define BENCHMARK_HAS_NO_EXCEPTIONS
-#endif
-
-#if defined(COMPILER_CLANG) || defined(COMPILER_GCC)
-  #define BENCHMARK_MAYBE_UNUSED __attribute__((unused))
-#else
-  #define BENCHMARK_MAYBE_UNUSED
-#endif
-
-#endif  // BENCHMARK_INTERNAL_MACROS_H_

diff  --git a/llvm/utils/benchmark/src/json_reporter.cc b/llvm/utils/benchmark/src/json_reporter.cc
deleted file mode 100644
index 685d6b097dcf1..0000000000000
--- a/llvm/utils/benchmark/src/json_reporter.cc
+++ /dev/null
@@ -1,205 +0,0 @@
-// Copyright 2015 Google Inc. All rights reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-//     http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-#include "benchmark/benchmark.h"
-#include "complexity.h"
-
-#include <algorithm>
-#include <cstdint>
-#include <iostream>
-#include <string>
-#include <tuple>
-#include <vector>
-#include <iomanip> // for setprecision
-#include <limits>
-
-#include "string_util.h"
-#include "timers.h"
-
-namespace benchmark {
-
-namespace {
-
-std::string FormatKV(std::string const& key, std::string const& value) {
-  return StrFormat("\"%s\": \"%s\"", key.c_str(), value.c_str());
-}
-
-std::string FormatKV(std::string const& key, const char* value) {
-  return StrFormat("\"%s\": \"%s\"", key.c_str(), value);
-}
-
-std::string FormatKV(std::string const& key, bool value) {
-  return StrFormat("\"%s\": %s", key.c_str(), value ? "true" : "false");
-}
-
-std::string FormatKV(std::string const& key, int64_t value) {
-  std::stringstream ss;
-  ss << '"' << key << "\": " << value;
-  return ss.str();
-}
-
-std::string FormatKV(std::string const& key, double value) {
-  std::stringstream ss;
-  ss << '"' << key << "\": ";
-
-  const auto max_digits10 = std::numeric_limits<decltype (value)>::max_digits10;
-  const auto max_fractional_digits10 = max_digits10 - 1;
-
-  ss << std::scientific << std::setprecision(max_fractional_digits10) << value;
-  return ss.str();
-}
-
-int64_t RoundDouble(double v) { return static_cast<int64_t>(v + 0.5); }
-
-}  // end namespace
-
-bool JSONReporter::ReportContext(const Context& context) {
-  std::ostream& out = GetOutputStream();
-
-  out << "{\n";
-  std::string inner_indent(2, ' ');
-
-  // Open context block and print context information.
-  out << inner_indent << "\"context\": {\n";
-  std::string indent(4, ' ');
-
-  std::string walltime_value = LocalDateTimeString();
-  out << indent << FormatKV("date", walltime_value) << ",\n";
-
-  if (Context::executable_name) {
-    out << indent << FormatKV("executable", Context::executable_name) << ",\n";
-  }
-
-  CPUInfo const& info = context.cpu_info;
-  out << indent << FormatKV("num_cpus", static_cast<int64_t>(info.num_cpus))
-      << ",\n";
-  out << indent
-      << FormatKV("mhz_per_cpu",
-                  RoundDouble(info.cycles_per_second / 1000000.0))
-      << ",\n";
-  out << indent << FormatKV("cpu_scaling_enabled", info.scaling_enabled)
-      << ",\n";
-
-  out << indent << "\"caches\": [\n";
-  indent = std::string(6, ' ');
-  std::string cache_indent(8, ' ');
-  for (size_t i = 0; i < info.caches.size(); ++i) {
-    auto& CI = info.caches[i];
-    out << indent << "{\n";
-    out << cache_indent << FormatKV("type", CI.type) << ",\n";
-    out << cache_indent << FormatKV("level", static_cast<int64_t>(CI.level))
-        << ",\n";
-    out << cache_indent
-        << FormatKV("size", static_cast<int64_t>(CI.size) * 1000u) << ",\n";
-    out << cache_indent
-        << FormatKV("num_sharing", static_cast<int64_t>(CI.num_sharing))
-        << "\n";
-    out << indent << "}";
-    if (i != info.caches.size() - 1) out << ",";
-    out << "\n";
-  }
-  indent = std::string(4, ' ');
-  out << indent << "],\n";
-
-#if defined(NDEBUG)
-  const char build_type[] = "release";
-#else
-  const char build_type[] = "debug";
-#endif
-  out << indent << FormatKV("library_build_type", build_type) << "\n";
-  // Close context block and open the list of benchmarks.
-  out << inner_indent << "},\n";
-  out << inner_indent << "\"benchmarks\": [\n";
-  return true;
-}
-
-void JSONReporter::ReportRuns(std::vector<Run> const& reports) {
-  if (reports.empty()) {
-    return;
-  }
-  std::string indent(4, ' ');
-  std::ostream& out = GetOutputStream();
-  if (!first_report_) {
-    out << ",\n";
-  }
-  first_report_ = false;
-
-  for (auto it = reports.begin(); it != reports.end(); ++it) {
-    out << indent << "{\n";
-    PrintRunData(*it);
-    out << indent << '}';
-    auto it_cp = it;
-    if (++it_cp != reports.end()) {
-      out << ",\n";
-    }
-  }
-}
-
-void JSONReporter::Finalize() {
-  // Close the list of benchmarks and the top level object.
-  GetOutputStream() << "\n  ]\n}\n";
-}
-
-void JSONReporter::PrintRunData(Run const& run) {
-  std::string indent(6, ' ');
-  std::ostream& out = GetOutputStream();
-  out << indent << FormatKV("name", run.benchmark_name) << ",\n";
-  if (run.error_occurred) {
-    out << indent << FormatKV("error_occurred", run.error_occurred) << ",\n";
-    out << indent << FormatKV("error_message", run.error_message) << ",\n";
-  }
-  if (!run.report_big_o && !run.report_rms) {
-    out << indent << FormatKV("iterations", run.iterations) << ",\n";
-    out << indent
-        << FormatKV("real_time", run.GetAdjustedRealTime())
-        << ",\n";
-    out << indent
-        << FormatKV("cpu_time", run.GetAdjustedCPUTime());
-    out << ",\n"
-        << indent << FormatKV("time_unit", GetTimeUnitString(run.time_unit));
-  } else if (run.report_big_o) {
-    out << indent
-        << FormatKV("cpu_coefficient", run.GetAdjustedCPUTime())
-        << ",\n";
-    out << indent
-        << FormatKV("real_coefficient", run.GetAdjustedRealTime())
-        << ",\n";
-    out << indent << FormatKV("big_o", GetBigOString(run.complexity)) << ",\n";
-    out << indent << FormatKV("time_unit", GetTimeUnitString(run.time_unit));
-  } else if (run.report_rms) {
-    out << indent
-        << FormatKV("rms", run.GetAdjustedCPUTime());
-  }
-  if (run.bytes_per_second > 0.0) {
-    out << ",\n"
-        << indent
-        << FormatKV("bytes_per_second", run.bytes_per_second);
-  }
-  if (run.items_per_second > 0.0) {
-    out << ",\n"
-        << indent
-        << FormatKV("items_per_second", run.items_per_second);
-  }
-  for(auto &c : run.counters) {
-    out << ",\n"
-        << indent
-        << FormatKV(c.first, c.second);
-  }
-  if (!run.report_label.empty()) {
-    out << ",\n" << indent << FormatKV("label", run.report_label);
-  }
-  out << '\n';
-}
-
-} // end namespace benchmark
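
The precision choice in FormatKV(double) above is deliberate: max_digits10 significant digits are enough to round-trip a double through text, and scientific notation with max_digits10 - 1 fractional digits yields exactly that many significant digits. For instance:

    #include <iomanip>
    #include <iostream>
    #include <limits>

    int main() {
      const double v = 1.0 / 3.0;
      const int max_digits10 = std::numeric_limits<double>::max_digits10;  // 17
      // One digit before the point plus 16 fractional digits = 17 significant
      // digits, enough to recover the exact double on parse.
      std::cout << std::scientific << std::setprecision(max_digits10 - 1) << v
                << "\n";  // 3.3333333333333331e-01
    }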

diff  --git a/llvm/utils/benchmark/src/log.h b/llvm/utils/benchmark/src/log.h
deleted file mode 100644
index d06e1031db141..0000000000000
--- a/llvm/utils/benchmark/src/log.h
+++ /dev/null
@@ -1,73 +0,0 @@
-#ifndef BENCHMARK_LOG_H_
-#define BENCHMARK_LOG_H_
-
-#include <iostream>
-#include <ostream>
-
-#include "benchmark/benchmark.h"
-
-namespace benchmark {
-namespace internal {
-
-typedef std::basic_ostream<char>&(EndLType)(std::basic_ostream<char>&);
-
-class LogType {
-  friend LogType& GetNullLogInstance();
-  friend LogType& GetErrorLogInstance();
-
-  // FIXME: Add locking to output.
-  template <class Tp>
-  friend LogType& operator<<(LogType&, Tp const&);
-  friend LogType& operator<<(LogType&, EndLType*);
-
- private:
-  LogType(std::ostream* out) : out_(out) {}
-  std::ostream* out_;
-  BENCHMARK_DISALLOW_COPY_AND_ASSIGN(LogType);
-};
-
-template <class Tp>
-LogType& operator<<(LogType& log, Tp const& value) {
-  if (log.out_) {
-    *log.out_ << value;
-  }
-  return log;
-}
-
-inline LogType& operator<<(LogType& log, EndLType* m) {
-  if (log.out_) {
-    *log.out_ << m;
-  }
-  return log;
-}
-
-inline int& LogLevel() {
-  static int log_level = 0;
-  return log_level;
-}
-
-inline LogType& GetNullLogInstance() {
-  static LogType log(nullptr);
-  return log;
-}
-
-inline LogType& GetErrorLogInstance() {
-  static LogType log(&std::clog);
-  return log;
-}
-
-inline LogType& GetLogInstanceForLevel(int level) {
-  if (level <= LogLevel()) {
-    return GetErrorLogInstance();
-  }
-  return GetNullLogInstance();
-}
-
-}  // end namespace internal
-}  // end namespace benchmark
-
-#define VLOG(x)                                                               \
-  (::benchmark::internal::GetLogInstanceForLevel(x) << "-- LOG(" << x << "):" \
-                                                                         " ")
-
-#endif
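
The VLOG dispatch above is small enough to restate: levels at or below LogLevel() reach std::clog, everything else goes to a stream with a null buffer, whose inserters are no-ops. A stand-alone sketch of that idea (not the removed header itself):

    #include <iostream>
    #include <ostream>

    static int& LogLevel() { static int level = 0; return level; }

    static std::ostream& LogFor(int level) {
      // An ostream constructed with a null streambuf starts in badbit state,
      // so every operator<< on it is a no-op.
      static std::ostream null_log(nullptr);
      return level <= LogLevel() ? std::clog : null_log;
    }

    #define VLOG(x) (LogFor(x) << "-- LOG(" << (x) << "): ")

    int main() {
      LogLevel() = 2;
      VLOG(1) << "printed\n";     // 1 <= 2 -> std::clog
      VLOG(3) << "suppressed\n";  // 3 >  2 -> null sink
    }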

diff  --git a/llvm/utils/benchmark/src/mutex.h b/llvm/utils/benchmark/src/mutex.h
deleted file mode 100644
index 5f461d05a0c64..0000000000000
--- a/llvm/utils/benchmark/src/mutex.h
+++ /dev/null
@@ -1,155 +0,0 @@
-#ifndef BENCHMARK_MUTEX_H_
-#define BENCHMARK_MUTEX_H_
-
-#include <condition_variable>
-#include <mutex>
-
-#include "check.h"
-
-// Enable thread safety attributes only with clang.
-// The attributes can be safely erased when compiling with other compilers.
-#if defined(HAVE_THREAD_SAFETY_ATTRIBUTES)
-#define THREAD_ANNOTATION_ATTRIBUTE__(x) __attribute__((x))
-#else
-#define THREAD_ANNOTATION_ATTRIBUTE__(x)  // no-op
-#endif
-
-#define CAPABILITY(x) THREAD_ANNOTATION_ATTRIBUTE__(capability(x))
-
-#define SCOPED_CAPABILITY THREAD_ANNOTATION_ATTRIBUTE__(scoped_lockable)
-
-#define GUARDED_BY(x) THREAD_ANNOTATION_ATTRIBUTE__(guarded_by(x))
-
-#define PT_GUARDED_BY(x) THREAD_ANNOTATION_ATTRIBUTE__(pt_guarded_by(x))
-
-#define ACQUIRED_BEFORE(...) \
-  THREAD_ANNOTATION_ATTRIBUTE__(acquired_before(__VA_ARGS__))
-
-#define ACQUIRED_AFTER(...) \
-  THREAD_ANNOTATION_ATTRIBUTE__(acquired_after(__VA_ARGS__))
-
-#define REQUIRES(...) \
-  THREAD_ANNOTATION_ATTRIBUTE__(requires_capability(__VA_ARGS__))
-
-#define REQUIRES_SHARED(...) \
-  THREAD_ANNOTATION_ATTRIBUTE__(requires_shared_capability(__VA_ARGS__))
-
-#define ACQUIRE(...) \
-  THREAD_ANNOTATION_ATTRIBUTE__(acquire_capability(__VA_ARGS__))
-
-#define ACQUIRE_SHARED(...) \
-  THREAD_ANNOTATION_ATTRIBUTE__(acquire_shared_capability(__VA_ARGS__))
-
-#define RELEASE(...) \
-  THREAD_ANNOTATION_ATTRIBUTE__(release_capability(__VA_ARGS__))
-
-#define RELEASE_SHARED(...) \
-  THREAD_ANNOTATION_ATTRIBUTE__(release_shared_capability(__VA_ARGS__))
-
-#define TRY_ACQUIRE(...) \
-  THREAD_ANNOTATION_ATTRIBUTE__(try_acquire_capability(__VA_ARGS__))
-
-#define TRY_ACQUIRE_SHARED(...) \
-  THREAD_ANNOTATION_ATTRIBUTE__(try_acquire_shared_capability(__VA_ARGS__))
-
-#define EXCLUDES(...) THREAD_ANNOTATION_ATTRIBUTE__(locks_excluded(__VA_ARGS__))
-
-#define ASSERT_CAPABILITY(x) THREAD_ANNOTATION_ATTRIBUTE__(assert_capability(x))
-
-#define ASSERT_SHARED_CAPABILITY(x) \
-  THREAD_ANNOTATION_ATTRIBUTE__(assert_shared_capability(x))
-
-#define RETURN_CAPABILITY(x) THREAD_ANNOTATION_ATTRIBUTE__(lock_returned(x))
-
-#define NO_THREAD_SAFETY_ANALYSIS \
-  THREAD_ANNOTATION_ATTRIBUTE__(no_thread_safety_analysis)
-
-namespace benchmark {
-
-typedef std::condition_variable Condition;
-
-// NOTE: Wrappers for std::mutex and std::unique_lock are provided so that
-// we can annotate them with thread safety attributes and use the
-// -Wthread-safety warning with clang. The standard library types cannot be
-// used directly because they do not provide the required annotations.
-class CAPABILITY("mutex") Mutex {
- public:
-  Mutex() {}
-
-  void lock() ACQUIRE() { mut_.lock(); }
-  void unlock() RELEASE() { mut_.unlock(); }
-  std::mutex& native_handle() { return mut_; }
-
- private:
-  std::mutex mut_;
-};
-
-class SCOPED_CAPABILITY MutexLock {
-  typedef std::unique_lock<std::mutex> MutexLockImp;
-
- public:
-  MutexLock(Mutex& m) ACQUIRE(m) : ml_(m.native_handle()) {}
-  ~MutexLock() RELEASE() {}
-  MutexLockImp& native_handle() { return ml_; }
-
- private:
-  MutexLockImp ml_;
-};
-
-class Barrier {
- public:
-  Barrier(int num_threads) : running_threads_(num_threads) {}
-
-  // Called by each thread
-  bool wait() EXCLUDES(lock_) {
-    bool last_thread = false;
-    {
-      MutexLock ml(lock_);
-      last_thread = createBarrier(ml);
-    }
-    if (last_thread) phase_condition_.notify_all();
-    return last_thread;
-  }
-
-  void removeThread() EXCLUDES(lock_) {
-    MutexLock ml(lock_);
-    --running_threads_;
-    if (entered_ != 0) phase_condition_.notify_all();
-  }
-
- private:
-  Mutex lock_;
-  Condition phase_condition_;
-  int running_threads_;
-
-  // State for barrier management
-  int phase_number_ = 0;
-  int entered_ = 0;  // Number of threads that have entered this barrier
-
-  // Enter the barrier and wait until all other threads have also
-  // entered the barrier.  Returns true iff this is the last thread to
-  // enter the barrier.
-  bool createBarrier(MutexLock& ml) REQUIRES(lock_) {
-    CHECK_LT(entered_, running_threads_);
-    entered_++;
-    if (entered_ < running_threads_) {
-      // Wait for all threads to enter
-      int phase_number_cp = phase_number_;
-      auto cb = [this, phase_number_cp]() {
-        return this->phase_number_ > phase_number_cp ||
-               entered_ == running_threads_;  // A thread has aborted in error
-      };
-      phase_condition_.wait(ml.native_handle(), cb);
-      if (phase_number_ > phase_number_cp) return false;
-      // else (running_threads_ == entered_) and we are the last thread.
-    }
-    // Last thread has reached the barrier
-    phase_number_++;
-    entered_ = 0;
-    return true;
-  }
-};
-
-}  // end namespace benchmark
-
-#endif  // BENCHMARK_MUTEX_H_

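As the NOTE in the removed mutex.h says, the wrapper classes exist so that clang's -Wthread-safety analysis has annotated types to track. A small self-contained sketch of how such annotations catch unsynchronized access at compile time (hypothetical class names; compile with clang++ -Wthread-safety -c):

    #include <mutex>

    #if defined(__clang__)
    #define TSA(x) __attribute__((x))  // stands in for the macros above
    #else
    #define TSA(x)
    #endif

    class TSA(capability("mutex")) MyMutex {
     public:
      void lock() TSA(acquire_capability()) { m_.lock(); }
      void unlock() TSA(release_capability()) { m_.unlock(); }
     private:
      std::mutex m_;
    };

    class Counter {
     public:
      void Increment() {
        mu_.lock();
        ++value_;  // fine: mu_ is held here
        mu_.unlock();
      }
      // int Peek() const { return value_; }  // clang would warn here:
      //                                      // value_ read without holding mu_
     private:
      MyMutex mu_;
      int value_ TSA(guarded_by(mu_)) = 0;
    };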
diff  --git a/llvm/utils/benchmark/src/re.h b/llvm/utils/benchmark/src/re.h
deleted file mode 100644
index 924d2f0ba7e51..0000000000000
--- a/llvm/utils/benchmark/src/re.h
+++ /dev/null
@@ -1,152 +0,0 @@
-// Copyright 2015 Google Inc. All rights reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-//     http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-#ifndef BENCHMARK_RE_H_
-#define BENCHMARK_RE_H_
-
-#include "internal_macros.h"
-
-#if !defined(HAVE_STD_REGEX) && \
-    !defined(HAVE_GNU_POSIX_REGEX) && \
-    !defined(HAVE_POSIX_REGEX)
-  // No explicit regex selection; detect based on builtin hints.
-  #if defined(BENCHMARK_OS_LINUX) || defined(BENCHMARK_OS_APPLE)
-    #define HAVE_POSIX_REGEX 1
-  #elif __cplusplus >= 199711L
-    #define HAVE_STD_REGEX 1
-  #endif
-#endif
-
-// Prefer the C regex libraries when compiling without exceptions so that we
-// can correctly report errors.
-#if defined(BENCHMARK_HAS_NO_EXCEPTIONS) && \
-    defined(BENCHMARK_HAVE_STD_REGEX) && \
-    (defined(HAVE_GNU_POSIX_REGEX) || defined(HAVE_POSIX_REGEX))
-  #undef HAVE_STD_REGEX
-#endif
-
-#if defined(HAVE_STD_REGEX)
-  #include <regex>
-#elif defined(HAVE_GNU_POSIX_REGEX)
-  #include <gnuregex.h>
-#elif defined(HAVE_POSIX_REGEX)
-  #include <regex.h>
-#else
-#error No regular expression backend was found!
-#endif
-#include <string>
-
-#include "check.h"
-
-namespace benchmark {
-
-// A wrapper around the chosen regular expression backend (std::regex or
-// POSIX) that provides automatic cleanup.
-class Regex {
- public:
-  Regex() : init_(false) {}
-
-  ~Regex();
-
-  // Compile a regular expression matcher from spec.  Returns true on success.
-  //
-  // On failure, and if error is not nullptr, error is populated with a
-  // human-readable error message.
-  bool Init(const std::string& spec, std::string* error);
-
-  // Returns whether str matches the compiled regular expression.
-  bool Match(const std::string& str);
-
- private:
-  bool init_;
-// Underlying regular expression object
-#if defined(HAVE_STD_REGEX)
-  std::regex re_;
-#elif defined(HAVE_POSIX_REGEX) || defined(HAVE_GNU_POSIX_REGEX)
-  regex_t re_;
-#else
-  #error No regular expression backend implementation available
-#endif
-};
-
-#if defined(HAVE_STD_REGEX)
-
-inline bool Regex::Init(const std::string& spec, std::string* error) {
-#ifdef BENCHMARK_HAS_NO_EXCEPTIONS
-  ((void)error); // suppress unused warning
-#else
-  try {
-#endif
-    re_ = std::regex(spec, std::regex_constants::extended);
-    init_ = true;
-#ifndef BENCHMARK_HAS_NO_EXCEPTIONS
-  } catch (const std::regex_error& e) {
-    if (error) {
-      *error = e.what();
-    }
-  }
-#endif
-  return init_;
-}
-
-inline Regex::~Regex() {}
-
-inline bool Regex::Match(const std::string& str) {
-  if (!init_) {
-    return false;
-  }
-  return std::regex_search(str, re_);
-}
-
-#else
-inline bool Regex::Init(const std::string& spec, std::string* error) {
-  int ec = regcomp(&re_, spec.c_str(), REG_EXTENDED | REG_NOSUB);
-  if (ec != 0) {
-    if (error) {
-      size_t needed = regerror(ec, &re_, nullptr, 0);
-      char* errbuf = new char[needed];
-      regerror(ec, &re_, errbuf, needed);
-
-      // regerror returns the number of bytes necessary to null-terminate
-      // the string, so we subtract one when assigning to error.
-      CHECK_NE(needed, 0);
-      error->assign(errbuf, needed - 1);
-
-      delete[] errbuf;
-    }
-
-    return false;
-  }
-
-  init_ = true;
-  return true;
-}
-
-inline Regex::~Regex() {
-  if (init_) {
-    regfree(&re_);
-  }
-}
-
-inline bool Regex::Match(const std::string& str) {
-  if (!init_) {
-    return false;
-  }
-  return regexec(&re_, str.c_str(), 0, nullptr, 0) == 0;
-}
-#endif
-
-}  // end namespace benchmark
-
-#endif  // BENCHMARK_RE_H_

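The removed re.h selects one of three regex backends at compile time and wraps the result in an RAII class so regfree() can never be forgotten on the POSIX paths. A standalone sketch of just the POSIX branch of that pattern (POSIX systems only; illustrative names):

    #include <regex.h>

    #include <iostream>
    #include <string>

    class PosixRegex {
     public:
      explicit PosixRegex(const std::string& spec)
          : ok_(regcomp(&re_, spec.c_str(), REG_EXTENDED | REG_NOSUB) == 0) {}
      ~PosixRegex() {
        if (ok_) regfree(&re_);  // the cleanup the RAII wrapper guarantees
      }
      PosixRegex(const PosixRegex&) = delete;
      PosixRegex& operator=(const PosixRegex&) = delete;

      bool ok() const { return ok_; }
      bool Match(const std::string& s) const {
        return ok_ && regexec(&re_, s.c_str(), 0, nullptr, 0) == 0;
      }

     private:
      regex_t re_;
      bool ok_;
    };

    int main() {
      PosixRegex filter("BM_.*");                  // e.g. a benchmark filter
      std::cout << filter.Match("BM_StringCopy")   // prints 1
                << filter.Match("Setup") << "\n";  // prints 0
    }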
diff  --git a/llvm/utils/benchmark/src/reporter.cc b/llvm/utils/benchmark/src/reporter.cc
deleted file mode 100644
index 4b40aaec8b94d..0000000000000
--- a/llvm/utils/benchmark/src/reporter.cc
+++ /dev/null
@@ -1,87 +0,0 @@
-// Copyright 2015 Google Inc. All rights reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-//     http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-#include "benchmark/benchmark.h"
-#include "timers.h"
-
-#include <cstdlib>
-
-#include <iostream>
-#include <tuple>
-#include <vector>
-
-#include "check.h"
-
-namespace benchmark {
-
-BenchmarkReporter::BenchmarkReporter()
-    : output_stream_(&std::cout), error_stream_(&std::cerr) {}
-
-BenchmarkReporter::~BenchmarkReporter() {}
-
-void BenchmarkReporter::PrintBasicContext(std::ostream *out,
-                                          Context const &context) {
-  CHECK(out) << "cannot be null";
-  auto &Out = *out;
-
-  Out << LocalDateTimeString() << "\n";
-
-  if (context.executable_name)
-    Out << "Running " << context.executable_name << "\n";
-
-  const CPUInfo &info = context.cpu_info;
-  Out << "Run on (" << info.num_cpus << " X "
-      << (info.cycles_per_second / 1000000.0) << " MHz CPU "
-      << ((info.num_cpus > 1) ? "s" : "") << ")\n";
-  if (info.caches.size() != 0) {
-    Out << "CPU Caches:\n";
-    for (auto &CInfo : info.caches) {
-      Out << "  L" << CInfo.level << " " << CInfo.type << " "
-          << (CInfo.size / 1000) << "K";
-      if (CInfo.num_sharing != 0)
-        Out << " (x" << (info.num_cpus / CInfo.num_sharing) << ")";
-      Out << "\n";
-    }
-  }
-
-  if (info.scaling_enabled) {
-    Out << "***WARNING*** CPU scaling is enabled, the benchmark "
-           "real time measurements may be noisy and will incur extra "
-           "overhead.\n";
-  }
-
-#ifndef NDEBUG
-  Out << "***WARNING*** Library was built as DEBUG. Timings may be "
-         "affected.\n";
-#endif
-}
-
-// No initializer because it's already initialized to NULL.
-const char* BenchmarkReporter::Context::executable_name;
-
-BenchmarkReporter::Context::Context() : cpu_info(CPUInfo::Get()) {}
-
-double BenchmarkReporter::Run::GetAdjustedRealTime() const {
-  double new_time = real_accumulated_time * GetTimeUnitMultiplier(time_unit);
-  if (iterations != 0) new_time /= static_cast<double>(iterations);
-  return new_time;
-}
-
-double BenchmarkReporter::Run::GetAdjustedCPUTime() const {
-  double new_time = cpu_accumulated_time * GetTimeUnitMultiplier(time_unit);
-  if (iterations != 0) new_time /= static_cast<double>(iterations);
-  return new_time;
-}
-
-}  // end namespace benchmark

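GetAdjustedRealTime and GetAdjustedCPUTime above convert an accumulated wall/CPU total into a per-iteration figure in the report's time unit. A worked example with an assumed multiplier value (1e9 for nanoseconds):

    #include <cstdint>
    #include <cstdio>

    // Same computation as GetAdjustedRealTime above, written out.
    double AdjustedTime(double accumulated_seconds, int64_t iterations,
                        double unit_multiplier) {
      double t = accumulated_seconds * unit_multiplier;
      if (iterations != 0) t /= static_cast<double>(iterations);
      return t;
    }

    int main() {
      // 2.0 s of accumulated real time over 4,000,000 iterations, reported in
      // nanoseconds (multiplier 1e9): 2.0 * 1e9 / 4e6 = 500 ns per iteration.
      std::printf("%.1f ns/iter\n", AdjustedTime(2.0, 4000000, 1e9));
    }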
diff  --git a/llvm/utils/benchmark/src/sleep.cc b/llvm/utils/benchmark/src/sleep.cc
deleted file mode 100644
index 1512ac90f7ead..0000000000000
--- a/llvm/utils/benchmark/src/sleep.cc
+++ /dev/null
@@ -1,51 +0,0 @@
-// Copyright 2015 Google Inc. All rights reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-//     http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-#include "sleep.h"
-
-#include <cerrno>
-#include <cstdlib>
-#include <ctime>
-
-#include "internal_macros.h"
-
-#ifdef BENCHMARK_OS_WINDOWS
-#include <windows.h>
-#endif
-
-namespace benchmark {
-#ifdef BENCHMARK_OS_WINDOWS
-// Windows' Sleep takes a milliseconds argument.
-void SleepForMilliseconds(int milliseconds) { Sleep(milliseconds); }
-void SleepForSeconds(double seconds) {
-  SleepForMilliseconds(static_cast<int>(kNumMillisPerSecond * seconds));
-}
-#else   // BENCHMARK_OS_WINDOWS
-void SleepForMicroseconds(int microseconds) {
-  struct timespec sleep_time;
-  sleep_time.tv_sec = microseconds / kNumMicrosPerSecond;
-  sleep_time.tv_nsec = (microseconds % kNumMicrosPerSecond) * kNumNanosPerMicro;
-  while (nanosleep(&sleep_time, &sleep_time) != 0 && errno == EINTR)
-    ;  // Ignore signals and wait for the full interval to elapse.
-}
-
-void SleepForMilliseconds(int milliseconds) {
-  SleepForMicroseconds(milliseconds * kNumMicrosPerMilli);
-}
-
-void SleepForSeconds(double seconds) {
-  SleepForMicroseconds(static_cast<int>(seconds * kNumMicrosPerSecond));
-}
-#endif  // BENCHMARK_OS_WINDOWS
-}  // end namespace benchmark

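The POSIX branch above relies on a detail of nanosleep(2): on EINTR it writes the unslept remainder into its second argument, so passing the same timespec for both input and output lets a loop resume until the full interval has elapsed. A standalone sketch of that retry idiom:

    #include <errno.h>
    #include <time.h>

    void SleepFully(long microseconds) {
      struct timespec ts;
      ts.tv_sec = microseconds / 1000000;
      ts.tv_nsec = (microseconds % 1000000) * 1000;
      // On EINTR, ts has been overwritten with the remaining time, so the
      // retry picks up exactly where the interrupted sleep left off.
      while (nanosleep(&ts, &ts) != 0 && errno == EINTR) {
      }
    }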
diff  --git a/llvm/utils/benchmark/src/sleep.h b/llvm/utils/benchmark/src/sleep.h
deleted file mode 100644
index f98551afe2849..0000000000000
--- a/llvm/utils/benchmark/src/sleep.h
+++ /dev/null
@@ -1,15 +0,0 @@
-#ifndef BENCHMARK_SLEEP_H_
-#define BENCHMARK_SLEEP_H_
-
-namespace benchmark {
-const int kNumMillisPerSecond = 1000;
-const int kNumMicrosPerMilli = 1000;
-const int kNumMicrosPerSecond = kNumMillisPerSecond * 1000;
-const int kNumNanosPerMicro = 1000;
-const int kNumNanosPerSecond = kNumNanosPerMicro * kNumMicrosPerSecond;
-
-void SleepForMilliseconds(int milliseconds);
-void SleepForSeconds(double seconds);
-}  // end namespace benchmark
-
-#endif  // BENCHMARK_SLEEP_H_

diff  --git a/llvm/utils/benchmark/src/statistics.cc b/llvm/utils/benchmark/src/statistics.cc
deleted file mode 100644
index 1c91e1015ab6f..0000000000000
--- a/llvm/utils/benchmark/src/statistics.cc
+++ /dev/null
@@ -1,178 +0,0 @@
-// Copyright 2016 Ismael Jimenez Martinez. All rights reserved.
-// Copyright 2017 Roman Lebedev. All rights reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-//     http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-#include "benchmark/benchmark.h"
-
-#include <algorithm>
-#include <cmath>
-#include <string>
-#include <vector>
-#include <numeric>
-#include "check.h"
-#include "statistics.h"
-
-namespace benchmark {
-
-auto StatisticsSum = [](const std::vector<double>& v) {
-  return std::accumulate(v.begin(), v.end(), 0.0);
-};
-
-double StatisticsMean(const std::vector<double>& v) {
-  if (v.empty()) return 0.0;
-  return StatisticsSum(v) * (1.0 / v.size());
-}
-
-double StatisticsMedian(const std::vector<double>& v) {
-  if (v.size() < 3) return StatisticsMean(v);
-  std::vector<double> copy(v);
-
-  auto center = copy.begin() + v.size() / 2;
-  std::nth_element(copy.begin(), center, copy.end());
-
-  // did we have an odd number of samples?
-  // if yes, then center is the median
-  // if no, then the median is the average of center and the value before it
-  if(v.size() % 2 == 1)
-    return *center;
-  auto center2 = copy.begin() + v.size() / 2 - 1;
-  std::nth_element(copy.begin(), center2, copy.end());
-  return (*center + *center2) / 2.0;
-}
-
-// Return the sum of the squares of this sample set
-auto SumSquares = [](const std::vector<double>& v) {
-  return std::inner_product(v.begin(), v.end(), v.begin(), 0.0);
-};
-
-auto Sqr = [](const double dat) { return dat * dat; };
-auto Sqrt = [](const double dat) {
-  // Avoid NaN due to imprecision in the calculations
-  if (dat < 0.0) return 0.0;
-  return std::sqrt(dat);
-};
-
-double StatisticsStdDev(const std::vector<double>& v) {
-  const auto mean = StatisticsMean(v);
-  if (v.empty()) return mean;
-
-  // Sample standard deviation is undefined for n = 1
-  if (v.size() == 1)
-    return 0.0;
-
-  const double avg_squares = SumSquares(v) * (1.0 / v.size());
-  return Sqrt(v.size() / (v.size() - 1.0) * (avg_squares - Sqr(mean)));
-}
-
-std::vector<BenchmarkReporter::Run> ComputeStats(
-    const std::vector<BenchmarkReporter::Run>& reports) {
-  typedef BenchmarkReporter::Run Run;
-  std::vector<Run> results;
-
-  auto error_count =
-      std::count_if(reports.begin(), reports.end(),
-                    [](Run const& run) { return run.error_occurred; });
-
-  if (reports.size() - error_count < 2) {
-    // Don't report aggregated data unless at least two runs succeeded.
-    return results;
-  }
-
-  // Accumulators.
-  std::vector<double> real_accumulated_time_stat;
-  std::vector<double> cpu_accumulated_time_stat;
-  std::vector<double> bytes_per_second_stat;
-  std::vector<double> items_per_second_stat;
-
-  real_accumulated_time_stat.reserve(reports.size());
-  cpu_accumulated_time_stat.reserve(reports.size());
-  bytes_per_second_stat.reserve(reports.size());
-  items_per_second_stat.reserve(reports.size());
-
-  // All repetitions should be run with the same number of iterations so we
-  // can take this information from the first benchmark.
-  int64_t const run_iterations = reports.front().iterations;
-  // create stats for user counters
-  struct CounterStat {
-    Counter c;
-    std::vector<double> s;
-  };
-  std::map< std::string, CounterStat > counter_stats;
-  for(Run const& r : reports) {
-    for(auto const& cnt : r.counters) {
-      auto it = counter_stats.find(cnt.first);
-      if(it == counter_stats.end()) {
-        counter_stats.insert({cnt.first, {cnt.second, std::vector<double>{}}});
-        it = counter_stats.find(cnt.first);
-        it->second.s.reserve(reports.size());
-      } else {
-        CHECK_EQ(counter_stats[cnt.first].c.flags, cnt.second.flags);
-      }
-    }
-  }
-
-  // Populate the accumulators.
-  for (Run const& run : reports) {
-    CHECK_EQ(reports[0].benchmark_name, run.benchmark_name);
-    CHECK_EQ(run_iterations, run.iterations);
-    if (run.error_occurred) continue;
-    real_accumulated_time_stat.emplace_back(run.real_accumulated_time);
-    cpu_accumulated_time_stat.emplace_back(run.cpu_accumulated_time);
-    items_per_second_stat.emplace_back(run.items_per_second);
-    bytes_per_second_stat.emplace_back(run.bytes_per_second);
-    // user counters
-    for(auto const& cnt : run.counters) {
-      auto it = counter_stats.find(cnt.first);
-      CHECK_NE(it, counter_stats.end());
-      it->second.s.emplace_back(cnt.second);
-    }
-  }
-
-  // Only add label if it is same for all runs
-  std::string report_label = reports[0].report_label;
-  for (std::size_t i = 1; i < reports.size(); i++) {
-    if (reports[i].report_label != report_label) {
-      report_label = "";
-      break;
-    }
-  }
-
-  for(const auto& Stat : *reports[0].statistics) {
-    // Get the data from the accumulator to BenchmarkReporter::Run's.
-    Run data;
-    data.benchmark_name = reports[0].benchmark_name + "_" + Stat.name_;
-    data.report_label = report_label;
-    data.iterations = run_iterations;
-
-    data.real_accumulated_time = Stat.compute_(real_accumulated_time_stat);
-    data.cpu_accumulated_time = Stat.compute_(cpu_accumulated_time_stat);
-    data.bytes_per_second = Stat.compute_(bytes_per_second_stat);
-    data.items_per_second = Stat.compute_(items_per_second_stat);
-
-    data.time_unit = reports[0].time_unit;
-
-    // user counters
-    for(auto const& kv : counter_stats) {
-      const auto uc_stat = Stat.compute_(kv.second.s);
-      auto c = Counter(uc_stat, counter_stats[kv.first].c.flags);
-      data.counters[kv.first] = c;
-    }
-
-    results.push_back(data);
-  }
-
-  return results;
-}
-
-}  // end namespace benchmark

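A standalone check of the aggregate formulas above on a small sample (values chosen for illustration). For the even-length median this sketch takes the lower middle value with std::max_element over the lower half rather than a second nth_element, since a second rearrangement is not guaranteed to leave the upper-middle element in place:

    #include <algorithm>
    #include <cmath>
    #include <cstdio>
    #include <numeric>
    #include <vector>

    int main() {
      std::vector<double> v = {7, 1, 4, 2};

      // Median via nth_element; expected (2 + 4) / 2 = 3.
      auto center = v.begin() + v.size() / 2;
      std::nth_element(v.begin(), center, v.end());
      double lower = *std::max_element(v.begin(), center);  // lower middle value
      std::printf("median = %g\n", (lower + *center) / 2.0);

      // Sample stddev via E[x^2] - E[x]^2, rescaled by n/(n-1):
      // mean = 3.5, E[x^2] = 70/4 = 17.5, (17.5 - 12.25) * 4/3 = 7,
      // so stddev = sqrt(7) ~= 2.6458.
      double mean = std::accumulate(v.begin(), v.end(), 0.0) / v.size();
      double avg_sq =
          std::inner_product(v.begin(), v.end(), v.begin(), 0.0) / v.size();
      std::printf("stddev = %.4f\n",
                  std::sqrt(v.size() / (v.size() - 1.0) * (avg_sq - mean * mean)));
    }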
diff  --git a/llvm/utils/benchmark/src/statistics.h b/llvm/utils/benchmark/src/statistics.h
deleted file mode 100644
index 7eccc85536a5f..0000000000000
--- a/llvm/utils/benchmark/src/statistics.h
+++ /dev/null
@@ -1,37 +0,0 @@
-// Copyright 2016 Ismael Jimenez Martinez. All rights reserved.
-// Copyright 2017 Roman Lebedev. All rights reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-//     http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-#ifndef STATISTICS_H_
-#define STATISTICS_H_
-
-#include <vector>
-
-#include "benchmark/benchmark.h"
-
-namespace benchmark {
-
-// Return a vector containing the mean, median and standard deviation information
-// (and any user-specified info) for the specified list of reports. If 'reports'
-// contains fewer than two non-errored runs, an empty vector is returned.
-std::vector<BenchmarkReporter::Run> ComputeStats(
-    const std::vector<BenchmarkReporter::Run>& reports);
-
-double StatisticsMean(const std::vector<double>& v);
-double StatisticsMedian(const std::vector<double>& v);
-double StatisticsStdDev(const std::vector<double>& v);
-
-}  // end namespace benchmark
-
-#endif  // STATISTICS_H_

diff  --git a/llvm/utils/benchmark/src/string_util.cc b/llvm/utils/benchmark/src/string_util.cc
deleted file mode 100644
index ebc3acebd2a80..0000000000000
--- a/llvm/utils/benchmark/src/string_util.cc
+++ /dev/null
@@ -1,172 +0,0 @@
-#include "string_util.h"
-
-#include <array>
-#include <cmath>
-#include <cstdarg>
-#include <cstdio>
-#include <memory>
-#include <sstream>
-
-#include "arraysize.h"
-
-namespace benchmark {
-namespace {
-
-// kilo, Mega, Giga, Tera, Peta, Exa, Zetta, Yotta.
-const char kBigSIUnits[] = "kMGTPEZY";
-// Kibi, Mebi, Gibi, Tebi, Pebi, Exbi, Zebi, Yobi.
-const char kBigIECUnits[] = "KMGTPEZY";
-// milli, micro, nano, pico, femto, atto, zepto, yocto.
-const char kSmallSIUnits[] = "munpfazy";
-
-// We require that all three arrays have the same size.
-static_assert(arraysize(kBigSIUnits) == arraysize(kBigIECUnits),
-              "SI and IEC unit arrays must be the same size");
-static_assert(arraysize(kSmallSIUnits) == arraysize(kBigSIUnits),
-              "Small SI and Big SI unit arrays must be the same size");
-
-static const int64_t kUnitsSize = arraysize(kBigSIUnits);
-
-void ToExponentAndMantissa(double val, double thresh, int precision,
-                           double one_k, std::string* mantissa,
-                           int64_t* exponent) {
-  std::stringstream mantissa_stream;
-
-  if (val < 0) {
-    mantissa_stream << "-";
-    val = -val;
-  }
-
-  // Adjust threshold so that it never excludes things which can't be rendered
-  // in 'precision' digits.
-  const double adjusted_threshold =
-      std::max(thresh, 1.0 / std::pow(10.0, precision));
-  const double big_threshold = adjusted_threshold * one_k;
-  const double small_threshold = adjusted_threshold;
-  // Values in (simple_threshold, small_threshold) will be printed as-is
-  const double simple_threshold = 0.01;
-
-  if (val > big_threshold) {
-    // Positive powers
-    double scaled = val;
-    for (size_t i = 0; i < arraysize(kBigSIUnits); ++i) {
-      scaled /= one_k;
-      if (scaled <= big_threshold) {
-        mantissa_stream << scaled;
-        *exponent = i + 1;
-        *mantissa = mantissa_stream.str();
-        return;
-      }
-    }
-    mantissa_stream << val;
-    *exponent = 0;
-  } else if (val < small_threshold) {
-    // Negative powers
-    if (val < simple_threshold) {
-      double scaled = val;
-      for (size_t i = 0; i < arraysize(kSmallSIUnits); ++i) {
-        scaled *= one_k;
-        if (scaled >= small_threshold) {
-          mantissa_stream << scaled;
-          *exponent = -static_cast<int64_t>(i + 1);
-          *mantissa = mantissa_stream.str();
-          return;
-        }
-      }
-    }
-    mantissa_stream << val;
-    *exponent = 0;
-  } else {
-    mantissa_stream << val;
-    *exponent = 0;
-  }
-  *mantissa = mantissa_stream.str();
-}
-
-std::string ExponentToPrefix(int64_t exponent, bool iec) {
-  if (exponent == 0) return "";
-
-  const int64_t index = (exponent > 0 ? exponent - 1 : -exponent - 1);
-  if (index >= kUnitsSize) return "";
-
-  const char* array =
-      (exponent > 0 ? (iec ? kBigIECUnits : kBigSIUnits) : kSmallSIUnits);
-  if (iec)
-    return array[index] + std::string("i");
-  else
-    return std::string(1, array[index]);
-}
-
-std::string ToBinaryStringFullySpecified(double value, double threshold,
-                                         int precision, double one_k = 1024.0) {
-  std::string mantissa;
-  int64_t exponent;
-  ToExponentAndMantissa(value, threshold, precision, one_k, &mantissa,
-                        &exponent);
-  return mantissa + ExponentToPrefix(exponent, false);
-}
-
-}  // end namespace
-
-void AppendHumanReadable(int n, std::string* str) {
-  std::stringstream ss;
-  // Round down to the nearest SI prefix.
-  ss << ToBinaryStringFullySpecified(n, 1.0, 0);
-  *str += ss.str();
-}
-
-std::string HumanReadableNumber(double n, double one_k) {
-  // 1.1 means that figures up to 1.1k should be shown with the next unit down;
-  // this softens edge effects.
-  // 1 means that we should show one decimal place of precision.
-  return ToBinaryStringFullySpecified(n, 1.1, 1, one_k);
-}
-
-std::string StrFormatImp(const char* msg, va_list args) {
-  // we might need a second shot at this, so pre-emptively make a copy
-  va_list args_cp;
-  va_copy(args_cp, args);
-
-  // TODO(ericwf): use std::array for the first attempt to avoid one memory
-  // allocation; guess what the size might be.
-  std::array<char, 256> local_buff;
-  std::size_t size = local_buff.size();
-  // 2015-10-08: vsnprintf is used instead of std::vsnprintf due to a limitation
-  // in the android-ndk
-  auto ret = vsnprintf(local_buff.data(), size, msg, args_cp);
-
-  va_end(args_cp);
-
-  // handle empty expansion
-  if (ret == 0) return std::string{};
-  if (static_cast<std::size_t>(ret) < size)
-    return std::string(local_buff.data());
-
-  // we did not provide a long enough buffer on our first attempt.
-  // add 1 to size to account for null-byte in size cast to prevent overflow
-  size = static_cast<std::size_t>(ret) + 1;
-  auto buff_ptr = std::unique_ptr<char[]>(new char[size]);
-  // 2015-10-08: vsnprintf is used instead of std::vsnprintf due to a limitation
-  // in the android-ndk
-  ret = vsnprintf(buff_ptr.get(), size, msg, args);
-  return std::string(buff_ptr.get());
-}
-
-std::string StrFormat(const char* format, ...) {
-  va_list args;
-  va_start(args, format);
-  std::string tmp = StrFormatImp(format, args);
-  va_end(args);
-  return tmp;
-}
-
-void ReplaceAll(std::string* str, const std::string& from,
-                const std::string& to) {
-  std::size_t start = 0;
-  while ((start = str->find(from, start)) != std::string::npos) {
-    str->replace(start, from.length(), to);
-    start += to.length();
-  }
-}
-
-}  // end namespace benchmark

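StrFormatImp above is the standard two-pass vsnprintf idiom: format into a small stack buffer, and if the return value says the result did not fit, allocate exactly ret + 1 bytes and format again from a va_copy (the first pass consumes the original va_list). A condensed standalone sketch:

    #include <array>
    #include <cstdarg>
    #include <cstdio>
    #include <memory>
    #include <string>

    std::string Format(const char* fmt, ...) {
      va_list args;
      va_start(args, fmt);
      va_list args_copy;         // the first vsnprintf consumes a va_list, so
      va_copy(args_copy, args);  // keep a copy for the possible second pass

      std::array<char, 256> buf;
      int n = std::vsnprintf(buf.data(), buf.size(), fmt, args);

      std::string result;
      if (n < 0) {
        // encoding error: return an empty string
      } else if (static_cast<std::size_t>(n) < buf.size()) {
        result.assign(buf.data(), n);  // it fit on the first pass
      } else {
        // n is the length the output wants; +1 for the terminating null byte.
        std::size_t needed = static_cast<std::size_t>(n) + 1;
        std::unique_ptr<char[]> big(new char[needed]);
        std::vsnprintf(big.get(), needed, fmt, args_copy);
        result.assign(big.get(), n);
      }
      va_end(args_copy);
      va_end(args);
      return result;
    }

    int main() { std::printf("%s\n", Format("%d iterations", 42).c_str()); }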
diff  --git a/llvm/utils/benchmark/src/string_util.h b/llvm/utils/benchmark/src/string_util.h
deleted file mode 100644
index e70e76987245e..0000000000000
--- a/llvm/utils/benchmark/src/string_util.h
+++ /dev/null
@@ -1,40 +0,0 @@
-#ifndef BENCHMARK_STRING_UTIL_H_
-#define BENCHMARK_STRING_UTIL_H_
-
-#include <sstream>
-#include <string>
-#include <utility>
-#include "internal_macros.h"
-
-namespace benchmark {
-
-void AppendHumanReadable(int n, std::string* str);
-
-std::string HumanReadableNumber(double n, double one_k = 1024.0);
-
-std::string StrFormat(const char* format, ...);
-
-inline std::ostream& StrCatImp(std::ostream& out) BENCHMARK_NOEXCEPT {
-  return out;
-}
-
-template <class First, class... Rest>
-inline std::ostream& StrCatImp(std::ostream& out, First&& f,
-                                  Rest&&... rest) {
-  out << std::forward<First>(f);
-  return StrCatImp(out, std::forward<Rest>(rest)...);
-}
-
-template <class... Args>
-inline std::string StrCat(Args&&... args) {
-  std::ostringstream ss;
-  StrCatImp(ss, std::forward<Args>(args)...);
-  return ss.str();
-}
-
-void ReplaceAll(std::string* str, const std::string& from,
-                const std::string& to);
-
-}  // end namespace benchmark
-
-#endif  // BENCHMARK_STRING_UTIL_H_

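StrCat above ends its parameter-pack recursion with the empty-argument overload; each step streams one argument into a shared ostringstream, so anything with an operator<< composes. In C++17 the same technique can be written as a single fold expression (an alternative for illustration, not the header's code, which predates C++17):

    #include <iostream>
    #include <sstream>
    #include <string>

    template <class... Args>
    std::string Cat(Args&&... args) {
      std::ostringstream ss;
      (ss << ... << args);  // binary left fold over operator<<
      return ss.str();
    }

    int main() {
      // Mixed types compose through their stream inserters:
      std::cout << Cat("/sys/devices/system/cpu/cpu", 3, "/cpufreq") << "\n";
    }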
diff  --git a/llvm/utils/benchmark/src/sysinfo.cc b/llvm/utils/benchmark/src/sysinfo.cc
deleted file mode 100644
index 01dd8a0317b05..0000000000000
--- a/llvm/utils/benchmark/src/sysinfo.cc
+++ /dev/null
@@ -1,585 +0,0 @@
-// Copyright 2015 Google Inc. All rights reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-//     http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-#include "internal_macros.h"
-
-#ifdef BENCHMARK_OS_WINDOWS
-#include <shlwapi.h>
-#undef StrCat  // Don't let StrCat in string_util.h be renamed to lstrcatA
-#include <versionhelpers.h>
-#include <windows.h>
-#else
-#include <fcntl.h>
-#ifndef BENCHMARK_OS_FUCHSIA
-#include <sys/resource.h>
-#endif
-#include <sys/time.h>
-#include <sys/types.h>  // this header must be included before 'sys/sysctl.h' to avoid compilation error on FreeBSD
-#include <unistd.h>
-#if defined BENCHMARK_OS_FREEBSD || defined BENCHMARK_OS_MACOSX || \
-    defined BENCHMARK_OS_NETBSD || defined BENCHMARK_OS_OPENBSD
-#define BENCHMARK_HAS_SYSCTL
-#include <sys/sysctl.h>
-#endif
-#endif
-#if defined(BENCHMARK_OS_SOLARIS)
-#include <kstat.h>
-#endif
-
-#include <algorithm>
-#include <array>
-#include <bitset>
-#include <cerrno>
-#include <climits>
-#include <cstdint>
-#include <cstdio>
-#include <cstdlib>
-#include <cstring>
-#include <fstream>
-#include <iostream>
-#include <iterator>
-#include <limits>
-#include <memory>
-#include <sstream>
-
-#include "check.h"
-#include "cycleclock.h"
-#include "internal_macros.h"
-#include "log.h"
-#include "sleep.h"
-#include "string_util.h"
-
-namespace benchmark {
-namespace {
-
-void PrintImp(std::ostream& out) { out << std::endl; }
-
-template <class First, class... Rest>
-void PrintImp(std::ostream& out, First&& f, Rest&&... rest) {
-  out << std::forward<First>(f);
-  PrintImp(out, std::forward<Rest>(rest)...);
-}
-
-template <class... Args>
-BENCHMARK_NORETURN void PrintErrorAndDie(Args&&... args) {
-  PrintImp(std::cerr, std::forward<Args>(args)...);
-  std::exit(EXIT_FAILURE);
-}
-
-#ifdef BENCHMARK_HAS_SYSCTL
-
-/// ValueUnion - A type used to correctly alias the byte-for-byte output of
-/// `sysctl` with the result type it's to be interpreted as.
-struct ValueUnion {
-  union DataT {
-    uint32_t uint32_value;
-    uint64_t uint64_value;
-    // For correct aliasing of union members from bytes.
-    char bytes[8];
-  };
-  using DataPtr = std::unique_ptr<DataT, decltype(&std::free)>;
-
-  // The size of the data union member + its trailing array size.
-  size_t Size;
-  DataPtr Buff;
-
- public:
-  ValueUnion() : Size(0), Buff(nullptr, &std::free) {}
-
-  explicit ValueUnion(size_t BuffSize)
-      : Size(sizeof(DataT) + BuffSize),
-        Buff(::new (std::malloc(Size)) DataT(), &std::free) {}
-
-  ValueUnion(ValueUnion&& other) = default;
-
-  explicit operator bool() const { return bool(Buff); }
-
-  char* data() const { return Buff->bytes; }
-
-  std::string GetAsString() const { return std::string(data()); }
-
-  int64_t GetAsInteger() const {
-    if (Size == sizeof(Buff->uint32_value))
-      return static_cast<int32_t>(Buff->uint32_value);
-    else if (Size == sizeof(Buff->uint64_value))
-      return static_cast<int64_t>(Buff->uint64_value);
-    BENCHMARK_UNREACHABLE();
-  }
-
-  uint64_t GetAsUnsigned() const {
-    if (Size == sizeof(Buff->uint32_value))
-      return Buff->uint32_value;
-    else if (Size == sizeof(Buff->uint64_value))
-      return Buff->uint64_value;
-    BENCHMARK_UNREACHABLE();
-  }
-
-  template <class T, int N>
-  std::array<T, N> GetAsArray() {
-    const int ArrSize = sizeof(T) * N;
-    CHECK_LE(ArrSize, Size);
-    std::array<T, N> Arr;
-    std::memcpy(Arr.data(), data(), ArrSize);
-    return Arr;
-  }
-};
-
-ValueUnion GetSysctlImp(std::string const& Name) {
-#if defined BENCHMARK_OS_OPENBSD
-  int mib[2];
-
-  mib[0] = CTL_HW;
-  if ((Name == "hw.ncpu") || (Name == "hw.cpuspeed")){
-    ValueUnion buff(sizeof(int));
-
-    if (Name == "hw.ncpu") {
-      mib[1] = HW_NCPU;
-    } else {
-      mib[1] = HW_CPUSPEED;
-    }
-
-    if (sysctl(mib, 2, buff.data(), &buff.Size, nullptr, 0) == -1) {
-      return ValueUnion();
-    }
-    return buff;
-  }
-  return ValueUnion();
-#else
-  size_t CurBuffSize = 0;
-  if (sysctlbyname(Name.c_str(), nullptr, &CurBuffSize, nullptr, 0) == -1)
-    return ValueUnion();
-
-  ValueUnion buff(CurBuffSize);
-  if (sysctlbyname(Name.c_str(), buff.data(), &buff.Size, nullptr, 0) == 0)
-    return buff;
-  return ValueUnion();
-#endif
-}
-
-BENCHMARK_MAYBE_UNUSED
-bool GetSysctl(std::string const& Name, std::string* Out) {
-  Out->clear();
-  auto Buff = GetSysctlImp(Name);
-  if (!Buff) return false;
-  Out->assign(Buff.data());
-  return true;
-}
-
-template <class Tp,
-          class = typename std::enable_if<std::is_integral<Tp>::value>::type>
-bool GetSysctl(std::string const& Name, Tp* Out) {
-  *Out = 0;
-  auto Buff = GetSysctlImp(Name);
-  if (!Buff) return false;
-  *Out = static_cast<Tp>(Buff.GetAsUnsigned());
-  return true;
-}
-
-template <class Tp, size_t N>
-bool GetSysctl(std::string const& Name, std::array<Tp, N>* Out) {
-  auto Buff = GetSysctlImp(Name);
-  if (!Buff) return false;
-  *Out = Buff.GetAsArray<Tp, N>();
-  return true;
-}
-#endif
-
-template <class ArgT>
-bool ReadFromFile(std::string const& fname, ArgT* arg) {
-  *arg = ArgT();
-  std::ifstream f(fname.c_str());
-  if (!f.is_open()) return false;
-  f >> *arg;
-  return f.good();
-}
-
-bool CpuScalingEnabled(int num_cpus) {
-  // We don't have a valid CPU count, so don't even bother.
-  if (num_cpus <= 0) return false;
-#ifndef BENCHMARK_OS_WINDOWS
-  // On Linux, the CPUfreq subsystem exposes CPU information as files on the
-  // local file system. If reading the exported files fails, then we may not be
-  // running on Linux, so we silently ignore all the read errors.
-  std::string res;
-  for (int cpu = 0; cpu < num_cpus; ++cpu) {
-    std::string governor_file =
-        StrCat("/sys/devices/system/cpu/cpu", cpu, "/cpufreq/scaling_governor");
-    if (ReadFromFile(governor_file, &res) && res != "performance") return true;
-  }
-#endif
-  return false;
-}
-
-int CountSetBitsInCPUMap(std::string Val) {
-  auto CountBits = [](std::string Part) {
-    using CPUMask = std::bitset<sizeof(std::uintptr_t) * CHAR_BIT>;
-    Part = "0x" + Part;
-    CPUMask Mask(std::stoul(Part, nullptr, 16));
-    return static_cast<int>(Mask.count());
-  };
-  size_t Pos;
-  int total = 0;
-  while ((Pos = Val.find(',')) != std::string::npos) {
-    total += CountBits(Val.substr(0, Pos));
-    Val = Val.substr(Pos + 1);
-  }
-  if (!Val.empty()) {
-    total += CountBits(Val);
-  }
-  return total;
-}
-
-BENCHMARK_MAYBE_UNUSED
-std::vector<CPUInfo::CacheInfo> GetCacheSizesFromKVFS() {
-  std::vector<CPUInfo::CacheInfo> res;
-  std::string dir = "/sys/devices/system/cpu/cpu0/cache/";
-  int Idx = 0;
-  while (true) {
-    CPUInfo::CacheInfo info;
-    std::string FPath = StrCat(dir, "index", Idx++, "/");
-    std::ifstream f(StrCat(FPath, "size").c_str());
-    if (!f.is_open()) break;
-    std::string suffix;
-    f >> info.size;
-    if (f.fail())
-      PrintErrorAndDie("Failed while reading file '", FPath, "size'");
-    if (f.good()) {
-      f >> suffix;
-      if (f.bad())
-        PrintErrorAndDie(
-            "Invalid cache size format: failed to read size suffix");
-      else if (f && suffix != "K")
-        PrintErrorAndDie("Invalid cache size format: Expected bytes ", suffix);
-      else if (suffix == "K")
-        info.size *= 1000;
-    }
-    if (!ReadFromFile(StrCat(FPath, "type"), &info.type))
-      PrintErrorAndDie("Failed to read from file ", FPath, "type");
-    if (!ReadFromFile(StrCat(FPath, "level"), &info.level))
-      PrintErrorAndDie("Failed to read from file ", FPath, "level");
-    std::string map_str;
-    if (!ReadFromFile(StrCat(FPath, "shared_cpu_map"), &map_str))
-      PrintErrorAndDie("Failed to read from file ", FPath, "shared_cpu_map");
-    info.num_sharing = CountSetBitsInCPUMap(map_str);
-    res.push_back(info);
-  }
-
-  return res;
-}
-
-#ifdef BENCHMARK_OS_MACOSX
-std::vector<CPUInfo::CacheInfo> GetCacheSizesMacOSX() {
-  std::vector<CPUInfo::CacheInfo> res;
-  std::array<uint64_t, 4> CacheCounts{{0, 0, 0, 0}};
-  GetSysctl("hw.cacheconfig", &CacheCounts);
-
-  struct {
-    std::string name;
-    std::string type;
-    int level;
-    uint64_t num_sharing;
-  } Cases[] = {{"hw.l1dcachesize", "Data", 1, CacheCounts[1]},
-               {"hw.l1icachesize", "Instruction", 1, CacheCounts[1]},
-               {"hw.l2cachesize", "Unified", 2, CacheCounts[2]},
-               {"hw.l3cachesize", "Unified", 3, CacheCounts[3]}};
-  for (auto& C : Cases) {
-    int val;
-    if (!GetSysctl(C.name, &val)) continue;
-    CPUInfo::CacheInfo info;
-    info.type = C.type;
-    info.level = C.level;
-    info.size = val;
-    info.num_sharing = static_cast<int>(C.num_sharing);
-    res.push_back(std::move(info));
-  }
-  return res;
-}
-#elif defined(BENCHMARK_OS_WINDOWS)
-std::vector<CPUInfo::CacheInfo> GetCacheSizesWindows() {
-  std::vector<CPUInfo::CacheInfo> res;
-  DWORD buffer_size = 0;
-  using PInfo = SYSTEM_LOGICAL_PROCESSOR_INFORMATION;
-  using CInfo = CACHE_DESCRIPTOR;
-
-  using UPtr = std::unique_ptr<PInfo, decltype(&std::free)>;
-  GetLogicalProcessorInformation(nullptr, &buffer_size);
-  UPtr buff((PInfo*)malloc(buffer_size), &std::free);
-  if (!GetLogicalProcessorInformation(buff.get(), &buffer_size))
-    PrintErrorAndDie("Failed during call to GetLogicalProcessorInformation: ",
-                     GetLastError());
-
-  PInfo* it = buff.get();
-  PInfo* end = buff.get() + (buffer_size / sizeof(PInfo));
-
-  for (; it != end; ++it) {
-    if (it->Relationship != RelationCache) continue;
-    using BitSet = std::bitset<sizeof(ULONG_PTR) * CHAR_BIT>;
-    BitSet B(it->ProcessorMask);
-    // To prevent duplicates, only consider caches where CPU 0 is specified
-    if (!B.test(0)) continue;
-    CInfo* Cache = &it->Cache;
-    CPUInfo::CacheInfo C;
-    C.num_sharing = static_cast<int>(B.count());
-    C.level = Cache->Level;
-    C.size = Cache->Size;
-    C.type = "Unknown";
-    switch (Cache->Type) {
-      case CacheUnified:
-        C.type = "Unified";
-        break;
-      case CacheInstruction:
-        C.type = "Instruction";
-        break;
-      case CacheData:
-        C.type = "Data";
-        break;
-      case CacheTrace:
-        C.type = "Trace";
-        break;
-    }
-    res.push_back(C);
-  }
-  return res;
-}
-#endif
-
-std::vector<CPUInfo::CacheInfo> GetCacheSizes() {
-#ifdef BENCHMARK_OS_MACOSX
-  return GetCacheSizesMacOSX();
-#elif defined(BENCHMARK_OS_WINDOWS)
-  return GetCacheSizesWindows();
-#else
-  return GetCacheSizesFromKVFS();
-#endif
-}
-
-int GetNumCPUs() {
-#ifdef BENCHMARK_HAS_SYSCTL
-  int NumCPU = -1;
-  if (GetSysctl("hw.ncpu", &NumCPU)) return NumCPU;
-  fprintf(stderr, "Err: %s\n", strerror(errno));
-  std::exit(EXIT_FAILURE);
-#elif defined(BENCHMARK_OS_WINDOWS)
-  SYSTEM_INFO sysinfo;
-  // Use memset as opposed to = {} to avoid GCC missing initializer false
-  // positives.
-  std::memset(&sysinfo, 0, sizeof(SYSTEM_INFO));
-  GetSystemInfo(&sysinfo);
-  return sysinfo.dwNumberOfProcessors;  // number of logical
-                                        // processors in the current
-                                        // group
-#elif defined(BENCHMARK_OS_SOLARIS)
-  // Returns -1 in case of a failure.
-  int NumCPU = sysconf(_SC_NPROCESSORS_ONLN);
-  if (NumCPU < 0) {
-    fprintf(stderr,
-            "sysconf(_SC_NPROCESSORS_ONLN) failed with error: %s\n",
-            strerror(errno));
-  }
-  return NumCPU;
-#else
-  int NumCPUs = 0;
-  int MaxID = -1;
-  std::ifstream f("/proc/cpuinfo");
-  if (!f.is_open()) {
-    std::cerr << "failed to open /proc/cpuinfo\n";
-    return -1;
-  }
-  const std::string Key = "processor";
-  std::string ln;
-  while (std::getline(f, ln)) {
-    if (ln.empty()) continue;
-    size_t SplitIdx = ln.find(':');
-    std::string value;
-    if (SplitIdx != std::string::npos) value = ln.substr(SplitIdx + 1);
-    if (ln.size() >= Key.size() && ln.compare(0, Key.size(), Key) == 0) {
-      NumCPUs++;
-      if (!value.empty()) {
-        int CurID = std::stoi(value);
-        MaxID = std::max(CurID, MaxID);
-      }
-    }
-  }
-  if (f.bad()) {
-    std::cerr << "Failure reading /proc/cpuinfo\n";
-    return -1;
-  }
-  if (!f.eof()) {
-    std::cerr << "Failed to read to end of /proc/cpuinfo\n";
-    return -1;
-  }
-  f.close();
-
-  if ((MaxID + 1) != NumCPUs) {
-    fprintf(stderr,
-            "CPU ID assignments in /proc/cpuinfo seem messed up."
-            " This is usually caused by a bad BIOS.\n");
-  }
-  return NumCPUs;
-#endif
-  BENCHMARK_UNREACHABLE();
-}
-
-double GetCPUCyclesPerSecond() {
-#if defined BENCHMARK_OS_LINUX || defined BENCHMARK_OS_CYGWIN
-  long freq;
-
-  // If the kernel is exporting the tsc frequency use that. There are issues
-  // where cpuinfo_max_freq cannot be relied on because the BIOS may be
-  // exporting an invalid p-state (on x86) or p-states may be used to put the
-  // processor in a new mode (turbo mode). Essentially, those frequencies
-  // cannot always be relied upon. The same reasons apply to /proc/cpuinfo as
-  // well.
-  if (ReadFromFile("/sys/devices/system/cpu/cpu0/tsc_freq_khz", &freq)
-      // If CPU scaling is in effect, we want to use the *maximum* frequency,
-      // not whatever CPU speed some random processor happens to be using now.
-      || ReadFromFile("/sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_max_freq",
-                      &freq)) {
-    // The value is in kHz (as the file name suggests).  For example, on a
-    // 2GHz warpstation, the file contains the value "2000000".
-    return freq * 1000.0;
-  }
-
-  const double error_value = -1;
-  double bogo_clock = error_value;
-
-  std::ifstream f("/proc/cpuinfo");
-  if (!f.is_open()) {
-    std::cerr << "failed to open /proc/cpuinfo\n";
-    return error_value;
-  }
-
-  auto startsWithKey = [](std::string const& Value, std::string const& Key) {
-    if (Key.size() > Value.size()) return false;
-    auto Cmp = [&](char X, char Y) {
-      return std::tolower(X) == std::tolower(Y);
-    };
-    return std::equal(Key.begin(), Key.end(), Value.begin(), Cmp);
-  };
-
-  std::string ln;
-  while (std::getline(f, ln)) {
-    if (ln.empty()) continue;
-    size_t SplitIdx = ln.find(':');
-    std::string value;
-    if (SplitIdx != std::string::npos) value = ln.substr(SplitIdx + 1);
-    // When parsing the "cpu MHz" and "bogomips" (fallback) entries, we only
-    // accept positive values. Some environments (virtual machines) report zero,
-    // which would cause infinite looping in WallTime_Init.
-    if (startsWithKey(ln, "cpu MHz")) {
-      if (!value.empty()) {
-        double cycles_per_second = std::stod(value) * 1000000.0;
-        if (cycles_per_second > 0) return cycles_per_second;
-      }
-    } else if (startsWithKey(ln, "bogomips")) {
-      if (!value.empty()) {
-        bogo_clock = std::stod(value) * 1000000.0;
-        if (bogo_clock < 0.0) bogo_clock = error_value;
-      }
-    }
-  }
-  if (f.bad()) {
-    std::cerr << "Failure reading /proc/cpuinfo\n";
-    return error_value;
-  }
-  if (!f.eof()) {
-    std::cerr << "Failed to read to end of /proc/cpuinfo\n";
-    return error_value;
-  }
-  f.close();
-  // If we found the bogomips clock, but nothing better, we'll use it (but
-  // we're not happy about it); otherwise, fall back to the rough estimation
-  // below.
-  if (bogo_clock >= 0.0) return bogo_clock;
-
-#elif defined BENCHMARK_HAS_SYSCTL
-  constexpr auto* FreqStr =
-#if defined(BENCHMARK_OS_FREEBSD) || defined(BENCHMARK_OS_NETBSD)
-      "machdep.tsc_freq";
-#elif defined BENCHMARK_OS_OPENBSD
-      "hw.cpuspeed";
-#else
-      "hw.cpufrequency";
-#endif
-  unsigned long long hz = 0;
-#if defined BENCHMARK_OS_OPENBSD
-  if (GetSysctl(FreqStr, &hz)) return hz * 1000000;
-#else
-  if (GetSysctl(FreqStr, &hz)) return hz;
-#endif
-  fprintf(stderr, "Unable to determine clock rate from sysctl: %s: %s\n",
-          FreqStr, strerror(errno));
-
-#elif defined BENCHMARK_OS_WINDOWS
-  // In NT, read MHz from the registry. If we fail to do so or we're in win9x
-  // then make a crude estimate.
-  DWORD data, data_size = sizeof(data);
-  if (IsWindowsXPOrGreater() &&
-      SUCCEEDED(
-          SHGetValueA(HKEY_LOCAL_MACHINE,
-                      "HARDWARE\\DESCRIPTION\\System\\CentralProcessor\\0",
-                      "~MHz", nullptr, &data, &data_size)))
-    return static_cast<double>((int64_t)data *
-                               (int64_t)(1000 * 1000));  // was mhz
-#elif defined (BENCHMARK_OS_SOLARIS)
-  kstat_ctl_t *kc = kstat_open();
-  if (!kc) {
-    std::cerr << "failed to open /dev/kstat\n";
-    return -1;
-  }
-  kstat_t *ksp = kstat_lookup(kc, (char*)"cpu_info", -1, (char*)"cpu_info0");
-  if (!ksp) {
-    std::cerr << "failed to lookup in /dev/kstat\n";
-    return -1;
-  }
-  if (kstat_read(kc, ksp, NULL) < 0) {
-    std::cerr << "failed to read from /dev/kstat\n";
-    return -1;
-  }
-  kstat_named_t *knp =
-      (kstat_named_t*)kstat_data_lookup(ksp, (char*)"current_clock_Hz");
-  if (!knp) {
-    std::cerr << "failed to lookup data in /dev/kstat\n";
-    return -1;
-  }
-  if (knp->data_type != KSTAT_DATA_UINT64) {
-    std::cerr << "current_clock_Hz is of unexpected data type: "
-              << knp->data_type << "\n";
-    return -1;
-  }
-  double clock_hz = knp->value.ui64;
-  kstat_close(kc);
-  return clock_hz;
-#endif
-  // If we've fallen through, attempt to roughly estimate the CPU clock rate.
-  const int estimate_time_ms = 1000;
-  const auto start_ticks = cycleclock::Now();
-  SleepForMilliseconds(estimate_time_ms);
-  return static_cast<double>(cycleclock::Now() - start_ticks);
-}
-
-}  // end namespace
-
-const CPUInfo& CPUInfo::Get() {
-  static const CPUInfo* info = new CPUInfo();
-  return *info;
-}
-
-CPUInfo::CPUInfo()
-    : num_cpus(GetNumCPUs()),
-      cycles_per_second(GetCPUCyclesPerSecond()),
-      caches(GetCacheSizes()),
-      scaling_enabled(CpuScalingEnabled(num_cpus)) {}
-
-}  // end namespace benchmark

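GetSysctlImp above (in its non-OpenBSD branch) uses the canonical two-call sysctlbyname pattern: a first call with a null buffer only reports the value's size, and a second call fetches the value. A standalone sketch for string-valued keys (BSD/macOS only; the key below is a macOS example for illustration, not one the library queries):

    #include <sys/sysctl.h>

    #include <cstdio>
    #include <string>
    #include <vector>

    bool SysctlString(const std::string& name, std::string* out) {
      size_t size = 0;
      // Pass 1: a null buffer makes sysctlbyname report the required size.
      if (sysctlbyname(name.c_str(), nullptr, &size, nullptr, 0) != 0)
        return false;  // unknown key
      // Pass 2: fetch the value itself.
      std::vector<char> buf(size);
      if (sysctlbyname(name.c_str(), buf.data(), &size, nullptr, 0) != 0)
        return false;
      out->assign(buf.data());  // string values arrive null-terminated
      return true;
    }

    int main() {
      std::string brand;
      if (SysctlString("machdep.cpu.brand_string", &brand))  // macOS example key
        std::printf("%s\n", brand.c_str());
    }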
diff  --git a/llvm/utils/benchmark/src/thread_manager.h b/llvm/utils/benchmark/src/thread_manager.h
deleted file mode 100644
index 82b4d72b62fbe..0000000000000
--- a/llvm/utils/benchmark/src/thread_manager.h
+++ /dev/null
@@ -1,66 +0,0 @@
-#ifndef BENCHMARK_THREAD_MANAGER_H
-#define BENCHMARK_THREAD_MANAGER_H
-
-#include <atomic>
-
-#include "benchmark/benchmark.h"
-#include "mutex.h"
-
-namespace benchmark {
-namespace internal {
-
-class ThreadManager {
- public:
-  ThreadManager(int num_threads)
-      : alive_threads_(num_threads), start_stop_barrier_(num_threads) {}
-
-  Mutex& GetBenchmarkMutex() const RETURN_CAPABILITY(benchmark_mutex_) {
-    return benchmark_mutex_;
-  }
-
-  bool StartStopBarrier() EXCLUDES(end_cond_mutex_) {
-    return start_stop_barrier_.wait();
-  }
-
-  void NotifyThreadComplete() EXCLUDES(end_cond_mutex_) {
-    start_stop_barrier_.removeThread();
-    if (--alive_threads_ == 0) {
-      MutexLock lock(end_cond_mutex_);
-      end_condition_.notify_all();
-    }
-  }
-
-  void WaitForAllThreads() EXCLUDES(end_cond_mutex_) {
-    MutexLock lock(end_cond_mutex_);
-    end_condition_.wait(lock.native_handle(),
-                        [this]() { return alive_threads_ == 0; });
-  }
-
- public:
-  struct Result {
-    int64_t iterations = 0;
-    double real_time_used = 0;
-    double cpu_time_used = 0;
-    double manual_time_used = 0;
-    int64_t bytes_processed = 0;
-    int64_t items_processed = 0;
-    int64_t complexity_n = 0;
-    std::string report_label_;
-    std::string error_message_;
-    bool has_error_ = false;
-    UserCounters counters;
-  };
-  GUARDED_BY(GetBenchmarkMutex()) Result results;
-
- private:
-  mutable Mutex benchmark_mutex_;
-  std::atomic<int> alive_threads_;
-  Barrier start_stop_barrier_;
-  Mutex end_cond_mutex_;
-  Condition end_condition_;
-};
-
-}  // namespace internal
-}  // namespace benchmark
-
-#endif  // BENCHMARK_THREAD_MANAGER_H

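NotifyThreadComplete and WaitForAllThreads above implement a common shutdown pattern: each worker decrements a live-thread count, and the last one out signals a condition variable the coordinating thread waits on. A simplified standalone sketch of just that pattern (mutex-protected counter instead of the atomic used above; no start/stop barrier):

    #include <condition_variable>
    #include <mutex>
    #include <thread>
    #include <vector>

    int main() {
      const int kThreads = 4;
      std::mutex mu;
      std::condition_variable done;
      int alive = kThreads;

      std::vector<std::thread> pool;
      for (int i = 0; i < kThreads; ++i) {
        pool.emplace_back([&] {
          // ... per-thread benchmark work would run here ...
          std::lock_guard<std::mutex> lock(mu);
          if (--alive == 0) done.notify_all();  // last thread out notifies
        });
      }

      {
        // The WaitForAllThreads() analogue: block until every worker checks out.
        std::unique_lock<std::mutex> lock(mu);
        done.wait(lock, [&] { return alive == 0; });
      }
      for (auto& t : pool) t.join();
    }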
diff  --git a/llvm/utils/benchmark/src/thread_timer.h b/llvm/utils/benchmark/src/thread_timer.h
deleted file mode 100644
index eaf108e017dc5..0000000000000
--- a/llvm/utils/benchmark/src/thread_timer.h
+++ /dev/null
@@ -1,69 +0,0 @@
-#ifndef BENCHMARK_THREAD_TIMER_H
-#define BENCHMARK_THREAD_TIMER_H
-
-#include "check.h"
-#include "timers.h"
-
-namespace benchmark {
-namespace internal {
-
-class ThreadTimer {
- public:
-  ThreadTimer() = default;
-
-  // Called by each thread
-  void StartTimer() {
-    running_ = true;
-    start_real_time_ = ChronoClockNow();
-    start_cpu_time_ = ThreadCPUUsage();
-  }
-
-  // Called by each thread
-  void StopTimer() {
-    CHECK(running_);
-    running_ = false;
-    real_time_used_ += ChronoClockNow() - start_real_time_;
-    // Floating point error can result in the subtraction producing a negative
-    // time. Guard against that.
-    cpu_time_used_ += std::max<double>(ThreadCPUUsage() - start_cpu_time_, 0);
-  }
-
-  // Called by each thread
-  void SetIterationTime(double seconds) { manual_time_used_ += seconds; }
-
-  bool running() const { return running_; }
-
-  // REQUIRES: timer is not running
-  double real_time_used() {
-    CHECK(!running_);
-    return real_time_used_;
-  }
-
-  // REQUIRES: timer is not running
-  double cpu_time_used() {
-    CHECK(!running_);
-    return cpu_time_used_;
-  }
-
-  // REQUIRES: timer is not running
-  double manual_time_used() {
-    CHECK(!running_);
-    return manual_time_used_;
-  }
-
- private:
-  bool running_ = false;        // Is the timer running
-  double start_real_time_ = 0;  // If running_
-  double start_cpu_time_ = 0;   // If running_
-
-  // Accumulated time so far (does not contain current slice if running_)
-  double real_time_used_ = 0;
-  double cpu_time_used_ = 0;
-  // Manually set iteration time. User sets this with SetIterationTime(seconds).
-  double manual_time_used_ = 0;
-};
-
-}  // namespace internal
-}  // namespace benchmark
-
-#endif  // BENCHMARK_THREAD_TIMER_H

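The timer above accumulates across Start/Stop pairs, which is what lets pause/resume-style benchmark APIs exclude setup work from the measurement. A toy sketch of that accumulate-across-pauses behavior with explicit timestamps standing in for the clock reads:

    #include <cassert>

    struct SliceTimer {
      bool running = false;
      double start = 0, used = 0;
      void Start(double now) { assert(!running); running = true; start = now; }
      void Stop(double now) { assert(running); running = false; used += now - start; }
    };

    int main() {
      SliceTimer t;
      t.Start(0.0); t.Stop(1.5);    // first timed slice: 1.5 s
      // ... untimed setup happens in the gap ...
      t.Start(5.0); t.Stop(5.25);   // second timed slice: 0.25 s
      assert(t.used == 1.75);       // the 3.5 s gap is excluded
    }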
diff  --git a/llvm/utils/benchmark/src/timers.cc b/llvm/utils/benchmark/src/timers.cc
deleted file mode 100644
index 7613ff92c6ef0..0000000000000
--- a/llvm/utils/benchmark/src/timers.cc
+++ /dev/null
@@ -1,217 +0,0 @@
-// Copyright 2015 Google Inc. All rights reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-//     http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-#include "timers.h"
-#include "internal_macros.h"
-
-#ifdef BENCHMARK_OS_WINDOWS
-#include <shlwapi.h>
-#undef StrCat  // Don't let StrCat in string_util.h be renamed to lstrcatA
-#include <versionhelpers.h>
-#include <windows.h>
-#else
-#include <fcntl.h>
-#ifndef BENCHMARK_OS_FUCHSIA
-#include <sys/resource.h>
-#endif
-#include <sys/time.h>
-#include <sys/types.h>  // this header must be included before 'sys/sysctl.h' to avoid compilation error on FreeBSD
-#include <unistd.h>
-#if defined BENCHMARK_OS_FREEBSD || defined BENCHMARK_OS_MACOSX
-#include <sys/sysctl.h>
-#endif
-#if defined(BENCHMARK_OS_MACOSX)
-#include <mach/mach_init.h>
-#include <mach/mach_port.h>
-#include <mach/thread_act.h>
-#endif
-#endif
-
-#ifdef BENCHMARK_OS_EMSCRIPTEN
-#include <emscripten.h>
-#endif
-
-#include <cerrno>
-#include <cstdint>
-#include <cstdio>
-#include <cstdlib>
-#include <cstring>
-#include <ctime>
-#include <iostream>
-#include <limits>
-#include <mutex>
-
-#include "check.h"
-#include "log.h"
-#include "sleep.h"
-#include "string_util.h"
-
-namespace benchmark {
-
-// Suppress unused warnings on helper functions.
-#if defined(__GNUC__)
-#pragma GCC diagnostic ignored "-Wunused-function"
-#endif
-
-namespace {
-#if defined(BENCHMARK_OS_WINDOWS)
-double MakeTime(FILETIME const& kernel_time, FILETIME const& user_time) {
-  ULARGE_INTEGER kernel;
-  ULARGE_INTEGER user;
-  kernel.HighPart = kernel_time.dwHighDateTime;
-  kernel.LowPart = kernel_time.dwLowDateTime;
-  user.HighPart = user_time.dwHighDateTime;
-  user.LowPart = user_time.dwLowDateTime;
-  return (static_cast<double>(kernel.QuadPart) +
-          static_cast<double>(user.QuadPart)) *
-         1e-7;
-}
-#elif !defined(BENCHMARK_OS_FUCHSIA)
-double MakeTime(struct rusage const& ru) {
-  return (static_cast<double>(ru.ru_utime.tv_sec) +
-          static_cast<double>(ru.ru_utime.tv_usec) * 1e-6 +
-          static_cast<double>(ru.ru_stime.tv_sec) +
-          static_cast<double>(ru.ru_stime.tv_usec) * 1e-6);
-}
-#endif
-#if defined(BENCHMARK_OS_MACOSX)
-double MakeTime(thread_basic_info_data_t const& info) {
-  return (static_cast<double>(info.user_time.seconds) +
-          static_cast<double>(info.user_time.microseconds) * 1e-6 +
-          static_cast<double>(info.system_time.seconds) +
-          static_cast<double>(info.system_time.microseconds) * 1e-6);
-}
-#endif
-#if defined(CLOCK_PROCESS_CPUTIME_ID) || defined(CLOCK_THREAD_CPUTIME_ID)
-double MakeTime(struct timespec const& ts) {
-  return ts.tv_sec + (static_cast<double>(ts.tv_nsec) * 1e-9);
-}
-#endif
-
-BENCHMARK_NORETURN static void DiagnoseAndExit(const char* msg) {
-  std::cerr << "ERROR: " << msg << std::endl;
-  std::exit(EXIT_FAILURE);
-}
-
-}  // end namespace
-
-double ProcessCPUUsage() {
-#if defined(BENCHMARK_OS_WINDOWS)
-  HANDLE proc = GetCurrentProcess();
-  FILETIME creation_time;
-  FILETIME exit_time;
-  FILETIME kernel_time;
-  FILETIME user_time;
-  if (GetProcessTimes(proc, &creation_time, &exit_time, &kernel_time,
-                      &user_time))
-    return MakeTime(kernel_time, user_time);
-  DiagnoseAndExit("GetProccessTimes() failed");
-#elif defined(BENCHMARK_OS_EMSCRIPTEN)
-  // clock_gettime(CLOCK_PROCESS_CPUTIME_ID, ...) returns 0 on Emscripten.
-  // Use Emscripten-specific API. Reported CPU time would be exactly the
-  // same as total time, but this is ok because there aren't long-latency
-  // synchronous system calls in Emscripten.
-  return emscripten_get_now() * 1e-3;
-#elif defined(CLOCK_PROCESS_CPUTIME_ID) && !defined(BENCHMARK_OS_MACOSX)
-  // FIXME We want to use clock_gettime, but it's not available in MacOS 10.11. See
-  // https://github.com/google/benchmark/pull/292
-  struct timespec spec;
-  if (clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &spec) == 0)
-    return MakeTime(spec);
-  DiagnoseAndExit("clock_gettime(CLOCK_PROCESS_CPUTIME_ID, ...) failed");
-#else
-  struct rusage ru;
-  if (getrusage(RUSAGE_SELF, &ru) == 0) return MakeTime(ru);
-  DiagnoseAndExit("getrusage(RUSAGE_SELF, ...) failed");
-#endif
-}
-
-double ThreadCPUUsage() {
-#if defined(BENCHMARK_OS_WINDOWS)
-  HANDLE this_thread = GetCurrentThread();
-  FILETIME creation_time;
-  FILETIME exit_time;
-  FILETIME kernel_time;
-  FILETIME user_time;
-  GetThreadTimes(this_thread, &creation_time, &exit_time, &kernel_time,
-                 &user_time);
-  return MakeTime(kernel_time, user_time);
-#elif defined(BENCHMARK_OS_MACOSX)
-  // FIXME We want to use clock_gettime, but it's not available in MacOS 10.11. See
-  // https://github.com/google/benchmark/pull/292
-  mach_msg_type_number_t count = THREAD_BASIC_INFO_COUNT;
-  thread_basic_info_data_t info;
-  mach_port_t thread = pthread_mach_thread_np(pthread_self());
-  if (thread_info(thread, THREAD_BASIC_INFO, (thread_info_t)&info, &count) ==
-      KERN_SUCCESS) {
-    return MakeTime(info);
-  }
-  DiagnoseAndExit("ThreadCPUUsage() failed when evaluating thread_info");
-#elif defined(BENCHMARK_OS_EMSCRIPTEN)
-  // Emscripten doesn't support traditional threads
-  return ProcessCPUUsage();
-#elif defined(BENCHMARK_OS_RTEMS)
-  // RTEMS doesn't support CLOCK_THREAD_CPUTIME_ID. See
-  // https://github.com/RTEMS/rtems/blob/master/cpukit/posix/src/clockgettime.c
-  return ProcessCPUUsage();
-#elif defined(BENCHMARK_OS_SOLARIS)
-  struct rusage ru;
-  if (getrusage(RUSAGE_LWP, &ru) == 0) return MakeTime(ru);
-  DiagnoseAndExit("getrusage(RUSAGE_LWP, ...) failed");
-#elif defined(CLOCK_THREAD_CPUTIME_ID)
-  struct timespec ts;
-  if (clock_gettime(CLOCK_THREAD_CPUTIME_ID, &ts) == 0) return MakeTime(ts);
-  DiagnoseAndExit("clock_gettime(CLOCK_THREAD_CPUTIME_ID, ...) failed");
-#else
-#error Per-thread timing is not available on your system.
-#endif
-}
-
-namespace {
-
-std::string DateTimeString(bool local) {
-  typedef std::chrono::system_clock Clock;
-  std::time_t now = Clock::to_time_t(Clock::now());
-  const std::size_t kStorageSize = 128;
-  char storage[kStorageSize];
-  std::size_t written;
-
-  if (local) {
-#if defined(BENCHMARK_OS_WINDOWS)
-    written =
-        std::strftime(storage, sizeof(storage), "%x %X", ::localtime(&now));
-#else
-    std::tm timeinfo;
-    ::localtime_r(&now, &timeinfo);
-    written = std::strftime(storage, sizeof(storage), "%F %T", &timeinfo);
-#endif
-  } else {
-#if defined(BENCHMARK_OS_WINDOWS)
-    written = std::strftime(storage, sizeof(storage), "%x %X", ::gmtime(&now));
-#else
-    std::tm timeinfo;
-    ::gmtime_r(&now, &timeinfo);
-    written = std::strftime(storage, sizeof(storage), "%F %T", &timeinfo);
-#endif
-  }
-  CHECK(written < kStorageSize);
-  ((void)written);  // prevent an unused-variable warning in optimized builds.
-  return std::string(storage);
-}
-
-}  // end namespace
-
-std::string LocalDateTimeString() { return DateTimeString(true); }
-
-}  // end namespace benchmark
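
For context on the deleted implementation above: ProcessCPUUsage() and
ThreadCPUUsage() return accumulated CPU seconds rather than wall-clock
seconds, so a sleeping process accrues almost none. A minimal, illustrative
sketch of exercising them (not part of the patch; assumes the definitions
above are linked in):

    #include <chrono>
    #include <cstdio>
    #include <thread>

    namespace benchmark {
    double ProcessCPUUsage();  // declared in src/timers.h, removed below
    double ThreadCPUUsage();
    }

    int main() {
      const double cpu_start = benchmark::ProcessCPUUsage();
      const auto wall_start = std::chrono::steady_clock::now();

      // Sleeping burns wall time but essentially no CPU time.
      std::this_thread::sleep_for(std::chrono::milliseconds(100));

      const double cpu_s = benchmark::ProcessCPUUsage() - cpu_start;
      const double wall_s = std::chrono::duration<double>(
          std::chrono::steady_clock::now() - wall_start).count();
      std::printf("cpu: %.4f s, wall: %.4f s\n", cpu_s, wall_s);  // cpu << wall
      return 0;
    }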

diff --git a/llvm/utils/benchmark/src/timers.h b/llvm/utils/benchmark/src/timers.h
deleted file mode 100644
index 65606ccd93d14..0000000000000
--- a/llvm/utils/benchmark/src/timers.h
+++ /dev/null
@@ -1,48 +0,0 @@
-#ifndef BENCHMARK_TIMERS_H
-#define BENCHMARK_TIMERS_H
-
-#include <chrono>
-#include <string>
-
-namespace benchmark {
-
-// Returns the CPU time used by the current process, in seconds.
-double ProcessCPUUsage();
-
-// Returns the CPU time used by the children of the current process, in seconds.
-double ChildrenCPUUsage();
-
-// Returns the CPU time used by the current thread, in seconds.
-double ThreadCPUUsage();
-
-#if defined(HAVE_STEADY_CLOCK)
-template <bool HighResIsSteady = std::chrono::high_resolution_clock::is_steady>
-struct ChooseSteadyClock {
-  typedef std::chrono::high_resolution_clock type;
-};
-
-template <>
-struct ChooseSteadyClock<false> {
-  typedef std::chrono::steady_clock type;
-};
-#endif
-
-struct ChooseClockType {
-#if defined(HAVE_STEADY_CLOCK)
-  typedef ChooseSteadyClock<>::type type;
-#else
-  typedef std::chrono::high_resolution_clock type;
-#endif
-};
-
-inline double ChronoClockNow() {
-  typedef ChooseClockType::type ClockType;
-  using FpSeconds = std::chrono::duration<double, std::chrono::seconds::period>;
-  return FpSeconds(ClockType::now().time_since_epoch()).count();
-}
-
-std::string LocalDateTimeString();
-
-}  // end namespace benchmark
-
-#endif  // BENCHMARK_TIMERS_H
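
Note that ChooseSteadyClock above prefers high_resolution_clock only when
that clock is itself steady, falling back to steady_clock otherwise, so
ChronoClockNow() never reads a non-monotonic clock. A compile-time sketch of
that selection (illustrative only; assumes HAVE_STEADY_CLOCK is defined, as
it is on most platforms):

    #include <chrono>
    #include <type_traits>
    #include "timers.h"  // the header deleted above

    using Chosen = benchmark::ChooseClockType::type;
    static_assert(std::chrono::high_resolution_clock::is_steady
                      ? std::is_same<Chosen,
                                     std::chrono::high_resolution_clock>::value
                      : std::is_same<Chosen, std::chrono::steady_clock>::value,
                  "ChronoClockNow() should always read a steady clock");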

diff --git a/llvm/utils/benchmark/test/AssemblyTests.cmake b/llvm/utils/benchmark/test/AssemblyTests.cmake
deleted file mode 100644
index 8605221ff7107..0000000000000
--- a/llvm/utils/benchmark/test/AssemblyTests.cmake
+++ /dev/null
@@ -1,45 +0,0 @@
-
-include(split_list)
-
-set(ASM_TEST_FLAGS "")
-check_cxx_compiler_flag(-O3 BENCHMARK_HAS_O3_FLAG)
-if (BENCHMARK_HAS_O3_FLAG)
-  list(APPEND ASM_TEST_FLAGS -O3)
-endif()
-
-check_cxx_compiler_flag(-g0 BENCHMARK_HAS_G0_FLAG)
-if (BENCHMARK_HAS_G0_FLAG)
-  list(APPEND ASM_TEST_FLAGS -g0)
-endif()
-
-check_cxx_compiler_flag(-fno-stack-protector BENCHMARK_HAS_FNO_STACK_PROTECTOR_FLAG)
-if (BENCHMARK_HAS_FNO_STACK_PROTECTOR_FLAG)
-  list(APPEND ASM_TEST_FLAGS -fno-stack-protector)
-endif()
-
-split_list(ASM_TEST_FLAGS)
-string(TOUPPER "${CMAKE_CXX_COMPILER_ID}" ASM_TEST_COMPILER)
-
-macro(add_filecheck_test name)
-  cmake_parse_arguments(ARG "" "" "CHECK_PREFIXES" ${ARGV})
-  add_library(${name} OBJECT ${name}.cc)
-  set_target_properties(${name} PROPERTIES COMPILE_FLAGS "-S ${ASM_TEST_FLAGS}")
-  set(ASM_OUTPUT_FILE "${CMAKE_CURRENT_BINARY_DIR}/${name}.s")
-  add_custom_target(copy_${name} ALL
-      COMMAND ${PROJECT_SOURCE_DIR}/tools/strip_asm.py
-        $<TARGET_OBJECTS:${name}>
-        ${ASM_OUTPUT_FILE}
-      BYPRODUCTS ${ASM_OUTPUT_FILE})
-  add_dependencies(copy_${name} ${name})
-  if (NOT ARG_CHECK_PREFIXES)
-    set(ARG_CHECK_PREFIXES "CHECK")
-  endif()
-  foreach(prefix ${ARG_CHECK_PREFIXES})
-    add_test(NAME run_${name}_${prefix}
-        COMMAND
-          ${LLVM_FILECHECK_EXE} ${name}.cc
-          --input-file=${ASM_OUTPUT_FILE}
-          --check-prefixes=CHECK,CHECK-${ASM_TEST_COMPILER}
-        WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR})
-  endforeach()
-endmacro()

diff --git a/llvm/utils/benchmark/test/CMakeLists.txt b/llvm/utils/benchmark/test/CMakeLists.txt
deleted file mode 100644
index 05ae804bfe3fb..0000000000000
--- a/llvm/utils/benchmark/test/CMakeLists.txt
+++ /dev/null
@@ -1,247 +0,0 @@
-# Enable the tests
-
-find_package(Threads REQUIRED)
-include(CheckCXXCompilerFlag)
-
-# NOTE: Some tests use `<cassert>` to perform the test. Therefore we must
-# strip -DNDEBUG from the default CMake flags in RELEASE mode.
-string(TOUPPER "${CMAKE_BUILD_TYPE}" uppercase_CMAKE_BUILD_TYPE)
-if( NOT uppercase_CMAKE_BUILD_TYPE STREQUAL "DEBUG" )
-  add_definitions( -UNDEBUG )
-  add_definitions(-DTEST_BENCHMARK_LIBRARY_HAS_NO_ASSERTIONS)
-  # Also remove /D NDEBUG to avoid MSVC warnings about conflicting defines.
-  foreach (flags_var_to_scrub
-      CMAKE_CXX_FLAGS_RELEASE
-      CMAKE_CXX_FLAGS_RELWITHDEBINFO
-      CMAKE_CXX_FLAGS_MINSIZEREL
-      CMAKE_C_FLAGS_RELEASE
-      CMAKE_C_FLAGS_RELWITHDEBINFO
-      CMAKE_C_FLAGS_MINSIZEREL)
-    string (REGEX REPLACE "(^| )[/-]D *NDEBUG($| )" " "
-      "${flags_var_to_scrub}" "${${flags_var_to_scrub}}")
-  endforeach()
-endif()
-
-check_cxx_compiler_flag(-O3 BENCHMARK_HAS_O3_FLAG)
-set(BENCHMARK_O3_FLAG "")
-if (BENCHMARK_HAS_O3_FLAG)
-  set(BENCHMARK_O3_FLAG "-O3")
-endif()
-
-# NOTE: These flags must be added after find_package(Threads REQUIRED) otherwise
-# they will break the configuration check.
-if (DEFINED BENCHMARK_CXX_LINKER_FLAGS)
-  list(APPEND CMAKE_EXE_LINKER_FLAGS ${BENCHMARK_CXX_LINKER_FLAGS})
-endif()
-
-add_library(output_test_helper STATIC output_test_helper.cc output_test.h)
-
-macro(compile_benchmark_test name)
-  add_executable(${name} "${name}.cc")
-  target_link_libraries(${name} benchmark ${CMAKE_THREAD_LIBS_INIT})
-endmacro(compile_benchmark_test)
-
-macro(compile_benchmark_test_with_main name)
-  add_executable(${name} "${name}.cc")
-  target_link_libraries(${name} benchmark_main)
-endmacro(compile_benchmark_test_with_main)
-
-macro(compile_output_test name)
-  add_executable(${name} "${name}.cc" output_test.h)
-  target_link_libraries(${name} output_test_helper benchmark
-          ${BENCHMARK_CXX_LIBRARIES} ${CMAKE_THREAD_LIBS_INIT})
-endmacro(compile_output_test)
-
-# Demonstration executable
-compile_benchmark_test(benchmark_test)
-add_test(benchmark benchmark_test --benchmark_min_time=0.01)
-
-compile_benchmark_test(filter_test)
-macro(add_filter_test name filter expect)
-  add_test(${name} filter_test --benchmark_min_time=0.01 --benchmark_filter=${filter} ${expect})
-  add_test(${name}_list_only filter_test --benchmark_list_tests --benchmark_filter=${filter} ${expect})
-endmacro(add_filter_test)
-
-add_filter_test(filter_simple "Foo" 3)
-add_filter_test(filter_simple_negative "-Foo" 2)
-add_filter_test(filter_suffix "BM_.*" 4)
-add_filter_test(filter_suffix_negative "-BM_.*" 1)
-add_filter_test(filter_regex_all ".*" 5)
-add_filter_test(filter_regex_all_negative "-.*" 0)
-add_filter_test(filter_regex_blank "" 5)
-add_filter_test(filter_regex_blank_negative "-" 0)
-add_filter_test(filter_regex_none "monkey" 0)
-add_filter_test(filter_regex_none_negative "-monkey" 5)
-add_filter_test(filter_regex_wildcard ".*Foo.*" 3)
-add_filter_test(filter_regex_wildcard_negative "-.*Foo.*" 2)
-add_filter_test(filter_regex_begin "^BM_.*" 4)
-add_filter_test(filter_regex_begin_negative "-^BM_.*" 1)
-add_filter_test(filter_regex_begin2 "^N" 1)
-add_filter_test(filter_regex_begin2_negative "-^N" 4)
-add_filter_test(filter_regex_end ".*Ba$" 1)
-add_filter_test(filter_regex_end_negative "-.*Ba$" 4)
-
-compile_benchmark_test(options_test)
-add_test(options_benchmarks options_test --benchmark_min_time=0.01)
-
-compile_benchmark_test(basic_test)
-add_test(basic_benchmark basic_test --benchmark_min_time=0.01)
-
-compile_benchmark_test(diagnostics_test)
-add_test(diagnostics_test diagnostics_test --benchmark_min_time=0.01)
-
-compile_benchmark_test(skip_with_error_test)
-add_test(skip_with_error_test skip_with_error_test --benchmark_min_time=0.01)
-
-compile_benchmark_test(donotoptimize_test)
-# Some of the issues with DoNotOptimize only occur when optimization is enabled
-check_cxx_compiler_flag(-O3 BENCHMARK_HAS_O3_FLAG)
-if (BENCHMARK_HAS_O3_FLAG)
-  set_target_properties(donotoptimize_test PROPERTIES COMPILE_FLAGS "-O3")
-endif()
-add_test(donotoptimize_test donotoptimize_test --benchmark_min_time=0.01)
-
-compile_benchmark_test(fixture_test)
-add_test(fixture_test fixture_test --benchmark_min_time=0.01)
-
-compile_benchmark_test(register_benchmark_test)
-add_test(register_benchmark_test register_benchmark_test --benchmark_min_time=0.01)
-
-compile_benchmark_test(map_test)
-add_test(map_test map_test --benchmark_min_time=0.01)
-
-compile_benchmark_test(multiple_ranges_test)
-add_test(multiple_ranges_test multiple_ranges_test --benchmark_min_time=0.01)
-
-compile_benchmark_test_with_main(link_main_test)
-add_test(link_main_test link_main_test --benchmark_min_time=0.01)
-
-compile_output_test(reporter_output_test)
-add_test(reporter_output_test reporter_output_test --benchmark_min_time=0.01)
-
-compile_output_test(templated_fixture_test)
-add_test(templated_fixture_test templated_fixture_test --benchmark_min_time=0.01)
-
-compile_output_test(user_counters_test)
-add_test(user_counters_test user_counters_test --benchmark_min_time=0.01)
-
-compile_output_test(user_counters_tabular_test)
-add_test(user_counters_tabular_test user_counters_tabular_test --benchmark_counters_tabular=true --benchmark_min_time=0.01)
-
-check_cxx_compiler_flag(-std=c++03 BENCHMARK_HAS_CXX03_FLAG)
-if (BENCHMARK_HAS_CXX03_FLAG)
-  compile_benchmark_test(cxx03_test)
-  set_target_properties(cxx03_test
-      PROPERTIES
-      COMPILE_FLAGS "-std=c++03")
-  # libstdc++ provides different definitions within <map> between dialects. When
-  # LTO is enabled and -Werror is specified, GCC diagnoses this ODR violation,
-  # causing the test to fail to compile. To prevent this we explicitly disable
-  # the warning.
-  check_cxx_compiler_flag(-Wno-odr BENCHMARK_HAS_WNO_ODR)
-  if (BENCHMARK_ENABLE_LTO AND BENCHMARK_HAS_WNO_ODR)
-    set_target_properties(cxx03_test
-        PROPERTIES
-        LINK_FLAGS "-Wno-odr")
-  endif()
-  add_test(cxx03 cxx03_test --benchmark_min_time=0.01)
-endif()
-
-# Attempt to work around flaky test failures when running on Appveyor servers.
-if (DEFINED ENV{APPVEYOR})
-  set(COMPLEXITY_MIN_TIME "0.5")
-else()
-  set(COMPLEXITY_MIN_TIME "0.01")
-endif()
-compile_output_test(complexity_test)
-add_test(complexity_benchmark complexity_test --benchmark_min_time=${COMPLEXITY_MIN_TIME})
-
-###############################################################################
-# GoogleTest Unit Tests
-###############################################################################
-
-if (BENCHMARK_ENABLE_GTEST_TESTS)
-  macro(compile_gtest name)
-    add_executable(${name} "${name}.cc")
-    if (TARGET googletest)
-      add_dependencies(${name} googletest)
-    endif()
-    if (GTEST_INCLUDE_DIRS)
-      target_include_directories(${name} PRIVATE ${GTEST_INCLUDE_DIRS})
-    endif()
-    target_link_libraries(${name} benchmark
-        ${GTEST_BOTH_LIBRARIES} ${CMAKE_THREAD_LIBS_INIT})
-  endmacro(compile_gtest)
-
-  macro(add_gtest name)
-    compile_gtest(${name})
-    add_test(${name} ${name})
-  endmacro()
-
-  add_gtest(benchmark_gtest)
-  add_gtest(statistics_gtest)
-endif(BENCHMARK_ENABLE_GTEST_TESTS)
-
-###############################################################################
-# Assembly Unit Tests
-###############################################################################
-
-if (BENCHMARK_ENABLE_ASSEMBLY_TESTS)
-  if (NOT LLVM_FILECHECK_EXE)
-    message(FATAL_ERROR "LLVM FileCheck is required when including this file")
-  endif()
-  include(AssemblyTests.cmake)
-  add_filecheck_test(donotoptimize_assembly_test)
-  add_filecheck_test(state_assembly_test)
-  add_filecheck_test(clobber_memory_assembly_test)
-endif()
-
-
-
-###############################################################################
-# Code Coverage Configuration
-###############################################################################
-
-# Add the coverage command(s)
-if(CMAKE_BUILD_TYPE)
-  string(TOLOWER ${CMAKE_BUILD_TYPE} CMAKE_BUILD_TYPE_LOWER)
-endif()
-if (${CMAKE_BUILD_TYPE_LOWER} MATCHES "coverage")
-  find_program(GCOV gcov)
-  find_program(LCOV lcov)
-  find_program(GENHTML genhtml)
-  find_program(CTEST ctest)
-  if (GCOV AND LCOV AND GENHTML AND CTEST AND HAVE_CXX_FLAG_COVERAGE)
-    add_custom_command(
-      OUTPUT ${CMAKE_BINARY_DIR}/lcov/index.html
-      COMMAND ${LCOV} -q -z -d .
-      COMMAND ${LCOV} -q --no-external -c -b "${CMAKE_SOURCE_DIR}" -d . -o before.lcov -i
-      COMMAND ${CTEST} --force-new-ctest-process
-      COMMAND ${LCOV} -q --no-external -c -b "${CMAKE_SOURCE_DIR}" -d . -o after.lcov
-      COMMAND ${LCOV} -q -a before.lcov -a after.lcov --output-file final.lcov
-      COMMAND ${LCOV} -q -r final.lcov "'${CMAKE_SOURCE_DIR}/test/*'" -o final.lcov
-      COMMAND ${GENHTML} final.lcov -o lcov --demangle-cpp --sort -p "${CMAKE_BINARY_DIR}" -t benchmark
-      DEPENDS filter_test benchmark_test options_test basic_test fixture_test cxx03_test complexity_test
-      WORKING_DIRECTORY ${CMAKE_BINARY_DIR}
-      COMMENT "Running LCOV"
-    )
-    add_custom_target(coverage
-      DEPENDS ${CMAKE_BINARY_DIR}/lcov/index.html
-      COMMENT "LCOV report at lcov/index.html"
-    )
-    message(STATUS "Coverage command added")
-  else()
-    if (HAVE_CXX_FLAG_COVERAGE)
-      set(CXX_FLAG_COVERAGE_MESSAGE supported)
-    else()
-      set(CXX_FLAG_COVERAGE_MESSAGE unavailable)
-    endif()
-    message(WARNING
-      "Coverage not available:\n"
-      "  gcov: ${GCOV}\n"
-      "  lcov: ${LCOV}\n"
-      "  genhtml: ${GENHTML}\n"
-      "  ctest: ${CTEST}\n"
-      "  --coverage flag: ${CXX_FLAG_COVERAGE_MESSAGE}")
-  endif()
-endif()

diff --git a/llvm/utils/benchmark/test/basic_test.cc b/llvm/utils/benchmark/test/basic_test.cc
deleted file mode 100644
index d07fbc00b1516..0000000000000
--- a/llvm/utils/benchmark/test/basic_test.cc
+++ /dev/null
@@ -1,136 +0,0 @@
-
-#include "benchmark/benchmark.h"
-
-#define BASIC_BENCHMARK_TEST(x) BENCHMARK(x)->Arg(8)->Arg(512)->Arg(8192)
-
-void BM_empty(benchmark::State& state) {
-  for (auto _ : state) {
-    benchmark::DoNotOptimize(state.iterations());
-  }
-}
-BENCHMARK(BM_empty);
-BENCHMARK(BM_empty)->ThreadPerCpu();
-
-void BM_spin_empty(benchmark::State& state) {
-  for (auto _ : state) {
-    for (int x = 0; x < state.range(0); ++x) {
-      benchmark::DoNotOptimize(x);
-    }
-  }
-}
-BASIC_BENCHMARK_TEST(BM_spin_empty);
-BASIC_BENCHMARK_TEST(BM_spin_empty)->ThreadPerCpu();
-
-void BM_spin_pause_before(benchmark::State& state) {
-  for (int i = 0; i < state.range(0); ++i) {
-    benchmark::DoNotOptimize(i);
-  }
-  for (auto _ : state) {
-    for (int i = 0; i < state.range(0); ++i) {
-      benchmark::DoNotOptimize(i);
-    }
-  }
-}
-BASIC_BENCHMARK_TEST(BM_spin_pause_before);
-BASIC_BENCHMARK_TEST(BM_spin_pause_before)->ThreadPerCpu();
-
-void BM_spin_pause_during(benchmark::State& state) {
-  for (auto _ : state) {
-    state.PauseTiming();
-    for (int i = 0; i < state.range(0); ++i) {
-      benchmark::DoNotOptimize(i);
-    }
-    state.ResumeTiming();
-    for (int i = 0; i < state.range(0); ++i) {
-      benchmark::DoNotOptimize(i);
-    }
-  }
-}
-BASIC_BENCHMARK_TEST(BM_spin_pause_during);
-BASIC_BENCHMARK_TEST(BM_spin_pause_during)->ThreadPerCpu();
-
-void BM_pause_during(benchmark::State& state) {
-  for (auto _ : state) {
-    state.PauseTiming();
-    state.ResumeTiming();
-  }
-}
-BENCHMARK(BM_pause_during);
-BENCHMARK(BM_pause_during)->ThreadPerCpu();
-BENCHMARK(BM_pause_during)->UseRealTime();
-BENCHMARK(BM_pause_during)->UseRealTime()->ThreadPerCpu();
-
-void BM_spin_pause_after(benchmark::State& state) {
-  for (auto _ : state) {
-    for (int i = 0; i < state.range(0); ++i) {
-      benchmark::DoNotOptimize(i);
-    }
-  }
-  for (int i = 0; i < state.range(0); ++i) {
-    benchmark::DoNotOptimize(i);
-  }
-}
-BASIC_BENCHMARK_TEST(BM_spin_pause_after);
-BASIC_BENCHMARK_TEST(BM_spin_pause_after)->ThreadPerCpu();
-
-void BM_spin_pause_before_and_after(benchmark::State& state) {
-  for (int i = 0; i < state.range(0); ++i) {
-    benchmark::DoNotOptimize(i);
-  }
-  for (auto _ : state) {
-    for (int i = 0; i < state.range(0); ++i) {
-      benchmark::DoNotOptimize(i);
-    }
-  }
-  for (int i = 0; i < state.range(0); ++i) {
-    benchmark::DoNotOptimize(i);
-  }
-}
-BASIC_BENCHMARK_TEST(BM_spin_pause_before_and_after);
-BASIC_BENCHMARK_TEST(BM_spin_pause_before_and_after)->ThreadPerCpu();
-
-void BM_empty_stop_start(benchmark::State& state) {
-  for (auto _ : state) {
-  }
-}
-BENCHMARK(BM_empty_stop_start);
-BENCHMARK(BM_empty_stop_start)->ThreadPerCpu();
-
-
-void BM_KeepRunning(benchmark::State& state) {
-  size_t iter_count = 0;
-  assert(iter_count == state.iterations());
-  while (state.KeepRunning()) {
-    ++iter_count;
-  }
-  assert(iter_count == state.iterations());
-}
-BENCHMARK(BM_KeepRunning);
-
-void BM_KeepRunningBatch(benchmark::State& state) {
-  // Choose a prime batch size to avoid evenly dividing max_iterations.
-  const size_t batch_size = 101;
-  size_t iter_count = 0;
-  while (state.KeepRunningBatch(batch_size)) {
-    iter_count += batch_size;
-  }
-  assert(state.iterations() == iter_count);
-}
-BENCHMARK(BM_KeepRunningBatch);
-
-void BM_RangedFor(benchmark::State& state) {
-  size_t iter_count = 0;
-  for (auto _ : state) {
-    ++iter_count;
-  }
-  assert(iter_count == state.max_iterations);
-}
-BENCHMARK(BM_RangedFor);
-
-// Ensure that StateIterator provides all the necessary typedefs required to
-// instantiate std::iterator_traits.
-static_assert(std::is_same<
-  typename std::iterator_traits<benchmark::State::StateIterator>::value_type,
-  typename benchmark::State::StateIterator::value_type>::value, "");
-
-BENCHMARK_MAIN();

diff --git a/llvm/utils/benchmark/test/benchmark_gtest.cc b/llvm/utils/benchmark/test/benchmark_gtest.cc
deleted file mode 100644
index 10683b433ab54..0000000000000
--- a/llvm/utils/benchmark/test/benchmark_gtest.cc
+++ /dev/null
@@ -1,33 +0,0 @@
-#include <vector>
-
-#include "../src/benchmark_register.h"
-#include "gmock/gmock.h"
-#include "gtest/gtest.h"
-
-namespace {
-
-TEST(AddRangeTest, Simple) {
-  std::vector<int> dst;
-  AddRange(&dst, 1, 2, 2);
-  EXPECT_THAT(dst, testing::ElementsAre(1, 2));
-}
-
-TEST(AddRangeTest, Simple64) {
-  std::vector<int64_t> dst;
-  AddRange(&dst, static_cast<int64_t>(1), static_cast<int64_t>(2), 2);
-  EXPECT_THAT(dst, testing::ElementsAre(1, 2));
-}
-
-TEST(AddRangeTest, Advanced) {
-  std::vector<int> dst;
-  AddRange(&dst, 5, 15, 2);
-  EXPECT_THAT(dst, testing::ElementsAre(5, 8, 15));
-}
-
-TEST(AddRangeTest, Advanced64) {
-  std::vector<int64_t> dst;
-  AddRange(&dst, static_cast<int64_t>(5), static_cast<int64_t>(15), 2);
-  EXPECT_THAT(dst, testing::ElementsAre(5, 8, 15));
-}
-
-}  // end namespace
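
These expectations pin down AddRange's contract: it emits the low bound,
every power of the multiplier strictly between the bounds, then the high
bound. A standalone sketch that reproduces the expected sequences (inferred
from the tests above, not the library's actual implementation):

    #include <cassert>
    #include <vector>

    template <typename T>
    void AddRangeSketch(std::vector<T>* dst, T lo, T hi, int mult) {
      dst->push_back(lo);
      // Powers of `mult` (1, mult, mult^2, ...) strictly between lo and hi.
      for (T i = 1; i < hi; i *= static_cast<T>(mult))
        if (i > lo) dst->push_back(i);
      if (hi > lo) dst->push_back(hi);
    }

    int main() {
      std::vector<int> dst;
      AddRangeSketch(&dst, 5, 15, 2);
      assert((dst == std::vector<int>{5, 8, 15}));  // AddRangeTest.Advanced
      return 0;
    }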

diff --git a/llvm/utils/benchmark/test/benchmark_test.cc b/llvm/utils/benchmark/test/benchmark_test.cc
deleted file mode 100644
index 3cd4f5565fa1c..0000000000000
--- a/llvm/utils/benchmark/test/benchmark_test.cc
+++ /dev/null
@@ -1,245 +0,0 @@
-#include "benchmark/benchmark.h"
-
-#include <assert.h>
-#include <math.h>
-#include <stdint.h>
-
-#include <chrono>
-#include <cstdlib>
-#include <iostream>
-#include <limits>
-#include <list>
-#include <map>
-#include <mutex>
-#include <set>
-#include <sstream>
-#include <string>
-#include <thread>
-#include <utility>
-#include <vector>
-
-#if defined(__GNUC__)
-#define BENCHMARK_NOINLINE __attribute__((noinline))
-#else
-#define BENCHMARK_NOINLINE
-#endif
-
-namespace {
-
-int BENCHMARK_NOINLINE Factorial(uint32_t n) {
-  return (n == 1) ? 1 : n * Factorial(n - 1);
-}
-
-double CalculatePi(int depth) {
-  double pi = 0.0;
-  for (int i = 0; i < depth; ++i) {
-    double numerator = static_cast<double>(((i % 2) * 2) - 1);
-    double denominator = static_cast<double>((2 * i) - 1);
-    pi += numerator / denominator;
-  }
-  return (pi - 1.0) * 4;
-}
-
-std::set<int64_t> ConstructRandomSet(int64_t size) {
-  std::set<int64_t> s;
-  for (int i = 0; i < size; ++i) s.insert(s.end(), i);
-  return s;
-}
-
-std::mutex test_vector_mu;
-std::vector<int>* test_vector = nullptr;
-
-}  // end namespace
-
-static void BM_Factorial(benchmark::State& state) {
-  int fac_42 = 0;
-  for (auto _ : state) fac_42 = Factorial(8);
-  // Prevent compiler optimizations
-  std::stringstream ss;
-  ss << fac_42;
-  state.SetLabel(ss.str());
-}
-BENCHMARK(BM_Factorial);
-BENCHMARK(BM_Factorial)->UseRealTime();
-
-static void BM_CalculatePiRange(benchmark::State& state) {
-  double pi = 0.0;
-  for (auto _ : state) pi = CalculatePi(static_cast<int>(state.range(0)));
-  std::stringstream ss;
-  ss << pi;
-  state.SetLabel(ss.str());
-}
-BENCHMARK_RANGE(BM_CalculatePiRange, 1, 1024 * 1024);
-
-static void BM_CalculatePi(benchmark::State& state) {
-  static const int depth = 1024;
-  for (auto _ : state) {
-    benchmark::DoNotOptimize(CalculatePi(static_cast<int>(depth)));
-  }
-}
-BENCHMARK(BM_CalculatePi)->Threads(8);
-BENCHMARK(BM_CalculatePi)->ThreadRange(1, 32);
-BENCHMARK(BM_CalculatePi)->ThreadPerCpu();
-
-static void BM_SetInsert(benchmark::State& state) {
-  std::set<int64_t> data;
-  for (auto _ : state) {
-    state.PauseTiming();
-    data = ConstructRandomSet(state.range(0));
-    state.ResumeTiming();
-    for (int j = 0; j < state.range(1); ++j) data.insert(rand());
-  }
-  state.SetItemsProcessed(state.iterations() * state.range(1));
-  state.SetBytesProcessed(state.iterations() * state.range(1) * sizeof(int));
-}
-
-// Test many inserts at once to reduce the total iterations needed. Otherwise, the slower,
-// non-timed part of each iteration will make the benchmark take forever.
-BENCHMARK(BM_SetInsert)->Ranges({{1 << 10, 8 << 10}, {128, 512}});
-
-template <typename Container,
-          typename ValueType = typename Container::value_type>
-static void BM_Sequential(benchmark::State& state) {
-  ValueType v = 42;
-  for (auto _ : state) {
-    Container c;
-    for (int64_t i = state.range(0); --i;) c.push_back(v);
-  }
-  const int64_t items_processed = state.iterations() * state.range(0);
-  state.SetItemsProcessed(items_processed);
-  state.SetBytesProcessed(items_processed * sizeof(v));
-}
-BENCHMARK_TEMPLATE2(BM_Sequential, std::vector<int>, int)
-    ->Range(1 << 0, 1 << 10);
-BENCHMARK_TEMPLATE(BM_Sequential, std::list<int>)->Range(1 << 0, 1 << 10);
-// Test the variadic version of BENCHMARK_TEMPLATE in C++11 and beyond.
-#ifdef BENCHMARK_HAS_CXX11
-BENCHMARK_TEMPLATE(BM_Sequential, std::vector<int>, int)->Arg(512);
-#endif
-
-static void BM_StringCompare(benchmark::State& state) {
-  size_t len = static_cast<size_t>(state.range(0));
-  std::string s1(len, '-');
-  std::string s2(len, '-');
-  for (auto _ : state) benchmark::DoNotOptimize(s1.compare(s2));
-}
-BENCHMARK(BM_StringCompare)->Range(1, 1 << 20);
-
-static void BM_SetupTeardown(benchmark::State& state) {
-  if (state.thread_index == 0) {
-    // No need to lock test_vector_mu here as this is running single-threaded.
-    test_vector = new std::vector<int>();
-  }
-  int i = 0;
-  for (auto _ : state) {
-    std::lock_guard<std::mutex> l(test_vector_mu);
-    if (i % 2 == 0)
-      test_vector->push_back(i);
-    else
-      test_vector->pop_back();
-    ++i;
-  }
-  if (state.thread_index == 0) {
-    delete test_vector;
-  }
-}
-BENCHMARK(BM_SetupTeardown)->ThreadPerCpu();
-
-static void BM_LongTest(benchmark::State& state) {
-  double tracker = 0.0;
-  for (auto _ : state) {
-    for (int i = 0; i < state.range(0); ++i)
-      benchmark::DoNotOptimize(tracker += i);
-  }
-}
-BENCHMARK(BM_LongTest)->Range(1 << 16, 1 << 28);
-
-static void BM_ParallelMemset(benchmark::State& state) {
-  int64_t size = state.range(0) / static_cast<int64_t>(sizeof(int));
-  int thread_size = static_cast<int>(size) / state.threads;
-  int from = thread_size * state.thread_index;
-  int to = from + thread_size;
-
-  if (state.thread_index == 0) {
-    test_vector = new std::vector<int>(static_cast<size_t>(size));
-  }
-
-  for (auto _ : state) {
-    for (int i = from; i < to; i++) {
-      // No need to lock test_vector_mu as ranges
-      // do not overlap between threads.
-      benchmark::DoNotOptimize(test_vector->at(i) = 1);
-    }
-  }
-
-  if (state.thread_index == 0) {
-    delete test_vector;
-  }
-}
-BENCHMARK(BM_ParallelMemset)->Arg(10 << 20)->ThreadRange(1, 4);
-
-static void BM_ManualTiming(benchmark::State& state) {
-  int64_t slept_for = 0;
-  int64_t microseconds = state.range(0);
-  std::chrono::duration<double, std::micro> sleep_duration{
-      static_cast<double>(microseconds)};
-
-  for (auto _ : state) {
-    auto start = std::chrono::high_resolution_clock::now();
-    // Simulate some useful workload with a sleep
-    std::this_thread::sleep_for(
-        std::chrono::duration_cast<std::chrono::nanoseconds>(sleep_duration));
-    auto end = std::chrono::high_resolution_clock::now();
-
-    auto elapsed =
-        std::chrono::duration_cast<std::chrono::duration<double>>(end - start);
-
-    state.SetIterationTime(elapsed.count());
-    slept_for += microseconds;
-  }
-  state.SetItemsProcessed(slept_for);
-}
-BENCHMARK(BM_ManualTiming)->Range(1, 1 << 14)->UseRealTime();
-BENCHMARK(BM_ManualTiming)->Range(1, 1 << 14)->UseManualTime();
-
-#ifdef BENCHMARK_HAS_CXX11
-
-template <class... Args>
-void BM_with_args(benchmark::State& state, Args&&...) {
-  for (auto _ : state) {
-  }
-}
-BENCHMARK_CAPTURE(BM_with_args, int_test, 42, 43, 44);
-BENCHMARK_CAPTURE(BM_with_args, string_and_pair_test, std::string("abc"),
-                  std::pair<int, double>(42, 3.8));
-
-void BM_non_template_args(benchmark::State& state, int, double) {
-  while(state.KeepRunning()) {}
-}
-BENCHMARK_CAPTURE(BM_non_template_args, basic_test, 0, 0);
-
-#endif  // BENCHMARK_HAS_CXX11
-
-static void BM_DenseThreadRanges(benchmark::State& st) {
-  switch (st.range(0)) {
-    case 1:
-      assert(st.threads == 1 || st.threads == 2 || st.threads == 3);
-      break;
-    case 2:
-      assert(st.threads == 1 || st.threads == 3 || st.threads == 4);
-      break;
-    case 3:
-      assert(st.threads == 5 || st.threads == 8 || st.threads == 11 ||
-             st.threads == 14);
-      break;
-    default:
-      assert(false && "Invalid test case number");
-  }
-  while (st.KeepRunning()) {
-  }
-}
-BENCHMARK(BM_DenseThreadRanges)->Arg(1)->DenseThreadRange(1, 3);
-BENCHMARK(BM_DenseThreadRanges)->Arg(2)->DenseThreadRange(1, 4, 2);
-BENCHMARK(BM_DenseThreadRanges)->Arg(3)->DenseThreadRange(5, 14, 3);
-
-BENCHMARK_MAIN();
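
An aside on the CalculatePi() workload above: its i = 0 term contributes
(-1)/(-1) = 1 and the remaining terms form the Leibniz series, i.e.

    \sum_{i=0}^{d-1} \frac{(i \bmod 2) \cdot 2 - 1}{2i - 1}
        = 1 + \sum_{k=1}^{d-1} \frac{(-1)^{k+1}}{2k - 1}
        \to 1 + \frac{\pi}{4}  \quad (d \to \infty),

which is why the function returns (pi - 1.0) * 4: the result converges
(slowly) to pi as the depth grows.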

diff --git a/llvm/utils/benchmark/test/clobber_memory_assembly_test.cc b/llvm/utils/benchmark/test/clobber_memory_assembly_test.cc
deleted file mode 100644
index f41911a39ce73..0000000000000
--- a/llvm/utils/benchmark/test/clobber_memory_assembly_test.cc
+++ /dev/null
@@ -1,64 +0,0 @@
-#include <benchmark/benchmark.h>
-
-#ifdef __clang__
-#pragma clang diagnostic ignored "-Wreturn-type"
-#endif
-
-extern "C" {
-
-extern int ExternInt;
-extern int ExternInt2;
-extern int ExternInt3;
-
-}
-
-// CHECK-LABEL: test_basic:
-extern "C" void test_basic() {
-  int x;
-  benchmark::DoNotOptimize(&x);
-  x = 101;
-  benchmark::ClobberMemory();
-  // CHECK: leaq [[DEST:[^,]+]], %rax
-  // CHECK: movl $101, [[DEST]]
-  // CHECK: ret
-}
-
-// CHECK-LABEL: test_redundant_store:
-extern "C" void test_redundant_store() {
-  ExternInt = 3;
-  benchmark::ClobberMemory();
-  ExternInt = 51;
-  // CHECK-DAG: ExternInt
-  // CHECK-DAG: movl $3
-  // CHECK: movl $51
-}
-
-// CHECK-LABEL: test_redundant_read:
-extern "C" void test_redundant_read() {
-  int x;
-  benchmark::DoNotOptimize(&x);
-  x = ExternInt;
-  benchmark::ClobberMemory();
-  x = ExternInt2;
-  // CHECK: leaq [[DEST:[^,]+]], %rax
-  // CHECK: ExternInt(%rip)
-  // CHECK: movl %eax, [[DEST]]
-  // CHECK-NOT: ExternInt2
-  // CHECK: ret
-}
-
-// CHECK-LABEL: test_redundant_read2:
-extern "C" void test_redundant_read2() {
-  int x;
-  benchmark::DoNotOptimize(&x);
-  x = ExternInt;
-  benchmark::ClobberMemory();
-  x = ExternInt2;
-  benchmark::ClobberMemory();
-  // CHECK: leaq [[DEST:[^,]+]], %rax
-  // CHECK: ExternInt(%rip)
-  // CHECK: movl %eax, [[DEST]]
-  // CHECK: ExternInt2(%rip)
-  // CHECK: movl %eax, [[DEST]]
-  // CHECK: ret
-}

diff --git a/llvm/utils/benchmark/test/complexity_test.cc b/llvm/utils/benchmark/test/complexity_test.cc
deleted file mode 100644
index ab832861ece29..0000000000000
--- a/llvm/utils/benchmark/test/complexity_test.cc
+++ /dev/null
@@ -1,167 +0,0 @@
-#undef NDEBUG
-#include <algorithm>
-#include <cassert>
-#include <cmath>
-#include <cstdlib>
-#include <vector>
-#include "benchmark/benchmark.h"
-#include "output_test.h"
-
-namespace {
-
-#define ADD_COMPLEXITY_CASES(...) \
-  int CONCAT(dummy, __LINE__) = AddComplexityTest(__VA_ARGS__)
-
-int AddComplexityTest(std::string big_o_test_name, std::string rms_test_name,
-                      std::string big_o) {
-  SetSubstitutions({{"%bigo_name", big_o_test_name},
-                    {"%rms_name", rms_test_name},
-                    {"%bigo_str", "[ ]* %float " + big_o},
-                    {"%bigo", big_o},
-                    {"%rms", "[ ]*[0-9]+ %"}});
-  AddCases(
-      TC_ConsoleOut,
-      {{"^%bigo_name %bigo_str %bigo_str[ ]*$"},
-       {"^%bigo_name", MR_Not},  // Assert we we didn't only matched a name.
-       {"^%rms_name %rms %rms[ ]*$", MR_Next}});
-  AddCases(TC_JSONOut, {{"\"name\": \"%bigo_name\",$"},
-                        {"\"cpu_coefficient\": %float,$", MR_Next},
-                        {"\"real_coefficient\": %float,$", MR_Next},
-                        {"\"big_o\": \"%bigo\",$", MR_Next},
-                        {"\"time_unit\": \"ns\"$", MR_Next},
-                        {"}", MR_Next},
-                        {"\"name\": \"%rms_name\",$"},
-                        {"\"rms\": %float$", MR_Next},
-                        {"}", MR_Next}});
-  AddCases(TC_CSVOut, {{"^\"%bigo_name\",,%float,%float,%bigo,,,,,$"},
-                       {"^\"%bigo_name\"", MR_Not},
-                       {"^\"%rms_name\",,%float,%float,,,,,,$", MR_Next}});
-  return 0;
-}
-
-}  // end namespace
-
-// ========================================================================= //
-// --------------------------- Testing BigO O(1) --------------------------- //
-// ========================================================================= //
-
-void BM_Complexity_O1(benchmark::State& state) {
-  for (auto _ : state) {
-    for (int i = 0; i < 1024; ++i) {
-      benchmark::DoNotOptimize(&i);
-    }
-  }
-  state.SetComplexityN(state.range(0));
-}
-BENCHMARK(BM_Complexity_O1)->Range(1, 1 << 18)->Complexity(benchmark::o1);
-BENCHMARK(BM_Complexity_O1)->Range(1, 1 << 18)->Complexity();
-BENCHMARK(BM_Complexity_O1)->Range(1, 1 << 18)->Complexity([](int64_t) {
-  return 1.0;
-});
-
-const char *big_o_1_test_name = "BM_Complexity_O1_BigO";
-const char *rms_o_1_test_name = "BM_Complexity_O1_RMS";
-const char *enum_big_o_1 = "\\([0-9]+\\)";
-// FIXME: Tolerate both '(1)' and 'lgN' as output when the complexity is auto
-// deduced.
-// See https://github.com/google/benchmark/issues/272
-const char *auto_big_o_1 = "(\\([0-9]+\\))|(lgN)";
-const char *lambda_big_o_1 = "f\\(N\\)";
-
-// Add enum tests
-ADD_COMPLEXITY_CASES(big_o_1_test_name, rms_o_1_test_name, enum_big_o_1);
-
-// Add auto enum tests
-ADD_COMPLEXITY_CASES(big_o_1_test_name, rms_o_1_test_name, auto_big_o_1);
-
-// Add lambda tests
-ADD_COMPLEXITY_CASES(big_o_1_test_name, rms_o_1_test_name, lambda_big_o_1);
-
-// ========================================================================= //
-// --------------------------- Testing BigO O(N) --------------------------- //
-// ========================================================================= //
-
-std::vector<int> ConstructRandomVector(int64_t size) {
-  std::vector<int> v;
-  v.reserve(static_cast<int>(size));
-  for (int i = 0; i < size; ++i) {
-    v.push_back(std::rand() % size);
-  }
-  return v;
-}
-
-void BM_Complexity_O_N(benchmark::State& state) {
-  auto v = ConstructRandomVector(state.range(0));
-  // Test worst case scenario (item not in vector)
-  const int64_t item_not_in_vector = state.range(0) * 2;
-  for (auto _ : state) {
-    benchmark::DoNotOptimize(std::find(v.begin(), v.end(), item_not_in_vector));
-  }
-  state.SetComplexityN(state.range(0));
-}
-BENCHMARK(BM_Complexity_O_N)
-    ->RangeMultiplier(2)
-    ->Range(1 << 10, 1 << 16)
-    ->Complexity(benchmark::oN);
-BENCHMARK(BM_Complexity_O_N)
-    ->RangeMultiplier(2)
-    ->Range(1 << 10, 1 << 16)
-    ->Complexity([](int64_t n) -> double { return n; });
-BENCHMARK(BM_Complexity_O_N)
-    ->RangeMultiplier(2)
-    ->Range(1 << 10, 1 << 16)
-    ->Complexity();
-
-const char *big_o_n_test_name = "BM_Complexity_O_N_BigO";
-const char *rms_o_n_test_name = "BM_Complexity_O_N_RMS";
-const char *enum_auto_big_o_n = "N";
-const char *lambda_big_o_n = "f\\(N\\)";
-
-// Add enum tests
-ADD_COMPLEXITY_CASES(big_o_n_test_name, rms_o_n_test_name, enum_auto_big_o_n);
-
-// Add lambda tests
-ADD_COMPLEXITY_CASES(big_o_n_test_name, rms_o_n_test_name, lambda_big_o_n);
-
-// ========================================================================= //
-// ------------------------- Testing BigO O(N*lgN) ------------------------- //
-// ========================================================================= //
-
-static void BM_Complexity_O_N_log_N(benchmark::State& state) {
-  auto v = ConstructRandomVector(state.range(0));
-  for (auto _ : state) {
-    std::sort(v.begin(), v.end());
-  }
-  state.SetComplexityN(state.range(0));
-}
-BENCHMARK(BM_Complexity_O_N_log_N)
-    ->RangeMultiplier(2)
-    ->Range(1 << 10, 1 << 16)
-    ->Complexity(benchmark::oNLogN);
-BENCHMARK(BM_Complexity_O_N_log_N)
-    ->RangeMultiplier(2)
-    ->Range(1 << 10, 1 << 16)
-    ->Complexity([](int64_t n) { return n * log2(n); });
-BENCHMARK(BM_Complexity_O_N_log_N)
-    ->RangeMultiplier(2)
-    ->Range(1 << 10, 1 << 16)
-    ->Complexity();
-
-const char *big_o_n_lg_n_test_name = "BM_Complexity_O_N_log_N_BigO";
-const char *rms_o_n_lg_n_test_name = "BM_Complexity_O_N_log_N_RMS";
-const char *enum_auto_big_o_n_lg_n = "NlgN";
-const char *lambda_big_o_n_lg_n = "f\\(N\\)";
-
-// Add enum tests
-ADD_COMPLEXITY_CASES(big_o_n_lg_n_test_name, rms_o_n_lg_n_test_name,
-                     enum_auto_big_o_n_lg_n);
-
-// Add lambda tests
-ADD_COMPLEXITY_CASES(big_o_n_lg_n_test_name, rms_o_n_lg_n_test_name,
-                     lambda_big_o_n_lg_n);
-
-// ========================================================================= //
-// --------------------------- TEST CASES END ------------------------------ //
-// ========================================================================= //
-
-int main(int argc, char *argv[]) { RunOutputTests(argc, argv); }
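
For intuition about the %bigo / %rms placeholders checked above: the reported
coefficient comes from fitting the measured timings t_i at sizes n_i to
t ~ c * f(n). The fitting code is not part of this diff, so take this as the
standard least-squares formulation rather than the library's exact one:

    \hat{c} = \frac{\sum_i t_i \, f(n_i)}{\sum_i f(n_i)^2}, \qquad
    \mathrm{RMS} = \frac{1}{\bar{t}} \sqrt{\frac{1}{N}
        \sum_i \bigl(t_i - \hat{c} \, f(n_i)\bigr)^2}

with the residual reported as a normalized percentage, which is what the
"[ ]*[0-9]+ %" regex for %rms is matching.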

diff --git a/llvm/utils/benchmark/test/cxx03_test.cc b/llvm/utils/benchmark/test/cxx03_test.cc
deleted file mode 100644
index baa9ed9262baa..0000000000000
--- a/llvm/utils/benchmark/test/cxx03_test.cc
+++ /dev/null
@@ -1,63 +0,0 @@
-#undef NDEBUG
-#include <cassert>
-#include <cstddef>
-
-#include "benchmark/benchmark.h"
-
-#if __cplusplus >= 201103L
-#error C++11 or greater detected. Should be C++03.
-#endif
-
-#ifdef BENCHMARK_HAS_CXX11
-#error C++11 or greater detected by the library. BENCHMARK_HAS_CXX11 is defined.
-#endif
-
-void BM_empty(benchmark::State& state) {
-  while (state.KeepRunning()) {
-    volatile std::size_t x = state.iterations();
-    ((void)x);
-  }
-}
-BENCHMARK(BM_empty);
-
-// The new C++11 interface for args/ranges requires initializer list support.
-// Therefore we provide the old interface to support C++03.
-void BM_old_arg_range_interface(benchmark::State& state) {
-  assert((state.range(0) == 1 && state.range(1) == 2) ||
-         (state.range(0) == 5 && state.range(1) == 6));
-  while (state.KeepRunning()) {
-  }
-}
-BENCHMARK(BM_old_arg_range_interface)->ArgPair(1, 2)->RangePair(5, 5, 6, 6);
-
-template <class T, class U>
-void BM_template2(benchmark::State& state) {
-  BM_empty(state);
-}
-BENCHMARK_TEMPLATE2(BM_template2, int, long);
-
-template <class T>
-void BM_template1(benchmark::State& state) {
-  BM_empty(state);
-}
-BENCHMARK_TEMPLATE(BM_template1, long);
-BENCHMARK_TEMPLATE1(BM_template1, int);
-
-template <class T>
-struct BM_Fixture : public ::benchmark::Fixture {
-};
-
-BENCHMARK_TEMPLATE_F(BM_Fixture, BM_template1, long)(benchmark::State& state) {
-  BM_empty(state);
-}
-BENCHMARK_TEMPLATE1_F(BM_Fixture, BM_template2, int)(benchmark::State& state) {
-  BM_empty(state);
-}
-
-void BM_counters(benchmark::State& state) {
-    BM_empty(state);
-    state.counters["Foo"] = 2;
-}
-BENCHMARK(BM_counters);
-
-BENCHMARK_MAIN();

diff --git a/llvm/utils/benchmark/test/diagnostics_test.cc b/llvm/utils/benchmark/test/diagnostics_test.cc
deleted file mode 100644
index dd64a33655315..0000000000000
--- a/llvm/utils/benchmark/test/diagnostics_test.cc
+++ /dev/null
@@ -1,80 +0,0 @@
-// Testing:
-//   State::PauseTiming()
-//   State::ResumeTiming()
-// Test that the CHECKs within these functions diagnose when they are called
-// outside of the KeepRunning() loop.
-//
-// NOTE: Users should NOT include or use src/check.h. This is only done in
-// order to test library internals.
-
-#include <cstdlib>
-#include <stdexcept>
-
-#include "../src/check.h"
-#include "benchmark/benchmark.h"
-
-#if defined(__GNUC__) && !defined(__EXCEPTIONS)
-#define TEST_HAS_NO_EXCEPTIONS
-#endif
-
-void TestHandler() {
-#ifndef TEST_HAS_NO_EXCEPTIONS
-  throw std::logic_error("");
-#else
-  std::abort();
-#endif
-}
-
-void try_invalid_pause_resume(benchmark::State& state) {
-#if !defined(TEST_BENCHMARK_LIBRARY_HAS_NO_ASSERTIONS) && !defined(TEST_HAS_NO_EXCEPTIONS)
-  try {
-    state.PauseTiming();
-    std::abort();
-  } catch (std::logic_error const&) {
-  }
-  try {
-    state.ResumeTiming();
-    std::abort();
-  } catch (std::logic_error const&) {
-  }
-#else
-  (void)state;  // avoid unused warning
-#endif
-}
-
-void BM_diagnostic_test(benchmark::State& state) {
-  static bool called_once = false;
-
-  if (called_once == false) try_invalid_pause_resume(state);
-
-  for (auto _ : state) {
-    benchmark::DoNotOptimize(state.iterations());
-  }
-
-  if (called_once == false) try_invalid_pause_resume(state);
-
-  called_once = true;
-}
-BENCHMARK(BM_diagnostic_test);
-
-
-void BM_diagnostic_test_keep_running(benchmark::State& state) {
-  static bool called_once = false;
-
-  if (called_once == false) try_invalid_pause_resume(state);
-
-  while(state.KeepRunning()) {
-    benchmark::DoNotOptimize(state.iterations());
-  }
-
-  if (called_once == false) try_invalid_pause_resume(state);
-
-  called_once = true;
-}
-BENCHMARK(BM_diagnostic_test_keep_running);
-
-int main(int argc, char* argv[]) {
-  benchmark::internal::GetAbortHandler() = &TestHandler;
-  benchmark::Initialize(&argc, argv);
-  benchmark::RunSpecifiedBenchmarks();
-}

diff --git a/llvm/utils/benchmark/test/donotoptimize_assembly_test.cc b/llvm/utils/benchmark/test/donotoptimize_assembly_test.cc
deleted file mode 100644
index d4b0bab70e773..0000000000000
--- a/llvm/utils/benchmark/test/donotoptimize_assembly_test.cc
+++ /dev/null
@@ -1,163 +0,0 @@
-#include <benchmark/benchmark.h>
-
-#ifdef __clang__
-#pragma clang diagnostic ignored "-Wreturn-type"
-#endif
-
-extern "C" {
-
-extern int ExternInt;
-extern int ExternInt2;
-extern int ExternInt3;
-
-inline int Add42(int x) { return x + 42; }
-
-struct NotTriviallyCopyable {
-  NotTriviallyCopyable();
-  explicit NotTriviallyCopyable(int x) : value(x) {}
-  NotTriviallyCopyable(NotTriviallyCopyable const&);
-  int value;
-};
-
-struct Large {
-  int value;
-  int data[2];
-};
-
-}
-// CHECK-LABEL: test_with_rvalue:
-extern "C" void test_with_rvalue() {
-  benchmark::DoNotOptimize(Add42(0));
-  // CHECK: movl $42, %eax
-  // CHECK: ret
-}
-
-// CHECK-LABEL: test_with_large_rvalue:
-extern "C" void test_with_large_rvalue() {
-  benchmark::DoNotOptimize(Large{ExternInt, {ExternInt, ExternInt}});
-  // CHECK: ExternInt(%rip)
-  // CHECK: movl %eax, -{{[0-9]+}}(%[[REG:[a-z]+]]
-  // CHECK: movl %eax, -{{[0-9]+}}(%[[REG]])
-  // CHECK: movl %eax, -{{[0-9]+}}(%[[REG]])
-  // CHECK: ret
-}
-
-// CHECK-LABEL: test_with_non_trivial_rvalue:
-extern "C" void test_with_non_trivial_rvalue() {
-  benchmark::DoNotOptimize(NotTriviallyCopyable(ExternInt));
-  // CHECK: mov{{l|q}} ExternInt(%rip)
-  // CHECK: ret
-}
-
-// CHECK-LABEL: test_with_lvalue:
-extern "C" void test_with_lvalue() {
-  int x = 101;
-  benchmark::DoNotOptimize(x);
-  // CHECK-GNU: movl $101, %eax
-  // CHECK-CLANG: movl $101, -{{[0-9]+}}(%[[REG:[a-z]+]])
-  // CHECK: ret
-}
-
-// CHECK-LABEL: test_with_large_lvalue:
-extern "C" void test_with_large_lvalue() {
-  Large L{ExternInt, {ExternInt, ExternInt}};
-  benchmark::DoNotOptimize(L);
-  // CHECK: ExternInt(%rip)
-  // CHECK: movl %eax, -{{[0-9]+}}(%[[REG:[a-z]+]])
-  // CHECK: movl %eax, -{{[0-9]+}}(%[[REG]])
-  // CHECK: movl %eax, -{{[0-9]+}}(%[[REG]])
-  // CHECK: ret
-}
-
-// CHECK-LABEL: test_with_non_trivial_lvalue:
-extern "C" void test_with_non_trivial_lvalue() {
-  NotTriviallyCopyable NTC(ExternInt);
-  benchmark::DoNotOptimize(NTC);
-  // CHECK: ExternInt(%rip)
-  // CHECK: movl %eax, -{{[0-9]+}}(%[[REG:[a-z]+]])
-  // CHECK: ret
-}
-
-// CHECK-LABEL: test_with_const_lvalue:
-extern "C" void test_with_const_lvalue() {
-  const int x = 123;
-  benchmark::DoNotOptimize(x);
-  // CHECK: movl $123, %eax
-  // CHECK: ret
-}
-
-// CHECK-LABEL: test_with_large_const_lvalue:
-extern "C" void test_with_large_const_lvalue() {
-  const Large L{ExternInt, {ExternInt, ExternInt}};
-  benchmark::DoNotOptimize(L);
-  // CHECK: ExternInt(%rip)
-  // CHECK: movl %eax, -{{[0-9]+}}(%[[REG:[a-z]+]])
-  // CHECK: movl %eax, -{{[0-9]+}}(%[[REG]])
-  // CHECK: movl %eax, -{{[0-9]+}}(%[[REG]])
-  // CHECK: ret
-}
-
-// CHECK-LABEL: test_with_non_trivial_const_lvalue:
-extern "C" void test_with_non_trivial_const_lvalue() {
-  const NotTriviallyCopyable Obj(ExternInt);
-  benchmark::DoNotOptimize(Obj);
-  // CHECK: mov{{q|l}} ExternInt(%rip)
-  // CHECK: ret
-}
-
-// CHECK-LABEL: test_div_by_two:
-extern "C" int test_div_by_two(int input) {
-  int divisor = 2;
-  benchmark::DoNotOptimize(divisor);
-  return input / divisor;
-  // CHECK: movl $2, [[DEST:.*]]
-  // CHECK: idivl [[DEST]]
-  // CHECK: ret
-}
-
-// CHECK-LABEL: test_inc_integer:
-extern "C" int test_inc_integer() {
-  int x = 0;
-  for (int i=0; i < 5; ++i)
-    benchmark::DoNotOptimize(++x);
-  // CHECK: movl $1, [[DEST:.*]]
-  // CHECK: {{(addl \$1,|incl)}} [[DEST]]
-  // CHECK: {{(addl \$1,|incl)}} [[DEST]]
-  // CHECK: {{(addl \$1,|incl)}} [[DEST]]
-  // CHECK: {{(addl \$1,|incl)}} [[DEST]]
-  // CHECK-CLANG: movl [[DEST]], %eax
-  // CHECK: ret
-  return x;
-}
-
-// CHECK-LABEL: test_pointer_rvalue
-extern "C" void test_pointer_rvalue() {
-  // CHECK: movl $42, [[DEST:.*]]
-  // CHECK: leaq [[DEST]], %rax
-  // CHECK-CLANG: movq %rax, -{{[0-9]+}}(%[[REG:[a-z]+]])
-  // CHECK: ret
-  int x = 42;
-  benchmark::DoNotOptimize(&x);
-}
-
-// CHECK-LABEL: test_pointer_const_lvalue:
-extern "C" void test_pointer_const_lvalue() {
-  // CHECK: movl $42, [[DEST:.*]]
-  // CHECK: leaq [[DEST]], %rax
-  // CHECK-CLANG: movq %rax, -{{[0-9]+}}(%[[REG:[a-z]+]])
-  // CHECK: ret
-  int x = 42;
-  int * const xp = &x;
-  benchmark::DoNotOptimize(xp);
-}
-
-// CHECK-LABEL: test_pointer_lvalue:
-extern "C" void test_pointer_lvalue() {
-  // CHECK: movl $42, [[DEST:.*]]
-  // CHECK: leaq [[DEST]], %rax
-  // CHECK-CLANG: movq %rax, -{{[0-9]+}}(%[[REG:[a-z+]+]])
-  // CHECK: ret
-  int x = 42;
-  int *xp = &x;
-  benchmark::DoNotOptimize(xp);
-}

diff --git a/llvm/utils/benchmark/test/donotoptimize_test.cc b/llvm/utils/benchmark/test/donotoptimize_test.cc
deleted file mode 100644
index 2ce92d1c72bed..0000000000000
--- a/llvm/utils/benchmark/test/donotoptimize_test.cc
+++ /dev/null
@@ -1,52 +0,0 @@
-#include "benchmark/benchmark.h"
-
-#include <cstdint>
-
-namespace {
-#if defined(__GNUC__)
-std::uint64_t double_up(const std::uint64_t x) __attribute__((const));
-#endif
-std::uint64_t double_up(const std::uint64_t x) { return x * 2; }
-}
-
-// Using DoNotOptimize on types like BitRef seems to cause a lot of problems
-// with the inline assembly on both GCC and Clang.
-struct BitRef {
-  int index;
-  unsigned char &byte;
-
-public:
-  static BitRef Make() {
-    static unsigned char arr[2] = {};
-    BitRef b(1, arr[0]);
-    return b;
-  }
-private:
-  BitRef(int i, unsigned char& b) : index(i), byte(b) {}
-};
-
-int main(int, char*[]) {
-  // This test verifies that DoNotOptimize() compiles for a variety of types.
-
-  char buffer8[8] = "";
-  benchmark::DoNotOptimize(buffer8);
-
-  char buffer20[20] = "";
-  benchmark::DoNotOptimize(buffer20);
-
-  char buffer1024[1024] = "";
-  benchmark::DoNotOptimize(buffer1024);
-  benchmark::DoNotOptimize(&buffer1024[0]);
-
-  int x = 123;
-  benchmark::DoNotOptimize(x);
-  benchmark::DoNotOptimize(&x);
-  benchmark::DoNotOptimize(x += 42);
-
-  benchmark::DoNotOptimize(double_up(x));
-
-  // These tests verify that DoNotOptimize() also compiles for reference-like
-  // class types such as BitRef.
-  benchmark::DoNotOptimize(BitRef::Make());
-  BitRef lval = BitRef::Make();
-  benchmark::DoNotOptimize(lval);
-}

diff --git a/llvm/utils/benchmark/test/filter_test.cc b/llvm/utils/benchmark/test/filter_test.cc
deleted file mode 100644
index 0e27065c1558e..0000000000000
--- a/llvm/utils/benchmark/test/filter_test.cc
+++ /dev/null
@@ -1,104 +0,0 @@
-#include "benchmark/benchmark.h"
-
-#include <cassert>
-#include <cmath>
-#include <cstdint>
-#include <cstdlib>
-
-#include <iostream>
-#include <limits>
-#include <sstream>
-#include <string>
-
-namespace {
-
-class TestReporter : public benchmark::ConsoleReporter {
- public:
-  virtual bool ReportContext(const Context& context) {
-    return ConsoleReporter::ReportContext(context);
-  };
-
-  virtual void ReportRuns(const std::vector<Run>& report) {
-    ++count_;
-    ConsoleReporter::ReportRuns(report);
-  };
-
-  TestReporter() : count_(0) {}
-
-  virtual ~TestReporter() {}
-
-  size_t GetCount() const { return count_; }
-
- private:
-  mutable size_t count_;
-};
-
-}  // end namespace
-
-static void NoPrefix(benchmark::State& state) {
-  for (auto _ : state) {
-  }
-}
-BENCHMARK(NoPrefix);
-
-static void BM_Foo(benchmark::State& state) {
-  for (auto _ : state) {
-  }
-}
-BENCHMARK(BM_Foo);
-
-static void BM_Bar(benchmark::State& state) {
-  for (auto _ : state) {
-  }
-}
-BENCHMARK(BM_Bar);
-
-static void BM_FooBar(benchmark::State& state) {
-  for (auto _ : state) {
-  }
-}
-BENCHMARK(BM_FooBar);
-
-static void BM_FooBa(benchmark::State& state) {
-  for (auto _ : state) {
-  }
-}
-BENCHMARK(BM_FooBa);
-
-int main(int argc, char **argv) {
-  bool list_only = false;
-  for (int i = 0; i < argc; ++i)
-    list_only |= std::string(argv[i]).find("--benchmark_list_tests") !=
-                 std::string::npos;
-
-  benchmark::Initialize(&argc, argv);
-
-  TestReporter test_reporter;
-  const size_t returned_count =
-      benchmark::RunSpecifiedBenchmarks(&test_reporter);
-
-  if (argc == 2) {
-    // Make sure we ran all of the tests
-    std::stringstream ss(argv[1]);
-    size_t expected_return;
-    ss >> expected_return;
-
-    if (returned_count != expected_return) {
-      std::cerr << "ERROR: Expected " << expected_return
-                << " tests to match the filter but returned_count = "
-                << returned_count << std::endl;
-      return -1;
-    }
-
-    const size_t expected_reports = list_only ? 0 : expected_return;
-    const size_t reports_count = test_reporter.GetCount();
-    if (reports_count != expected_reports) {
-      std::cerr << "ERROR: Expected " << expected_reports
-                << " tests to be run but reported_count = " << reports_count
-                << std::endl;
-      return -1;
-    }
-  }
-
-  return 0;
-}

diff --git a/llvm/utils/benchmark/test/fixture_test.cc b/llvm/utils/benchmark/test/fixture_test.cc
deleted file mode 100644
index 1462b10f02f96..0000000000000
--- a/llvm/utils/benchmark/test/fixture_test.cc
+++ /dev/null
@@ -1,49 +0,0 @@
-
-#include "benchmark/benchmark.h"
-
-#include <cassert>
-#include <memory>
-
-class MyFixture : public ::benchmark::Fixture {
- public:
-  void SetUp(const ::benchmark::State& state) {
-    if (state.thread_index == 0) {
-      assert(data.get() == nullptr);
-      data.reset(new int(42));
-    }
-  }
-
-  void TearDown(const ::benchmark::State& state) {
-    if (state.thread_index == 0) {
-      assert(data.get() != nullptr);
-      data.reset();
-    }
-  }
-
-  ~MyFixture() { assert(data == nullptr); }
-
-  std::unique_ptr<int> data;
-};
-
-BENCHMARK_F(MyFixture, Foo)(benchmark::State &st) {
-  assert(data.get() != nullptr);
-  assert(*data == 42);
-  for (auto _ : st) {
-  }
-}
-
-BENCHMARK_DEFINE_F(MyFixture, Bar)(benchmark::State& st) {
-  if (st.thread_index == 0) {
-    assert(data.get() != nullptr);
-    assert(*data == 42);
-  }
-  for (auto _ : st) {
-    assert(data.get() != nullptr);
-    assert(*data == 42);
-  }
-  st.SetItemsProcessed(st.range(0));
-}
-BENCHMARK_REGISTER_F(MyFixture, Bar)->Arg(42);
-BENCHMARK_REGISTER_F(MyFixture, Bar)->Arg(42)->ThreadPerCpu();
-
-BENCHMARK_MAIN();

diff --git a/llvm/utils/benchmark/test/link_main_test.cc b/llvm/utils/benchmark/test/link_main_test.cc
deleted file mode 100644
index 241ad5c3905e9..0000000000000
--- a/llvm/utils/benchmark/test/link_main_test.cc
+++ /dev/null
@@ -1,8 +0,0 @@
-#include "benchmark/benchmark.h"
-
-void BM_empty(benchmark::State& state) {
-  for (auto _ : state) {
-    benchmark::DoNotOptimize(state.iterations());
-  }
-}
-BENCHMARK(BM_empty);

diff --git a/llvm/utils/benchmark/test/map_test.cc b/llvm/utils/benchmark/test/map_test.cc
deleted file mode 100644
index dbf7982a3686e..0000000000000
--- a/llvm/utils/benchmark/test/map_test.cc
+++ /dev/null
@@ -1,57 +0,0 @@
-#include "benchmark/benchmark.h"
-
-#include <cstdlib>
-#include <map>
-
-namespace {
-
-std::map<int, int> ConstructRandomMap(int size) {
-  std::map<int, int> m;
-  for (int i = 0; i < size; ++i) {
-    m.insert(std::make_pair(std::rand() % size, std::rand() % size));
-  }
-  return m;
-}
-
-}  // namespace
-
-// Basic version.
-static void BM_MapLookup(benchmark::State& state) {
-  const int size = static_cast<int>(state.range(0));
-  std::map<int, int> m;
-  for (auto _ : state) {
-    state.PauseTiming();
-    m = ConstructRandomMap(size);
-    state.ResumeTiming();
-    for (int i = 0; i < size; ++i) {
-      benchmark::DoNotOptimize(m.find(std::rand() % size));
-    }
-  }
-  state.SetItemsProcessed(state.iterations() * size);
-}
-BENCHMARK(BM_MapLookup)->Range(1 << 3, 1 << 12);
-
-// Using fixtures.
-class MapFixture : public ::benchmark::Fixture {
- public:
-  void SetUp(const ::benchmark::State& st) {
-    m = ConstructRandomMap(static_cast<int>(st.range(0)));
-  }
-
-  void TearDown(const ::benchmark::State&) { m.clear(); }
-
-  std::map<int, int> m;
-};
-
-BENCHMARK_DEFINE_F(MapFixture, Lookup)(benchmark::State& state) {
-  const int size = static_cast<int>(state.range(0));
-  for (auto _ : state) {
-    for (int i = 0; i < size; ++i) {
-      benchmark::DoNotOptimize(m.find(std::rand() % size));
-    }
-  }
-  state.SetItemsProcessed(state.iterations() * size);
-}
-BENCHMARK_REGISTER_F(MapFixture, Lookup)->Range(1 << 3, 1 << 12);
-
-BENCHMARK_MAIN();

diff --git a/llvm/utils/benchmark/test/multiple_ranges_test.cc b/llvm/utils/benchmark/test/multiple_ranges_test.cc
deleted file mode 100644
index c64acabc25c98..0000000000000
--- a/llvm/utils/benchmark/test/multiple_ranges_test.cc
+++ /dev/null
@@ -1,97 +0,0 @@
-#include "benchmark/benchmark.h"
-
-#include <cassert>
-#include <iostream>
-#include <set>
-#include <vector>
-
-class MultipleRangesFixture : public ::benchmark::Fixture {
- public:
-  MultipleRangesFixture()
-      : expectedValues({{1, 3, 5},
-                        {1, 3, 8},
-                        {1, 3, 15},
-                        {2, 3, 5},
-                        {2, 3, 8},
-                        {2, 3, 15},
-                        {1, 4, 5},
-                        {1, 4, 8},
-                        {1, 4, 15},
-                        {2, 4, 5},
-                        {2, 4, 8},
-                        {2, 4, 15},
-                        {1, 7, 5},
-                        {1, 7, 8},
-                        {1, 7, 15},
-                        {2, 7, 5},
-                        {2, 7, 8},
-                        {2, 7, 15},
-                        {7, 6, 3}}) {}
-
-  void SetUp(const ::benchmark::State& state) {
-    std::vector<int64_t> ranges = {state.range(0), state.range(1),
-                                   state.range(2)};
-
-    assert(expectedValues.find(ranges) != expectedValues.end());
-
-    actualValues.insert(ranges);
-  }
-
-  // NOTE: This is not TearDown as we want to check after _all_ runs are
-  // complete.
-  virtual ~MultipleRangesFixture() {
-    assert(actualValues.size() == expectedValues.size());
-    if (actualValues.size() != expectedValues.size()) {
-      std::cout << "EXPECTED\n";
-      for (auto v : expectedValues) {
-        std::cout << "{";
-        for (int64_t iv : v) {
-          std::cout << iv << ", ";
-        }
-        std::cout << "}\n";
-      }
-      std::cout << "ACTUAL\n";
-      for (auto v : actualValues) {
-        std::cout << "{";
-        for (int64_t iv : v) {
-          std::cout << iv << ", ";
-        }
-        std::cout << "}\n";
-      }
-    }
-  }
-
-  std::set<std::vector<int64_t>> expectedValues;
-  std::set<std::vector<int64_t>> actualValues;
-};
-
-BENCHMARK_DEFINE_F(MultipleRangesFixture, Empty)(benchmark::State& state) {
-  for (auto _ : state) {
-    int64_t product = state.range(0) * state.range(1) * state.range(2);
-    for (int64_t x = 0; x < product; x++) {
-      benchmark::DoNotOptimize(x);
-    }
-  }
-}
-
-BENCHMARK_REGISTER_F(MultipleRangesFixture, Empty)
-    ->RangeMultiplier(2)
-    ->Ranges({{1, 2}, {3, 7}, {5, 15}})
-    ->Args({7, 6, 3});
-
-void BM_CheckDefaultArgument(benchmark::State& state) {
-  // Test that the 'range()' without an argument is the same as 'range(0)'.
-  assert(state.range() == state.range(0));
-  assert(state.range() != state.range(1));
-  for (auto _ : state) {
-  }
-}
-BENCHMARK(BM_CheckDefaultArgument)->Ranges({{1, 5}, {6, 10}});
-
-static void BM_MultipleRanges(benchmark::State& st) {
-  for (auto _ : st) {
-  }
-}
-BENCHMARK(BM_MultipleRanges)->Ranges({{5, 5}, {6, 6}});
-
-BENCHMARK_MAIN();
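
The 19 expectedValues tuples above are the Cartesian product of each
dimension's expansion under RangeMultiplier(2): {1, 2} x {3, 4, 7} x
{5, 8, 15} (the same lo / powers-of-multiplier / hi pattern as AddRange),
plus the explicit Args({7, 6, 3}). A quick illustrative sketch that
enumerates the product:

    #include <cstdio>
    #include <vector>

    int main() {
      // Expansions of {1,2}, {3,7} and {5,15} with multiplier 2.
      const std::vector<int> d0 = {1, 2}, d1 = {3, 4, 7}, d2 = {5, 8, 15};
      int n = 0;
      for (int a : d0)
        for (int b : d1)
          for (int c : d2) {
            std::printf("{%d, %d, %d}\n", a, b, c);
            ++n;
          }
      std::printf("%d product tuples + Args({7, 6, 3}) = %d total\n", n, n + 1);
      return 0;
    }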

diff --git a/llvm/utils/benchmark/test/options_test.cc b/llvm/utils/benchmark/test/options_test.cc
deleted file mode 100644
index fdec69174eec0..0000000000000
--- a/llvm/utils/benchmark/test/options_test.cc
+++ /dev/null
@@ -1,65 +0,0 @@
-#include "benchmark/benchmark.h"
-#include <chrono>
-#include <thread>
-
-#if defined(NDEBUG)
-#undef NDEBUG
-#endif
-#include <cassert>
-
-void BM_basic(benchmark::State& state) {
-  for (auto _ : state) {
-  }
-}
-
-void BM_basic_slow(benchmark::State& state) {
-  std::chrono::milliseconds sleep_duration(state.range(0));
-  for (auto _ : state) {
-    std::this_thread::sleep_for(
-        std::chrono::duration_cast<std::chrono::nanoseconds>(sleep_duration));
-  }
-}
-
-BENCHMARK(BM_basic);
-BENCHMARK(BM_basic)->Arg(42);
-BENCHMARK(BM_basic_slow)->Arg(10)->Unit(benchmark::kNanosecond);
-BENCHMARK(BM_basic_slow)->Arg(100)->Unit(benchmark::kMicrosecond);
-BENCHMARK(BM_basic_slow)->Arg(1000)->Unit(benchmark::kMillisecond);
-BENCHMARK(BM_basic)->Range(1, 8);
-BENCHMARK(BM_basic)->RangeMultiplier(2)->Range(1, 8);
-BENCHMARK(BM_basic)->DenseRange(10, 15);
-BENCHMARK(BM_basic)->Args({42, 42});
-BENCHMARK(BM_basic)->Ranges({{64, 512}, {64, 512}});
-BENCHMARK(BM_basic)->MinTime(0.7);
-BENCHMARK(BM_basic)->UseRealTime();
-BENCHMARK(BM_basic)->ThreadRange(2, 4);
-BENCHMARK(BM_basic)->ThreadPerCpu();
-BENCHMARK(BM_basic)->Repetitions(3);
-
-void CustomArgs(benchmark::internal::Benchmark* b) {
-  for (int i = 0; i < 10; ++i) {
-    b->Arg(i);
-  }
-}
-
-BENCHMARK(BM_basic)->Apply(CustomArgs);
-
-void BM_explicit_iteration_count(benchmark::State& state) {
-  // Test that benchmarks specified with an explicit iteration count are
-  // only run once.
-  static bool invoked_before = false;
-  assert(!invoked_before);
-  invoked_before = true;
-
-  // Test that the requested iteration count is respected.
-  assert(state.max_iterations == 42);
-  size_t actual_iterations = 0;
-  for (auto _ : state)
-    ++actual_iterations;
-  assert(actual_iterations == 42);
-  assert(state.iterations() == state.max_iterations);
-  assert(state.iterations() == 42);
-
-}
-BENCHMARK(BM_explicit_iteration_count)->Iterations(42);
-
-BENCHMARK_MAIN();

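One pattern from the test above worth calling out: Apply() hands the raw benchmark::internal::Benchmark* to user code, so argument sets the fluent API cannot express directly can be generated programmatically. A minimal sketch (benchmark and generator names hypothetical):

    #include "benchmark/benchmark.h"

    static void BM_pairs(benchmark::State& state) {
      for (auto _ : state) {
        benchmark::DoNotOptimize(state.range(0) + state.range(1));
      }
    }

    // Hypothetical generator: pairs where the second argument is the square
    // of the first -- something Args()/Ranges() alone cannot enumerate.
    static void SquarePairs(benchmark::internal::Benchmark* b) {
      for (int i = 1; i <= 8; i *= 2) b->Args({i, i * i});
    }
    BENCHMARK(BM_pairs)->Apply(SquarePairs);

    BENCHMARK_MAIN();
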
diff --git a/llvm/utils/benchmark/test/output_test.h b/llvm/utils/benchmark/test/output_test.h
deleted file mode 100644
index 897a13866baec..0000000000000
--- a/llvm/utils/benchmark/test/output_test.h
+++ /dev/null
@@ -1,201 +0,0 @@
-#ifndef TEST_OUTPUT_TEST_H
-#define TEST_OUTPUT_TEST_H
-
-#undef NDEBUG
-#include <initializer_list>
-#include <memory>
-#include <string>
-#include <utility>
-#include <vector>
-#include <functional>
-#include <sstream>
-
-#include "../src/re.h"
-#include "benchmark/benchmark.h"
-
-#define CONCAT2(x, y) x##y
-#define CONCAT(x, y) CONCAT2(x, y)
-
-#define ADD_CASES(...) int CONCAT(dummy, __LINE__) = ::AddCases(__VA_ARGS__)
-
-#define SET_SUBSTITUTIONS(...) \
-  int CONCAT(dummy, __LINE__) = ::SetSubstitutions(__VA_ARGS__)
-
-enum MatchRules {
-  MR_Default,  // Skip non-matching lines until a match is found.
-  MR_Next,     // Match must occur on the next line.
-  MR_Not  // No line between the current position and the next match matches
-          // the regex
-};
-
-struct TestCase {
-  TestCase(std::string re, int rule = MR_Default);
-
-  std::string regex_str;
-  int match_rule;
-  std::string substituted_regex;
-  std::shared_ptr<benchmark::Regex> regex;
-};
-
-enum TestCaseID {
-  TC_ConsoleOut,
-  TC_ConsoleErr,
-  TC_JSONOut,
-  TC_JSONErr,
-  TC_CSVOut,
-  TC_CSVErr,
-
-  TC_NumID  // PRIVATE
-};
-
-// Add a list of test cases to be run against the output specified by
-// 'ID'
-int AddCases(TestCaseID ID, std::initializer_list<TestCase> il);
-
-// Add or set a list of substitutions to be performed on constructed regexes.
-// See 'output_test_helper.cc' for a list of default substitutions.
-int SetSubstitutions(
-    std::initializer_list<std::pair<std::string, std::string>> il);
-
-// Run all output tests.
-void RunOutputTests(int argc, char* argv[]);
-
-// ========================================================================= //
-// ------------------------- Results checking ------------------------------ //
-// ========================================================================= //
-
-// Call this macro to register a benchmark for checking its results. This
-// should be all that's needed. It subscribes a function to check the (CSV)
-// results of a benchmark. This is done only after verifying that the output
-// strings are really as expected.
-// bm_name_pattern: a name or a regex pattern which will be matched against
-//                  all the benchmark names. Matching benchmarks
-//                  will be the subject of a call to checker_function
-// checker_function: should be of type ResultsCheckFn (see below)
-#define CHECK_BENCHMARK_RESULTS(bm_name_pattern, checker_function) \
-    size_t CONCAT(dummy, __LINE__) = AddChecker(bm_name_pattern, checker_function)
-
-struct Results;
-typedef std::function< void(Results const&) > ResultsCheckFn;
-
-size_t AddChecker(const char* bm_name_pattern, ResultsCheckFn fn);
-
-// Class holding the results of a benchmark.
-// It is passed in calls to checker functions.
-struct Results {
-
-  // the benchmark name
-  std::string name;
-  // the benchmark fields
-  std::map< std::string, std::string > values;
-
-  Results(const std::string& n) : name(n) {}
-
-  int NumThreads() const;
-
-  typedef enum { kCpuTime, kRealTime } BenchmarkTime;
-
-  // get cpu_time or real_time in seconds
-  double GetTime(BenchmarkTime which) const;
-
-  // get the real_time duration of the benchmark in seconds.
-  // it is better to use fuzzy float checks for this, as the float
-  // ASCII formatting is lossy.
-  double DurationRealTime() const {
-    return GetAs< double >("iterations") * GetTime(kRealTime);
-  }
-  // get the cpu_time duration of the benchmark in seconds
-  double DurationCPUTime() const {
-    return GetAs< double >("iterations") * GetTime(kCpuTime);
-  }
-
-  // get the string for a result by name, or nullptr if the name
-  // is not found
-  const std::string* Get(const char* entry_name) const {
-    auto it = values.find(entry_name);
-    if(it == values.end()) return nullptr;
-    return &it->second;
-  }
-
-  // get a result by name, parsed as a specific type.
-  // NOTE: for counters, use GetCounterAs instead.
-  template <class T>
-  T GetAs(const char* entry_name) const;
-
-  // counters are written as doubles, so they have to be read first
-  // as a double, and only then converted to the requested type.
-  template <class T>
-  T GetCounterAs(const char* entry_name) const {
-    double dval = GetAs< double >(entry_name);
-    T tval = static_cast< T >(dval);
-    return tval;
-  }
-};
-
-template <class T>
-T Results::GetAs(const char* entry_name) const {
-  auto *sv = Get(entry_name);
-  CHECK(sv != nullptr && !sv->empty());
-  std::stringstream ss;
-  ss << *sv;
-  T out;
-  ss >> out;
-  CHECK(!ss.fail());
-  return out;
-}
-
-//----------------------------------
-// Macros to help in result checking. Do not use them with arguments causing
-// side-effects.
-
-#define _CHECK_RESULT_VALUE(entry, getfn, var_type, var_name, relationship, value) \
-    CONCAT(CHECK_, relationship)                                        \
-    (entry.getfn< var_type >(var_name), (value)) << "\n"                \
-    << __FILE__ << ":" << __LINE__ << ": " << (entry).name << ":\n"     \
-    << __FILE__ << ":" << __LINE__ << ": "                              \
-    << "expected (" << #var_type << ")" << (var_name)                   \
-    << "=" << (entry).getfn< var_type >(var_name)                       \
-    << " to be " #relationship " to " << (value) << "\n"
-
-// check with tolerance. eps_factor is the tolerance window, which is
-// interpreted relative to value (e.g., 0.1 means 10% of value).
-#define _CHECK_FLOAT_RESULT_VALUE(entry, getfn, var_type, var_name, relationship, value, eps_factor) \
-    CONCAT(CHECK_FLOAT_, relationship)                                  \
-    (entry.getfn< var_type >(var_name), (value), (eps_factor) * (value)) << "\n" \
-    << __FILE__ << ":" << __LINE__ << ": " << (entry).name << ":\n"     \
-    << __FILE__ << ":" << __LINE__ << ": "                              \
-    << "expected (" << #var_type << ")" << (var_name)                   \
-    << "=" << (entry).getfn< var_type >(var_name)                       \
-    << " to be " #relationship " to " << (value) << "\n"                \
-    << __FILE__ << ":" << __LINE__ << ": "                              \
-    << "with tolerance of " << (eps_factor) * (value)                   \
-    << " (" << (eps_factor)*100. << "%), "                              \
-    << "but delta was " << ((entry).getfn< var_type >(var_name) - (value)) \
-    << " (" << (((entry).getfn< var_type >(var_name) - (value))         \
-               /                                                        \
-               ((value) > 1.e-5 || (value) < -1.e-5 ? (value) : 1.e-5)*100.) \
-    << "%)"
-
-#define CHECK_RESULT_VALUE(entry, var_type, var_name, relationship, value) \
-    _CHECK_RESULT_VALUE(entry, GetAs, var_type, var_name, relationship, value)
-
-#define CHECK_COUNTER_VALUE(entry, var_type, var_name, relationship, value) \
-    _CHECK_RESULT_VALUE(entry, GetCounterAs, var_type, var_name, relationship, value)
-
-#define CHECK_FLOAT_RESULT_VALUE(entry, var_name, relationship, value, eps_factor) \
-    _CHECK_FLOAT_RESULT_VALUE(entry, GetAs, double, var_name, relationship, value, eps_factor)
-
-#define CHECK_FLOAT_COUNTER_VALUE(entry, var_name, relationship, value, eps_factor) \
-    _CHECK_FLOAT_RESULT_VALUE(entry, GetCounterAs, double, var_name, relationship, value, eps_factor)
-
-// ========================================================================= //
-// --------------------------- Misc Utilities ------------------------------ //
-// ========================================================================= //
-
-namespace {
-
-const char* const dec_re = "[0-9]*[.]?[0-9]+([eE][-+][0-9]+)?";
-
-}  //  end namespace
-
-#endif  // TEST_OUTPUT_TEST_H

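Putting the pieces of this header together, a test file is expected to register regex cases against the reporter streams and, optionally, a checker against the parsed CSV results, then hand control to RunOutputTests(). A minimal sketch (benchmark name and expectations hypothetical):

    #include "benchmark/benchmark.h"
    #include "output_test.h"

    void BM_example(benchmark::State& state) {
      for (auto _ : state) {
      }
    }
    BENCHMARK(BM_example);

    // Expect one console line for the benchmark, using the %console_report
    // substitution defined in output_test_helper.cc.
    ADD_CASES(TC_ConsoleOut, {{"^BM_example %console_report$"}});

    // Verify the parsed CSV row for the benchmark via a checker callback.
    CHECK_BENCHMARK_RESULTS("BM_example", [](Results const& r) {
      CHECK(r.NumThreads() == 1);
    });

    int main(int argc, char* argv[]) { RunOutputTests(argc, argv); }
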
diff --git a/llvm/utils/benchmark/test/output_test_helper.cc b/llvm/utils/benchmark/test/output_test_helper.cc
deleted file mode 100644
index 6b18fe43593fc..0000000000000
--- a/llvm/utils/benchmark/test/output_test_helper.cc
+++ /dev/null
@@ -1,423 +0,0 @@
-#include <iostream>
-#include <map>
-#include <memory>
-#include <sstream>
-#include <cstring>
-
-#include "../src/check.h"  // NOTE: check.h is for internal use only!
-#include "../src/re.h"     // NOTE: re.h is for internal use only
-#include "output_test.h"
-#include "../src/benchmark_api_internal.h"
-
-// ========================================================================= //
-// ------------------------------ Internals -------------------------------- //
-// ========================================================================= //
-namespace internal {
-namespace {
-
-using TestCaseList = std::vector<TestCase>;
-
-// Use a vector because the order in which elements are added matters during
-// iteration.
-// std::map/unordered_map don't guarantee that.
-// For example:
-//  SetSubstitutions({{"%HelloWorld", "Hello"}, {"%Hello", "Hi"}});
-//     Substitute("%HelloWorld") // Always expands to Hello.
-using SubMap = std::vector<std::pair<std::string, std::string>>;
-
-TestCaseList& GetTestCaseList(TestCaseID ID) {
-  // Uses function-local statics to ensure initialization occurs
-  // before first use.
-  static TestCaseList lists[TC_NumID];
-  return lists[ID];
-}
-
-SubMap& GetSubstitutions() {
-  // Don't use 'dec_re' from header because it may not yet be initialized.
-  static std::string safe_dec_re = "[0-9]*[.]?[0-9]+([eE][-+][0-9]+)?";
-  static SubMap map = {
-      {"%float", "[0-9]*[.]?[0-9]+([eE][-+][0-9]+)?"},
-      // human-readable float
-      {"%hrfloat", "[0-9]*[.]?[0-9]+([eE][-+][0-9]+)?[kMGTPEZYmunpfazy]?"},
-      {"%int", "[ ]*[0-9]+"},
-      {" %s ", "[ ]+"},
-      {"%time", "[ ]*[0-9]{1,6} ns"},
-      {"%console_report", "[ ]*[0-9]{1,6} ns [ ]*[0-9]{1,6} ns [ ]*[0-9]+"},
-      {"%console_us_report", "[ ]*[0-9] us [ ]*[0-9] us [ ]*[0-9]+"},
-      {"%csv_header",
-       "name,iterations,real_time,cpu_time,time_unit,bytes_per_second,"
-       "items_per_second,label,error_occurred,error_message"},
-      {"%csv_report", "[0-9]+," + safe_dec_re + "," + safe_dec_re + ",ns,,,,,"},
-      {"%csv_us_report", "[0-9]+," + safe_dec_re + "," + safe_dec_re + ",us,,,,,"},
-      {"%csv_bytes_report",
-       "[0-9]+," + safe_dec_re + "," + safe_dec_re + ",ns," + safe_dec_re + ",,,,"},
-      {"%csv_items_report",
-       "[0-9]+," + safe_dec_re + "," + safe_dec_re + ",ns,," + safe_dec_re + ",,,"},
-      {"%csv_bytes_items_report",
-       "[0-9]+," + safe_dec_re + "," + safe_dec_re + ",ns," + safe_dec_re +
-       "," + safe_dec_re + ",,,"},
-      {"%csv_label_report_begin", "[0-9]+," + safe_dec_re + "," + safe_dec_re + ",ns,,,"},
-      {"%csv_label_report_end", ",,"}};
-  return map;
-}
-
-std::string PerformSubstitutions(std::string source) {
-  SubMap const& subs = GetSubstitutions();
-  using SizeT = std::string::size_type;
-  for (auto const& KV : subs) {
-    SizeT pos;
-    SizeT next_start = 0;
-    while ((pos = source.find(KV.first, next_start)) != std::string::npos) {
-      next_start = pos + KV.second.size();
-      source.replace(pos, KV.first.size(), KV.second);
-    }
-  }
-  return source;
-}
-
-void CheckCase(std::stringstream& remaining_output, TestCase const& TC,
-               TestCaseList const& not_checks) {
-  std::string first_line;
-  bool on_first = true;
-  std::string line;
-  while (remaining_output.eof() == false) {
-    CHECK(remaining_output.good());
-    std::getline(remaining_output, line);
-    if (on_first) {
-      first_line = line;
-      on_first = false;
-    }
-    for (const auto& NC : not_checks) {
-      CHECK(!NC.regex->Match(line))
-          << "Unexpected match for line \"" << line << "\" for MR_Not regex \""
-          << NC.regex_str << "\""
-          << "\n    actual regex string \"" << TC.substituted_regex << "\""
-          << "\n    started matching near: " << first_line;
-    }
-    if (TC.regex->Match(line)) return;
-    CHECK(TC.match_rule != MR_Next)
-        << "Expected line \"" << line << "\" to match regex \"" << TC.regex_str
-        << "\""
-        << "\n    actual regex string \"" << TC.substituted_regex << "\""
-        << "\n    started matching near: " << first_line;
-  }
-  CHECK(remaining_output.eof() == false)
-      << "End of output reached before match for regex \"" << TC.regex_str
-      << "\" was found"
-      << "\n    actual regex string \"" << TC.substituted_regex << "\""
-      << "\n    started matching near: " << first_line;
-}
-
-void CheckCases(TestCaseList const& checks, std::stringstream& output) {
-  std::vector<TestCase> not_checks;
-  for (size_t i = 0; i < checks.size(); ++i) {
-    const auto& TC = checks[i];
-    if (TC.match_rule == MR_Not) {
-      not_checks.push_back(TC);
-      continue;
-    }
-    CheckCase(output, TC, not_checks);
-    not_checks.clear();
-  }
-}
-
-class TestReporter : public benchmark::BenchmarkReporter {
- public:
-  TestReporter(std::vector<benchmark::BenchmarkReporter*> reps)
-      : reporters_(reps) {}
-
-  virtual bool ReportContext(const Context& context) {
-    bool last_ret = false;
-    bool first = true;
-    for (auto rep : reporters_) {
-      bool new_ret = rep->ReportContext(context);
-      CHECK(first || new_ret == last_ret)
-          << "Reports return 
diff erent values for ReportContext";
-      first = false;
-      last_ret = new_ret;
-    }
-    (void)first;
-    return last_ret;
-  }
-
-  void ReportRuns(const std::vector<Run>& report) {
-    for (auto rep : reporters_) rep->ReportRuns(report);
-  }
-  void Finalize() {
-    for (auto rep : reporters_) rep->Finalize();
-  }
-
- private:
-  std::vector<benchmark::BenchmarkReporter *> reporters_;
-};
-}
-
-}  // end namespace internal
-
-// ========================================================================= //
-// -------------------------- Results checking ----------------------------- //
-// ========================================================================= //
-
-namespace internal {
-
-// Utility class to manage subscribers for checking benchmark results.
-// It works by parsing the CSV output to read the results.
-class ResultsChecker {
- public:
-
-  struct PatternAndFn : public TestCase { // reusing TestCase for its regexes
-    PatternAndFn(const std::string& rx, ResultsCheckFn fn_)
-    : TestCase(rx), fn(fn_) {}
-    ResultsCheckFn fn;
-  };
-
-  std::vector< PatternAndFn > check_patterns;
-  std::vector< Results > results;
-  std::vector< std::string > field_names;
-
-  void Add(const std::string& entry_pattern, ResultsCheckFn fn);
-
-  void CheckResults(std::stringstream& output);
-
- private:
-
-  void SetHeader_(const std::string& csv_header);
-  void SetValues_(const std::string& entry_csv_line);
-
-  std::vector< std::string > SplitCsv_(const std::string& line);
-
-};
-
-// store the static ResultsChecker in a function to prevent initialization
-// order problems
-ResultsChecker& GetResultsChecker() {
-  static ResultsChecker rc;
-  return rc;
-}
-
-// add a results checker for a benchmark
-void ResultsChecker::Add(const std::string& entry_pattern, ResultsCheckFn fn) {
-  check_patterns.emplace_back(entry_pattern, fn);
-}
-
-// check the results of all subscribed benchmarks
-void ResultsChecker::CheckResults(std::stringstream& output) {
-  // first reset the stream to the start
-  {
-    auto start = std::ios::streampos(0);
-    // clear before calling tellg()
-    output.clear();
-    // seek to zero only when needed
-    if(output.tellg() > start) output.seekg(start);
-    // and just in case
-    output.clear();
-  }
-  // now go over every line and publish it to the ResultsChecker
-  std::string line;
-  bool on_first = true;
-  while (output.eof() == false) {
-    CHECK(output.good());
-    std::getline(output, line);
-    if (on_first) {
-      SetHeader_(line); // this is important
-      on_first = false;
-      continue;
-    }
-    SetValues_(line);
-  }
-  // finally we can call the subscribed check functions
-  for(const auto& p : check_patterns) {
-    VLOG(2) << "--------------------------------\n";
-    VLOG(2) << "checking for benchmarks matching " << p.regex_str << "...\n";
-    for(const auto& r : results) {
-      if(!p.regex->Match(r.name)) {
-        VLOG(2) << p.regex_str << " is not matched by " << r.name << "\n";
-        continue;
-      } else {
-        VLOG(2) << p.regex_str << " is matched by " << r.name << "\n";
-      }
-      VLOG(1) << "Checking results of " << r.name << ": ... \n";
-      p.fn(r);
-      VLOG(1) << "Checking results of " << r.name << ": OK.\n";
-    }
-  }
-}
-
-// record the field names from this CSV header
-void ResultsChecker::SetHeader_(const std::string& csv_header) {
-  field_names = SplitCsv_(csv_header);
-}
-
-// set the values for a benchmark
-void ResultsChecker::SetValues_(const std::string& entry_csv_line) {
-  if(entry_csv_line.empty()) return; // some lines are empty
-  CHECK(!field_names.empty());
-  auto vals = SplitCsv_(entry_csv_line);
-  CHECK_EQ(vals.size(), field_names.size());
-  results.emplace_back(vals[0]); // vals[0] is the benchmark name
-  auto &entry = results.back();
-  for (size_t i = 1, e = vals.size(); i < e; ++i) {
-    entry.values[field_names[i]] = vals[i];
-  }
-}
-
-// a quick'n'dirty csv splitter (eliminating quotes)
-std::vector< std::string > ResultsChecker::SplitCsv_(const std::string& line) {
-  std::vector< std::string > out;
-  if(line.empty()) return out;
-  if(!field_names.empty()) out.reserve(field_names.size());
-  size_t prev = 0, pos = line.find_first_of(','), curr = pos;
-  while(pos != line.npos) {
-    CHECK(curr > 0);
-    if(line[prev] == '"') ++prev;
-    if(line[curr-1] == '"') --curr;
-    out.push_back(line.substr(prev, curr-prev));
-    prev = pos + 1;
-    pos = line.find_first_of(',', pos + 1);
-    curr = pos;
-  }
-  curr = line.size();
-  if(line[prev] == '"') ++prev;
-  if(line[curr-1] == '"') --curr;
-  out.push_back(line.substr(prev, curr-prev));
-  return out;
-}
-
-}  // end namespace internal
-
-size_t AddChecker(const char* bm_name, ResultsCheckFn fn)
-{
-  auto &rc = internal::GetResultsChecker();
-  rc.Add(bm_name, fn);
-  return rc.results.size();
-}
-
-int Results::NumThreads() const {
-  auto pos = name.find("/threads:");
-  if(pos == name.npos) return 1;
-  auto end = name.find('/', pos + 9);
-  std::stringstream ss;
-  ss << name.substr(pos + 9, end - (pos + 9));
-  int num = 1;
-  ss >> num;
-  CHECK(!ss.fail());
-  return num;
-}
-
-double Results::GetTime(BenchmarkTime which) const {
-  CHECK(which == kCpuTime || which == kRealTime);
-  const char *which_str = which == kCpuTime ? "cpu_time" : "real_time";
-  double val = GetAs< double >(which_str);
-  auto unit = Get("time_unit");
-  CHECK(unit);
-  if(*unit == "ns") {
-    return val * 1.e-9;
-  } else if(*unit == "us") {
-    return val * 1.e-6;
-  } else if(*unit == "ms") {
-    return val * 1.e-3;
-  } else if(*unit == "s") {
-    return val;
-  } else {
-    CHECK(1 == 0) << "unknown time unit: " << *unit;
-    return 0;
-  }
-}
-
-// ========================================================================= //
-// -------------------------- Public API Definitions------------------------ //
-// ========================================================================= //
-
-TestCase::TestCase(std::string re, int rule)
-    : regex_str(std::move(re)),
-      match_rule(rule),
-      substituted_regex(internal::PerformSubstitutions(regex_str)),
-      regex(std::make_shared<benchmark::Regex>()) {
-  std::string err_str;
-  regex->Init(substituted_regex, &err_str);
-  CHECK(err_str.empty()) << "Could not construct regex \"" << substituted_regex
-                         << "\""
-                         << "\n    originally \"" << regex_str << "\""
-                         << "\n    got error: " << err_str;
-}
-
-int AddCases(TestCaseID ID, std::initializer_list<TestCase> il) {
-  auto& L = internal::GetTestCaseList(ID);
-  L.insert(L.end(), il);
-  return 0;
-}
-
-int SetSubstitutions(
-    std::initializer_list<std::pair<std::string, std::string>> il) {
-  auto& subs = internal::GetSubstitutions();
-  for (auto KV : il) {
-    bool exists = false;
-    KV.second = internal::PerformSubstitutions(KV.second);
-    for (auto& EKV : subs) {
-      if (EKV.first == KV.first) {
-        EKV.second = std::move(KV.second);
-        exists = true;
-        break;
-      }
-    }
-    if (!exists) subs.push_back(std::move(KV));
-  }
-  return 0;
-}
-
-void RunOutputTests(int argc, char* argv[]) {
-  using internal::GetTestCaseList;
-  benchmark::Initialize(&argc, argv);
-  auto options = benchmark::internal::GetOutputOptions(/*force_no_color*/true);
-  benchmark::ConsoleReporter CR(options);
-  benchmark::JSONReporter JR;
-  benchmark::CSVReporter CSVR;
-  struct ReporterTest {
-    const char* name;
-    std::vector<TestCase>& output_cases;
-    std::vector<TestCase>& error_cases;
-    benchmark::BenchmarkReporter& reporter;
-    std::stringstream out_stream;
-    std::stringstream err_stream;
-
-    ReporterTest(const char* n, std::vector<TestCase>& out_tc,
-                 std::vector<TestCase>& err_tc,
-                 benchmark::BenchmarkReporter& br)
-        : name(n), output_cases(out_tc), error_cases(err_tc), reporter(br) {
-      reporter.SetOutputStream(&out_stream);
-      reporter.SetErrorStream(&err_stream);
-    }
-  } TestCases[] = {
-      {"ConsoleReporter", GetTestCaseList(TC_ConsoleOut),
-       GetTestCaseList(TC_ConsoleErr), CR},
-      {"JSONReporter", GetTestCaseList(TC_JSONOut), GetTestCaseList(TC_JSONErr),
-       JR},
-      {"CSVReporter", GetTestCaseList(TC_CSVOut), GetTestCaseList(TC_CSVErr),
-       CSVR},
-  };
-
-  // Create the test reporter and run the benchmarks.
-  std::cout << "Running benchmarks...\n";
-  internal::TestReporter test_rep({&CR, &JR, &CSVR});
-  benchmark::RunSpecifiedBenchmarks(&test_rep);
-
-  for (auto& rep_test : TestCases) {
-    std::string msg = std::string("\nTesting ") + rep_test.name + " Output\n";
-    std::string banner(msg.size() - 1, '-');
-    std::cout << banner << msg << banner << "\n";
-
-    std::cerr << rep_test.err_stream.str();
-    std::cout << rep_test.out_stream.str();
-
-    internal::CheckCases(rep_test.error_cases, rep_test.err_stream);
-    internal::CheckCases(rep_test.output_cases, rep_test.out_stream);
-
-    std::cout << "\n";
-  }
-
-  // now that we know the output is as expected, we can dispatch
-  // the checks to subscribers.
-  auto &csv = TestCases[2];
-  // would use == but gcc spits a warning
-  CHECK(std::strcmp(csv.name, "CSVReporter") == 0);
-  internal::GetResultsChecker().CheckResults(csv.out_stream);
-}

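A detail of PerformSubstitutions() above that is easy to miss: after each replacement, scanning resumes past the substituted text, so a replacement can never recursively match its own output, and keys are applied in insertion order (hence longer keys like "%HelloWorld" must be registered before their prefixes). A standalone sketch of the same loop:

    #include <iostream>
    #include <string>
    #include <utility>
    #include <vector>

    // Same algorithm as PerformSubstitutions(): replace every occurrence of
    // each key, resuming the scan after the replacement text.
    std::string Substitute(
        std::string s,
        const std::vector<std::pair<std::string, std::string>>& subs) {
      for (const auto& kv : subs) {
        std::string::size_type pos, next = 0;
        while ((pos = s.find(kv.first, next)) != std::string::npos) {
          next = pos + kv.second.size();
          s.replace(pos, kv.first.size(), kv.second);
        }
      }
      return s;
    }

    int main() {
      // "%int ns" becomes the regex fragment used by the %time pattern.
      std::cout << Substitute("%int ns", {{"%int", "[ ]*[0-9]+"}}) << "\n";
    }
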
diff --git a/llvm/utils/benchmark/test/register_benchmark_test.cc b/llvm/utils/benchmark/test/register_benchmark_test.cc
deleted file mode 100644
index 8ab2c299393fa..0000000000000
--- a/llvm/utils/benchmark/test/register_benchmark_test.cc
+++ /dev/null
@@ -1,182 +0,0 @@
-
-#undef NDEBUG
-#include <cassert>
-#include <vector>
-
-#include "../src/check.h"  // NOTE: check.h is for internal use only!
-#include "benchmark/benchmark.h"
-
-namespace {
-
-class TestReporter : public benchmark::ConsoleReporter {
- public:
-  virtual void ReportRuns(const std::vector<Run>& report) {
-    all_runs_.insert(all_runs_.end(), begin(report), end(report));
-    ConsoleReporter::ReportRuns(report);
-  }
-
-  std::vector<Run> all_runs_;
-};
-
-struct TestCase {
-  std::string name;
-  const char* label;
-  // Note: not explicit as we rely on it being converted through ADD_CASES.
-  TestCase(const char* xname) : TestCase(xname, nullptr) {}
-  TestCase(const char* xname, const char* xlabel)
-      : name(xname), label(xlabel) {}
-
-  typedef benchmark::BenchmarkReporter::Run Run;
-
-  void CheckRun(Run const& run) const {
-    CHECK(name == run.benchmark_name) << "expected " << name << " got "
-                                      << run.benchmark_name;
-    if (label) {
-      CHECK(run.report_label == label) << "expected " << label << " got "
-                                       << run.report_label;
-    } else {
-      CHECK(run.report_label == "");
-    }
-  }
-};
-
-std::vector<TestCase> ExpectedResults;
-
-int AddCases(std::initializer_list<TestCase> const& v) {
-  for (auto N : v) {
-    ExpectedResults.push_back(N);
-  }
-  return 0;
-}
-
-#define CONCAT(x, y) CONCAT2(x, y)
-#define CONCAT2(x, y) x##y
-#define ADD_CASES(...) int CONCAT(dummy, __LINE__) = AddCases({__VA_ARGS__})
-
-}  // end namespace
-
-typedef benchmark::internal::Benchmark* ReturnVal;
-
-//----------------------------------------------------------------------------//
-// Test RegisterBenchmark with no additional arguments
-//----------------------------------------------------------------------------//
-void BM_function(benchmark::State& state) {
-  for (auto _ : state) {
-  }
-}
-BENCHMARK(BM_function);
-ReturnVal dummy = benchmark::RegisterBenchmark(
-    "BM_function_manual_registration", BM_function);
-ADD_CASES({"BM_function"}, {"BM_function_manual_registration"});
-
-//----------------------------------------------------------------------------//
-// Test RegisterBenchmark with additional arguments
-// Note: GCC <= 4.8 does not support this form of RegisterBenchmark because it
-//       rejects the variadic pack expansion of lambda captures.
-//----------------------------------------------------------------------------//
-#ifndef BENCHMARK_HAS_NO_VARIADIC_REGISTER_BENCHMARK
-
-void BM_extra_args(benchmark::State& st, const char* label) {
-  for (auto _ : st) {
-  }
-  st.SetLabel(label);
-}
-int RegisterFromFunction() {
-  std::pair<const char*, const char*> cases[] = {
-      {"test1", "One"}, {"test2", "Two"}, {"test3", "Three"}};
-  for (auto const& c : cases)
-    benchmark::RegisterBenchmark(c.first, &BM_extra_args, c.second);
-  return 0;
-}
-int dummy2 = RegisterFromFunction();
-ADD_CASES({"test1", "One"}, {"test2", "Two"}, {"test3", "Three"});
-
-#endif  // BENCHMARK_HAS_NO_VARIADIC_REGISTER_BENCHMARK
-
-//----------------------------------------------------------------------------//
-// Test RegisterBenchmark with different callable types
-//----------------------------------------------------------------------------//
-
-struct CustomFixture {
-  void operator()(benchmark::State& st) {
-    for (auto _ : st) {
-    }
-  }
-};
-
-void TestRegistrationAtRuntime() {
-#ifdef BENCHMARK_HAS_CXX11
-  {
-    CustomFixture fx;
-    benchmark::RegisterBenchmark("custom_fixture", fx);
-    AddCases({"custom_fixture"});
-  }
-#endif
-#ifndef BENCHMARK_HAS_NO_VARIADIC_REGISTER_BENCHMARK
-  {
-    const char* x = "42";
-    auto capturing_lam = [=](benchmark::State& st) {
-      for (auto _ : st) {
-      }
-      st.SetLabel(x);
-    };
-    benchmark::RegisterBenchmark("lambda_benchmark", capturing_lam);
-    AddCases({{"lambda_benchmark", x}});
-  }
-#endif
-}
-
-// Test that all benchmarks, registered either during static init or at
-// runtime, are run and that the results are passed to the reporter.
-void RunTestOne() {
-  TestRegistrationAtRuntime();
-
-  TestReporter test_reporter;
-  benchmark::RunSpecifiedBenchmarks(&test_reporter);
-
-  typedef benchmark::BenchmarkReporter::Run Run;
-  auto EB = ExpectedResults.begin();
-
-  for (Run const& run : test_reporter.all_runs_) {
-    assert(EB != ExpectedResults.end());
-    EB->CheckRun(run);
-    ++EB;
-  }
-  assert(EB == ExpectedResults.end());
-}
-
-// Test that ClearRegisteredBenchmarks() clears all previously registered
-// benchmarks.
-// Also test that new benchmarks can be registered and run afterwards.
-void RunTestTwo() {
-  assert(ExpectedResults.size() != 0 &&
-         "must have at least one registered benchmark");
-  ExpectedResults.clear();
-  benchmark::ClearRegisteredBenchmarks();
-
-  TestReporter test_reporter;
-  size_t num_ran = benchmark::RunSpecifiedBenchmarks(&test_reporter);
-  assert(num_ran == 0);
-  assert(test_reporter.all_runs_.begin() == test_reporter.all_runs_.end());
-
-  TestRegistrationAtRuntime();
-  num_ran = benchmark::RunSpecifiedBenchmarks(&test_reporter);
-  assert(num_ran == ExpectedResults.size());
-
-  typedef benchmark::BenchmarkReporter::Run Run;
-  auto EB = ExpectedResults.begin();
-
-  for (Run const& run : test_reporter.all_runs_) {
-    assert(EB != ExpectedResults.end());
-    EB->CheckRun(run);
-    ++EB;
-  }
-  assert(EB == ExpectedResults.end());
-}
-
-int main(int argc, char* argv[]) {
-  benchmark::Initialize(&argc, argv);
-
-  RunTestOne();
-  RunTestTwo();
-}

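As the test above exercises, RegisterBenchmark() makes registration a runtime decision rather than a static-init one, which is handy for data-driven suites. A minimal sketch (sizes and names hypothetical):

    #include "benchmark/benchmark.h"

    #include <string>
    #include <vector>

    static void BM_process(benchmark::State& st, int size) {
      std::vector<int> v(size, 1);
      for (auto _ : st) benchmark::DoNotOptimize(v.data());
    }

    int main(int argc, char* argv[]) {
      benchmark::Initialize(&argc, argv);
      // One registration per input size, decided at runtime.
      for (int size : {64, 256, 1024}) {
        std::string name = "BM_process/" + std::to_string(size);
        benchmark::RegisterBenchmark(name.c_str(), &BM_process, size);
      }
      benchmark::RunSpecifiedBenchmarks();
    }
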
diff --git a/llvm/utils/benchmark/test/reporter_output_test.cc b/llvm/utils/benchmark/test/reporter_output_test.cc
deleted file mode 100644
index 23eb1baf63376..0000000000000
--- a/llvm/utils/benchmark/test/reporter_output_test.cc
+++ /dev/null
@@ -1,382 +0,0 @@
-
-#undef NDEBUG
-#include <utility>
-
-#include "benchmark/benchmark.h"
-#include "output_test.h"
-
-// ========================================================================= //
-// ---------------------- Testing Prologue Output -------------------------- //
-// ========================================================================= //
-
-ADD_CASES(TC_ConsoleOut,
-          {{"^[-]+$", MR_Next},
-           {"^Benchmark %s Time %s CPU %s Iterations$", MR_Next},
-           {"^[-]+$", MR_Next}});
-static int AddContextCases() {
-  AddCases(TC_ConsoleErr,
-           {
-               {"%int[-/]%int[-/]%int %int:%int:%int$", MR_Default},
-               {"Running .*/reporter_output_test(\\.exe)?$", MR_Next},
-               {"Run on \\(%int X %float MHz CPU s\\)", MR_Next},
-           });
-  AddCases(TC_JSONOut, {{"^\\{", MR_Default},
-                        {"\"context\":", MR_Next},
-                        {"\"date\": \"", MR_Next},
-                        {"\"executable\": \".*/reporter_output_test(\\.exe)?\",", MR_Next},
-                        {"\"num_cpus\": %int,$", MR_Next},
-                        {"\"mhz_per_cpu\": %float,$", MR_Next},
-                        {"\"cpu_scaling_enabled\": ", MR_Next},
-                        {"\"caches\": \\[$", MR_Next}});
-  auto const& Caches = benchmark::CPUInfo::Get().caches;
-  if (!Caches.empty()) {
-    AddCases(TC_ConsoleErr, {{"CPU Caches:$", MR_Next}});
-  }
-  for (size_t I = 0; I < Caches.size(); ++I) {
-    std::string num_caches_str =
-        Caches[I].num_sharing != 0 ? " \\(x%int\\)$" : "$";
-    AddCases(
-        TC_ConsoleErr,
-        {{"L%int (Data|Instruction|Unified) %intK" + num_caches_str, MR_Next}});
-    AddCases(TC_JSONOut, {{"\\{$", MR_Next},
-                          {"\"type\": \"", MR_Next},
-                          {"\"level\": %int,$", MR_Next},
-                          {"\"size\": %int,$", MR_Next},
-                          {"\"num_sharing\": %int$", MR_Next},
-                          {"}[,]{0,1}$", MR_Next}});
-  }
-
-  AddCases(TC_JSONOut, {{"],$"}});
-  return 0;
-}
-int dummy_register = AddContextCases();
-ADD_CASES(TC_CSVOut, {{"%csv_header"}});
-
-// ========================================================================= //
-// ------------------------ Testing Basic Output --------------------------- //
-// ========================================================================= //
-
-void BM_basic(benchmark::State& state) {
-  for (auto _ : state) {
-  }
-}
-BENCHMARK(BM_basic);
-
-ADD_CASES(TC_ConsoleOut, {{"^BM_basic %console_report$"}});
-ADD_CASES(TC_JSONOut, {{"\"name\": \"BM_basic\",$"},
-                       {"\"iterations\": %int,$", MR_Next},
-                       {"\"real_time\": %float,$", MR_Next},
-                       {"\"cpu_time\": %float,$", MR_Next},
-                       {"\"time_unit\": \"ns\"$", MR_Next},
-                       {"}", MR_Next}});
-ADD_CASES(TC_CSVOut, {{"^\"BM_basic\",%csv_report$"}});
-
-// ========================================================================= //
-// ------------------------ Testing Bytes per Second Output ---------------- //
-// ========================================================================= //
-
-void BM_bytes_per_second(benchmark::State& state) {
-  for (auto _ : state) {
-  }
-  state.SetBytesProcessed(1);
-}
-BENCHMARK(BM_bytes_per_second);
-
-ADD_CASES(TC_ConsoleOut,
-          {{"^BM_bytes_per_second %console_report +%float[kM]{0,1}B/s$"}});
-ADD_CASES(TC_JSONOut, {{"\"name\": \"BM_bytes_per_second\",$"},
-                       {"\"iterations\": %int,$", MR_Next},
-                       {"\"real_time\": %float,$", MR_Next},
-                       {"\"cpu_time\": %float,$", MR_Next},
-                       {"\"time_unit\": \"ns\",$", MR_Next},
-                       {"\"bytes_per_second\": %float$", MR_Next},
-                       {"}", MR_Next}});
-ADD_CASES(TC_CSVOut, {{"^\"BM_bytes_per_second\",%csv_bytes_report$"}});
-
-// ========================================================================= //
-// ------------------------ Testing Items per Second Output ---------------- //
-// ========================================================================= //
-
-void BM_items_per_second(benchmark::State& state) {
-  for (auto _ : state) {
-  }
-  state.SetItemsProcessed(1);
-}
-BENCHMARK(BM_items_per_second);
-
-ADD_CASES(TC_ConsoleOut,
-          {{"^BM_items_per_second %console_report +%float[kM]{0,1} items/s$"}});
-ADD_CASES(TC_JSONOut, {{"\"name\": \"BM_items_per_second\",$"},
-                       {"\"iterations\": %int,$", MR_Next},
-                       {"\"real_time\": %float,$", MR_Next},
-                       {"\"cpu_time\": %float,$", MR_Next},
-                       {"\"time_unit\": \"ns\",$", MR_Next},
-                       {"\"items_per_second\": %float$", MR_Next},
-                       {"}", MR_Next}});
-ADD_CASES(TC_CSVOut, {{"^\"BM_items_per_second\",%csv_items_report$"}});
-
-// ========================================================================= //
-// ------------------------ Testing Label Output --------------------------- //
-// ========================================================================= //
-
-void BM_label(benchmark::State& state) {
-  for (auto _ : state) {
-  }
-  state.SetLabel("some label");
-}
-BENCHMARK(BM_label);
-
-ADD_CASES(TC_ConsoleOut, {{"^BM_label %console_report some label$"}});
-ADD_CASES(TC_JSONOut, {{"\"name\": \"BM_label\",$"},
-                       {"\"iterations\": %int,$", MR_Next},
-                       {"\"real_time\": %float,$", MR_Next},
-                       {"\"cpu_time\": %float,$", MR_Next},
-                       {"\"time_unit\": \"ns\",$", MR_Next},
-                       {"\"label\": \"some label\"$", MR_Next},
-                       {"}", MR_Next}});
-ADD_CASES(TC_CSVOut, {{"^\"BM_label\",%csv_label_report_begin\"some "
-                       "label\"%csv_label_report_end$"}});
-
-// ========================================================================= //
-// ------------------------ Testing Error Output --------------------------- //
-// ========================================================================= //
-
-void BM_error(benchmark::State& state) {
-  state.SkipWithError("message");
-  for (auto _ : state) {
-  }
-}
-BENCHMARK(BM_error);
-ADD_CASES(TC_ConsoleOut, {{"^BM_error[ ]+ERROR OCCURRED: 'message'$"}});
-ADD_CASES(TC_JSONOut, {{"\"name\": \"BM_error\",$"},
-                       {"\"error_occurred\": true,$", MR_Next},
-                       {"\"error_message\": \"message\",$", MR_Next}});
-
-ADD_CASES(TC_CSVOut, {{"^\"BM_error\",,,,,,,,true,\"message\"$"}});
-
-// ========================================================================= //
-// ---------------------- Testing No Arg Name Output ----------------------- //
-// ========================================================================= //
-
-void BM_no_arg_name(benchmark::State& state) {
-  for (auto _ : state) {
-  }
-}
-BENCHMARK(BM_no_arg_name)->Arg(3);
-ADD_CASES(TC_ConsoleOut, {{"^BM_no_arg_name/3 %console_report$"}});
-ADD_CASES(TC_JSONOut, {{"\"name\": \"BM_no_arg_name/3\",$"}});
-ADD_CASES(TC_CSVOut, {{"^\"BM_no_arg_name/3\",%csv_report$"}});
-
-// ========================================================================= //
-// ------------------------ Testing Arg Name Output ----------------------- //
-// ========================================================================= //
-
-void BM_arg_name(benchmark::State& state) {
-  for (auto _ : state) {
-  }
-}
-BENCHMARK(BM_arg_name)->ArgName("first")->Arg(3);
-ADD_CASES(TC_ConsoleOut, {{"^BM_arg_name/first:3 %console_report$"}});
-ADD_CASES(TC_JSONOut, {{"\"name\": \"BM_arg_name/first:3\",$"}});
-ADD_CASES(TC_CSVOut, {{"^\"BM_arg_name/first:3\",%csv_report$"}});
-
-// ========================================================================= //
-// ------------------------ Testing Arg Names Output ----------------------- //
-// ========================================================================= //
-
-void BM_arg_names(benchmark::State& state) {
-  for (auto _ : state) {
-  }
-}
-BENCHMARK(BM_arg_names)->Args({2, 5, 4})->ArgNames({"first", "", "third"});
-ADD_CASES(TC_ConsoleOut,
-          {{"^BM_arg_names/first:2/5/third:4 %console_report$"}});
-ADD_CASES(TC_JSONOut, {{"\"name\": \"BM_arg_names/first:2/5/third:4\",$"}});
-ADD_CASES(TC_CSVOut, {{"^\"BM_arg_names/first:2/5/third:4\",%csv_report$"}});
-
-// ========================================================================= //
-// ----------------------- Testing Complexity Output ----------------------- //
-// ========================================================================= //
-
-void BM_Complexity_O1(benchmark::State& state) {
-  for (auto _ : state) {
-  }
-  state.SetComplexityN(state.range(0));
-}
-BENCHMARK(BM_Complexity_O1)->Range(1, 1 << 18)->Complexity(benchmark::o1);
-SET_SUBSTITUTIONS({{"%bigOStr", "[ ]* %float \\([0-9]+\\)"},
-                   {"%RMS", "[ ]*[0-9]+ %"}});
-ADD_CASES(TC_ConsoleOut, {{"^BM_Complexity_O1_BigO %bigOStr %bigOStr[ ]*$"},
-                          {"^BM_Complexity_O1_RMS %RMS %RMS[ ]*$"}});
-
-// ========================================================================= //
-// ----------------------- Testing Aggregate Output ------------------------ //
-// ========================================================================= //
-
-// Test that non-aggregate data is printed by default
-void BM_Repeat(benchmark::State& state) {
-  for (auto _ : state) {
-  }
-}
-// at least two repetitions are needed to produce any aggregate output
-BENCHMARK(BM_Repeat)->Repetitions(2);
-ADD_CASES(TC_ConsoleOut, {{"^BM_Repeat/repeats:2 %console_report$"},
-                          {"^BM_Repeat/repeats:2 %console_report$"},
-                          {"^BM_Repeat/repeats:2_mean %console_report$"},
-                          {"^BM_Repeat/repeats:2_median %console_report$"},
-                          {"^BM_Repeat/repeats:2_stddev %console_report$"}});
-ADD_CASES(TC_JSONOut, {{"\"name\": \"BM_Repeat/repeats:2\",$"},
-                       {"\"name\": \"BM_Repeat/repeats:2\",$"},
-                       {"\"name\": \"BM_Repeat/repeats:2_mean\",$"},
-                       {"\"name\": \"BM_Repeat/repeats:2_median\",$"},
-                       {"\"name\": \"BM_Repeat/repeats:2_stddev\",$"}});
-ADD_CASES(TC_CSVOut, {{"^\"BM_Repeat/repeats:2\",%csv_report$"},
-                      {"^\"BM_Repeat/repeats:2\",%csv_report$"},
-                      {"^\"BM_Repeat/repeats:2_mean\",%csv_report$"},
-                      {"^\"BM_Repeat/repeats:2_median\",%csv_report$"},
-                      {"^\"BM_Repeat/repeats:2_stddev\",%csv_report$"}});
-// but for two repetitions, mean and median are the same, so let's repeat...
-BENCHMARK(BM_Repeat)->Repetitions(3);
-ADD_CASES(TC_ConsoleOut, {{"^BM_Repeat/repeats:3 %console_report$"},
-                          {"^BM_Repeat/repeats:3 %console_report$"},
-                          {"^BM_Repeat/repeats:3 %console_report$"},
-                          {"^BM_Repeat/repeats:3_mean %console_report$"},
-                          {"^BM_Repeat/repeats:3_median %console_report$"},
-                          {"^BM_Repeat/repeats:3_stddev %console_report$"}});
-ADD_CASES(TC_JSONOut, {{"\"name\": \"BM_Repeat/repeats:3\",$"},
-                       {"\"name\": \"BM_Repeat/repeats:3\",$"},
-                       {"\"name\": \"BM_Repeat/repeats:3\",$"},
-                       {"\"name\": \"BM_Repeat/repeats:3_mean\",$"},
-                       {"\"name\": \"BM_Repeat/repeats:3_median\",$"},
-                       {"\"name\": \"BM_Repeat/repeats:3_stddev\",$"}});
-ADD_CASES(TC_CSVOut, {{"^\"BM_Repeat/repeats:3\",%csv_report$"},
-                      {"^\"BM_Repeat/repeats:3\",%csv_report$"},
-                      {"^\"BM_Repeat/repeats:3\",%csv_report$"},
-                      {"^\"BM_Repeat/repeats:3_mean\",%csv_report$"},
-                      {"^\"BM_Repeat/repeats:3_median\",%csv_report$"},
-                      {"^\"BM_Repeat/repeats:3_stddev\",%csv_report$"}});
-// median differs between even/odd number of repetitions, so just to be sure
-BENCHMARK(BM_Repeat)->Repetitions(4);
-ADD_CASES(TC_ConsoleOut, {{"^BM_Repeat/repeats:4 %console_report$"},
-                          {"^BM_Repeat/repeats:4 %console_report$"},
-                          {"^BM_Repeat/repeats:4 %console_report$"},
-                          {"^BM_Repeat/repeats:4 %console_report$"},
-                          {"^BM_Repeat/repeats:4_mean %console_report$"},
-                          {"^BM_Repeat/repeats:4_median %console_report$"},
-                          {"^BM_Repeat/repeats:4_stddev %console_report$"}});
-ADD_CASES(TC_JSONOut, {{"\"name\": \"BM_Repeat/repeats:4\",$"},
-                       {"\"name\": \"BM_Repeat/repeats:4\",$"},
-                       {"\"name\": \"BM_Repeat/repeats:4\",$"},
-                       {"\"name\": \"BM_Repeat/repeats:4\",$"},
-                       {"\"name\": \"BM_Repeat/repeats:4_mean\",$"},
-                       {"\"name\": \"BM_Repeat/repeats:4_median\",$"},
-                       {"\"name\": \"BM_Repeat/repeats:4_stddev\",$"}});
-ADD_CASES(TC_CSVOut, {{"^\"BM_Repeat/repeats:4\",%csv_report$"},
-                      {"^\"BM_Repeat/repeats:4\",%csv_report$"},
-                      {"^\"BM_Repeat/repeats:4\",%csv_report$"},
-                      {"^\"BM_Repeat/repeats:4\",%csv_report$"},
-                      {"^\"BM_Repeat/repeats:4_mean\",%csv_report$"},
-                      {"^\"BM_Repeat/repeats:4_median\",%csv_report$"},
-                      {"^\"BM_Repeat/repeats:4_stddev\",%csv_report$"}});
-
-// Test that a non-repeated test still prints non-aggregate results even when
-// only-aggregate reports have been requested
-void BM_RepeatOnce(benchmark::State& state) {
-  for (auto _ : state) {
-  }
-}
-BENCHMARK(BM_RepeatOnce)->Repetitions(1)->ReportAggregatesOnly();
-ADD_CASES(TC_ConsoleOut, {{"^BM_RepeatOnce/repeats:1 %console_report$"}});
-ADD_CASES(TC_JSONOut, {{"\"name\": \"BM_RepeatOnce/repeats:1\",$"}});
-ADD_CASES(TC_CSVOut, {{"^\"BM_RepeatOnce/repeats:1\",%csv_report$"}});
-
-// Test that non-aggregate data is not reported
-void BM_SummaryRepeat(benchmark::State& state) {
-  for (auto _ : state) {
-  }
-}
-BENCHMARK(BM_SummaryRepeat)->Repetitions(3)->ReportAggregatesOnly();
-ADD_CASES(TC_ConsoleOut,
-          {{".*BM_SummaryRepeat/repeats:3 ", MR_Not},
-           {"^BM_SummaryRepeat/repeats:3_mean %console_report$"},
-           {"^BM_SummaryRepeat/repeats:3_median %console_report$"},
-           {"^BM_SummaryRepeat/repeats:3_stddev %console_report$"}});
-ADD_CASES(TC_JSONOut, {{".*BM_SummaryRepeat/repeats:3 ", MR_Not},
-                       {"\"name\": \"BM_SummaryRepeat/repeats:3_mean\",$"},
-                       {"\"name\": \"BM_SummaryRepeat/repeats:3_median\",$"},
-                       {"\"name\": \"BM_SummaryRepeat/repeats:3_stddev\",$"}});
-ADD_CASES(TC_CSVOut, {{".*BM_SummaryRepeat/repeats:3 ", MR_Not},
-                      {"^\"BM_SummaryRepeat/repeats:3_mean\",%csv_report$"},
-                      {"^\"BM_SummaryRepeat/repeats:3_median\",%csv_report$"},
-                      {"^\"BM_SummaryRepeat/repeats:3_stddev\",%csv_report$"}});
-
-void BM_RepeatTimeUnit(benchmark::State& state) {
-  for (auto _ : state) {
-  }
-}
-BENCHMARK(BM_RepeatTimeUnit)
-    ->Repetitions(3)
-    ->ReportAggregatesOnly()
-    ->Unit(benchmark::kMicrosecond);
-ADD_CASES(TC_ConsoleOut,
-          {{".*BM_RepeatTimeUnit/repeats:3 ", MR_Not},
-           {"^BM_RepeatTimeUnit/repeats:3_mean %console_us_report$"},
-           {"^BM_RepeatTimeUnit/repeats:3_median %console_us_report$"},
-           {"^BM_RepeatTimeUnit/repeats:3_stddev %console_us_report$"}});
-ADD_CASES(TC_JSONOut, {{".*BM_RepeatTimeUnit/repeats:3 ", MR_Not},
-                       {"\"name\": \"BM_RepeatTimeUnit/repeats:3_mean\",$"},
-                       {"\"time_unit\": \"us\",?$"},
-                       {"\"name\": \"BM_RepeatTimeUnit/repeats:3_median\",$"},
-                       {"\"time_unit\": \"us\",?$"},
-                       {"\"name\": \"BM_RepeatTimeUnit/repeats:3_stddev\",$"},
-                       {"\"time_unit\": \"us\",?$"}});
-ADD_CASES(TC_CSVOut,
-          {{".*BM_RepeatTimeUnit/repeats:3 ", MR_Not},
-           {"^\"BM_RepeatTimeUnit/repeats:3_mean\",%csv_us_report$"},
-           {"^\"BM_RepeatTimeUnit/repeats:3_median\",%csv_us_report$"},
-           {"^\"BM_RepeatTimeUnit/repeats:3_stddev\",%csv_us_report$"}});
-
-// ========================================================================= //
-// -------------------- Testing user-provided statistics ------------------- //
-// ========================================================================= //
-
-const auto UserStatistics = [](const std::vector<double>& v) {
-  return v.back();
-};
-void BM_UserStats(benchmark::State& state) {
-  for (auto _ : state) {
-  }
-}
-BENCHMARK(BM_UserStats)
-    ->Repetitions(3)
-    ->ComputeStatistics("", UserStatistics);
-// check that the user-provided statistic is calculated after the default ones;
-// the empty string as its name is intentional, as it sorts before anything else
-ADD_CASES(TC_ConsoleOut, {{"^BM_UserStats/repeats:3 %console_report$"},
-                          {"^BM_UserStats/repeats:3 %console_report$"},
-                          {"^BM_UserStats/repeats:3 %console_report$"},
-                          {"^BM_UserStats/repeats:3_mean %console_report$"},
-                          {"^BM_UserStats/repeats:3_median %console_report$"},
-                          {"^BM_UserStats/repeats:3_stddev %console_report$"},
-                          {"^BM_UserStats/repeats:3_ %console_report$"}});
-ADD_CASES(TC_JSONOut, {{"\"name\": \"BM_UserStats/repeats:3\",$"},
-                       {"\"name\": \"BM_UserStats/repeats:3\",$"},
-                       {"\"name\": \"BM_UserStats/repeats:3\",$"},
-                       {"\"name\": \"BM_UserStats/repeats:3_mean\",$"},
-                       {"\"name\": \"BM_UserStats/repeats:3_median\",$"},
-                       {"\"name\": \"BM_UserStats/repeats:3_stddev\",$"},
-                       {"\"name\": \"BM_UserStats/repeats:3_\",$"}});
-ADD_CASES(TC_CSVOut, {{"^\"BM_UserStats/repeats:3\",%csv_report$"},
-                      {"^\"BM_UserStats/repeats:3\",%csv_report$"},
-                      {"^\"BM_UserStats/repeats:3\",%csv_report$"},
-                      {"^\"BM_UserStats/repeats:3_mean\",%csv_report$"},
-                      {"^\"BM_UserStats/repeats:3_median\",%csv_report$"},
-                      {"^\"BM_UserStats/repeats:3_stddev\",%csv_report$"},
-                      {"^\"BM_UserStats/repeats:3_\",%csv_report$"}});
-
-// ========================================================================= //
-// --------------------------- TEST CASES END ------------------------------ //
-// ========================================================================= //
-
-int main(int argc, char* argv[]) { RunOutputTests(argc, argv); }

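The user-statistics cases at the end rely on ComputeStatistics() attaching a named reduction over the per-repetition measurements; a more typical use than the empty-named one tested above would be reporting a maximum (sketch, statistic name arbitrary):

    #include "benchmark/benchmark.h"

    #include <algorithm>
    #include <vector>

    void BM_spin(benchmark::State& state) {
      for (auto _ : state) {
      }
    }

    // Reported as BM_spin/repeats:3_max next to the built-in _mean,
    // _median and _stddev rows.
    BENCHMARK(BM_spin)->Repetitions(3)->ComputeStatistics(
        "max", [](const std::vector<double>& v) {
          return *std::max_element(v.begin(), v.end());
        });

    BENCHMARK_MAIN();
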
diff --git a/llvm/utils/benchmark/test/skip_with_error_test.cc b/llvm/utils/benchmark/test/skip_with_error_test.cc
deleted file mode 100644
index 8d2c342a9af40..0000000000000
--- a/llvm/utils/benchmark/test/skip_with_error_test.cc
+++ /dev/null
@@ -1,192 +0,0 @@
-
-#undef NDEBUG
-#include <cassert>
-#include <vector>
-
-#include "../src/check.h"  // NOTE: check.h is for internal use only!
-#include "benchmark/benchmark.h"
-
-namespace {
-
-class TestReporter : public benchmark::ConsoleReporter {
- public:
-  virtual bool ReportContext(const Context& context) {
-    return ConsoleReporter::ReportContext(context);
-  }
-
-  virtual void ReportRuns(const std::vector<Run>& report) {
-    all_runs_.insert(all_runs_.end(), begin(report), end(report));
-    ConsoleReporter::ReportRuns(report);
-  }
-
-  TestReporter() {}
-  virtual ~TestReporter() {}
-
-  mutable std::vector<Run> all_runs_;
-};
-
-struct TestCase {
-  std::string name;
-  bool error_occurred;
-  std::string error_message;
-
-  typedef benchmark::BenchmarkReporter::Run Run;
-
-  void CheckRun(Run const& run) const {
-    CHECK(name == run.benchmark_name) << "expected " << name << " got "
-                                      << run.benchmark_name;
-    CHECK(error_occurred == run.error_occurred);
-    CHECK(error_message == run.error_message);
-    if (error_occurred) {
-      // CHECK(run.iterations == 0);
-    } else {
-      CHECK(run.iterations != 0);
-    }
-  }
-};
-
-std::vector<TestCase> ExpectedResults;
-
-int AddCases(const char* base_name, std::initializer_list<TestCase> const& v) {
-  for (auto TC : v) {
-    TC.name = base_name + TC.name;
-    ExpectedResults.push_back(std::move(TC));
-  }
-  return 0;
-}
-
-#define CONCAT(x, y) CONCAT2(x, y)
-#define CONCAT2(x, y) x##y
-#define ADD_CASES(...) int CONCAT(dummy, __LINE__) = AddCases(__VA_ARGS__)
-
-}  // end namespace
-
-void BM_error_before_running(benchmark::State& state) {
-  state.SkipWithError("error message");
-  while (state.KeepRunning()) {
-    assert(false);
-  }
-}
-BENCHMARK(BM_error_before_running);
-ADD_CASES("BM_error_before_running", {{"", true, "error message"}});
-
-
-void BM_error_before_running_batch(benchmark::State& state) {
-  state.SkipWithError("error message");
-  while (state.KeepRunningBatch(17)) {
-    assert(false);
-  }
-}
-BENCHMARK(BM_error_before_running_batch);
-ADD_CASES("BM_error_before_running_batch", {{"", true, "error message"}});
-
-void BM_error_before_running_range_for(benchmark::State& state) {
-  state.SkipWithError("error message");
-  for (auto _ : state) {
-    assert(false);
-  }
-}
-BENCHMARK(BM_error_before_running_range_for);
-ADD_CASES("BM_error_before_running_range_for", {{"", true, "error message"}});
-
-void BM_error_during_running(benchmark::State& state) {
-  bool first_iter = true;
-  while (state.KeepRunning()) {
-    if (state.range(0) == 1 && state.thread_index <= (state.threads / 2)) {
-      assert(first_iter);
-      first_iter = false;
-      state.SkipWithError("error message");
-    } else {
-      state.PauseTiming();
-      state.ResumeTiming();
-    }
-  }
-}
-BENCHMARK(BM_error_during_running)->Arg(1)->Arg(2)->ThreadRange(1, 8);
-ADD_CASES("BM_error_during_running", {{"/1/threads:1", true, "error message"},
-                                      {"/1/threads:2", true, "error message"},
-                                      {"/1/threads:4", true, "error message"},
-                                      {"/1/threads:8", true, "error message"},
-                                      {"/2/threads:1", false, ""},
-                                      {"/2/threads:2", false, ""},
-                                      {"/2/threads:4", false, ""},
-                                      {"/2/threads:8", false, ""}});
-
-void BM_error_during_running_ranged_for(benchmark::State& state) {
-  assert(state.max_iterations > 3 && "test requires at least a few iterations");
-  int first_iter = true;
-  // NOTE: Users should not write the for loop explicitly.
-  for (auto It = state.begin(), End = state.end(); It != End; ++It) {
-    if (state.range(0) == 1) {
-      assert(first_iter);
-      first_iter = false;
-      state.SkipWithError("error message");
-      // Test the unfortunate but documented behavior that the ranged-for loop
-      // doesn't automatically terminate when SkipWithError is set.
-      assert(++It != End);
-      break; // Required behavior
-    }
-  }
-}
-BENCHMARK(BM_error_during_running_ranged_for)->Arg(1)->Arg(2)->Iterations(5);
-ADD_CASES("BM_error_during_running_ranged_for",
-          {{"/1/iterations:5", true, "error message"},
-           {"/2/iterations:5", false, ""}});
-
-
-
-void BM_error_after_running(benchmark::State& state) {
-  for (auto _ : state) {
-    benchmark::DoNotOptimize(state.iterations());
-  }
-  if (state.thread_index <= (state.threads / 2))
-    state.SkipWithError("error message");
-}
-BENCHMARK(BM_error_after_running)->ThreadRange(1, 8);
-ADD_CASES("BM_error_after_running", {{"/threads:1", true, "error message"},
-                                     {"/threads:2", true, "error message"},
-                                     {"/threads:4", true, "error message"},
-                                     {"/threads:8", true, "error message"}});
-
-void BM_error_while_paused(benchmark::State& state) {
-  bool first_iter = true;
-  while (state.KeepRunning()) {
-    if (state.range(0) == 1 && state.thread_index <= (state.threads / 2)) {
-      assert(first_iter);
-      first_iter = false;
-      state.PauseTiming();
-      state.SkipWithError("error message");
-    } else {
-      state.PauseTiming();
-      state.ResumeTiming();
-    }
-  }
-}
-BENCHMARK(BM_error_while_paused)->Arg(1)->Arg(2)->ThreadRange(1, 8);
-ADD_CASES("BM_error_while_paused", {{"/1/threads:1", true, "error message"},
-                                    {"/1/threads:2", true, "error message"},
-                                    {"/1/threads:4", true, "error message"},
-                                    {"/1/threads:8", true, "error message"},
-                                    {"/2/threads:1", false, ""},
-                                    {"/2/threads:2", false, ""},
-                                    {"/2/threads:4", false, ""},
-                                    {"/2/threads:8", false, ""}});
-
-int main(int argc, char* argv[]) {
-  benchmark::Initialize(&argc, argv);
-
-  TestReporter test_reporter;
-  benchmark::RunSpecifiedBenchmarks(&test_reporter);
-
-  typedef benchmark::BenchmarkReporter::Run Run;
-  auto EB = ExpectedResults.begin();
-
-  for (Run const& run : test_reporter.all_runs_) {
-    assert(EB != ExpectedResults.end());
-    EB->CheckRun(run);
-    ++EB;
-  }
-  assert(EB == ExpectedResults.end());
-
-  return 0;
-}

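Outside of error-injection tests like this one, the usual SkipWithError() pattern is to bail out when a precondition fails before timing starts; a minimal sketch (file path hypothetical):

    #include "benchmark/benchmark.h"

    #include <cstdio>

    void BM_read_file(benchmark::State& state) {
      std::FILE* f = std::fopen("/tmp/benchmark_input.dat", "rb");
      if (f == nullptr) {
        state.SkipWithError("could not open input file");
      }
      // When SkipWithError() is called before the loop starts, the ranged-for
      // body is never entered, as BM_error_before_running_range_for verifies.
      for (auto _ : state) {
        char buf[256];
        benchmark::DoNotOptimize(std::fread(buf, 1, sizeof(buf), f));
        std::rewind(f);
      }
      if (f) std::fclose(f);
    }
    BENCHMARK(BM_read_file);

    BENCHMARK_MAIN();
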
diff --git a/llvm/utils/benchmark/test/state_assembly_test.cc b/llvm/utils/benchmark/test/state_assembly_test.cc
deleted file mode 100644
index e2c5c8648d035..0000000000000
--- a/llvm/utils/benchmark/test/state_assembly_test.cc
+++ /dev/null
@@ -1,66 +0,0 @@
-#include <benchmark/benchmark.h>
-
-#ifdef __clang__
-#pragma clang diagnostic ignored "-Wreturn-type"
-#endif
-
-extern "C" {
-  extern int ExternInt;
-  benchmark::State& GetState();
-  void Fn();
-}
-
-using benchmark::State;
-
-// CHECK-LABEL: test_for_auto_loop:
-extern "C" int test_for_auto_loop() {
-  State& S = GetState();
-  int x = 42;
-  // CHECK: 	[[CALL:call(q)*]]	_ZN9benchmark5State16StartKeepRunningEv
-  // CHECK-NEXT: testq %rbx, %rbx
-  // CHECK-NEXT: je [[LOOP_END:.*]]
-
-  for (auto _ : S) {
-    // CHECK: .L[[LOOP_HEAD:[a-zA-Z0-9_]+]]:
-    // CHECK-GNU-NEXT: subq $1, %rbx
-    // CHECK-CLANG-NEXT: {{(addq \$1,|incq)}} %rax
-    // CHECK-NEXT: jne .L[[LOOP_HEAD]]
-    benchmark::DoNotOptimize(x);
-  }
-  // CHECK: [[LOOP_END]]:
-  // CHECK: [[CALL]]	_ZN9benchmark5State17FinishKeepRunningEv
-
-  // CHECK: movl $101, %eax
-  // CHECK: ret
-  return 101;
-}
-
-// CHECK-LABEL: test_while_loop:
-extern "C" int test_while_loop() {
-  State& S = GetState();
-  int x = 42;
-
-  // CHECK: j{{(e|mp)}} .L[[LOOP_HEADER:[a-zA-Z0-9_]+]]
-  // CHECK-NEXT: .L[[LOOP_BODY:[a-zA-Z0-9_]+]]:
-  while (S.KeepRunning()) {
-    // CHECK-GNU-NEXT: subq $1, %[[IREG:[a-z]+]]
-    // CHECK-CLANG-NEXT: {{(addq \$-1,|decq)}} %[[IREG:[a-z]+]]
-    // CHECK: movq %[[IREG]], [[DEST:.*]]
-    benchmark::DoNotOptimize(x);
-  }
-  // CHECK-DAG: movq [[DEST]], %[[IREG]]
-  // CHECK-DAG: testq %[[IREG]], %[[IREG]]
-  // CHECK-DAG: jne .L[[LOOP_BODY]]
-  // CHECK-DAG: .L[[LOOP_HEADER]]:
-
-  // CHECK: cmpb $0
-  // CHECK-NEXT: jne .L[[LOOP_END:[a-zA-Z0-9_]+]]
-  // CHECK: [[CALL:call(q)*]] _ZN9benchmark5State16StartKeepRunningEv
-
-  // CHECK: .L[[LOOP_END]]:
-  // CHECK: [[CALL]] _ZN9benchmark5State17FinishKeepRunningEv
-
-  // CHECK: movl $101, %eax
-  // CHECK: ret
-  return 101;
-}

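The FileCheck patterns above pin down the loop skeleton that State's ranged-for and KeepRunning() interfaces are expected to compile to, and they rely on benchmark::DoNotOptimize() to keep the loop body from being deleted. A minimal sketch of the pattern under test:

    #include "benchmark/benchmark.h"

    static void BM_TightLoop(benchmark::State& state) {
      int x = 42;
      for (auto _ : state) {
        // Forces the compiler to treat x as used, so the otherwise dead
        // loop body cannot be optimized away.
        benchmark::DoNotOptimize(x);
      }
    }
    BENCHMARK(BM_TightLoop);
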
diff --git a/llvm/utils/benchmark/test/statistics_gtest.cc b/llvm/utils/benchmark/test/statistics_gtest.cc
deleted file mode 100644
index b4d6abbb5780e..0000000000000
--- a/llvm/utils/benchmark/test/statistics_gtest.cc
+++ /dev/null
@@ -1,61 +0,0 @@
-//===---------------------------------------------------------------------===//
-// statistics_test - Unit tests for src/statistics.cc
-//===---------------------------------------------------------------------===//
-
-#include "../src/statistics.h"
-#include "gtest/gtest.h"
-
-namespace {
-TEST(StatisticsTest, Mean) {
-  std::vector<double> Inputs;
-  {
-    Inputs = {42, 42, 42, 42};
-    double Res = benchmark::StatisticsMean(Inputs);
-    EXPECT_DOUBLE_EQ(Res, 42.0);
-  }
-  {
-    Inputs = {1, 2, 3, 4};
-    double Res = benchmark::StatisticsMean(Inputs);
-    EXPECT_DOUBLE_EQ(Res, 2.5);
-  }
-  {
-    Inputs = {1, 2, 5, 10, 10, 14};
-    double Res = benchmark::StatisticsMean(Inputs);
-    EXPECT_DOUBLE_EQ(Res, 7.0);
-  }
-}
-
-TEST(StatisticsTest, Median) {
-  std::vector<double> Inputs;
-  {
-    Inputs = {42, 42, 42, 42};
-    double Res = benchmark::StatisticsMedian(Inputs);
-    EXPECT_DOUBLE_EQ(Res, 42.0);
-  }
-  {
-    Inputs = {1, 2, 3, 4};
-    double Res = benchmark::StatisticsMedian(Inputs);
-    EXPECT_DOUBLE_EQ(Res, 2.5);
-  }
-  {
-    Inputs = {1, 2, 5, 10, 10};
-    double Res = benchmark::StatisticsMedian(Inputs);
-    EXPECT_DOUBLE_EQ(Res, 5.0);
-  }
-}
-
-TEST(StatisticsTest, StdDev) {
-  std::vector<double> Inputs;
-  {
-    Inputs = {101, 101, 101, 101};
-    double Res = benchmark::StatisticsStdDev(Inputs);
-    EXPECT_DOUBLE_EQ(Res, 0.0);
-  }
-  {
-    Inputs = {1, 2, 3};
-    double Res = benchmark::StatisticsStdDev(Inputs);
-    EXPECT_DOUBLE_EQ(Res, 1.0);
-  }
-}
-
-}  // end namespace

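These unit tests cover the aggregates the library computes when a benchmark is repeated. From user code the same machinery is reached through Repetitions(); upstream google/benchmark additionally allows registering a custom aggregate, and assuming this fork carries that ComputeStatistics() API, usage looks roughly like:

    #include "benchmark/benchmark.h"

    #include <algorithm>
    #include <vector>

    static void BM_Work(benchmark::State& state) {
      for (auto _ : state) {
        benchmark::DoNotOptimize(state.iterations());
      }
    }
    // Five repetitions produce the mean/median/stddev rows tested above;
    // "max" is a custom aggregate over the per-repetition results.
    BENCHMARK(BM_Work)->Repetitions(5)->ComputeStatistics(
        "max", [](const std::vector<double>& v) -> double {
          return *std::max_element(v.begin(), v.end());
        });
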
diff --git a/llvm/utils/benchmark/test/templated_fixture_test.cc b/llvm/utils/benchmark/test/templated_fixture_test.cc
deleted file mode 100644
index ec5b4c0cc0782..0000000000000
--- a/llvm/utils/benchmark/test/templated_fixture_test.cc
+++ /dev/null
@@ -1,28 +0,0 @@
-
-#include "benchmark/benchmark.h"
-
-#include <cassert>
-#include <memory>
-
-template<typename T>
-class MyFixture : public ::benchmark::Fixture {
-public:
-  MyFixture() : data(0) {}
-
-  T data;
-};
-
-BENCHMARK_TEMPLATE_F(MyFixture, Foo, int)(benchmark::State &st) {
-  for (auto _ : st) {
-    data += 1;
-  }
-}
-
-BENCHMARK_TEMPLATE_DEFINE_F(MyFixture, Bar, double)(benchmark::State& st) {
-  for (auto _ : st) {
-    data += 1.0;
-  }
-}
-BENCHMARK_REGISTER_F(MyFixture, Bar);
-
-BENCHMARK_MAIN();

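Note the two registration paths the file demonstrates: BENCHMARK_TEMPLATE_F(MyFixture, Foo, int) defines and registers the benchmark in one step, while BENCHMARK_TEMPLATE_DEFINE_F(MyFixture, Bar, double) only defines it and must be paired with BENCHMARK_REGISTER_F(MyFixture, Bar). The split form is what allows configuration such as argument ranges or thread counts to be chained onto the registration.
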
diff --git a/llvm/utils/benchmark/test/user_counters_tabular_test.cc b/llvm/utils/benchmark/test/user_counters_tabular_test.cc
deleted file mode 100644
index 9b8a6132e6d6a..0000000000000
--- a/llvm/utils/benchmark/test/user_counters_tabular_test.cc
+++ /dev/null
@@ -1,250 +0,0 @@
-
-#undef NDEBUG
-
-#include "benchmark/benchmark.h"
-#include "output_test.h"
-
-// @todo: <jpmag> this checks the full output at once; the rule for
-// CounterSet1 was failing because it was not matching "^[-]+$".
-// @todo: <jpmag> check that the counters are vertically aligned.
-ADD_CASES(TC_ConsoleOut, {
-// keeping these lines long improves readability, so:
-// clang-format off
-    {"^[-]+$", MR_Next},
-    {"^Benchmark %s Time %s CPU %s Iterations %s Bar %s Bat %s Baz %s Foo %s Frob %s Lob$", MR_Next},
-    {"^[-]+$", MR_Next},
-    {"^BM_Counters_Tabular/threads:%int %console_report [ ]*%hrfloat [ ]*%hrfloat [ ]*%hrfloat [ ]*%hrfloat [ ]*%hrfloat [ ]*%hrfloat$", MR_Next},
-    {"^BM_Counters_Tabular/threads:%int %console_report [ ]*%hrfloat [ ]*%hrfloat [ ]*%hrfloat [ ]*%hrfloat [ ]*%hrfloat [ ]*%hrfloat$", MR_Next},
-    {"^BM_Counters_Tabular/threads:%int %console_report [ ]*%hrfloat [ ]*%hrfloat [ ]*%hrfloat [ ]*%hrfloat [ ]*%hrfloat [ ]*%hrfloat$", MR_Next},
-    {"^BM_Counters_Tabular/threads:%int %console_report [ ]*%hrfloat [ ]*%hrfloat [ ]*%hrfloat [ ]*%hrfloat [ ]*%hrfloat [ ]*%hrfloat$", MR_Next},
-    {"^BM_Counters_Tabular/threads:%int %console_report [ ]*%hrfloat [ ]*%hrfloat [ ]*%hrfloat [ ]*%hrfloat [ ]*%hrfloat [ ]*%hrfloat$", MR_Next},
-    {"^BM_CounterRates_Tabular/threads:%int %console_report [ ]*%hrfloat/s [ ]*%hrfloat/s [ ]*%hrfloat/s [ ]*%hrfloat/s [ ]*%hrfloat/s [ ]*%hrfloat/s$", MR_Next},
-    {"^BM_CounterRates_Tabular/threads:%int %console_report [ ]*%hrfloat/s [ ]*%hrfloat/s [ ]*%hrfloat/s [ ]*%hrfloat/s [ ]*%hrfloat/s [ ]*%hrfloat/s$", MR_Next},
-    {"^BM_CounterRates_Tabular/threads:%int %console_report [ ]*%hrfloat/s [ ]*%hrfloat/s [ ]*%hrfloat/s [ ]*%hrfloat/s [ ]*%hrfloat/s [ ]*%hrfloat/s$", MR_Next},
-    {"^BM_CounterRates_Tabular/threads:%int %console_report [ ]*%hrfloat/s [ ]*%hrfloat/s [ ]*%hrfloat/s [ ]*%hrfloat/s [ ]*%hrfloat/s [ ]*%hrfloat/s$", MR_Next},
-    {"^BM_CounterRates_Tabular/threads:%int %console_report [ ]*%hrfloat/s [ ]*%hrfloat/s [ ]*%hrfloat/s [ ]*%hrfloat/s [ ]*%hrfloat/s [ ]*%hrfloat/s$", MR_Next},
-    {"^[-]+$", MR_Next},
-    {"^Benchmark %s Time %s CPU %s Iterations %s Bar %s Baz %s Foo$", MR_Next},
-    {"^[-]+$", MR_Next},
-    {"^BM_CounterSet0_Tabular/threads:%int %console_report [ ]*%hrfloat [ ]*%hrfloat [ ]*%hrfloat$", MR_Next},
-    {"^BM_CounterSet0_Tabular/threads:%int %console_report [ ]*%hrfloat [ ]*%hrfloat [ ]*%hrfloat$", MR_Next},
-    {"^BM_CounterSet0_Tabular/threads:%int %console_report [ ]*%hrfloat [ ]*%hrfloat [ ]*%hrfloat$", MR_Next},
-    {"^BM_CounterSet0_Tabular/threads:%int %console_report [ ]*%hrfloat [ ]*%hrfloat [ ]*%hrfloat$", MR_Next},
-    {"^BM_CounterSet0_Tabular/threads:%int %console_report [ ]*%hrfloat [ ]*%hrfloat [ ]*%hrfloat$", MR_Next},
-    {"^BM_CounterSet1_Tabular/threads:%int %console_report [ ]*%hrfloat [ ]*%hrfloat [ ]*%hrfloat$", MR_Next},
-    {"^BM_CounterSet1_Tabular/threads:%int %console_report [ ]*%hrfloat [ ]*%hrfloat [ ]*%hrfloat$", MR_Next},
-    {"^BM_CounterSet1_Tabular/threads:%int %console_report [ ]*%hrfloat [ ]*%hrfloat [ ]*%hrfloat$", MR_Next},
-    {"^BM_CounterSet1_Tabular/threads:%int %console_report [ ]*%hrfloat [ ]*%hrfloat [ ]*%hrfloat$", MR_Next},
-    {"^BM_CounterSet1_Tabular/threads:%int %console_report [ ]*%hrfloat [ ]*%hrfloat [ ]*%hrfloat$", MR_Next},
-    {"^[-]+$", MR_Next},
-    {"^Benchmark %s Time %s CPU %s Iterations %s Bat %s Baz %s Foo$", MR_Next},
-    {"^[-]+$", MR_Next},
-    {"^BM_CounterSet2_Tabular/threads:%int %console_report [ ]*%hrfloat [ ]*%hrfloat [ ]*%hrfloat$", MR_Next},
-    {"^BM_CounterSet2_Tabular/threads:%int %console_report [ ]*%hrfloat [ ]*%hrfloat [ ]*%hrfloat$", MR_Next},
-    {"^BM_CounterSet2_Tabular/threads:%int %console_report [ ]*%hrfloat [ ]*%hrfloat [ ]*%hrfloat$", MR_Next},
-    {"^BM_CounterSet2_Tabular/threads:%int %console_report [ ]*%hrfloat [ ]*%hrfloat [ ]*%hrfloat$", MR_Next},
-    {"^BM_CounterSet2_Tabular/threads:%int %console_report [ ]*%hrfloat [ ]*%hrfloat [ ]*%hrfloat$"},
-// clang-format on
-});
-ADD_CASES(TC_CSVOut, {{"%csv_header,"
-                       "\"Bar\",\"Bat\",\"Baz\",\"Foo\",\"Frob\",\"Lob\""}});
-
-// ========================================================================= //
-// ------------------------- Tabular Counters Output ----------------------- //
-// ========================================================================= //
-
-void BM_Counters_Tabular(benchmark::State& state) {
-  for (auto _ : state) {
-  }
-  namespace bm = benchmark;
-  state.counters.insert({
-    {"Foo",  { 1, bm::Counter::kAvgThreads}},
-    {"Bar",  { 2, bm::Counter::kAvgThreads}},
-    {"Baz",  { 4, bm::Counter::kAvgThreads}},
-    {"Bat",  { 8, bm::Counter::kAvgThreads}},
-    {"Frob", {16, bm::Counter::kAvgThreads}},
-    {"Lob",  {32, bm::Counter::kAvgThreads}},
-  });
-}
-BENCHMARK(BM_Counters_Tabular)->ThreadRange(1, 16);
-ADD_CASES(TC_JSONOut, {{"\"name\": \"BM_Counters_Tabular/threads:%int\",$"},
-                       {"\"iterations\": %int,$", MR_Next},
-                       {"\"real_time\": %float,$", MR_Next},
-                       {"\"cpu_time\": %float,$", MR_Next},
-                       {"\"time_unit\": \"ns\",$", MR_Next},
-                       {"\"Bar\": %float,$", MR_Next},
-                       {"\"Bat\": %float,$", MR_Next},
-                       {"\"Baz\": %float,$", MR_Next},
-                       {"\"Foo\": %float,$", MR_Next},
-                       {"\"Frob\": %float,$", MR_Next},
-                       {"\"Lob\": %float$", MR_Next},
-                       {"}", MR_Next}});
-ADD_CASES(TC_CSVOut, {{"^\"BM_Counters_Tabular/threads:%int\",%csv_report,"
-                       "%float,%float,%float,%float,%float,%float$"}});
-// VS2013 does not allow this function to be passed as a lambda argument
-// to CHECK_BENCHMARK_RESULTS()
-void CheckTabular(Results const& e) {
-  CHECK_COUNTER_VALUE(e, int, "Foo", EQ, 1);
-  CHECK_COUNTER_VALUE(e, int, "Bar", EQ, 2);
-  CHECK_COUNTER_VALUE(e, int, "Baz", EQ, 4);
-  CHECK_COUNTER_VALUE(e, int, "Bat", EQ, 8);
-  CHECK_COUNTER_VALUE(e, int, "Frob", EQ, 16);
-  CHECK_COUNTER_VALUE(e, int, "Lob", EQ, 32);
-}
-CHECK_BENCHMARK_RESULTS("BM_Counters_Tabular/threads:%int", &CheckTabular);
-
-// ========================================================================= //
-// -------------------- Tabular+Rate Counters Output ----------------------- //
-// ========================================================================= //
-
-void BM_CounterRates_Tabular(benchmark::State& state) {
-  for (auto _ : state) {
-  }
-  namespace bm = benchmark;
-  state.counters.insert({
-    {"Foo",  { 1, bm::Counter::kAvgThreadsRate}},
-    {"Bar",  { 2, bm::Counter::kAvgThreadsRate}},
-    {"Baz",  { 4, bm::Counter::kAvgThreadsRate}},
-    {"Bat",  { 8, bm::Counter::kAvgThreadsRate}},
-    {"Frob", {16, bm::Counter::kAvgThreadsRate}},
-    {"Lob",  {32, bm::Counter::kAvgThreadsRate}},
-  });
-}
-BENCHMARK(BM_CounterRates_Tabular)->ThreadRange(1, 16);
-ADD_CASES(TC_JSONOut, {{"\"name\": \"BM_CounterRates_Tabular/threads:%int\",$"},
-                       {"\"iterations\": %int,$", MR_Next},
-                       {"\"real_time\": %float,$", MR_Next},
-                       {"\"cpu_time\": %float,$", MR_Next},
-                       {"\"time_unit\": \"ns\",$", MR_Next},
-                       {"\"Bar\": %float,$", MR_Next},
-                       {"\"Bat\": %float,$", MR_Next},
-                       {"\"Baz\": %float,$", MR_Next},
-                       {"\"Foo\": %float,$", MR_Next},
-                       {"\"Frob\": %float,$", MR_Next},
-                       {"\"Lob\": %float$", MR_Next},
-                       {"}", MR_Next}});
-ADD_CASES(TC_CSVOut, {{"^\"BM_CounterRates_Tabular/threads:%int\",%csv_report,"
-                       "%float,%float,%float,%float,%float,%float$"}});
-// VS2013 does not allow this function to be passed as a lambda argument
-// to CHECK_BENCHMARK_RESULTS()
-void CheckTabularRate(Results const& e) {
-  double t = e.DurationCPUTime();
-  CHECK_FLOAT_COUNTER_VALUE(e, "Foo", EQ, 1./t, 0.001);
-  CHECK_FLOAT_COUNTER_VALUE(e, "Bar", EQ, 2./t, 0.001);
-  CHECK_FLOAT_COUNTER_VALUE(e, "Baz", EQ, 4./t, 0.001);
-  CHECK_FLOAT_COUNTER_VALUE(e, "Bat", EQ, 8./t, 0.001);
-  CHECK_FLOAT_COUNTER_VALUE(e, "Frob", EQ, 16./t, 0.001);
-  CHECK_FLOAT_COUNTER_VALUE(e, "Lob", EQ, 32./t, 0.001);
-}
-CHECK_BENCHMARK_RESULTS("BM_CounterRates_Tabular/threads:%int",
-                        &CheckTabularRate);
-
-// ========================================================================= //
-// ------------------------- Tabular Counters Output ----------------------- //
-// ========================================================================= //
-
-// set only some of the counters
-void BM_CounterSet0_Tabular(benchmark::State& state) {
-  for (auto _ : state) {
-  }
-  namespace bm = benchmark;
-  state.counters.insert({
-    {"Foo", {10, bm::Counter::kAvgThreads}},
-    {"Bar", {20, bm::Counter::kAvgThreads}},
-    {"Baz", {40, bm::Counter::kAvgThreads}},
-  });
-}
-BENCHMARK(BM_CounterSet0_Tabular)->ThreadRange(1, 16);
-ADD_CASES(TC_JSONOut, {{"\"name\": \"BM_CounterSet0_Tabular/threads:%int\",$"},
-                       {"\"iterations\": %int,$", MR_Next},
-                       {"\"real_time\": %float,$", MR_Next},
-                       {"\"cpu_time\": %float,$", MR_Next},
-                       {"\"time_unit\": \"ns\",$", MR_Next},
-                       {"\"Bar\": %float,$", MR_Next},
-                       {"\"Baz\": %float,$", MR_Next},
-                       {"\"Foo\": %float$", MR_Next},
-                       {"}", MR_Next}});
-ADD_CASES(TC_CSVOut, {{"^\"BM_CounterSet0_Tabular/threads:%int\",%csv_report,"
-                       "%float,,%float,%float,,"}});
-// VS2013 does not allow this function to be passed as a lambda argument
-// to CHECK_BENCHMARK_RESULTS()
-void CheckSet0(Results const& e) {
-  CHECK_COUNTER_VALUE(e, int, "Foo", EQ, 10);
-  CHECK_COUNTER_VALUE(e, int, "Bar", EQ, 20);
-  CHECK_COUNTER_VALUE(e, int, "Baz", EQ, 40);
-}
-CHECK_BENCHMARK_RESULTS("BM_CounterSet0_Tabular", &CheckSet0);
-
-// again.
-void BM_CounterSet1_Tabular(benchmark::State& state) {
-  for (auto _ : state) {
-  }
-  namespace bm = benchmark;
-  state.counters.insert({
-    {"Foo", {15, bm::Counter::kAvgThreads}},
-    {"Bar", {25, bm::Counter::kAvgThreads}},
-    {"Baz", {45, bm::Counter::kAvgThreads}},
-  });
-}
-BENCHMARK(BM_CounterSet1_Tabular)->ThreadRange(1, 16);
-ADD_CASES(TC_JSONOut, {{"\"name\": \"BM_CounterSet1_Tabular/threads:%int\",$"},
-                       {"\"iterations\": %int,$", MR_Next},
-                       {"\"real_time\": %float,$", MR_Next},
-                       {"\"cpu_time\": %float,$", MR_Next},
-                       {"\"time_unit\": \"ns\",$", MR_Next},
-                       {"\"Bar\": %float,$", MR_Next},
-                       {"\"Baz\": %float,$", MR_Next},
-                       {"\"Foo\": %float$", MR_Next},
-                       {"}", MR_Next}});
-ADD_CASES(TC_CSVOut, {{"^\"BM_CounterSet1_Tabular/threads:%int\",%csv_report,"
-                       "%float,,%float,%float,,"}});
-// VS2013 does not allow this function to be passed as a lambda argument
-// to CHECK_BENCHMARK_RESULTS()
-void CheckSet1(Results const& e) {
-  CHECK_COUNTER_VALUE(e, int, "Foo", EQ, 15);
-  CHECK_COUNTER_VALUE(e, int, "Bar", EQ, 25);
-  CHECK_COUNTER_VALUE(e, int, "Baz", EQ, 45);
-}
-CHECK_BENCHMARK_RESULTS("BM_CounterSet1_Tabular/threads:%int", &CheckSet1);
-
-// ========================================================================= //
-// ------------------------- Tabular Counters Output ----------------------- //
-// ========================================================================= //
-
-// set only some of the counters, different set now.
-void BM_CounterSet2_Tabular(benchmark::State& state) {
-  for (auto _ : state) {
-  }
-  namespace bm = benchmark;
-  state.counters.insert({
-    {"Foo", {10, bm::Counter::kAvgThreads}},
-    {"Bat", {30, bm::Counter::kAvgThreads}},
-    {"Baz", {40, bm::Counter::kAvgThreads}},
-  });
-}
-BENCHMARK(BM_CounterSet2_Tabular)->ThreadRange(1, 16);
-ADD_CASES(TC_JSONOut, {{"\"name\": \"BM_CounterSet2_Tabular/threads:%int\",$"},
-                       {"\"iterations\": %int,$", MR_Next},
-                       {"\"real_time\": %float,$", MR_Next},
-                       {"\"cpu_time\": %float,$", MR_Next},
-                       {"\"time_unit\": \"ns\",$", MR_Next},
-                       {"\"Bat\": %float,$", MR_Next},
-                       {"\"Baz\": %float,$", MR_Next},
-                       {"\"Foo\": %float$", MR_Next},
-                       {"}", MR_Next}});
-ADD_CASES(TC_CSVOut, {{"^\"BM_CounterSet2_Tabular/threads:%int\",%csv_report,"
-                       ",%float,%float,%float,,"}});
-// VS2013 does not allow this function to be passed as a lambda argument
-// to CHECK_BENCHMARK_RESULTS()
-void CheckSet2(Results const& e) {
-  CHECK_COUNTER_VALUE(e, int, "Foo", EQ, 10);
-  CHECK_COUNTER_VALUE(e, int, "Bat", EQ, 30);
-  CHECK_COUNTER_VALUE(e, int, "Baz", EQ, 40);
-}
-CHECK_BENCHMARK_RESULTS("BM_CounterSet2_Tabular", &CheckSet2);
-
-// ========================================================================= //
-// --------------------------- TEST CASES END ------------------------------ //
-// ========================================================================= //
-
-int main(int argc, char* argv[]) { RunOutputTests(argc, argv); }

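The column-aligned console output these regexes describe is opt-in: assuming the fork matches upstream here, it is enabled by passing --benchmark_counters_tabular=true, and counters with the same names across benchmarks share columns. A minimal sketch of a benchmark producing such output:

    #include "benchmark/benchmark.h"

    static void BM_WithCounters(benchmark::State& state) {
      for (auto _ : state) {
      }
      // kAvgThreads averages each counter across the participating threads.
      state.counters.insert({
          {"Foo", {1, benchmark::Counter::kAvgThreads}},
          {"Bar", {2, benchmark::Counter::kAvgThreads}},
      });
    }
    BENCHMARK(BM_WithCounters)->ThreadRange(1, 4);
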
diff --git a/llvm/utils/benchmark/test/user_counters_test.cc b/llvm/utils/benchmark/test/user_counters_test.cc
deleted file mode 100644
index 06aafb1fa1463..0000000000000
--- a/llvm/utils/benchmark/test/user_counters_test.cc
+++ /dev/null
@@ -1,217 +0,0 @@
-
-#undef NDEBUG
-
-#include "benchmark/benchmark.h"
-#include "output_test.h"
-
-// ========================================================================= //
-// ---------------------- Testing Prologue Output -------------------------- //
-// ========================================================================= //
-
-ADD_CASES(TC_ConsoleOut,
-          {{"^[-]+$", MR_Next},
-           {"^Benchmark %s Time %s CPU %s Iterations UserCounters...$", MR_Next},
-           {"^[-]+$", MR_Next}});
-ADD_CASES(TC_CSVOut, {{"%csv_header,\"bar\",\"foo\""}});
-
-// ========================================================================= //
-// ------------------------- Simple Counters Output ------------------------ //
-// ========================================================================= //
-
-void BM_Counters_Simple(benchmark::State& state) {
-  for (auto _ : state) {
-  }
-  state.counters["foo"] = 1;
-  state.counters["bar"] = 2 * (double)state.iterations();
-}
-BENCHMARK(BM_Counters_Simple);
-ADD_CASES(TC_ConsoleOut, {{"^BM_Counters_Simple %console_report bar=%hrfloat foo=%hrfloat$"}});
-ADD_CASES(TC_JSONOut, {{"\"name\": \"BM_Counters_Simple\",$"},
-                       {"\"iterations\": %int,$", MR_Next},
-                       {"\"real_time\": %float,$", MR_Next},
-                       {"\"cpu_time\": %float,$", MR_Next},
-                       {"\"time_unit\": \"ns\",$", MR_Next},
-                       {"\"bar\": %float,$", MR_Next},
-                       {"\"foo\": %float$", MR_Next},
-                       {"}", MR_Next}});
-ADD_CASES(TC_CSVOut, {{"^\"BM_Counters_Simple\",%csv_report,%float,%float$"}});
-// VS2013 does not allow this function to be passed as a lambda argument
-// to CHECK_BENCHMARK_RESULTS()
-void CheckSimple(Results const& e) {
-  double its = e.GetAs< double >("iterations");
-  CHECK_COUNTER_VALUE(e, int, "foo", EQ, 1);
-  // check that the value of bar is within 0.1% of the expected value
-  CHECK_FLOAT_COUNTER_VALUE(e, "bar", EQ, 2.*its, 0.001);
-}
-CHECK_BENCHMARK_RESULTS("BM_Counters_Simple", &CheckSimple);
-
-// ========================================================================= //
-// --------------------- Counters+Items+Bytes/s Output --------------------- //
-// ========================================================================= //
-
-namespace { int num_calls1 = 0; }
-void BM_Counters_WithBytesAndItemsPSec(benchmark::State& state) {
-  for (auto _ : state) {
-  }
-  state.counters["foo"] = 1;
-  state.counters["bar"] = ++num_calls1;
-  state.SetBytesProcessed(364);
-  state.SetItemsProcessed(150);
-}
-BENCHMARK(BM_Counters_WithBytesAndItemsPSec);
-ADD_CASES(TC_ConsoleOut,
-          {{"^BM_Counters_WithBytesAndItemsPSec %console_report "
-            "bar=%hrfloat foo=%hrfloat +%hrfloatB/s +%hrfloat items/s$"}});
-ADD_CASES(TC_JSONOut, {{"\"name\": \"BM_Counters_WithBytesAndItemsPSec\",$"},
-                       {"\"iterations\": %int,$", MR_Next},
-                       {"\"real_time\": %float,$", MR_Next},
-                       {"\"cpu_time\": %float,$", MR_Next},
-                       {"\"time_unit\": \"ns\",$", MR_Next},
-                       {"\"bytes_per_second\": %float,$", MR_Next},
-                       {"\"items_per_second\": %float,$", MR_Next},
-                       {"\"bar\": %float,$", MR_Next},
-                       {"\"foo\": %float$", MR_Next},
-                       {"}", MR_Next}});
-ADD_CASES(TC_CSVOut, {{"^\"BM_Counters_WithBytesAndItemsPSec\","
-                       "%csv_bytes_items_report,%float,%float$"}});
-// VS2013 does not allow this function to be passed as a lambda argument
-// to CHECK_BENCHMARK_RESULTS()
-void CheckBytesAndItemsPSec(Results const& e) {
-  double t = e.DurationCPUTime(); // this (and not real time) is the time used
-  CHECK_COUNTER_VALUE(e, int, "foo", EQ, 1);
-  CHECK_COUNTER_VALUE(e, int, "bar", EQ, num_calls1);
-  // check that the values are within 0.1% of the expected values
-  CHECK_FLOAT_RESULT_VALUE(e, "bytes_per_second", EQ, 364./t, 0.001);
-  CHECK_FLOAT_RESULT_VALUE(e, "items_per_second", EQ, 150./t, 0.001);
-}
-CHECK_BENCHMARK_RESULTS("BM_Counters_WithBytesAndItemsPSec",
-                        &CheckBytesAndItemsPSec);
-
-// ========================================================================= //
-// ------------------------- Rate Counters Output -------------------------- //
-// ========================================================================= //
-
-void BM_Counters_Rate(benchmark::State& state) {
-  for (auto _ : state) {
-  }
-  namespace bm = benchmark;
-  state.counters["foo"] = bm::Counter{1, bm::Counter::kIsRate};
-  state.counters["bar"] = bm::Counter{2, bm::Counter::kIsRate};
-}
-BENCHMARK(BM_Counters_Rate);
-ADD_CASES(TC_ConsoleOut, {{"^BM_Counters_Rate %console_report bar=%hrfloat/s foo=%hrfloat/s$"}});
-ADD_CASES(TC_JSONOut, {{"\"name\": \"BM_Counters_Rate\",$"},
-                       {"\"iterations\": %int,$", MR_Next},
-                       {"\"real_time\": %float,$", MR_Next},
-                       {"\"cpu_time\": %float,$", MR_Next},
-                       {"\"time_unit\": \"ns\",$", MR_Next},
-                       {"\"bar\": %float,$", MR_Next},
-                       {"\"foo\": %float$", MR_Next},
-                       {"}", MR_Next}});
-ADD_CASES(TC_CSVOut, {{"^\"BM_Counters_Rate\",%csv_report,%float,%float$"}});
-// VS2013 does not allow this function to be passed as a lambda argument
-// to CHECK_BENCHMARK_RESULTS()
-void CheckRate(Results const& e) {
-  double t = e.DurationCPUTime(); // this (and not real time) is the time used
-  // check that the values are within 0.1% of the expected values
-  CHECK_FLOAT_COUNTER_VALUE(e, "foo", EQ, 1./t, 0.001);
-  CHECK_FLOAT_COUNTER_VALUE(e, "bar", EQ, 2./t, 0.001);
-}
-CHECK_BENCHMARK_RESULTS("BM_Counters_Rate", &CheckRate);
-
-// ========================================================================= //
-// ------------------------- Thread Counters Output ------------------------ //
-// ========================================================================= //
-
-void BM_Counters_Threads(benchmark::State& state) {
-  for (auto _ : state) {
-  }
-  state.counters["foo"] = 1;
-  state.counters["bar"] = 2;
-}
-BENCHMARK(BM_Counters_Threads)->ThreadRange(1, 8);
-ADD_CASES(TC_ConsoleOut, {{"^BM_Counters_Threads/threads:%int %console_report bar=%hrfloat foo=%hrfloat$"}});
-ADD_CASES(TC_JSONOut, {{"\"name\": \"BM_Counters_Threads/threads:%int\",$"},
-                       {"\"iterations\": %int,$", MR_Next},
-                       {"\"real_time\": %float,$", MR_Next},
-                       {"\"cpu_time\": %float,$", MR_Next},
-                       {"\"time_unit\": \"ns\",$", MR_Next},
-                       {"\"bar\": %float,$", MR_Next},
-                       {"\"foo\": %float$", MR_Next},
-                       {"}", MR_Next}});
-ADD_CASES(TC_CSVOut, {{"^\"BM_Counters_Threads/threads:%int\",%csv_report,%float,%float$"}});
-// VS2013 does not allow this function to be passed as a lambda argument
-// to CHECK_BENCHMARK_RESULTS()
-void CheckThreads(Results const& e) {
-  CHECK_COUNTER_VALUE(e, int, "foo", EQ, e.NumThreads());
-  CHECK_COUNTER_VALUE(e, int, "bar", EQ, 2 * e.NumThreads());
-}
-CHECK_BENCHMARK_RESULTS("BM_Counters_Threads/threads:%int", &CheckThreads);
-
-// ========================================================================= //
-// ---------------------- ThreadAvg Counters Output ------------------------ //
-// ========================================================================= //
-
-void BM_Counters_AvgThreads(benchmark::State& state) {
-  for (auto _ : state) {
-  }
-  namespace bm = benchmark;
-  state.counters["foo"] = bm::Counter{1, bm::Counter::kAvgThreads};
-  state.counters["bar"] = bm::Counter{2, bm::Counter::kAvgThreads};
-}
-BENCHMARK(BM_Counters_AvgThreads)->ThreadRange(1, 8);
-ADD_CASES(TC_ConsoleOut, {{"^BM_Counters_AvgThreads/threads:%int %console_report bar=%hrfloat foo=%hrfloat$"}});
-ADD_CASES(TC_JSONOut, {{"\"name\": \"BM_Counters_AvgThreads/threads:%int\",$"},
-                       {"\"iterations\": %int,$", MR_Next},
-                       {"\"real_time\": %float,$", MR_Next},
-                       {"\"cpu_time\": %float,$", MR_Next},
-                       {"\"time_unit\": \"ns\",$", MR_Next},
-                       {"\"bar\": %float,$", MR_Next},
-                       {"\"foo\": %float$", MR_Next},
-                       {"}", MR_Next}});
-ADD_CASES(TC_CSVOut, {{"^\"BM_Counters_AvgThreads/threads:%int\",%csv_report,%float,%float$"}});
-// VS2013 does not allow this function to be passed as a lambda argument
-// to CHECK_BENCHMARK_RESULTS()
-void CheckAvgThreads(Results const& e) {
-  CHECK_COUNTER_VALUE(e, int, "foo", EQ, 1);
-  CHECK_COUNTER_VALUE(e, int, "bar", EQ, 2);
-}
-CHECK_BENCHMARK_RESULTS("BM_Counters_AvgThreads/threads:%int",
-                        &CheckAvgThreads);
-
-// ========================================================================= //
-// ---------------------- ThreadAvg Counters Output ------------------------ //
-// ========================================================================= //
-
-void BM_Counters_AvgThreadsRate(benchmark::State& state) {
-  for (auto _ : state) {
-  }
-  namespace bm = benchmark;
-  state.counters["foo"] = bm::Counter{1, bm::Counter::kAvgThreadsRate};
-  state.counters["bar"] = bm::Counter{2, bm::Counter::kAvgThreadsRate};
-}
-BENCHMARK(BM_Counters_AvgThreadsRate)->ThreadRange(1, 8);
-ADD_CASES(TC_ConsoleOut, {{"^BM_Counters_AvgThreadsRate/threads:%int %console_report bar=%hrfloat/s foo=%hrfloat/s$"}});
-ADD_CASES(TC_JSONOut, {{"\"name\": \"BM_Counters_AvgThreadsRate/threads:%int\",$"},
-                       {"\"iterations\": %int,$", MR_Next},
-                       {"\"real_time\": %float,$", MR_Next},
-                       {"\"cpu_time\": %float,$", MR_Next},
-                       {"\"time_unit\": \"ns\",$", MR_Next},
-                       {"\"bar\": %float,$", MR_Next},
-                       {"\"foo\": %float$", MR_Next},
-                       {"}", MR_Next}});
-ADD_CASES(TC_CSVOut, {{"^\"BM_Counters_AvgThreadsRate/threads:%int\",%csv_report,%float,%float$"}});
-// VS2013 does not allow this function to be passed as a lambda argument
-// to CHECK_BENCHMARK_RESULTS()
-void CheckAvgThreadsRate(Results const& e) {
-  CHECK_FLOAT_COUNTER_VALUE(e, "foo", EQ, 1./e.DurationCPUTime(), 0.001);
-  CHECK_FLOAT_COUNTER_VALUE(e, "bar", EQ, 2./e.DurationCPUTime(), 0.001);
-}
-CHECK_BENCHMARK_RESULTS("BM_Counters_AvgThreadsRate/threads:%int",
-                        &CheckAvgThreadsRate);
-
-// ========================================================================= //
-// --------------------------- TEST CASES END ------------------------------ //
-// ========================================================================= //
-
-int main(int argc, char* argv[]) { RunOutputTests(argc, argv); }

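CheckRate() above verifies the defining property of kIsRate counters: the reported value is the raw counter divided by elapsed CPU time. In user code, a minimal sketch (the loop body is placeholder work):

    #include "benchmark/benchmark.h"

    #include <cstdint>

    static void BM_Process(benchmark::State& state) {
      int64_t items = 0;
      for (auto _ : state) {
        benchmark::DoNotOptimize(++items);  // placeholder work
      }
      // Reported as items per second rather than as a raw total.
      state.counters["items"] = benchmark::Counter(
          static_cast<double>(items), benchmark::Counter::kIsRate);
    }
    BENCHMARK(BM_Process);
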
diff --git a/llvm/utils/benchmark/tools/compare.py b/llvm/utils/benchmark/tools/compare.py
deleted file mode 100644
index f0a4455f5fb7c..0000000000000
--- a/llvm/utils/benchmark/tools/compare.py
+++ /dev/null
@@ -1,316 +0,0 @@
-#!/usr/bin/env python
-
-"""
-compare.py - versatile benchmark output compare tool
-"""
-
-import argparse
-from argparse import ArgumentParser
-import sys
-import gbench
-from gbench import util, report
-from gbench.util import *
-
-
-def check_inputs(in1, in2, flags):
-    """
-    Perform checks on the user-provided inputs and diagnose any abnormalities
-    """
-    in1_kind, in1_err = classify_input_file(in1)
-    in2_kind, in2_err = classify_input_file(in2)
-    output_file = find_benchmark_flag('--benchmark_out=', flags)
-    output_type = find_benchmark_flag('--benchmark_out_format=', flags)
-    if in1_kind == IT_Executable and in2_kind == IT_Executable and output_file:
-        print(("WARNING: '--benchmark_out=%s' will be passed to both "
-               "benchmarks causing it to be overwritten") % output_file)
-    if in1_kind == IT_JSON and in2_kind == IT_JSON and len(flags) > 0:
-        print("WARNING: passing optional flags has no effect since both "
-              "inputs are JSON")
-    if output_type is not None and output_type != 'json':
-        print(("ERROR: passing '--benchmark_out_format=%s' to 'compare.py`"
-               " is not supported.") % output_type)
-        sys.exit(1)
-
-
-def create_parser():
-    parser = ArgumentParser(
-        description='versatile benchmark output compare tool')
-    subparsers = parser.add_subparsers(
-        help='This tool has multiple modes of operation:',
-        dest='mode')
-
-    parser_a = subparsers.add_parser(
-        'benchmarks',
-        help='The simplest use case: compare all the output of these two benchmarks')
-    baseline = parser_a.add_argument_group(
-        'baseline', 'The benchmark baseline')
-    baseline.add_argument(
-        'test_baseline',
-        metavar='test_baseline',
-        type=argparse.FileType('r'),
-        nargs=1,
-        help='A benchmark executable or JSON output file')
-    contender = parser_a.add_argument_group(
-        'contender', 'The benchmark that will be compared against the baseline')
-    contender.add_argument(
-        'test_contender',
-        metavar='test_contender',
-        type=argparse.FileType('r'),
-        nargs=1,
-        help='A benchmark executable or JSON output file')
-    parser_a.add_argument(
-        'benchmark_options',
-        metavar='benchmark_options',
-        nargs=argparse.REMAINDER,
-        help='Arguments to pass when running benchmark executables')
-
-    parser_b = subparsers.add_parser(
-        'filters', help='Compare filter one with filter two of the same benchmark')
-    baseline = parser_b.add_argument_group(
-        'baseline', 'The benchmark baseline')
-    baseline.add_argument(
-        'test',
-        metavar='test',
-        type=argparse.FileType('r'),
-        nargs=1,
-        help='A benchmark executable or JSON output file')
-    baseline.add_argument(
-        'filter_baseline',
-        metavar='filter_baseline',
-        type=str,
-        nargs=1,
-        help='The first filter, that will be used as baseline')
-    contender = parser_b.add_argument_group(
-        'contender', 'The benchmark that will be compared against the baseline')
-    contender.add_argument(
-        'filter_contender',
-        metavar='filter_contender',
-        type=str,
-        nargs=1,
-        help='The second filter, that will be compared against the baseline')
-    parser_b.add_argument(
-        'benchmark_options',
-        metavar='benchmark_options',
-        nargs=argparse.REMAINDER,
-        help='Arguments to pass when running benchmark executables')
-
-    parser_c = subparsers.add_parser(
-        'benchmarksfiltered',
-        help='Compare filter one of the first benchmark with filter two of the second benchmark')
-    baseline = parser_c.add_argument_group(
-        'baseline', 'The benchmark baseline')
-    baseline.add_argument(
-        'test_baseline',
-        metavar='test_baseline',
-        type=argparse.FileType('r'),
-        nargs=1,
-        help='A benchmark executable or JSON output file')
-    baseline.add_argument(
-        'filter_baseline',
-        metavar='filter_baseline',
-        type=str,
-        nargs=1,
-        help='The first filter, that will be used as baseline')
-    contender = parser_c.add_argument_group(
-        'contender', 'The benchmark that will be compared against the baseline')
-    contender.add_argument(
-        'test_contender',
-        metavar='test_contender',
-        type=argparse.FileType('r'),
-        nargs=1,
-        help='The second benchmark executable or JSON output file, that will be compared against the baseline')
-    contender.add_argument(
-        'filter_contender',
-        metavar='filter_contender',
-        type=str,
-        nargs=1,
-        help='The second filter, that will be compared against the baseline')
-    parser_c.add_argument(
-        'benchmark_options',
-        metavar='benchmark_options',
-        nargs=argparse.REMAINDER,
-        help='Arguments to pass when running benchmark executables')
-
-    return parser
-
-
-def main():
-    # Parse the command line flags
-    parser = create_parser()
-    args, unknown_args = parser.parse_known_args()
-    if args.mode is None:
-      parser.print_help()
-      exit(1)
-    assert not unknown_args
-    benchmark_options = args.benchmark_options
-
-    if args.mode == 'benchmarks':
-        test_baseline = args.test_baseline[0].name
-        test_contender = args.test_contender[0].name
-        filter_baseline = ''
-        filter_contender = ''
-
-        # NOTE: if test_baseline == test_contender, you are analyzing the stdev
-
-        description = 'Comparing %s to %s' % (test_baseline, test_contender)
-    elif args.mode == 'filters':
-        test_baseline = args.test[0].name
-        test_contender = args.test[0].name
-        filter_baseline = args.filter_baseline[0]
-        filter_contender = args.filter_contender[0]
-
-        # NOTE: if filter_baseline == filter_contender, you are analyzing the
-        # stdev
-
-        description = 'Comparing %s to %s (from %s)' % (
-            filter_baseline, filter_contender, args.test[0].name)
-    elif args.mode == 'benchmarksfiltered':
-        test_baseline = args.test_baseline[0].name
-        test_contender = args.test_contender[0].name
-        filter_baseline = args.filter_baseline[0]
-        filter_contender = args.filter_contender[0]
-
-        # NOTE: if test_baseline == test_contender and
-        # filter_baseline == filter_contender, you are analyzing the stdev
-
-        description = 'Comparing %s (from %s) to %s (from %s)' % (
-            filter_baseline, test_baseline, filter_contender, test_contender)
-    else:
-        # should never happen
-        print("Unrecognized mode of operation: '%s'" % args.mode)
-        parser.print_help()
-        exit(1)
-
-    check_inputs(test_baseline, test_contender, benchmark_options)
-
-    options_baseline = []
-    options_contender = []
-
-    if filter_baseline and filter_contender:
-        options_baseline = ['--benchmark_filter=%s' % filter_baseline]
-        options_contender = ['--benchmark_filter=%s' % filter_contender]
-
-    # Run the benchmarks and report the results
-    json1 = json1_orig = gbench.util.run_or_load_benchmark(
-        test_baseline, benchmark_options + options_baseline)
-    json2 = json2_orig = gbench.util.run_or_load_benchmark(
-        test_contender, benchmark_options + options_contender)
-
-    # Now, filter the benchmarks so that the difference report can work
-    if filter_baseline and filter_contender:
-        replacement = '[%s vs. %s]' % (filter_baseline, filter_contender)
-        json1 = gbench.report.filter_benchmark(
-            json1_orig, filter_baseline, replacement)
-        json2 = gbench.report.filter_benchmark(
-            json2_orig, filter_contender, replacement)
-
-    # Diff and output
-    output_lines = gbench.report.generate_difference_report(json1, json2)
-    print(description)
-    for ln in output_lines:
-        print(ln)
-
-
-import unittest
-
-
-class TestParser(unittest.TestCase):
-    def setUp(self):
-        self.parser = create_parser()
-        testInputs = os.path.join(
-            os.path.dirname(
-                os.path.realpath(__file__)),
-            'gbench',
-            'Inputs')
-        self.testInput0 = os.path.join(testInputs, 'test1_run1.json')
-        self.testInput1 = os.path.join(testInputs, 'test1_run2.json')
-
-    def test_benchmarks_basic(self):
-        parsed = self.parser.parse_args(
-            ['benchmarks', self.testInput0, self.testInput1])
-        self.assertEqual(parsed.mode, 'benchmarks')
-        self.assertEqual(parsed.test_baseline[0].name, self.testInput0)
-        self.assertEqual(parsed.test_contender[0].name, self.testInput1)
-        self.assertFalse(parsed.benchmark_options)
-
-    def test_benchmarks_with_remainder(self):
-        parsed = self.parser.parse_args(
-            ['benchmarks', self.testInput0, self.testInput1, 'd'])
-        self.assertEqual(parsed.mode, 'benchmarks')
-        self.assertEqual(parsed.test_baseline[0].name, self.testInput0)
-        self.assertEqual(parsed.test_contender[0].name, self.testInput1)
-        self.assertEqual(parsed.benchmark_options, ['d'])
-
-    def test_benchmarks_with_remainder_after_doubleminus(self):
-        parsed = self.parser.parse_args(
-            ['benchmarks', self.testInput0, self.testInput1, '--', 'e'])
-        self.assertEqual(parsed.mode, 'benchmarks')
-        self.assertEqual(parsed.test_baseline[0].name, self.testInput0)
-        self.assertEqual(parsed.test_contender[0].name, self.testInput1)
-        self.assertEqual(parsed.benchmark_options, ['e'])
-
-    def test_filters_basic(self):
-        parsed = self.parser.parse_args(
-            ['filters', self.testInput0, 'c', 'd'])
-        self.assertEqual(parsed.mode, 'filters')
-        self.assertEqual(parsed.test[0].name, self.testInput0)
-        self.assertEqual(parsed.filter_baseline[0], 'c')
-        self.assertEqual(parsed.filter_contender[0], 'd')
-        self.assertFalse(parsed.benchmark_options)
-
-    def test_filters_with_remainder(self):
-        parsed = self.parser.parse_args(
-            ['filters', self.testInput0, 'c', 'd', 'e'])
-        self.assertEqual(parsed.mode, 'filters')
-        self.assertEqual(parsed.test[0].name, self.testInput0)
-        self.assertEqual(parsed.filter_baseline[0], 'c')
-        self.assertEqual(parsed.filter_contender[0], 'd')
-        self.assertEqual(parsed.benchmark_options, ['e'])
-
-    def test_filters_with_remainder_after_doubleminus(self):
-        parsed = self.parser.parse_args(
-            ['filters', self.testInput0, 'c', 'd', '--', 'f'])
-        self.assertEqual(parsed.mode, 'filters')
-        self.assertEqual(parsed.test[0].name, self.testInput0)
-        self.assertEqual(parsed.filter_baseline[0], 'c')
-        self.assertEqual(parsed.filter_contender[0], 'd')
-        self.assertEqual(parsed.benchmark_options, ['f'])
-
-    def test_benchmarksfiltered_basic(self):
-        parsed = self.parser.parse_args(
-            ['benchmarksfiltered', self.testInput0, 'c', self.testInput1, 'e'])
-        self.assertEqual(parsed.mode, 'benchmarksfiltered')
-        self.assertEqual(parsed.test_baseline[0].name, self.testInput0)
-        self.assertEqual(parsed.filter_baseline[0], 'c')
-        self.assertEqual(parsed.test_contender[0].name, self.testInput1)
-        self.assertEqual(parsed.filter_contender[0], 'e')
-        self.assertFalse(parsed.benchmark_options)
-
-    def test_benchmarksfiltered_with_remainder(self):
-        parsed = self.parser.parse_args(
-            ['benchmarksfiltered', self.testInput0, 'c', self.testInput1, 'e', 'f'])
-        self.assertEqual(parsed.mode, 'benchmarksfiltered')
-        self.assertEqual(parsed.test_baseline[0].name, self.testInput0)
-        self.assertEqual(parsed.filter_baseline[0], 'c')
-        self.assertEqual(parsed.test_contender[0].name, self.testInput1)
-        self.assertEqual(parsed.filter_contender[0], 'e')
-        self.assertEqual(parsed.benchmark_options[0], 'f')
-
-    def test_benchmarksfiltered_with_remainder_after_doubleminus(self):
-        parsed = self.parser.parse_args(
-            ['benchmarksfiltered', self.testInput0, 'c', self.testInput1, 'e', '--', 'g'])
-        self.assertEqual(parsed.mode, 'benchmarksfiltered')
-        self.assertEqual(parsed.test_baseline[0].name, self.testInput0)
-        self.assertEqual(parsed.filter_baseline[0], 'c')
-        self.assertEqual(parsed.test_contender[0].name, self.testInput1)
-        self.assertEqual(parsed.filter_contender[0], 'e')
-        self.assertEqual(parsed.benchmark_options[0], 'g')
-
-
-if __name__ == '__main__':
-    # unittest.main()
-    main()
-
-# vim: tabstop=4 expandtab shiftwidth=4 softtabstop=4
-# kate: tab-width: 4; replace-tabs on; indent-width 4; tab-indents: off;
-# kate: indent-mode python; remove-trailing-spaces modified;

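The three subparsers defined above correspond to invocations along these lines (paths and filter regexes are illustrative):

    compare.py benchmarks base_run.json contender_run.json
    compare.py filters ./mybench BM_OldImpl BM_NewImpl
    compare.py benchmarksfiltered ./base BM_Foo ./contender BM_Foo

Anything left over after the positional arguments (or after a bare '--') is forwarded to the benchmark executables, as the TestParser cases check.
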
diff --git a/llvm/utils/benchmark/tools/gbench/Inputs/test1_run1.json b/llvm/utils/benchmark/tools/gbench/Inputs/test1_run1.json
deleted file mode 100644
index d7ec6a9c8f61a..0000000000000
--- a/llvm/utils/benchmark/tools/gbench/Inputs/test1_run1.json
+++ /dev/null
@@ -1,102 +0,0 @@
-{
-  "context": {
-    "date": "2016-08-02 17:44:46",
-    "num_cpus": 4,
-    "mhz_per_cpu": 4228,
-    "cpu_scaling_enabled": false,
-    "library_build_type": "release"
-  },
-  "benchmarks": [
-    {
-      "name": "BM_SameTimes",
-      "iterations": 1000,
-      "real_time": 10,
-      "cpu_time": 10,
-      "time_unit": "ns"
-    },
-    {
-      "name": "BM_2xFaster",
-      "iterations": 1000,
-      "real_time": 50,
-      "cpu_time": 50,
-      "time_unit": "ns"
-    },
-    {
-      "name": "BM_2xSlower",
-      "iterations": 1000,
-      "real_time": 50,
-      "cpu_time": 50,
-      "time_unit": "ns"
-    },
-    {
-      "name": "BM_1PercentFaster",
-      "iterations": 1000,
-      "real_time": 100,
-      "cpu_time": 100,
-      "time_unit": "ns"
-    },
-    {
-      "name": "BM_1PercentSlower",
-      "iterations": 1000,
-      "real_time": 100,
-      "cpu_time": 100,
-      "time_unit": "ns"
-    },
-    {
-      "name": "BM_10PercentFaster",
-      "iterations": 1000,
-      "real_time": 100,
-      "cpu_time": 100,
-      "time_unit": "ns"
-    },
-    {
-      "name": "BM_10PercentSlower",
-      "iterations": 1000,
-      "real_time": 100,
-      "cpu_time": 100,
-      "time_unit": "ns"
-    },
-    {
-      "name": "BM_100xSlower",
-      "iterations": 1000,
-      "real_time": 100,
-      "cpu_time": 100,
-      "time_unit": "ns"
-    },
-    {
-      "name": "BM_100xFaster",
-      "iterations": 1000,
-      "real_time": 10000,
-      "cpu_time": 10000,
-      "time_unit": "ns"
-    },
-    {
-      "name": "BM_10PercentCPUToTime",
-      "iterations": 1000,
-      "real_time": 100,
-      "cpu_time": 100,
-      "time_unit": "ns"
-    },
-    {
-      "name": "BM_ThirdFaster",
-      "iterations": 1000,
-      "real_time": 100,
-      "cpu_time": 100,
-      "time_unit": "ns"
-    },
-    {
-      "name": "BM_BadTimeUnit",
-      "iterations": 1000,
-      "real_time": 0.4,
-      "cpu_time": 0.5,
-      "time_unit": "s"
-    },
-    {
-      "name": "BM_DifferentTimeUnit",
-      "iterations": 1,
-      "real_time": 1,
-      "cpu_time": 1,
-      "time_unit": "s"
-    }
-  ]
-}

diff --git a/llvm/utils/benchmark/tools/gbench/Inputs/test1_run2.json b/llvm/utils/benchmark/tools/gbench/Inputs/test1_run2.json
deleted file mode 100644
index 59a5ffaca4d4d..0000000000000
--- a/llvm/utils/benchmark/tools/gbench/Inputs/test1_run2.json
+++ /dev/null
@@ -1,102 +0,0 @@
-{
-  "context": {
-    "date": "2016-08-02 17:44:46",
-    "num_cpus": 4,
-    "mhz_per_cpu": 4228,
-    "cpu_scaling_enabled": false,
-    "library_build_type": "release"
-  },
-  "benchmarks": [
-    {
-      "name": "BM_SameTimes",
-      "iterations": 1000,
-      "real_time": 10,
-      "cpu_time": 10,
-      "time_unit": "ns"
-    },
-    {
-      "name": "BM_2xFaster",
-      "iterations": 1000,
-      "real_time": 25,
-      "cpu_time": 25,
-      "time_unit": "ns"
-    },
-    {
-      "name": "BM_2xSlower",
-      "iterations": 20833333,
-      "real_time": 100,
-      "cpu_time": 100,
-      "time_unit": "ns"
-    },
-    {
-      "name": "BM_1PercentFaster",
-      "iterations": 1000,
-      "real_time": 98.9999999,
-      "cpu_time": 98.9999999,
-      "time_unit": "ns"
-    },
-    {
-      "name": "BM_1PercentSlower",
-      "iterations": 1000,
-      "real_time": 100.9999999,
-      "cpu_time": 100.9999999,
-      "time_unit": "ns"
-    },
-    {
-      "name": "BM_10PercentFaster",
-      "iterations": 1000,
-      "real_time": 90,
-      "cpu_time": 90,
-      "time_unit": "ns"
-    },
-    {
-      "name": "BM_10PercentSlower",
-      "iterations": 1000,
-      "real_time": 110,
-      "cpu_time": 110,
-      "time_unit": "ns"
-    },
-    {
-      "name": "BM_100xSlower",
-      "iterations": 1000,
-      "real_time": 1.0000e+04,
-      "cpu_time": 1.0000e+04,
-      "time_unit": "ns"
-    },
-    {
-      "name": "BM_100xFaster",
-      "iterations": 1000,
-      "real_time": 100,
-      "cpu_time": 100,
-      "time_unit": "ns"
-    },
-    {
-      "name": "BM_10PercentCPUToTime",
-      "iterations": 1000,
-      "real_time": 110,
-      "cpu_time": 90,
-      "time_unit": "ns"
-    },
-    {
-      "name": "BM_ThirdFaster",
-      "iterations": 1000,
-      "real_time": 66.665,
-      "cpu_time": 66.664,
-      "time_unit": "ns"
-    },
-    {
-      "name": "BM_BadTimeUnit",
-      "iterations": 1000,
-      "real_time": 0.04,
-      "cpu_time": 0.6,
-      "time_unit": "s"
-    },
-    {
-      "name": "BM_DifferentTimeUnit",
-      "iterations": 1,
-      "real_time": 1,
-      "cpu_time": 1,
-      "time_unit": "ns"
-    }
-  ]
-}

diff --git a/llvm/utils/benchmark/tools/gbench/Inputs/test2_run.json b/llvm/utils/benchmark/tools/gbench/Inputs/test2_run.json
deleted file mode 100644
index 15bc698030493..0000000000000
--- a/llvm/utils/benchmark/tools/gbench/Inputs/test2_run.json
+++ /dev/null
@@ -1,81 +0,0 @@
-{
-  "context": {
-    "date": "2016-08-02 17:44:46",
-    "num_cpus": 4,
-    "mhz_per_cpu": 4228,
-    "cpu_scaling_enabled": false,
-    "library_build_type": "release"
-  },
-  "benchmarks": [
-    {
-      "name": "BM_Hi",
-      "iterations": 1234,
-      "real_time": 42,
-      "cpu_time": 24,
-      "time_unit": "ms"
-    },
-    {
-      "name": "BM_Zero",
-      "iterations": 1000,
-      "real_time": 10,
-      "cpu_time": 10,
-      "time_unit": "ns"
-    },
-    {
-      "name": "BM_Zero/4",
-      "iterations": 4000,
-      "real_time": 40,
-      "cpu_time": 40,
-      "time_unit": "ns"
-    },
-    {
-      "name": "Prefix/BM_Zero",
-      "iterations": 2000,
-      "real_time": 20,
-      "cpu_time": 20,
-      "time_unit": "ns"
-    },
-    {
-      "name": "Prefix/BM_Zero/3",
-      "iterations": 3000,
-      "real_time": 30,
-      "cpu_time": 30,
-      "time_unit": "ns"
-    },
-    {
-      "name": "BM_One",
-      "iterations": 5000,
-      "real_time": 5,
-      "cpu_time": 5,
-      "time_unit": "ns"
-    },
-    {
-      "name": "BM_One/4",
-      "iterations": 2000,
-      "real_time": 20,
-      "cpu_time": 20,
-      "time_unit": "ns"
-    },
-    {
-      "name": "Prefix/BM_One",
-      "iterations": 1000,
-      "real_time": 10,
-      "cpu_time": 10,
-      "time_unit": "ns"
-    },
-    {
-      "name": "Prefix/BM_One/3",
-      "iterations": 1500,
-      "real_time": 15,
-      "cpu_time": 15,
-      "time_unit": "ns"
-    },
-    {
-      "name": "BM_Bye",
-      "iterations": 5321,
-      "real_time": 11,
-      "cpu_time": 63,
-      "time_unit": "ns"
-    }
-  ]
-}

diff --git a/llvm/utils/benchmark/tools/gbench/__init__.py b/llvm/utils/benchmark/tools/gbench/__init__.py
deleted file mode 100644
index fce1a1acfbb33..0000000000000
--- a/llvm/utils/benchmark/tools/gbench/__init__.py
+++ /dev/null
@@ -1,8 +0,0 @@
-"""Google Benchmark tooling"""
-
-__author__ = 'Eric Fiselier'
-__email__ = 'eric at efcs.ca'
-__versioninfo__ = (0, 5, 0)
-__version__ = '.'.join(str(v) for v in __versioninfo__) + 'dev'
-
-__all__ = []

diff --git a/llvm/utils/benchmark/tools/gbench/report.py b/llvm/utils/benchmark/tools/gbench/report.py
deleted file mode 100644
index 0c090981a833a..0000000000000
--- a/llvm/utils/benchmark/tools/gbench/report.py
+++ /dev/null
@@ -1,208 +0,0 @@
-"""report.py - Utilities for reporting statistics about benchmark results
-"""
-import os
-import re
-import copy
-
-class BenchmarkColor(object):
-    def __init__(self, name, code):
-        self.name = name
-        self.code = code
-
-    def __repr__(self):
-        return '%s%r' % (self.__class__.__name__,
-                         (self.name, self.code))
-
-    def __format__(self, format):
-        return self.code
-
-# Benchmark Colors Enumeration
-BC_NONE = BenchmarkColor('NONE', '')
-BC_MAGENTA = BenchmarkColor('MAGENTA', '\033[95m')
-BC_CYAN = BenchmarkColor('CYAN', '\033[96m')
-BC_OKBLUE = BenchmarkColor('OKBLUE', '\033[94m')
-BC_HEADER = BenchmarkColor('HEADER', '\033[92m')
-BC_WARNING = BenchmarkColor('WARNING', '\033[93m')
-BC_WHITE = BenchmarkColor('WHITE', '\033[97m')
-BC_FAIL = BenchmarkColor('FAIL', '\033[91m')
-BC_ENDC = BenchmarkColor('ENDC', '\033[0m')
-BC_BOLD = BenchmarkColor('BOLD', '\033[1m')
-BC_UNDERLINE = BenchmarkColor('UNDERLINE', '\033[4m')
-
-def color_format(use_color, fmt_str, *args, **kwargs):
-    """
-    Return the result of 'fmt_str.format(*args, **kwargs)' after transforming
-    'args' and 'kwargs' according to the value of 'use_color'. If 'use_color'
-    is False then all color codes in 'args' and 'kwargs' are replaced with
-    the empty string.
-    """
-    assert use_color is True or use_color is False
-    if not use_color:
-        args = [arg if not isinstance(arg, BenchmarkColor) else BC_NONE
-                for arg in args]
-        kwargs = {key: arg if not isinstance(arg, BenchmarkColor) else BC_NONE
-                  for key, arg in kwargs.items()}
-    return fmt_str.format(*args, **kwargs)
-
-
-def find_longest_name(benchmark_list):
-    """
-    Return the length of the longest benchmark name in a given list of
-    benchmark JSON objects
-    """
-    longest_name = 1
-    for bc in benchmark_list:
-        if len(bc['name']) > longest_name:
-            longest_name = len(bc['name'])
-    return longest_name
-
-
-def calculate_change(old_val, new_val):
-    """
-    Return a float representing the decimal change between old_val and new_val.
-    """
-    if old_val == 0 and new_val == 0:
-        return 0.0
-    if old_val == 0:
-        return float(new_val - old_val) / (float(old_val + new_val) / 2)
-    return float(new_val - old_val) / abs(old_val)
-
-
-def filter_benchmark(json_orig, family, replacement=""):
-    """
-    Apply a filter to the json, and only leave the 'family' of benchmarks.
-    """
-    regex = re.compile(family)
-    filtered = {}
-    filtered['benchmarks'] = []
-    for be in json_orig['benchmarks']:
-        if not regex.search(be['name']):
-            continue
-        filteredbench = copy.deepcopy(be) # Do NOT modify the old name!
-        filteredbench['name'] = regex.sub(replacement, filteredbench['name'])
-        filtered['benchmarks'].append(filteredbench)
-    return filtered
-
-
-def generate_difference_report(json1, json2, use_color=True):
-    """
-    Calculate and report the difference between each test of two benchmark
-    runs specified as 'json1' and 'json2'.
-    """
-    first_col_width = find_longest_name(json1['benchmarks'])
-    def find_test(name):
-        for b in json2['benchmarks']:
-            if b['name'] == name:
-                return b
-        return None
-    first_col_width = max(first_col_width, len('Benchmark'))
-    first_line = "{:<{}s}Time             CPU      Time Old      Time New       CPU Old       CPU New".format(
-        'Benchmark', 12 + first_col_width)
-    output_strs = [first_line, '-' * len(first_line)]
-
-    gen = (bn for bn in json1['benchmarks'] if 'real_time' in bn and 'cpu_time' in bn)
-    for bn in gen:
-        other_bench = find_test(bn['name'])
-        if not other_bench:
-            continue
-
-        if bn['time_unit'] != other_bench['time_unit']:
-            continue
-
-        def get_color(res):
-            if res > 0.05:
-                return BC_FAIL
-            elif res > -0.07:
-                return BC_WHITE
-            else:
-                return BC_CYAN
-        fmt_str = "{}{:<{}s}{endc}{}{:+16.4f}{endc}{}{:+16.4f}{endc}{:14.0f}{:14.0f}{endc}{:14.0f}{:14.0f}"
-        tres = calculate_change(bn['real_time'], other_bench['real_time'])
-        cpures = calculate_change(bn['cpu_time'], other_bench['cpu_time'])
-        output_strs += [color_format(use_color, fmt_str,
-            BC_HEADER, bn['name'], first_col_width,
-            get_color(tres), tres, get_color(cpures), cpures,
-            bn['real_time'], other_bench['real_time'],
-            bn['cpu_time'], other_bench['cpu_time'],
-            endc=BC_ENDC)]
-    return output_strs
-
-###############################################################################
-# Unit tests
-
-import unittest
-
-class TestReportDifference(unittest.TestCase):
-    def load_results(self):
-        import json
-        testInputs = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'Inputs')
-        testOutput1 = os.path.join(testInputs, 'test1_run1.json')
-        testOutput2 = os.path.join(testInputs, 'test1_run2.json')
-        with open(testOutput1, 'r') as f:
-            json1 = json.load(f)
-        with open(testOutput2, 'r') as f:
-            json2 = json.load(f)
-        return json1, json2
-
-    def test_basic(self):
-        expect_lines = [
-            ['BM_SameTimes', '+0.0000', '+0.0000', '10', '10', '10', '10'],
-            ['BM_2xFaster', '-0.5000', '-0.5000', '50', '25', '50', '25'],
-            ['BM_2xSlower', '+1.0000', '+1.0000', '50', '100', '50', '100'],
-            ['BM_1PercentFaster', '-0.0100', '-0.0100', '100', '99', '100', '99'],
-            ['BM_1PercentSlower', '+0.0100', '+0.0100', '100', '101', '100', '101'],
-            ['BM_10PercentFaster', '-0.1000', '-0.1000', '100', '90', '100', '90'],
-            ['BM_10PercentSlower', '+0.1000', '+0.1000', '100', '110', '100', '110'],
-            ['BM_100xSlower', '+99.0000', '+99.0000', '100', '10000', '100', '10000'],
-            ['BM_100xFaster', '-0.9900', '-0.9900', '10000', '100', '10000', '100'],
-            ['BM_10PercentCPUToTime', '+0.1000', '-0.1000', '100', '110', '100', '90'],
-            ['BM_ThirdFaster', '-0.3333', '-0.3334', '100', '67', '100', '67'],
-            ['BM_BadTimeUnit', '-0.9000', '+0.2000', '0', '0', '0', '1'],
-        ]
-        json1, json2 = self.load_results()
-        output_lines_with_header = generate_difference_report(json1, json2, use_color=False)
-        output_lines = output_lines_with_header[2:]
-        print("\n".join(output_lines_with_header))
-        self.assertEqual(len(output_lines), len(expect_lines))
-        for i in range(0, len(output_lines)):
-            parts = [x for x in output_lines[i].split(' ') if x]
-            self.assertEqual(len(parts), 7)
-            self.assertEqual(parts, expect_lines[i])
-
-
-class TestReportDifferenceBetweenFamilies(unittest.TestCase):
-    def load_result(self):
-        import json
-        testInputs = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'Inputs')
-        testOutput = os.path.join(testInputs, 'test2_run.json')
-        with open(testOutput, 'r') as f:
-            json = json.load(f)
-        return json
-
-    def test_basic(self):
-        expect_lines = [
-            ['.', '-0.5000', '-0.5000', '10', '5', '10', '5'],
-            ['./4', '-0.5000', '-0.5000', '40', '20', '40', '20'],
-            ['Prefix/.', '-0.5000', '-0.5000', '20', '10', '20', '10'],
-            ['Prefix/./3', '-0.5000', '-0.5000', '30', '15', '30', '15'],
-        ]
-        json = self.load_result()
-        json1 = filter_benchmark(json, "BM_Z.ro", ".")
-        json2 = filter_benchmark(json, "BM_O.e", ".")
-        output_lines_with_header = generate_difference_report(json1, json2, use_color=False)
-        output_lines = output_lines_with_header[2:]
-        print("\n")
-        print("\n".join(output_lines_with_header))
-        self.assertEqual(len(output_lines), len(expect_lines))
-        for i in range(0, len(output_lines)):
-            parts = [x for x in output_lines[i].split(' ') if x]
-            self.assertEqual(len(parts), 7)
-            self.assertEqual(parts, expect_lines[i])
-
-
-if __name__ == '__main__':
-    unittest.main()
-
-# vim: tabstop=4 expandtab shiftwidth=4 softtabstop=4
-# kate: tab-width: 4; replace-tabs on; indent-width 4; tab-indents: off;
-# kate: indent-mode python; remove-trailing-spaces modified;

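[Note on the deleted tests above: the expected rows pin down the change
metric the report uses. For example, BM_2xFaster going from 50 to 25
yields -0.5000 and BM_100xSlower going from 100 to 10000 yields
+99.0000, i.e. (new - old) / old. A minimal sketch of that helper,
reusing the calculate_change() name referenced in the removed code; the
zero-baseline branches are assumptions, not the verbatim implementation:

    def calculate_change(old_val, new_val):
        # Relative change of 'new_val' vs. 'old_val': 50 -> 25 gives
        # -0.5, and 100 -> 10000 gives +99.0, matching the rows above.
        if old_val == 0 and new_val == 0:
            return 0.0
        if old_val == 0:
            # Assumed guard against a zero baseline.
            return float(new_val - old_val) / abs(new_val)
        return float(new_val - old_val) / abs(old_val)
]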
diff  --git a/llvm/utils/benchmark/tools/gbench/util.py b/llvm/utils/benchmark/tools/gbench/util.py
deleted file mode 100644
index 07c2377275498..0000000000000
--- a/llvm/utils/benchmark/tools/gbench/util.py
+++ /dev/null
@@ -1,159 +0,0 @@
-"""util.py - General utilities for running, loading, and processing benchmarks
-"""
-import json
-import os
-import tempfile
-import subprocess
-import sys
-
-# Input file type enumeration
-IT_Invalid    = 0
-IT_JSON       = 1
-IT_Executable = 2
-
-_num_magic_bytes = 2 if sys.platform.startswith('win') else 4
-def is_executable_file(filename):
-    """
-    Return 'True' if 'filename' names a valid file which is likely
-    an executable. A file is considered an executable if it starts with the
-    magic bytes for an EXE, Mach-O, or ELF file.
-    """
-    if not os.path.isfile(filename):
-        return False
-    with open(filename, mode='rb') as f:
-        magic_bytes = f.read(_num_magic_bytes)
-    if sys.platform == 'darwin':
-        return magic_bytes in [
-            b'\xfe\xed\xfa\xce',  # MH_MAGIC
-            b'\xce\xfa\xed\xfe',  # MH_CIGAM
-            b'\xfe\xed\xfa\xcf',  # MH_MAGIC_64
-            b'\xcf\xfa\xed\xfe',  # MH_CIGAM_64
-            b'\xca\xfe\xba\xbe',  # FAT_MAGIC
-            b'\xbe\xba\xfe\xca'   # FAT_CIGAM
-        ]
-    elif sys.platform.startswith('win'):
-        return magic_bytes == b'MZ'
-    else:
-        return magic_bytes == b'\x7FELF'
-
-
-def is_json_file(filename):
-    """
-    Return 'True' if 'filename' names a valid JSON output file,
-    'False' otherwise.
-    """
-    try:
-        with open(filename, 'r') as f:
-            json.load(f)
-        return True
-    except (IOError, ValueError):
-        pass
-    return False
-
-
-def classify_input_file(filename):
-    """
-    Return a tuple (type, msg) where 'type' specifies the classified type
-    of 'filename'. If 'type' is 'IT_Invalid' then 'msg' is a human-readable
-    string representing the error.
-    """
-    ftype = IT_Invalid
-    err_msg = None
-    if not os.path.exists(filename):
-        err_msg = "'%s' does not exist" % filename
-    elif not os.path.isfile(filename):
-        err_msg = "'%s' does not name a file" % filename
-    elif is_executable_file(filename):
-        ftype = IT_Executable
-    elif is_json_file(filename):
-        ftype = IT_JSON
-    else:
-        err_msg = "'%s' does not name a valid benchmark executable or JSON file" % filename
-    return ftype, err_msg
-
-
-def check_input_file(filename):
-    """
-    Classify the file named by 'filename' and return the classification.
-    If the file is classified as 'IT_Invalid' print an error message and exit
-    the program.
-    """
-    ftype, msg = classify_input_file(filename)
-    if ftype == IT_Invalid:
-        print("Invalid input file: %s" % msg)
-        sys.exit(1)
-    return ftype
-
-def find_benchmark_flag(prefix, benchmark_flags):
-    """
-    Search the specified list of flags for a flag matching `<prefix><arg>`
-    and, if one is found, return the arg it specifies. If the flag occurs
-    more than once, the last value is returned; if it is not found, None
-    is returned.
-    """
-    assert prefix.startswith('--') and prefix.endswith('=')
-    result = None
-    for f in benchmark_flags:
-        if f.startswith(prefix):
-            result = f[len(prefix):]
-    return result
-
-def remove_benchmark_flags(prefix, benchmark_flags):
-    """
-    Return a new list containing the specified benchmark_flags except those
-    with the specified prefix.
-    """
-    assert prefix.startswith('--') and prefix.endswith('=')
-    return [f for f in benchmark_flags if not f.startswith(prefix)]
-
-def load_benchmark_results(fname):
-    """
-    Read benchmark output from a file and return the JSON object.
-    REQUIRES: 'fname' names a file containing JSON benchmark output.
-    """
-    with open(fname, 'r') as f:
-        return json.load(f)
-
-
-def run_benchmark(exe_name, benchmark_flags):
-    """
-    Run a benchmark specified by 'exe_name' with the specified
-    'benchmark_flags'. The benchmark is run directly as a subprocess to preserve
-    real-time console output.
-    RETURNS: A JSON object representing the benchmark output
-    """
-    output_name = find_benchmark_flag('--benchmark_out=',
-                                      benchmark_flags)
-    is_temp_output = False
-    if output_name is None:
-        is_temp_output = True
-        thandle, output_name = tempfile.mkstemp()
-        os.close(thandle)
-        benchmark_flags = list(benchmark_flags) + \
-                          ['--benchmark_out=%s' % output_name]
-
-    cmd = [exe_name] + benchmark_flags
-    print("RUNNING: %s" % ' '.join(cmd))
-    exitCode = subprocess.call(cmd)
-    if exitCode != 0:
-        print('TEST FAILED...')
-        sys.exit(exitCode)
-    json_res = load_benchmark_results(output_name)
-    if is_temp_output:
-        os.unlink(output_name)
-    return json_res
-
-
-def run_or_load_benchmark(filename, benchmark_flags):
-    """
-    Get the results for a specified benchmark. If 'filename' specifies
-    an executable benchmark then the results are generated by running the
-    benchmark. Otherwise 'filename' must name a valid JSON output file,
-    which is loaded and the result returned.
-    """
-    ftype = check_input_file(filename)
-    if ftype == IT_JSON:
-        return load_benchmark_results(filename)
-    elif ftype == IT_Executable:
-        return run_benchmark(filename, benchmark_flags)
-    else:
-        assert False # This branch is unreachable
\ No newline at end of file

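[For context, the removed util.py was the dispatch layer behind
compare.py: run_or_load_benchmark() classifies its input and either
loads a JSON file directly or runs the executable, with run_benchmark()
adding a temporary --benchmark_out= file when the caller supplies none.
A hedged usage sketch built only on the helpers deleted above; the
compare_two() driver and the import path are illustrative, not part of
the tool:

    from gbench import util  # the module removed above, at its old path

    def compare_two(test_a, test_b, benchmark_flags):
        # Drop any caller-supplied --benchmark_out= so each run writes
        # its own temp file (run_benchmark() adds one when absent).
        flags = util.remove_benchmark_flags('--benchmark_out=',
                                            benchmark_flags)
        json_a = util.run_or_load_benchmark(test_a, flags)
        json_b = util.run_or_load_benchmark(test_b, flags)
        return json_a, json_b
]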
diff  --git a/llvm/utils/benchmark/tools/strip_asm.py b/llvm/utils/benchmark/tools/strip_asm.py
deleted file mode 100644
index 9030550b43bec..0000000000000
--- a/llvm/utils/benchmark/tools/strip_asm.py
+++ /dev/null
@@ -1,151 +0,0 @@
-#!/usr/bin/env python
-
-"""
-strip_asm.py - Clean up ASM output for the specified file
-"""
-
-from argparse import ArgumentParser
-import sys
-import os
-import re
-
-def find_used_labels(asm):
-    found = set()
-    label_re = re.compile(r"\s*j[a-z]+\s+\.L([a-zA-Z0-9][a-zA-Z0-9_]*)")
-    for l in asm.splitlines():
-        m = label_re.match(l)
-        if m:
-            found.add('.L%s' % m.group(1))
-    return found
-
-
-def normalize_labels(asm):
-    decls = set()
-    label_decl = re.compile("^[.]{0,1}L([a-zA-Z0-9][a-zA-Z0-9_]*)(?=:)")
-    for l in asm.splitlines():
-        m = label_decl.match(l)
-        if m:
-            decls.add(m.group(0))
-    if len(decls) == 0:
-        return asm
-    needs_dot = next(iter(decls))[0] != '.'
-    if not needs_dot:
-        return asm
-    for ld in decls:
-        asm = re.sub(r"(^|\s+)" + ld + r"(?=:|\s)", r'\1.' + ld, asm)
-    return asm
-
-
-def transform_labels(asm):
-    asm = normalize_labels(asm)
-    used_decls = find_used_labels(asm)
-    new_asm = ''
-    label_decl = re.compile(r"^\.L([a-zA-Z0-9][a-zA-Z0-9_]*)(?=:)")
-    for l in asm.splitlines():
-        m = label_decl.match(l)
-        if not m or m.group(0) in used_decls:
-            new_asm += l
-            new_asm += '\n'
-    return new_asm
-
-
-def is_identifier(tk):
-    if len(tk) == 0:
-        return False
-    first = tk[0]
-    if not first.isalpha() and first != '_':
-        return False
-    for i in range(1, len(tk)):
-        c = tk[i]
-        if not c.isalnum() and c != '_':
-            return False
-    return True
-
-def process_identifiers(l):
-    """
-    process_identifiers - process all identifiers and modify them to have
-    consistent names across all platforms, specifically across ELF and Mach-O.
-    For example, Mach-O inserts an additional underscore at the beginning of
-    names. This function removes that.
-    """
-    parts = re.split(r'([a-zA-Z0-9_]+)', l)
-    new_line = ''
-    for tk in parts:
-        if is_identifier(tk):
-            if tk.startswith('__Z'):
-                tk = tk[1:]
-            elif tk.startswith('_') and len(tk) > 1 and \
-                    tk[1].isalpha() and tk[1] != 'Z':
-                tk = tk[1:]
-        new_line += tk
-    return new_line
-
-
-def process_asm(asm):
-    """
-    Strip the ASM of unwanted directives and lines
-    """
-    new_contents = ''
-    asm = transform_labels(asm)
-
-    # TODO: Add more things we want to remove
-    discard_regexes = [
-        re.compile("\s+\..*$"), # directive
-        re.compile("\s*#(NO_APP|APP)$"), #inline ASM
-        re.compile("\s*#.*$"), # comment line
-        re.compile("\s*\.globa?l\s*([.a-zA-Z_][a-zA-Z0-9$_.]*)"), #global directive
-        re.compile("\s*\.(string|asciz|ascii|[1248]?byte|short|word|long|quad|value|zero)"),
-    ]
-    keep_regexes = [
-
-    ]
-    fn_label_def = re.compile("^[a-zA-Z_][a-zA-Z0-9_.]*:")
-    for l in asm.splitlines():
-        # Remove Mach-O attribute
-        l = l.replace('@GOTPCREL', '')
-        add_line = True
-        for reg in discard_regexes:
-            if reg.match(l) is not None:
-                add_line = False
-                break
-        for reg in keep_regexes:
-            if reg.match(l) is not None:
-                add_line = True
-                break
-        if add_line:
-            if fn_label_def.match(l) and len(new_contents) != 0:
-                new_contents += '\n'
-            l = process_identifiers(l)
-            new_contents += l
-            new_contents += '\n'
-    return new_contents
-
-def main():
-    parser = ArgumentParser(
-        description='generate a stripped assembly file')
-    parser.add_argument(
-        'input', metavar='input', type=str, nargs=1,
-        help='An input assembly file')
-    parser.add_argument(
-        'out', metavar='output', type=str, nargs=1,
-        help='The output file')
-    args, unknown_args = parser.parse_known_args()
-    input = args.input[0]
-    output = args.out[0]
-    if not os.path.isfile(input):
-        print(("ERROR: input file '%s' does not exist") % input)
-        sys.exit(1)
-    contents = None
-    with open(input, 'r') as f:
-        contents = f.read()
-    new_contents = process_asm(contents)
-    with open(output, 'w') as f:
-        f.write(new_contents)
-
-
-if __name__ == '__main__':
-    main()
-
-# vim: tabstop=4 expandtab shiftwidth=4 softtabstop=4
-# kate: tab-width: 4; replace-tabs on; indent-width 4; tab-indents: off;
-# kate: indent-mode python; remove-trailing-spaces modified;

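[The removed strip_asm.py canonicalizes compiler ASM output so the
assembly tests (see docs/AssemblyTests.md among the renames below) can
match it across platforms. Its core normalization is the Mach-O leading
underscore removal in process_identifiers(); a self-contained sketch of
that rule, with the helper name and sample tokens invented for
illustration:

    def strip_macho_underscore(tk):
        # Mirrors the identifier rule above: Mach-O prefixes symbols
        # with '_', so '__Z4funcv' becomes '_Z4funcv' and '_main'
        # becomes 'main'; ELF-style names pass through unchanged.
        if tk.startswith('__Z'):
            return tk[1:]
        if tk.startswith('_') and len(tk) > 1 and \
                tk[1].isalpha() and tk[1] != 'Z':
            return tk[1:]
        return tk

    assert strip_macho_underscore('__Z4funcv') == '_Z4funcv'
    assert strip_macho_underscore('_main') == 'main'
    assert strip_macho_underscore('_Z4funcv') == '_Z4funcv'
]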
diff  --git a/runtimes/CMakeLists.txt b/runtimes/CMakeLists.txt
index c17d7f2fc90d5..8279782a4e7b0 100644
--- a/runtimes/CMakeLists.txt
+++ b/runtimes/CMakeLists.txt
@@ -43,6 +43,8 @@ endfunction()
 find_package(LLVM PATHS "${LLVM_BINARY_DIR}" NO_DEFAULT_PATH NO_CMAKE_FIND_ROOT_PATH)
 find_package(Clang PATHS "${LLVM_BINARY_DIR}" NO_DEFAULT_PATH NO_CMAKE_FIND_ROOT_PATH)
 
+set(LLVM_THIRD_PARTY_DIR "${CMAKE_CURRENT_SOURCE_DIR}/../third-party")
+
 function(get_compiler_rt_path path)
   foreach(entry ${runtimes})
     get_filename_component(projName ${entry} NAME)

diff  --git a/libcxx/utils/google-benchmark/AUTHORS b/third-party/benchmark/AUTHORS
similarity index 100%
rename from libcxx/utils/google-benchmark/AUTHORS
rename to third-party/benchmark/AUTHORS

diff  --git a/libcxx/utils/google-benchmark/BUILD.bazel b/third-party/benchmark/BUILD.bazel
similarity index 100%
rename from libcxx/utils/google-benchmark/BUILD.bazel
rename to third-party/benchmark/BUILD.bazel

diff  --git a/libcxx/utils/google-benchmark/CMakeLists.txt b/third-party/benchmark/CMakeLists.txt
similarity index 100%
rename from libcxx/utils/google-benchmark/CMakeLists.txt
rename to third-party/benchmark/CMakeLists.txt

diff  --git a/libcxx/utils/google-benchmark/CONTRIBUTING.md b/third-party/benchmark/CONTRIBUTING.md
similarity index 100%
rename from libcxx/utils/google-benchmark/CONTRIBUTING.md
rename to third-party/benchmark/CONTRIBUTING.md

diff  --git a/libcxx/utils/google-benchmark/CONTRIBUTORS b/third-party/benchmark/CONTRIBUTORS
similarity index 100%
rename from libcxx/utils/google-benchmark/CONTRIBUTORS
rename to third-party/benchmark/CONTRIBUTORS

diff  --git a/libcxx/utils/google-benchmark/LICENSE b/third-party/benchmark/LICENSE
similarity index 100%
rename from libcxx/utils/google-benchmark/LICENSE
rename to third-party/benchmark/LICENSE

diff  --git a/libcxx/utils/google-benchmark/README.md b/third-party/benchmark/README.md
similarity index 100%
rename from libcxx/utils/google-benchmark/README.md
rename to third-party/benchmark/README.md

diff  --git a/libcxx/utils/google-benchmark/WORKSPACE b/third-party/benchmark/WORKSPACE
similarity index 100%
rename from libcxx/utils/google-benchmark/WORKSPACE
rename to third-party/benchmark/WORKSPACE

diff  --git a/libcxx/utils/google-benchmark/_config.yml b/third-party/benchmark/_config.yml
similarity index 100%
rename from libcxx/utils/google-benchmark/_config.yml
rename to third-party/benchmark/_config.yml

diff  --git a/libcxx/utils/google-benchmark/appveyor.yml b/third-party/benchmark/appveyor.yml
similarity index 100%
rename from libcxx/utils/google-benchmark/appveyor.yml
rename to third-party/benchmark/appveyor.yml

diff  --git a/libcxx/utils/google-benchmark/bindings/python/BUILD b/third-party/benchmark/bindings/python/BUILD
similarity index 100%
rename from libcxx/utils/google-benchmark/bindings/python/BUILD
rename to third-party/benchmark/bindings/python/BUILD

diff  --git a/libcxx/utils/google-benchmark/bindings/python/build_defs.bzl b/third-party/benchmark/bindings/python/build_defs.bzl
similarity index 100%
rename from libcxx/utils/google-benchmark/bindings/python/build_defs.bzl
rename to third-party/benchmark/bindings/python/build_defs.bzl

diff  --git a/libcxx/utils/google-benchmark/bindings/python/google_benchmark/BUILD b/third-party/benchmark/bindings/python/google_benchmark/BUILD
similarity index 100%
rename from libcxx/utils/google-benchmark/bindings/python/google_benchmark/BUILD
rename to third-party/benchmark/bindings/python/google_benchmark/BUILD

diff  --git a/libcxx/utils/google-benchmark/bindings/python/google_benchmark/__init__.py b/third-party/benchmark/bindings/python/google_benchmark/__init__.py
similarity index 100%
rename from libcxx/utils/google-benchmark/bindings/python/google_benchmark/__init__.py
rename to third-party/benchmark/bindings/python/google_benchmark/__init__.py

diff  --git a/libcxx/utils/google-benchmark/bindings/python/google_benchmark/benchmark.cc b/third-party/benchmark/bindings/python/google_benchmark/benchmark.cc
similarity index 100%
rename from libcxx/utils/google-benchmark/bindings/python/google_benchmark/benchmark.cc
rename to third-party/benchmark/bindings/python/google_benchmark/benchmark.cc

diff  --git a/libcxx/utils/google-benchmark/bindings/python/google_benchmark/example.py b/third-party/benchmark/bindings/python/google_benchmark/example.py
similarity index 100%
rename from libcxx/utils/google-benchmark/bindings/python/google_benchmark/example.py
rename to third-party/benchmark/bindings/python/google_benchmark/example.py

diff  --git a/libcxx/utils/google-benchmark/bindings/python/pybind11.BUILD b/third-party/benchmark/bindings/python/pybind11.BUILD
similarity index 100%
rename from libcxx/utils/google-benchmark/bindings/python/pybind11.BUILD
rename to third-party/benchmark/bindings/python/pybind11.BUILD

diff  --git a/libcxx/utils/google-benchmark/bindings/python/python_headers.BUILD b/third-party/benchmark/bindings/python/python_headers.BUILD
similarity index 100%
rename from libcxx/utils/google-benchmark/bindings/python/python_headers.BUILD
rename to third-party/benchmark/bindings/python/python_headers.BUILD

diff  --git a/libcxx/utils/google-benchmark/bindings/python/requirements.txt b/third-party/benchmark/bindings/python/requirements.txt
similarity index 100%
rename from libcxx/utils/google-benchmark/bindings/python/requirements.txt
rename to third-party/benchmark/bindings/python/requirements.txt

diff  --git a/libcxx/utils/google-benchmark/cmake/AddCXXCompilerFlag.cmake b/third-party/benchmark/cmake/AddCXXCompilerFlag.cmake
similarity index 100%
rename from libcxx/utils/google-benchmark/cmake/AddCXXCompilerFlag.cmake
rename to third-party/benchmark/cmake/AddCXXCompilerFlag.cmake

diff  --git a/libcxx/utils/google-benchmark/cmake/CXXFeatureCheck.cmake b/third-party/benchmark/cmake/CXXFeatureCheck.cmake
similarity index 100%
rename from libcxx/utils/google-benchmark/cmake/CXXFeatureCheck.cmake
rename to third-party/benchmark/cmake/CXXFeatureCheck.cmake

diff  --git a/libcxx/utils/google-benchmark/cmake/Config.cmake.in b/third-party/benchmark/cmake/Config.cmake.in
similarity index 100%
rename from libcxx/utils/google-benchmark/cmake/Config.cmake.in
rename to third-party/benchmark/cmake/Config.cmake.in

diff  --git a/libcxx/utils/google-benchmark/cmake/GetGitVersion.cmake b/third-party/benchmark/cmake/GetGitVersion.cmake
similarity index 100%
rename from libcxx/utils/google-benchmark/cmake/GetGitVersion.cmake
rename to third-party/benchmark/cmake/GetGitVersion.cmake

diff  --git a/libcxx/utils/google-benchmark/cmake/GoogleTest.cmake b/third-party/benchmark/cmake/GoogleTest.cmake
similarity index 100%
rename from libcxx/utils/google-benchmark/cmake/GoogleTest.cmake
rename to third-party/benchmark/cmake/GoogleTest.cmake

diff  --git a/libcxx/utils/google-benchmark/cmake/GoogleTest.cmake.in b/third-party/benchmark/cmake/GoogleTest.cmake.in
similarity index 100%
rename from libcxx/utils/google-benchmark/cmake/GoogleTest.cmake.in
rename to third-party/benchmark/cmake/GoogleTest.cmake.in

diff  --git a/libcxx/utils/google-benchmark/cmake/benchmark.pc.in b/third-party/benchmark/cmake/benchmark.pc.in
similarity index 100%
rename from libcxx/utils/google-benchmark/cmake/benchmark.pc.in
rename to third-party/benchmark/cmake/benchmark.pc.in

diff  --git a/libcxx/utils/google-benchmark/cmake/gnu_posix_regex.cpp b/third-party/benchmark/cmake/gnu_posix_regex.cpp
similarity index 100%
rename from libcxx/utils/google-benchmark/cmake/gnu_posix_regex.cpp
rename to third-party/benchmark/cmake/gnu_posix_regex.cpp

diff  --git a/libcxx/utils/google-benchmark/cmake/llvm-toolchain.cmake b/third-party/benchmark/cmake/llvm-toolchain.cmake
similarity index 100%
rename from libcxx/utils/google-benchmark/cmake/llvm-toolchain.cmake
rename to third-party/benchmark/cmake/llvm-toolchain.cmake

diff  --git a/libcxx/utils/google-benchmark/cmake/posix_regex.cpp b/third-party/benchmark/cmake/posix_regex.cpp
similarity index 100%
rename from libcxx/utils/google-benchmark/cmake/posix_regex.cpp
rename to third-party/benchmark/cmake/posix_regex.cpp

diff  --git a/libcxx/utils/google-benchmark/cmake/split_list.cmake b/third-party/benchmark/cmake/split_list.cmake
similarity index 100%
rename from libcxx/utils/google-benchmark/cmake/split_list.cmake
rename to third-party/benchmark/cmake/split_list.cmake

diff  --git a/libcxx/utils/google-benchmark/cmake/std_regex.cpp b/third-party/benchmark/cmake/std_regex.cpp
similarity index 100%
rename from libcxx/utils/google-benchmark/cmake/std_regex.cpp
rename to third-party/benchmark/cmake/std_regex.cpp

diff  --git a/libcxx/utils/google-benchmark/cmake/steady_clock.cpp b/third-party/benchmark/cmake/steady_clock.cpp
similarity index 100%
rename from libcxx/utils/google-benchmark/cmake/steady_clock.cpp
rename to third-party/benchmark/cmake/steady_clock.cpp

diff  --git a/libcxx/utils/google-benchmark/cmake/thread_safety_attributes.cpp b/third-party/benchmark/cmake/thread_safety_attributes.cpp
similarity index 100%
rename from libcxx/utils/google-benchmark/cmake/thread_safety_attributes.cpp
rename to third-party/benchmark/cmake/thread_safety_attributes.cpp

diff  --git a/libcxx/utils/google-benchmark/dependencies.md b/third-party/benchmark/dependencies.md
similarity index 100%
rename from libcxx/utils/google-benchmark/dependencies.md
rename to third-party/benchmark/dependencies.md

diff  --git a/libcxx/utils/google-benchmark/docs/AssemblyTests.md b/third-party/benchmark/docs/AssemblyTests.md
similarity index 100%
rename from libcxx/utils/google-benchmark/docs/AssemblyTests.md
rename to third-party/benchmark/docs/AssemblyTests.md

diff  --git a/libcxx/utils/google-benchmark/docs/_config.yml b/third-party/benchmark/docs/_config.yml
similarity index 100%
rename from libcxx/utils/google-benchmark/docs/_config.yml
rename to third-party/benchmark/docs/_config.yml

diff  --git a/libcxx/utils/google-benchmark/docs/perf_counters.md b/third-party/benchmark/docs/perf_counters.md
similarity index 100%
rename from libcxx/utils/google-benchmark/docs/perf_counters.md
rename to third-party/benchmark/docs/perf_counters.md

diff  --git a/libcxx/utils/google-benchmark/docs/random_interleaving.md b/third-party/benchmark/docs/random_interleaving.md
similarity index 100%
rename from libcxx/utils/google-benchmark/docs/random_interleaving.md
rename to third-party/benchmark/docs/random_interleaving.md

diff  --git a/libcxx/utils/google-benchmark/docs/releasing.md b/third-party/benchmark/docs/releasing.md
similarity index 100%
rename from libcxx/utils/google-benchmark/docs/releasing.md
rename to third-party/benchmark/docs/releasing.md

diff  --git a/libcxx/utils/google-benchmark/docs/tools.md b/third-party/benchmark/docs/tools.md
similarity index 100%
rename from libcxx/utils/google-benchmark/docs/tools.md
rename to third-party/benchmark/docs/tools.md

diff  --git a/libcxx/utils/google-benchmark/include/benchmark/benchmark.h b/third-party/benchmark/include/benchmark/benchmark.h
similarity index 100%
rename from libcxx/utils/google-benchmark/include/benchmark/benchmark.h
rename to third-party/benchmark/include/benchmark/benchmark.h

diff  --git a/libcxx/utils/google-benchmark/requirements.txt b/third-party/benchmark/requirements.txt
similarity index 100%
rename from libcxx/utils/google-benchmark/requirements.txt
rename to third-party/benchmark/requirements.txt

diff  --git a/libcxx/utils/google-benchmark/setup.py b/third-party/benchmark/setup.py
similarity index 100%
rename from libcxx/utils/google-benchmark/setup.py
rename to third-party/benchmark/setup.py

diff  --git a/libcxx/utils/google-benchmark/src/CMakeLists.txt b/third-party/benchmark/src/CMakeLists.txt
similarity index 100%
rename from libcxx/utils/google-benchmark/src/CMakeLists.txt
rename to third-party/benchmark/src/CMakeLists.txt

diff  --git a/libcxx/utils/google-benchmark/src/arraysize.h b/third-party/benchmark/src/arraysize.h
similarity index 100%
rename from libcxx/utils/google-benchmark/src/arraysize.h
rename to third-party/benchmark/src/arraysize.h

diff  --git a/libcxx/utils/google-benchmark/src/benchmark.cc b/third-party/benchmark/src/benchmark.cc
similarity index 100%
rename from libcxx/utils/google-benchmark/src/benchmark.cc
rename to third-party/benchmark/src/benchmark.cc

diff  --git a/libcxx/utils/google-benchmark/src/benchmark_api_internal.cc b/third-party/benchmark/src/benchmark_api_internal.cc
similarity index 100%
rename from libcxx/utils/google-benchmark/src/benchmark_api_internal.cc
rename to third-party/benchmark/src/benchmark_api_internal.cc

diff  --git a/libcxx/utils/google-benchmark/src/benchmark_api_internal.h b/third-party/benchmark/src/benchmark_api_internal.h
similarity index 100%
rename from libcxx/utils/google-benchmark/src/benchmark_api_internal.h
rename to third-party/benchmark/src/benchmark_api_internal.h

diff  --git a/libcxx/utils/google-benchmark/src/benchmark_main.cc b/third-party/benchmark/src/benchmark_main.cc
similarity index 100%
rename from libcxx/utils/google-benchmark/src/benchmark_main.cc
rename to third-party/benchmark/src/benchmark_main.cc

diff  --git a/libcxx/utils/google-benchmark/src/benchmark_name.cc b/third-party/benchmark/src/benchmark_name.cc
similarity index 100%
rename from libcxx/utils/google-benchmark/src/benchmark_name.cc
rename to third-party/benchmark/src/benchmark_name.cc

diff  --git a/libcxx/utils/google-benchmark/src/benchmark_register.cc b/third-party/benchmark/src/benchmark_register.cc
similarity index 100%
rename from libcxx/utils/google-benchmark/src/benchmark_register.cc
rename to third-party/benchmark/src/benchmark_register.cc

diff  --git a/libcxx/utils/google-benchmark/src/benchmark_register.h b/third-party/benchmark/src/benchmark_register.h
similarity index 100%
rename from libcxx/utils/google-benchmark/src/benchmark_register.h
rename to third-party/benchmark/src/benchmark_register.h

diff  --git a/libcxx/utils/google-benchmark/src/benchmark_runner.cc b/third-party/benchmark/src/benchmark_runner.cc
similarity index 100%
rename from libcxx/utils/google-benchmark/src/benchmark_runner.cc
rename to third-party/benchmark/src/benchmark_runner.cc

diff  --git a/libcxx/utils/google-benchmark/src/benchmark_runner.h b/third-party/benchmark/src/benchmark_runner.h
similarity index 100%
rename from libcxx/utils/google-benchmark/src/benchmark_runner.h
rename to third-party/benchmark/src/benchmark_runner.h

diff  --git a/libcxx/utils/google-benchmark/src/check.h b/third-party/benchmark/src/check.h
similarity index 100%
rename from libcxx/utils/google-benchmark/src/check.h
rename to third-party/benchmark/src/check.h

diff  --git a/libcxx/utils/google-benchmark/src/colorprint.cc b/third-party/benchmark/src/colorprint.cc
similarity index 100%
rename from libcxx/utils/google-benchmark/src/colorprint.cc
rename to third-party/benchmark/src/colorprint.cc

diff  --git a/libcxx/utils/google-benchmark/src/colorprint.h b/third-party/benchmark/src/colorprint.h
similarity index 100%
rename from libcxx/utils/google-benchmark/src/colorprint.h
rename to third-party/benchmark/src/colorprint.h

diff  --git a/libcxx/utils/google-benchmark/src/commandlineflags.cc b/third-party/benchmark/src/commandlineflags.cc
similarity index 100%
rename from libcxx/utils/google-benchmark/src/commandlineflags.cc
rename to third-party/benchmark/src/commandlineflags.cc

diff  --git a/libcxx/utils/google-benchmark/src/commandlineflags.h b/third-party/benchmark/src/commandlineflags.h
similarity index 100%
rename from libcxx/utils/google-benchmark/src/commandlineflags.h
rename to third-party/benchmark/src/commandlineflags.h

diff  --git a/libcxx/utils/google-benchmark/src/complexity.cc b/third-party/benchmark/src/complexity.cc
similarity index 100%
rename from libcxx/utils/google-benchmark/src/complexity.cc
rename to third-party/benchmark/src/complexity.cc

diff  --git a/libcxx/utils/google-benchmark/src/complexity.h b/third-party/benchmark/src/complexity.h
similarity index 100%
rename from libcxx/utils/google-benchmark/src/complexity.h
rename to third-party/benchmark/src/complexity.h

diff  --git a/libcxx/utils/google-benchmark/src/console_reporter.cc b/third-party/benchmark/src/console_reporter.cc
similarity index 100%
rename from libcxx/utils/google-benchmark/src/console_reporter.cc
rename to third-party/benchmark/src/console_reporter.cc

diff  --git a/libcxx/utils/google-benchmark/src/counter.cc b/third-party/benchmark/src/counter.cc
similarity index 100%
rename from libcxx/utils/google-benchmark/src/counter.cc
rename to third-party/benchmark/src/counter.cc

diff  --git a/libcxx/utils/google-benchmark/src/counter.h b/third-party/benchmark/src/counter.h
similarity index 100%
rename from libcxx/utils/google-benchmark/src/counter.h
rename to third-party/benchmark/src/counter.h

diff  --git a/libcxx/utils/google-benchmark/src/csv_reporter.cc b/third-party/benchmark/src/csv_reporter.cc
similarity index 100%
rename from libcxx/utils/google-benchmark/src/csv_reporter.cc
rename to third-party/benchmark/src/csv_reporter.cc

diff  --git a/libcxx/utils/google-benchmark/src/cycleclock.h b/third-party/benchmark/src/cycleclock.h
similarity index 100%
rename from libcxx/utils/google-benchmark/src/cycleclock.h
rename to third-party/benchmark/src/cycleclock.h

diff  --git a/libcxx/utils/google-benchmark/src/internal_macros.h b/third-party/benchmark/src/internal_macros.h
similarity index 100%
rename from libcxx/utils/google-benchmark/src/internal_macros.h
rename to third-party/benchmark/src/internal_macros.h

diff  --git a/libcxx/utils/google-benchmark/src/json_reporter.cc b/third-party/benchmark/src/json_reporter.cc
similarity index 100%
rename from libcxx/utils/google-benchmark/src/json_reporter.cc
rename to third-party/benchmark/src/json_reporter.cc

diff  --git a/libcxx/utils/google-benchmark/src/log.h b/third-party/benchmark/src/log.h
similarity index 100%
rename from libcxx/utils/google-benchmark/src/log.h
rename to third-party/benchmark/src/log.h

diff  --git a/libcxx/utils/google-benchmark/src/mutex.h b/third-party/benchmark/src/mutex.h
similarity index 100%
rename from libcxx/utils/google-benchmark/src/mutex.h
rename to third-party/benchmark/src/mutex.h

diff  --git a/libcxx/utils/google-benchmark/src/perf_counters.cc b/third-party/benchmark/src/perf_counters.cc
similarity index 100%
rename from libcxx/utils/google-benchmark/src/perf_counters.cc
rename to third-party/benchmark/src/perf_counters.cc

diff  --git a/libcxx/utils/google-benchmark/src/perf_counters.h b/third-party/benchmark/src/perf_counters.h
similarity index 100%
rename from libcxx/utils/google-benchmark/src/perf_counters.h
rename to third-party/benchmark/src/perf_counters.h

diff  --git a/libcxx/utils/google-benchmark/src/re.h b/third-party/benchmark/src/re.h
similarity index 100%
rename from libcxx/utils/google-benchmark/src/re.h
rename to third-party/benchmark/src/re.h

diff  --git a/libcxx/utils/google-benchmark/src/reporter.cc b/third-party/benchmark/src/reporter.cc
similarity index 100%
rename from libcxx/utils/google-benchmark/src/reporter.cc
rename to third-party/benchmark/src/reporter.cc

diff  --git a/libcxx/utils/google-benchmark/src/sleep.cc b/third-party/benchmark/src/sleep.cc
similarity index 100%
rename from libcxx/utils/google-benchmark/src/sleep.cc
rename to third-party/benchmark/src/sleep.cc

diff  --git a/libcxx/utils/google-benchmark/src/sleep.h b/third-party/benchmark/src/sleep.h
similarity index 100%
rename from libcxx/utils/google-benchmark/src/sleep.h
rename to third-party/benchmark/src/sleep.h

diff  --git a/libcxx/utils/google-benchmark/src/statistics.cc b/third-party/benchmark/src/statistics.cc
similarity index 100%
rename from libcxx/utils/google-benchmark/src/statistics.cc
rename to third-party/benchmark/src/statistics.cc

diff  --git a/libcxx/utils/google-benchmark/src/statistics.h b/third-party/benchmark/src/statistics.h
similarity index 100%
rename from libcxx/utils/google-benchmark/src/statistics.h
rename to third-party/benchmark/src/statistics.h

diff  --git a/libcxx/utils/google-benchmark/src/string_util.cc b/third-party/benchmark/src/string_util.cc
similarity index 100%
rename from libcxx/utils/google-benchmark/src/string_util.cc
rename to third-party/benchmark/src/string_util.cc

diff  --git a/libcxx/utils/google-benchmark/src/string_util.h b/third-party/benchmark/src/string_util.h
similarity index 100%
rename from libcxx/utils/google-benchmark/src/string_util.h
rename to third-party/benchmark/src/string_util.h

diff  --git a/libcxx/utils/google-benchmark/src/sysinfo.cc b/third-party/benchmark/src/sysinfo.cc
similarity index 100%
rename from libcxx/utils/google-benchmark/src/sysinfo.cc
rename to third-party/benchmark/src/sysinfo.cc

diff  --git a/libcxx/utils/google-benchmark/src/thread_manager.h b/third-party/benchmark/src/thread_manager.h
similarity index 100%
rename from libcxx/utils/google-benchmark/src/thread_manager.h
rename to third-party/benchmark/src/thread_manager.h

diff  --git a/libcxx/utils/google-benchmark/src/thread_timer.h b/third-party/benchmark/src/thread_timer.h
similarity index 100%
rename from libcxx/utils/google-benchmark/src/thread_timer.h
rename to third-party/benchmark/src/thread_timer.h

diff  --git a/libcxx/utils/google-benchmark/src/timers.cc b/third-party/benchmark/src/timers.cc
similarity index 100%
rename from libcxx/utils/google-benchmark/src/timers.cc
rename to third-party/benchmark/src/timers.cc

diff  --git a/libcxx/utils/google-benchmark/src/timers.h b/third-party/benchmark/src/timers.h
similarity index 100%
rename from libcxx/utils/google-benchmark/src/timers.h
rename to third-party/benchmark/src/timers.h

diff  --git a/libcxx/utils/google-benchmark/test/AssemblyTests.cmake b/third-party/benchmark/test/AssemblyTests.cmake
similarity index 100%
rename from libcxx/utils/google-benchmark/test/AssemblyTests.cmake
rename to third-party/benchmark/test/AssemblyTests.cmake

diff  --git a/libcxx/utils/google-benchmark/test/BUILD b/third-party/benchmark/test/BUILD
similarity index 100%
rename from libcxx/utils/google-benchmark/test/BUILD
rename to third-party/benchmark/test/BUILD

diff  --git a/libcxx/utils/google-benchmark/test/CMakeLists.txt b/third-party/benchmark/test/CMakeLists.txt
similarity index 100%
rename from libcxx/utils/google-benchmark/test/CMakeLists.txt
rename to third-party/benchmark/test/CMakeLists.txt

diff  --git a/libcxx/utils/google-benchmark/test/args_product_test.cc b/third-party/benchmark/test/args_product_test.cc
similarity index 100%
rename from libcxx/utils/google-benchmark/test/args_product_test.cc
rename to third-party/benchmark/test/args_product_test.cc

diff  --git a/libcxx/utils/google-benchmark/test/basic_test.cc b/third-party/benchmark/test/basic_test.cc
similarity index 100%
rename from libcxx/utils/google-benchmark/test/basic_test.cc
rename to third-party/benchmark/test/basic_test.cc

diff  --git a/libcxx/utils/google-benchmark/test/benchmark_gtest.cc b/third-party/benchmark/test/benchmark_gtest.cc
similarity index 100%
rename from libcxx/utils/google-benchmark/test/benchmark_gtest.cc
rename to third-party/benchmark/test/benchmark_gtest.cc

diff  --git a/libcxx/utils/google-benchmark/test/benchmark_name_gtest.cc b/third-party/benchmark/test/benchmark_name_gtest.cc
similarity index 100%
rename from libcxx/utils/google-benchmark/test/benchmark_name_gtest.cc
rename to third-party/benchmark/test/benchmark_name_gtest.cc

diff  --git a/libcxx/utils/google-benchmark/test/benchmark_random_interleaving_gtest.cc b/third-party/benchmark/test/benchmark_random_interleaving_gtest.cc
similarity index 100%
rename from libcxx/utils/google-benchmark/test/benchmark_random_interleaving_gtest.cc
rename to third-party/benchmark/test/benchmark_random_interleaving_gtest.cc

diff  --git a/libcxx/utils/google-benchmark/test/benchmark_test.cc b/third-party/benchmark/test/benchmark_test.cc
similarity index 100%
rename from libcxx/utils/google-benchmark/test/benchmark_test.cc
rename to third-party/benchmark/test/benchmark_test.cc

diff  --git a/libcxx/utils/google-benchmark/test/clobber_memory_assembly_test.cc b/third-party/benchmark/test/clobber_memory_assembly_test.cc
similarity index 100%
rename from libcxx/utils/google-benchmark/test/clobber_memory_assembly_test.cc
rename to third-party/benchmark/test/clobber_memory_assembly_test.cc

diff  --git a/libcxx/utils/google-benchmark/test/commandlineflags_gtest.cc b/third-party/benchmark/test/commandlineflags_gtest.cc
similarity index 100%
rename from libcxx/utils/google-benchmark/test/commandlineflags_gtest.cc
rename to third-party/benchmark/test/commandlineflags_gtest.cc

diff  --git a/libcxx/utils/google-benchmark/test/complexity_test.cc b/third-party/benchmark/test/complexity_test.cc
similarity index 100%
rename from libcxx/utils/google-benchmark/test/complexity_test.cc
rename to third-party/benchmark/test/complexity_test.cc

diff  --git a/libcxx/utils/google-benchmark/test/cxx03_test.cc b/third-party/benchmark/test/cxx03_test.cc
similarity index 100%
rename from libcxx/utils/google-benchmark/test/cxx03_test.cc
rename to third-party/benchmark/test/cxx03_test.cc

diff  --git a/libcxx/utils/google-benchmark/test/diagnostics_test.cc b/third-party/benchmark/test/diagnostics_test.cc
similarity index 100%
rename from libcxx/utils/google-benchmark/test/diagnostics_test.cc
rename to third-party/benchmark/test/diagnostics_test.cc

diff  --git a/libcxx/utils/google-benchmark/test/display_aggregates_only_test.cc b/third-party/benchmark/test/display_aggregates_only_test.cc
similarity index 100%
rename from libcxx/utils/google-benchmark/test/display_aggregates_only_test.cc
rename to third-party/benchmark/test/display_aggregates_only_test.cc

diff  --git a/libcxx/utils/google-benchmark/test/donotoptimize_assembly_test.cc b/third-party/benchmark/test/donotoptimize_assembly_test.cc
similarity index 100%
rename from libcxx/utils/google-benchmark/test/donotoptimize_assembly_test.cc
rename to third-party/benchmark/test/donotoptimize_assembly_test.cc

diff  --git a/libcxx/utils/google-benchmark/test/donotoptimize_test.cc b/third-party/benchmark/test/donotoptimize_test.cc
similarity index 100%
rename from libcxx/utils/google-benchmark/test/donotoptimize_test.cc
rename to third-party/benchmark/test/donotoptimize_test.cc

diff  --git a/libcxx/utils/google-benchmark/test/filter_test.cc b/third-party/benchmark/test/filter_test.cc
similarity index 100%
rename from libcxx/utils/google-benchmark/test/filter_test.cc
rename to third-party/benchmark/test/filter_test.cc

diff  --git a/libcxx/utils/google-benchmark/test/fixture_test.cc b/third-party/benchmark/test/fixture_test.cc
similarity index 100%
rename from libcxx/utils/google-benchmark/test/fixture_test.cc
rename to third-party/benchmark/test/fixture_test.cc

diff  --git a/libcxx/utils/google-benchmark/test/internal_threading_test.cc b/third-party/benchmark/test/internal_threading_test.cc
similarity index 100%
rename from libcxx/utils/google-benchmark/test/internal_threading_test.cc
rename to third-party/benchmark/test/internal_threading_test.cc

diff  --git a/libcxx/utils/google-benchmark/test/link_main_test.cc b/third-party/benchmark/test/link_main_test.cc
similarity index 100%
rename from libcxx/utils/google-benchmark/test/link_main_test.cc
rename to third-party/benchmark/test/link_main_test.cc

diff  --git a/libcxx/utils/google-benchmark/test/map_test.cc b/third-party/benchmark/test/map_test.cc
similarity index 100%
rename from libcxx/utils/google-benchmark/test/map_test.cc
rename to third-party/benchmark/test/map_test.cc

diff  --git a/libcxx/utils/google-benchmark/test/memory_manager_test.cc b/third-party/benchmark/test/memory_manager_test.cc
similarity index 100%
rename from libcxx/utils/google-benchmark/test/memory_manager_test.cc
rename to third-party/benchmark/test/memory_manager_test.cc

diff  --git a/libcxx/utils/google-benchmark/test/multiple_ranges_test.cc b/third-party/benchmark/test/multiple_ranges_test.cc
similarity index 100%
rename from libcxx/utils/google-benchmark/test/multiple_ranges_test.cc
rename to third-party/benchmark/test/multiple_ranges_test.cc

diff  --git a/libcxx/utils/google-benchmark/test/options_test.cc b/third-party/benchmark/test/options_test.cc
similarity index 100%
rename from libcxx/utils/google-benchmark/test/options_test.cc
rename to third-party/benchmark/test/options_test.cc

diff  --git a/libcxx/utils/google-benchmark/test/output_test.h b/third-party/benchmark/test/output_test.h
similarity index 100%
rename from libcxx/utils/google-benchmark/test/output_test.h
rename to third-party/benchmark/test/output_test.h

diff  --git a/libcxx/utils/google-benchmark/test/output_test_helper.cc b/third-party/benchmark/test/output_test_helper.cc
similarity index 100%
rename from libcxx/utils/google-benchmark/test/output_test_helper.cc
rename to third-party/benchmark/test/output_test_helper.cc

diff  --git a/libcxx/utils/google-benchmark/test/perf_counters_gtest.cc b/third-party/benchmark/test/perf_counters_gtest.cc
similarity index 100%
rename from libcxx/utils/google-benchmark/test/perf_counters_gtest.cc
rename to third-party/benchmark/test/perf_counters_gtest.cc

diff  --git a/libcxx/utils/google-benchmark/test/perf_counters_test.cc b/third-party/benchmark/test/perf_counters_test.cc
similarity index 100%
rename from libcxx/utils/google-benchmark/test/perf_counters_test.cc
rename to third-party/benchmark/test/perf_counters_test.cc

diff  --git a/libcxx/utils/google-benchmark/test/register_benchmark_test.cc b/third-party/benchmark/test/register_benchmark_test.cc
similarity index 100%
rename from libcxx/utils/google-benchmark/test/register_benchmark_test.cc
rename to third-party/benchmark/test/register_benchmark_test.cc

diff  --git a/libcxx/utils/google-benchmark/test/repetitions_test.cc b/third-party/benchmark/test/repetitions_test.cc
similarity index 100%
rename from libcxx/utils/google-benchmark/test/repetitions_test.cc
rename to third-party/benchmark/test/repetitions_test.cc

diff  --git a/libcxx/utils/google-benchmark/test/report_aggregates_only_test.cc b/third-party/benchmark/test/report_aggregates_only_test.cc
similarity index 100%
rename from libcxx/utils/google-benchmark/test/report_aggregates_only_test.cc
rename to third-party/benchmark/test/report_aggregates_only_test.cc

diff  --git a/libcxx/utils/google-benchmark/test/reporter_output_test.cc b/third-party/benchmark/test/reporter_output_test.cc
similarity index 100%
rename from libcxx/utils/google-benchmark/test/reporter_output_test.cc
rename to third-party/benchmark/test/reporter_output_test.cc

diff  --git a/libcxx/utils/google-benchmark/test/skip_with_error_test.cc b/third-party/benchmark/test/skip_with_error_test.cc
similarity index 100%
rename from libcxx/utils/google-benchmark/test/skip_with_error_test.cc
rename to third-party/benchmark/test/skip_with_error_test.cc

diff  --git a/libcxx/utils/google-benchmark/test/state_assembly_test.cc b/third-party/benchmark/test/state_assembly_test.cc
similarity index 100%
rename from libcxx/utils/google-benchmark/test/state_assembly_test.cc
rename to third-party/benchmark/test/state_assembly_test.cc

diff  --git a/libcxx/utils/google-benchmark/test/statistics_gtest.cc b/third-party/benchmark/test/statistics_gtest.cc
similarity index 100%
rename from libcxx/utils/google-benchmark/test/statistics_gtest.cc
rename to third-party/benchmark/test/statistics_gtest.cc

diff  --git a/libcxx/utils/google-benchmark/test/string_util_gtest.cc b/third-party/benchmark/test/string_util_gtest.cc
similarity index 100%
rename from libcxx/utils/google-benchmark/test/string_util_gtest.cc
rename to third-party/benchmark/test/string_util_gtest.cc

diff  --git a/libcxx/utils/google-benchmark/test/templated_fixture_test.cc b/third-party/benchmark/test/templated_fixture_test.cc
similarity index 100%
rename from libcxx/utils/google-benchmark/test/templated_fixture_test.cc
rename to third-party/benchmark/test/templated_fixture_test.cc

diff  --git a/libcxx/utils/google-benchmark/test/user_counters_tabular_test.cc b/third-party/benchmark/test/user_counters_tabular_test.cc
similarity index 100%
rename from libcxx/utils/google-benchmark/test/user_counters_tabular_test.cc
rename to third-party/benchmark/test/user_counters_tabular_test.cc

diff  --git a/libcxx/utils/google-benchmark/test/user_counters_test.cc b/third-party/benchmark/test/user_counters_test.cc
similarity index 100%
rename from libcxx/utils/google-benchmark/test/user_counters_test.cc
rename to third-party/benchmark/test/user_counters_test.cc

diff  --git a/libcxx/utils/google-benchmark/test/user_counters_thousands_test.cc b/third-party/benchmark/test/user_counters_thousands_test.cc
similarity index 100%
rename from libcxx/utils/google-benchmark/test/user_counters_thousands_test.cc
rename to third-party/benchmark/test/user_counters_thousands_test.cc

diff  --git a/libcxx/utils/google-benchmark/tools/BUILD.bazel b/third-party/benchmark/tools/BUILD.bazel
similarity index 100%
rename from libcxx/utils/google-benchmark/tools/BUILD.bazel
rename to third-party/benchmark/tools/BUILD.bazel

diff  --git a/libcxx/utils/google-benchmark/tools/compare.py b/third-party/benchmark/tools/compare.py
similarity index 100%
rename from libcxx/utils/google-benchmark/tools/compare.py
rename to third-party/benchmark/tools/compare.py

diff  --git a/libcxx/utils/google-benchmark/tools/gbench/Inputs/test1_run1.json b/third-party/benchmark/tools/gbench/Inputs/test1_run1.json
similarity index 100%
rename from libcxx/utils/google-benchmark/tools/gbench/Inputs/test1_run1.json
rename to third-party/benchmark/tools/gbench/Inputs/test1_run1.json

diff  --git a/libcxx/utils/google-benchmark/tools/gbench/Inputs/test1_run2.json b/third-party/benchmark/tools/gbench/Inputs/test1_run2.json
similarity index 100%
rename from libcxx/utils/google-benchmark/tools/gbench/Inputs/test1_run2.json
rename to third-party/benchmark/tools/gbench/Inputs/test1_run2.json

diff  --git a/libcxx/utils/google-benchmark/tools/gbench/Inputs/test2_run.json b/third-party/benchmark/tools/gbench/Inputs/test2_run.json
similarity index 100%
rename from libcxx/utils/google-benchmark/tools/gbench/Inputs/test2_run.json
rename to third-party/benchmark/tools/gbench/Inputs/test2_run.json

diff  --git a/libcxx/utils/google-benchmark/tools/gbench/Inputs/test3_run0.json b/third-party/benchmark/tools/gbench/Inputs/test3_run0.json
similarity index 100%
rename from libcxx/utils/google-benchmark/tools/gbench/Inputs/test3_run0.json
rename to third-party/benchmark/tools/gbench/Inputs/test3_run0.json

diff  --git a/libcxx/utils/google-benchmark/tools/gbench/Inputs/test3_run1.json b/third-party/benchmark/tools/gbench/Inputs/test3_run1.json
similarity index 100%
rename from libcxx/utils/google-benchmark/tools/gbench/Inputs/test3_run1.json
rename to third-party/benchmark/tools/gbench/Inputs/test3_run1.json

diff  --git a/libcxx/utils/google-benchmark/tools/gbench/Inputs/test4_run.json b/third-party/benchmark/tools/gbench/Inputs/test4_run.json
similarity index 100%
rename from libcxx/utils/google-benchmark/tools/gbench/Inputs/test4_run.json
rename to third-party/benchmark/tools/gbench/Inputs/test4_run.json

diff  --git a/libcxx/utils/google-benchmark/tools/gbench/__init__.py b/third-party/benchmark/tools/gbench/__init__.py
similarity index 100%
rename from libcxx/utils/google-benchmark/tools/gbench/__init__.py
rename to third-party/benchmark/tools/gbench/__init__.py

diff  --git a/libcxx/utils/google-benchmark/tools/gbench/report.py b/third-party/benchmark/tools/gbench/report.py
similarity index 100%
rename from libcxx/utils/google-benchmark/tools/gbench/report.py
rename to third-party/benchmark/tools/gbench/report.py

diff  --git a/libcxx/utils/google-benchmark/tools/gbench/util.py b/third-party/benchmark/tools/gbench/util.py
similarity index 100%
rename from libcxx/utils/google-benchmark/tools/gbench/util.py
rename to third-party/benchmark/tools/gbench/util.py

diff  --git a/libcxx/utils/google-benchmark/tools/requirements.txt b/third-party/benchmark/tools/requirements.txt
similarity index 100%
rename from libcxx/utils/google-benchmark/tools/requirements.txt
rename to third-party/benchmark/tools/requirements.txt

diff  --git a/libcxx/utils/google-benchmark/tools/strip_asm.py b/third-party/benchmark/tools/strip_asm.py
similarity index 100%
rename from libcxx/utils/google-benchmark/tools/strip_asm.py
rename to third-party/benchmark/tools/strip_asm.py
