[llvm-bugs] [Bug 31084] New: [libcxx] Add C++ library benchmarks

via llvm-bugs llvm-bugs at lists.llvm.org
Mon Nov 21 01:41:52 PST 2016


https://llvm.org/bugs/show_bug.cgi?id=31084

            Bug ID: 31084
           Summary: [libcxx] Add C++ library benchmarks
           Product: Test Suite
           Version: trunk
          Hardware: PC
                OS: Linux
            Status: NEW
          Severity: normal
          Priority: P
         Component: Programs Tests
          Assignee: unassignedbugs at nondot.org
          Reporter: renato.golin at linaro.org
                CC: llvm-bugs at lists.llvm.org
    Classification: Unclassified

We have plenty of C++ library tests in the libcxx repo, but no benchmarks at
all. During the "Libcxx performance" BoF at the 2016 US LLVM Developers'
Meeting, we discussed that adding such benchmarks to the test-suite would be a
great idea. However, I have no idea how to even start.

First, there was some talk about integrating Google Benchmark [1] into the
test-suite and (maybe) converting the current benchmarks to use it. Regardless
of that potential move, since we're creating a whole new set of benchmarks, I'd
assume a tested timing library is better than rolling our own around
gettimeofday()...
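
To make this concrete, a single case in Google Benchmark could look something
like the sketch below. Everything in it (the case name, the container, the
sizes) is purely illustrative, not a proposal for the actual suite:

  #include <vector>
  #include "benchmark/benchmark.h"

  // Illustrative only: time N push_back calls on a std::vector<int>.
  // The framework decides how many timing iterations to run.
  static void BM_VectorPushBack(benchmark::State& state) {
    while (state.KeepRunning()) {
      std::vector<int> v;
      for (int i = 0; i < state.range(0); ++i)
        v.push_back(i);
    }
  }
  BENCHMARK(BM_VectorPushBack)->Arg(1 << 10)->Arg(1 << 20);

  BENCHMARK_MAIN();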

Second, what kinds of tests do we want? I can think of two approaches:

1. We test the "number of iterations" that algorithms and containers perform
on small to medium datasets and make sure they scale as documented. I'm not
sure how to do that other than instrumenting libcxx and only enabling the
instrumentation in the benchmark build. Other users could then enable the same
instrumentation in their own programs, for debugging. (See the first sketch
after this list.)

2. We test the actual "wall time" on a single core (threading is a separate
problem), and make sure not only that we don't regress from past runs, but
also that the actual time spent scales with the documented guarantees. (See
the second sketch after this list.)
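
Absent real libcxx instrumentation, a cheap stand-in for 1. is to count
operations from the outside, e.g. with an instrumented comparator. A
hypothetical sketch (the names and the printed ratio are mine, not an
existing interface):

  #include <algorithm>
  #include <cmath>
  #include <cstdint>
  #include <cstdio>
  #include <random>
  #include <vector>

  // Comparator that counts how many comparisons std::sort makes.
  struct CountingLess {
    std::uint64_t* count;
    bool operator()(int a, int b) const { ++*count; return a < b; }
  };

  int main() {
    std::mt19937 rng(42);
    for (std::size_t n = 1u << 10; n <= (1u << 20); n *= 4) {
      std::vector<int> data(n);
      std::generate(data.begin(), data.end(), rng);
      std::uint64_t comparisons = 0;
      std::sort(data.begin(), data.end(), CountingLess{&comparisons});
      // For an O(N log N) sort this ratio should stay roughly constant.
      std::printf("n=%zu comparisons=%llu per-NlogN=%.2f\n", n,
                  (unsigned long long)comparisons,
                  comparisons / (n * std::log2((double)n)));
    }
  }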
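
For 2., Google Benchmark can already fit measured times against an expected
asymptotic bound through its complexity support, which would give us the
"scales with the guarantees" check almost for free. Again only a sketch,
assuming its SetComplexityN()/Complexity() interface:

  #include <algorithm>
  #include <random>
  #include <vector>
  #include "benchmark/benchmark.h"

  // Time std::sort over a range of sizes and ask the framework to fit
  // the results against the documented O(N log N) bound.
  static void BM_Sort(benchmark::State& state) {
    std::mt19937 rng(42);
    std::vector<int> data(state.range(0));
    while (state.KeepRunning()) {
      state.PauseTiming();  // keep input generation out of the measurement
      std::generate(data.begin(), data.end(), rng);
      state.ResumeTiming();
      std::sort(data.begin(), data.end());
    }
    state.SetComplexityN(state.range(0));
  }
  BENCHMARK(BM_Sort)->RangeMultiplier(4)->Range(1 << 10, 1 << 20)
      ->Complexity(benchmark::oNLogN);

  BENCHMARK_MAIN();

The regression part ("don't regress from past runs") would then fall to the
test-suite/LNT infrastructure, comparing against previous runs.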

What we cannot do is compare against other standard libraries, unless we're
willing to include their sources (and build them) in the test-suite. Relying
on whatever happens to be installed on the system will never work across the
board.

Finally, the "complicated set": threading, atomics and localisation tend to
be hard to test and are heavily dependent on the architecture and OS. We
already have tests for those, but I fear that their run times, if measured,
will show a large deviation even on the same architecture/OS combination.
Sub-architecture and OS configurations will play a heavy role in them.

So, how do we start? Who wants to help?

cheers,
--renato

[1] https://github.com/google/benchmark
