[llvm-dev] [cfe-dev] Proposal: pull benchmark library to the LLVM main repository
Dean Michael Berris via llvm-dev
llvm-dev at lists.llvm.org
Wed Aug 8 08:39:12 PDT 2018
> On 9 Aug 2018, at 00:59, Kirill Bobyrev via llvm-dev <llvm-dev at lists.llvm.org> wrote:
>
> Hi everyone,
>
> Over the last few days I've been looking into pulling the benchmark library into the LLVM main repository, as proposed before. I submitted a patch that suppresses warnings in the benchmark library itself (they prevented it from compiling in the LLVM tree because -Werror is enabled) and managed to build it in-tree.
>
> However, I have found that the cxx-benchmarks target (containing all libc++ benchmarks) has not been buildable for a while. That, together with the build failures I see when the latest Clang is used to compile the project, gives me the impression that the libc++ benchmarks are not used frequently (e.g. run continuously by buildbots). There are multiple commits at which cxx-benchmarks does not build; the one most likely causing the issue is https://reviews.llvm.org/rL338103. Another complication is that libc++ can be built out-of-tree, and in one of its configurations it pulls gtest (with no fixed version) from GitHub using ExternalProject_Add. Also, libc++ seems to build the benchmark library in two different configurations within the same build, which probably means that some libc++-specific build code would remain in libc++ even if google/benchmark were pulled out of that repository (and we would also have to treat benchmark the same way gtest is handled for out-of-tree builds).
>
> I wanted to reach out to the community and ask for more feedback on the following possible courses of action:
>
> * Pull the benchmark library into the LLVM core repository without affecting libc++ at all (thereby introducing another copy of the library). As mentioned before, we already have two distinct copies of benchmark, and I am personally not happy about introducing a third, but libc++'s out-of-tree builds and specific requirements complicate things. libc++ also seems to have buildbots for out-of-tree builds, which might differ from the "standard" LLVM workflow.
> * See whether the benchmark build can be fixed and handle benchmark similarly to gtest for out-of-tree libc++ builds. While this approach looks *correct* to me, I don't know how many users rely on building libc++ out-of-tree *and* testing it (and potentially running benchmarks); for example, the gtest downloaded from GitHub is pulled from the master branch, which gives me the impression that this option is not used frequently (otherwise the version would probably be pinned for stability). It would also be harder to execute.
> * Execute the first plan by introducing another copy in the LLVM core repository, then migrate libc++ to LLVM's benchmark copy if the libc++ maintainers are in favor. This seems both easier and still "correct" to me.
>
This last option seems like a good plan to follow.
> Since libc++ is currently the only user of the benchmark library and I might be missing some details, I wanted to see whether anyone actively developing it could comment on that. I would also like to hear what others think should be done to introduce the benchmark library to the whole LLVM source tree, because I can't figure out what the best series of actions would be.
>
Eizan (CC’ed) worked on getting the benchmark library included and upgraded in the test-suite. Maybe he can answer some of the more specific questions on the process he came up with.
> Also, is running benchmarks continuously something the community would be happy about in the long run?
>
Personally, I’d settle for a pseudo-target that isolates the microbenchmarks as a whole, plus some pseudo-targets per project or group. That would be a bare minimum if, for example, we want to benchmark individual support libraries, data structures, utilities, parsers, etc. I am actually looking forward to adding microbenchmarks, living beside the implementation, for some parts of the compiler-rt project to allow for benchmark-qualified (performance-driven?) testing and improvements.
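To make that concrete, here is a rough sketch of the kind of in-tree microbenchmark I have in mind, written against the google/benchmark API. The SmallVector usage and the argument sizes are only illustrative placeholders, not a proposal for a specific benchmark:

  // Minimal microbenchmark sketch using google/benchmark. The code under
  // test (llvm::SmallVector) and the sizes below are placeholders.
  #include "benchmark/benchmark.h"
  #include "llvm/ADT/SmallVector.h"

  static void BM_SmallVectorPushBack(benchmark::State &State) {
    for (auto _ : State) {
      llvm::SmallVector<int, 16> Vec;
      for (int I = 0, E = State.range(0); I != E; ++I)
        Vec.push_back(I);
      // Keep the result alive so the work isn't optimized away.
      benchmark::DoNotOptimize(Vec.data());
    }
  }
  // Register the benchmark for a few input sizes.
  BENCHMARK(BM_SmallVectorPushBack)->Arg(8)->Arg(64)->Arg(512);

  BENCHMARK_MAIN();

Each such file could then hang off a per-project pseudo-target, so that a group of microbenchmarks (ADT, compiler-rt, etc.) can be built and run in one go.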
Continuous running is a nice-to-have, but it’s really hard to get statistically stable measurements when benchmarks run in environments that aren’t fully controlled. Even in fully controlled environments, quite a bit of infrastructure is required to ensure that the numbers are repeatable and that variance is well accounted for. That’s a lot of effort for arguably little gain.
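That said, the library itself at least helps quantify the noise: repetitions plus aggregate reporting make run-to-run variance visible. As a hedged sketch, the registration in the example above could instead be written as:

  // Repeat the run several times and report only the mean/median/stddev
  // aggregates so run-to-run variance shows up in the output.
  BENCHMARK(BM_SmallVectorPushBack)
      ->Arg(64)
      ->Repetitions(10)
      ->ReportAggregatesOnly(true);

The same thing can be requested at run time with --benchmark_repetitions=10 and --benchmark_report_aggregates_only=true, without recompiling.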
My gut tells me that having the benchmarks in-tree and making them easy to run as part of the normal development process is more valuable than running them continuously. As we grow the quantity and improve the quality of in-tree benchmarks, we can decide how to collect data from multiple environments and maybe generate reports or dashboards.
Needless to say, I’m looking forward to having more and easier access to microbenchmarks in-tree! :)
Cheers
-- Dean