[llvm-dev] Questions About LLVM Test Suite: Time Units, Re-running benchmarks
Stefanos Baziotis via llvm-dev
llvm-dev at lists.llvm.org
Mon Jul 19 16:00:52 PDT 2021
Yes, I agree. And as I mentioned, one can figure it out by manually
reproducing some measurements. It's just that, if nobody tells you,
you're left wondering "is there some other dir that uses a different
unit?" OK, good, I'll try to add some documentation on that.
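To make the convention concrete (per Michael's point below that MicroBenchmarks/ reports microseconds and everything else seconds), here is a hypothetical helper, just a sketch with made-up names, not anything in the test suite, that normalizes a reported time to seconds based on the top-level directory:

```shell
# Hypothetical sketch (to_seconds is my own name, not part of the
# test suite): normalize a reported time to seconds, assuming the
# convention that MicroBenchmarks/ is in microseconds and every
# other top-level directory is in seconds.
to_seconds() {
  case "$1" in
    MicroBenchmarks/*) awk -v t="$2" 'BEGIN { print t / 1000000 }' ;;
    *) echo "$2" ;;
  esac
}

to_seconds "MicroBenchmarks/foo" 2500000   # prints 2.5
to_seconds "MultiSource/foo" 1.23          # prints 1.23
```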
By the way, does lit have any flags to set core affinity? Currently, I have
modified `timeit.sh` to use `taskset`, as in:
`taskset --cpu-list 2,4,6 perf stat ...`
It seems reliable, although I'd like to find a way to actually test the
reliability. But, if lit has an option already, I could use that.
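For reference, the wrapping I mean looks roughly like this. This is only a sketch of the idea, not the actual `timeit.sh` change; the helper name and the CPU list are illustrative assumptions:

```shell
# Hypothetical sketch: build the pinned command line that a wrapper
# like timeit.sh would run. CPUS and build_pinned_cmd are my own
# illustrative names, not part of the test suite.
CPUS="2,4,6"

build_pinned_cmd() {
  # Prepend taskset so the wrapped command runs only on the listed cores.
  echo "taskset --cpu-list $CPUS $*"
}

build_pinned_cmd perf stat ./benchmark
# prints: taskset --cpu-list 2,4,6 perf stat ./benchmark
```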
Best,
Stefanos
On Tue, Jul 20, 2021 at 1:25 AM, Michael Kruse <
llvmdev at meinersbur.de> wrote:
> On Mon, Jul 19, 2021 at 14:47, Stefanos Baziotis
> <stefanos.baziotis at gmail.com> wrote:
> > For example, the only reason it seems that MultiSource/ uses
> > seconds is that I ran a bunch of them manually (and because
> > some outputs saved by llvm-lit, which measures in seconds, match
> > the numbers in the JSON).
> >
> > If we know the unit of time per test case (or per X grouping of tests
> > for that matter), we could then, e.g., normalize the times, as you
> > suggest, or anyway, know the unit of time and act accordingly.
>
> You know the unit of time from the top-level folder. MicroBenchmarks
> is microseconds (because Google Benchmark reports microseconds),
> everything else is seconds.
>
> That might be confusing when you don't know about it, but once you
> do, there is no ambiguity.
>
> Michael
>