[llvm-dev] New LLD performance builder

Galina Kistanova via llvm-dev llvm-dev at lists.llvm.org
Mon Feb 26 12:46:36 PST 2018


Hello Rafael,

> It seems the produced lld binary is not being statically linked.

Hm. It should be. But it seems a couple of config params were missing. Fixed.
Thanks for catching this!
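(For the record, the knob in question is along the lines of the following sketch. The bot's actual configure line isn't shown in the thread, so the exact cache variables here are assumptions based on upstream LLVM CMake options:)

```shell
# Sketch of a configure line for a statically linked lld (assumed options;
# LLVM_BUILD_STATIC is the upstream CMake cache variable for static executables).
cmake -G Ninja \
  -DCMAKE_BUILD_TYPE=Release \
  -DLLVM_ENABLE_ASSERTIONS=OFF \
  -DLLVM_BUILD_STATIC=ON \
  /path/to/llvm

# Quick check that the produced binary really is static:
ldd bin/lld   # a statically linked binary prints "not a dynamic executable"
```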

> Is lld-speed-test in a tmpfs?

Correct.
All the benchmarking tips from https://www.llvm.org/docs/Benchmarking.html
have been applied to that bot.
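For anyone reproducing the setup, the tmpfs part and a couple of the Benchmarking.html tips look roughly like this (a sketch; the mount point and size are hypothetical, and the commands assume Linux):

```shell
# Keep the benchmark inputs in RAM so disk I/O doesn't pollute timings
# (mount point and size are hypothetical).
sudo mount -t tmpfs -o size=16g tmpfs /mnt/lld-speed-test

# Two of the tips from https://www.llvm.org/docs/Benchmarking.html:
# disable address space randomization and pin the CPU frequency governor.
echo 0 | sudo tee /proc/sys/kernel/randomize_va_space
for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
  echo performance | sudo tee "$g"
done
```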

> Is lld-benchmark.py a copy of lld/utils/benchmark.py?

Correct, modulo a few local changes for more verbose printing to make it
"bot" friendly. I haven't decided yet whether this is something we want in
lld/utils/benchmark.py.

Thanks

Galina


On Thu, Feb 22, 2018 at 1:56 PM, Rafael Avila de Espindola <
rafael.espindola at gmail.com> wrote:

> Thanks a lot for setting this up!
>
> By using the "mean as aggregation" option one can see the noise in the
> results better:
>
> http://lnt.llvm.org/db_default/v4/link/graph?switch_min_mean=yes&moving_window_size=10&plot.9=1.9.7&submit=Update
>
> There are a few benchmarking tips in
> https://www.llvm.org/docs/Benchmarking.html.
>
> For example, from looking at
>
> http://lab.llvm.org:8011/builders/lld-perf-testsuite/builds/285/steps/cmake-configure/logs/stdio
>
> It seems the produced lld binary is not being statically linked.
>
> A tip to make the bot a bit faster is that it could run "ninja bin/lld"
> instead of just "ninja":
>
> http://lab.llvm.org:8011/builders/lld-perf-testsuite/builds/285/steps/build-unified-tree/logs/stdio
>
> Is lld-speed-test in a tmpfs?
>
> Is lld-benchmark.py a copy of lld/utils/benchmark.py?
>
> Thanks,
> Rafael
>
> Galina Kistanova via llvm-dev <llvm-dev at lists.llvm.org> writes:
>
> > Hello George,
> >
> > Sorry, I somehow hit the send button too soon. Please ignore the previous
> > e-mail.
> >
> > The bot does 10 runs for each of the benchmarks (those dots in the logs
> > are meaningful). We can increase the number of runs if it is proven that
> > this would significantly increase the accuracy. While staging the bot, I
> > didn't see an increase in accuracy that would justify the extra time and
> > larger gaps between the tested revisions. 10 runs seems to give a good
> > balance, but I'm open to suggestions.
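As a rough illustration of how the run count plays against noise, per-run timings can be aggregated like this (a generic awk sketch, not the bot's actual reporting code; the sample timings are made up, and only two are shown for brevity where the bot would have 10):

```shell
# Mean and standard deviation over repeated run timings (hypothetical values).
printf '2.00\n4.00\n' |
  awk '{ s += $1; ss += $1 * $1; n++ }
       END { m = s / n; printf "mean=%.2f sd=%.2f\n", m, sqrt(ss / n - m * m) }'
# prints: mean=3.00 sd=1.00
```

A larger spread (sd) relative to the mean is the signal that more runs, or a quieter machine, would be needed.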
> >
> > The statistics seem quite stable if you look over a number of revisions,
> > and in this particular case the picture seems quite clear.
> >
> > At http://lnt.llvm.org/db_default/v4/link/104, the list of Performance
> > Regressions suggests that the hardest hit was linux-kernel. The regressed
> > metrics are branches, branch-misses, instructions, cycles,
> > seconds-elapsed, and task-clock. Some other benchmarks show regressions
> > in branches and branch-misses; some show improvements.
> >
> > The metrics are consistent before and after the commit, so I do not think
> > this one is an outlier.
> > For example, if you look at the linux-kernel branches -
> > http://lnt.llvm.org/db_default/v4/link/graph?plot.0=1.12.2&highlight_run=104 -
> > it becomes obvious that the number of branches increased significantly as
> > a result of r325313. The metric is very stable around the impacted commit
> > and does not go down after it. The branch-misses metric is more volatile,
> > but still consistently shows the regression as a result of this commit.
> >
> > Now someone should look into why this particular commit resulted in a
> > significant increase in branching when linking the Linux kernel.
> >
> > As for how to use the LNT web UI, I'm sure you have checked it already,
> > but just in case, here is the link to the LNT docs -
> > http://llvm.org/docs/lnt/contents.html.
> >
> >> task-clock results are available for "linux-kernel" and "llvm-as-fsds"
> >> only, and all other tests have a blank field. Should it mean there was
> >> no noticeable difference in results?
> >
> > If you go to http://lnt.llvm.org/db_default/v4/link/104#task-clock (or go
> > to http://lnt.llvm.org/db_default/v4/link/104 and select task-clock on
> > the left, which is the same), you will see the list of actual values in
> > the "Current" column. All of them are populated; none is blank. The "%"
> > column contains the difference from the previous run in percent, or a
> > dash for no measured difference.
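The "%" column is just the relative change against the previous run. As a sketch (the input values here are hypothetical, chosen so the 23.65% figure from earlier in the thread falls out):

```shell
# Percent difference between the previous and current sample
# (prev and cur are hypothetical values, not actual bot measurements).
awk -v prev=100 -v cur=123.65 'BEGIN { printf "%.2f%%\n", (cur - prev) / prev * 100 }'
# prints: 23.65%
```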
> >
> >> Also, the "Graph" and "Matrix" buttons, whatever they should do, show
> >> errors atm.
> >
> > I guess you didn't select what to graph or what to show as a matrix, did
> > you?
> >
> > Besides reporting to lnt.llvm.org, each build log contains all the
> > reported data, so you can process it in whatever way you find helpful.
> >
> > Hope this helps.
> >
> > Thanks
> >
> > Galina
> >
> >
> > On Fri, Feb 16, 2018 at 1:55 AM, George Rimar via llvm-dev <
> > llvm-dev at lists.llvm.org> wrote:
> >
> >> >Hello everyone,
> >> >
> >> >I have added a new public LLD performance builder at
> >> >http://lab.llvm.org:8011/builders/lld-perf-testsuite.
> >> >It builds LLVM and LLD with the latest released Clang and runs a set of
> >> >performance tests.
> >> >
> >> >The builder is reliable. Please pay attention to the failures.
> >> >
> >> >The performance statistics are here:
> >> >http://lnt.llvm.org/db_default/v4/link/recent_activity
> >> >
> >> >Thanks
> >> >
> >> >Galina
> >>
> >> Great news, thanks !
> >>
> >> Looking at the results, I am not sure how to explain them, though.
> >>
> >> For example, r325313 fixes a "use after free"; it should not give any
> >> performance slowdowns or boosts. Though if I read the results right,
> >> they show a 23.65% slowdown in the time to link the linux kernel
> >> (http://lnt.llvm.org/db_default/v4/link/104).
> >>
> >> I guess such variation can happen, for example, if the bot does only a
> >> single link iteration per test, so that the final time is mostly just
> >> noise.
> >>
> >> task-clock results are available for "linux-kernel" and "llvm-as-fsds"
> >> only, and all other tests have a blank field. Should it mean there was
> >> no noticeable difference in results?
> >>
> >> Also, the "Graph" and "Matrix" buttons, whatever they should do, show
> >> errors atm ("Nothing to graph." and "Not Found: Request requires some
> >> data arguments.").
> >>
> >> Best regards,
> >> George | Developer | Access Softek, Inc
> >> _______________________________________________
> >> LLVM Developers mailing list
> >> llvm-dev at lists.llvm.org
> >> http://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev
> >>
> > _______________________________________________
> > LLVM Developers mailing list
> > llvm-dev at lists.llvm.org
> > http://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev
>

