[clangd-dev] Investigating performance tracking infrastructure

Eric Liu via clangd-dev clangd-dev at lists.llvm.org
Tue Aug 14 00:31:28 PDT 2018


On Tue, Aug 14, 2018, 08:40 Kirill Bobyrev <kbobyrev.lists at gmail.com> wrote:

> Hi Alex,
>
> Such a test-suite might be very useful, and it'd be great to have it. As Eric
> mentioned, I am working on pulling the benchmark library into LLVM, although
> I fell behind over the past week due to complications with libc++ (you can
> follow the thread here:
> http://lists.llvm.org/pipermail/llvm-dev/2018-August/125176.html).
>
> Eric, Ilya and I have been discussing a possible "cheap" solution: a tool
> that the user could feed a compilation database and that would process a set
> of queries (possibly also in YAML format). This would allow a realistic
> benchmark (since you could simply feed it the LLVM codebase, or anything
> else of the size you're aiming for) and would be relatively easy to
> implement. The downside of such an approach is that it would require some
> setup effort. As an alternative, it might be worth feeding the tool a YAML
> symbol index instead of the compilation commands, because the global symbol
> builder is currently not very efficient. I am looking into that issue, too;
> we have a few ideas about where the performance bottlenecks in
> global-symbol-builder might be and how to fix them, and hopefully I will
> make the tool considerably faster soon.
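>
> To illustrate (the field names and paths here are just a sketch, not a
> settled format), such a query file might look like:
>
> ```yaml
> # Hypothetical input for the benchmarking tool: a compilation database
> # plus a list of code-completion queries to replay and time.
> compilation_database: /path/to/build/compile_commands.json
> queries:
>   - kind: code-completion
>     file: llvm/lib/Support/Path.cpp
>     line: 120
>     column: 8
> ```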
>
Note that Sema latency is something we also need to take into consideration,
as it's always part of the code-completion flow, with or without an index.
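
To make that concrete, a latency probe would time the full round trip of a
completion request to clangd over LSP, which necessarily includes the Sema
work. A minimal sketch of the request framing and timing, assuming a
hypothetical send/recv transport to a clangd process (the file URI and
position are placeholders):

```python
import json
import time

def frame_lsp_message(method, params, request_id=1):
    """Frame a JSON-RPC request with the LSP Content-Length header."""
    body = json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": method,
        "params": params,
    })
    return f"Content-Length: {len(body)}\r\n\r\n{body}".encode("utf-8")

def time_request(send, recv, message):
    """Measure end-to-end latency of one request/reply round trip."""
    start = time.perf_counter()
    send(message)
    reply = recv()
    return reply, time.perf_counter() - start

# Hypothetical completion request at line 10, column 4 of a test file.
msg = frame_lsp_message("textDocument/completion", {
    "textDocument": {"uri": "file:///tmp/test.cpp"},
    "position": {"line": 10, "character": 4},
})
```

The measured interval covers parsing/Sema plus any index lookup, so a harness
built this way would capture the latency described above in both the
with-index and without-index configurations.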

>
> In the long term, however, I think the LLVM community is also interested in
> benchmarking other tools that exist under the LLVM umbrella, so opting for
> the benchmark-library approach would be more beneficial. Having an
> infrastructure based on LNT that we could run either on some buildbots or
> locally would be even better. The downside is that it might turn out to be
> really hard to maintain a realistic test-suite: e.g. storing a YAML dump of
> the static index would be awkward, because we wouldn't want 300+ MB files in
> the tree, but hosting it somewhere else and downloading it would also
> introduce additional complexity. On the other hand, generating a realistic
> index programmatically might also be hard.
>
> Having said that, a convenient benchmarking infrastructure that would align
> with LNT and wouldn't require additional effort from users would be amazing,
> and we are certainly interested in collaboration. What benchmark models have
> you considered, and what do you think about the options described above?
>
> Kind regards,
> Kirill Bobyrev
>
> On Tue, Aug 14, 2018 at 7:35 AM Eric Liu via clangd-dev <
> clangd-dev at lists.llvm.org> wrote:
>
>> Hi Alex,
>>
>> Kirill is working on pulling the Google Benchmark library into LLVM and
>> adding benchmarks to clangd. We are also mostly interested in code
>> completion latency and index performance at this point. We don't have a
>> very clear idea yet of how to create realistic benchmarks, e.g. what code
>> to use or what static index corpus to use. I wonder if you have ideas here.
>>
>> Another option that might be worth considering is adding a tool that runs
>> clangd code completion on some existing files in the llvm/clang codebase.
>> It can potentially measure both code completion quality and latency.
>>
>> -Eric
>> On Tue, Aug 14, 2018, 00:53 Alex L via clangd-dev <
>> clangd-dev at lists.llvm.org> wrote:
>>
>>> Hi,
>>>
>>> I'm currently investigating and putting together a plan for open-source
>>> and internal performance tracking infrastructure for Clangd.
>>>
>>> Initially we're interested in one particular metric:
>>> - Code-completion latency
>>>
>>> I would like to put together infrastructure that's based on LNT and that
>>> would identify performance regressions that arise as new commits come in.
>>> From the performance issues I've observed in our libclang stack, the
>>> existing test-suite in LLVM does not reproduce the performance issues
>>> that we see in practice well enough. In my opinion we should create some
>>> sort of editor performance test-suite that would be separate from the
>>> test-suite used for compile-time and performance tracking. WDYT?
>>>
>>> I'm wondering if there are any other folks looking at this at the moment
>>> as well. If so, I would like to figure out a way to collaborate on a
>>> solution that satisfies all of our requirements. Please let me know if
>>> you have ideas about how we should run the tests, what the test-suite
>>> should be, or what your needs are.
>>>
>>> Thanks,
>>> Alex
>>> _______________________________________________
>>> clangd-dev mailing list
>>> clangd-dev at lists.llvm.org
>>> http://lists.llvm.org/cgi-bin/mailman/listinfo/clangd-dev
>>>
>>
>