[clangd-dev] [llvm-dev] Network RPCs in LLVM projects

Ben Craig via clangd-dev clangd-dev at lists.llvm.org
Fri Dec 13 11:18:40 PST 2019


> The most obvious thing is to depend on something like Thrift, grpc, etc, but these aren't trivial dependencies to take on.

I would recommend against using Apache Thrift unless you are able to recruit a larger community for that project.  I am on the project management committee of Apache Thrift, and I do not feel that it is organizationally prepared to handle a client like LLVM.

Note that I am specifically referring to Apache Thrift.  I take no stance on fbthrift, or any of the other Thrift branches or forks.

From: llvm-dev <llvm-dev-bounces at lists.llvm.org> On Behalf Of Chris Bieneman via llvm-dev
Sent: Friday, December 13, 2019 1:12 PM
To: Sam McCall <sammccall at google.com>
Cc: LLVM Dev <llvm-dev at lists.llvm.org>; via clangd-dev <clangd-dev at lists.llvm.org>
Subject: Re: [llvm-dev] Network RPCs in LLVM projects




On Dec 12, 2019, at 5:58 AM, Sam McCall via llvm-dev <llvm-dev at lists.llvm.org> wrote:

Short version: clangd would like to be able to build a client+server that can make RPCs across the internet. An RPC system isn't a trivial dependency and rolling our own from scratch isn't appealing.
Have other projects had a need for this? Any advice on how to approach such dependencies?

--

Longer: clangd (a language server, like an IDE backend) builds an index of the project you're working on in order to answer queries (go to definition, code completion...). This takes *lots* of CPU-time to build, and RAM to serve.
For large codebases with many developers, sharing an index across users <https://llvm.discourse.group/t/sharing-indexes-for-multiple-users/202> is a better approach - you spend the CPU in one place, you spend the RAM in a few places, and an RPC is fast enough even for code completion. We have experience with this approach inside Google.
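
To make the shape concrete, here's a rough sketch of how the sharing could look from clangd's side: queries go through a small index interface, and the remote implementation forwards each query to the shared server. The names below (Index, LocalIndex, RemoteIndex, fuzzyFind) are simplified placeholders, not clangd's actual classes:

// Sketch with hypothetical names: clangd features talk to an Index, and only
// the implementation behind that interface decides where the data lives.
#include <functional>
#include <string>
#include <vector>

struct Symbol {
  std::string Name;
  std::string DefinitionLocation;
};

// Query interface that features (go-to-definition, completion, ...) would use.
class Index {
public:
  virtual ~Index() = default;
  // Invoke Callback for every symbol matching Query.
  virtual void fuzzyFind(const std::string &Query,
                         std::function<void(const Symbol &)> Callback) = 0;
};

// In-process implementation: the whole index lives in this clangd's memory.
class LocalIndex : public Index {
public:
  void fuzzyFind(const std::string &Query,
                 std::function<void(const Symbol &)> Callback) override {
    for (const Symbol &S : Symbols)
      if (S.Name.find(Query) != std::string::npos)
        Callback(S);
  }
  std::vector<Symbol> Symbols;
};

// Remote implementation: each query becomes one RPC to the shared server, so
// the CPU and RAM cost of the index is paid once, in one place.
class RemoteIndex : public Index {
public:
  explicit RemoteIndex(std::string ServerAddress)
      : ServerAddress(std::move(ServerAddress)) {}
  void fuzzyFind(const std::string &Query,
                 std::function<void(const Symbol &)> Callback) override {
    // Placeholder: serialize Query, send it to ServerAddress via whatever RPC
    // system we pick, and invoke Callback for each result in the reply.
  }

private:
  std::string ServerAddress;
};

The point is that features don't care where the answers come from; only the implementation behind the interface changes.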

We'd like to build this index server upstream (just a shell around clangd's current index code) and put the client in clangd. For open-source projects, I imagine the server being publicly accessible over the internet.
This means we care about
 - latency (this is interactive, every 10ms counts)
 - security
 - proxy traversal, probably
 - sensible behavior under load
 - auth is probably nice-to-have

I don't think this is something we want to build from scratch, I hear portable networking is hard :-)

It really isn't that bad. Just as a note, LLDB does have portable socket communication already, so it could be a refactor and reuse exercise rather than building from scratch.
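
For the happy path, plain BSD sockets get you most of the way; the value of LLDB's socket layer is that it already hides the platform differences (Winsock initialization, error handling, and so on). As a POSIX-only illustration of the kind of connect logic that would be refactored and reused (this is not LLDB's actual API, just the shape of it):

// POSIX-only sketch of a TCP connect. Windows needs WSAStartup, different
// headers, and different error handling, which is exactly what an LLDB-style
// socket abstraction hides. Illustrative only, not LLDB's Socket API.
#include <netdb.h>
#include <sys/socket.h>
#include <unistd.h>

// Returns a connected socket fd, or -1 on failure.
int connectTcp(const char *Host, const char *Port) {
  addrinfo Hints{};
  Hints.ai_family = AF_UNSPEC;     // IPv4 or IPv6.
  Hints.ai_socktype = SOCK_STREAM; // TCP.

  addrinfo *Results = nullptr;
  if (getaddrinfo(Host, Port, &Hints, &Results) != 0)
    return -1;

  int Fd = -1;
  for (addrinfo *AI = Results; AI; AI = AI->ai_next) {
    Fd = socket(AI->ai_family, AI->ai_socktype, AI->ai_protocol);
    if (Fd < 0)
      continue;
    if (connect(Fd, AI->ai_addr, AI->ai_addrlen) == 0)
      break; // Connected.
    close(Fd);
    Fd = -1;
  }
  freeaddrinfo(Results);
  return Fd;
}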


The most obvious thing is to depend on something like Thrift, grpc, etc, but these aren't trivial dependencies to take on. They could probably be structured as an optional CMake dependency, which we'd want to ask distributors to enable.

This is possible, but adding large and non-standard external dependencies has significant drawbacks for distribution.
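
To give a sense of what the gRPC option would buy against the latency and security requirements above, here is a minimal client-side sketch, assuming the optional dependency is enabled. The server address, the 50ms deadline, and the IndexService stub mentioned in the comments are all made-up placeholders:

// Minimal sketch of client-side gRPC setup. "index.example.org:443" and the
// 50ms deadline are placeholders, not real endpoints or tuned values.
#include <chrono>
#include <memory>
#include <grpcpp/grpcpp.h>

int main() {
  // TLS channel credentials cover the security requirement; the default
  // options use the system root certificates.
  auto Creds = grpc::SslCredentials(grpc::SslCredentialsOptions());
  std::shared_ptr<grpc::Channel> Channel =
      grpc::CreateChannel("index.example.org:443", Creds);

  // Per-call context: a hard deadline keeps interactive features (e.g. code
  // completion) responsive when the server or the network is slow.
  grpc::ClientContext Ctx;
  Ctx.set_deadline(std::chrono::system_clock::now() +
                   std::chrono::milliseconds(50));

  // The actual query would go through a stub generated from a .proto file,
  // e.g. IndexService::NewStub(Channel)->FuzzyFind(&Ctx, Request, &Reply),
  // where IndexService is a hypothetical service definition.
  return 0;
}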



Have other projects had anything like these requirements? Any solutions, or desire to use such infrastructure? I saw some RPC layer in ORC, but it seems to be aimed mostly at abstract/FD-based IPC.

The ORC RPC layer in-tree runs over sockets, but I've also implemented it to run over XPC (a low-latency Darwin IPC mechanism). It is actually a really useful abstraction, even for true remote procedure calls.
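
The useful property is that the RPC machinery is written against a small byte-transport interface, so the same call layer can run over a socket, a pipe, or XPC. A simplified sketch of that shape (hypothetical names, not ORC's actual classes):

// Simplified transport-abstraction sketch: the RPC and serialization layer
// only needs "send bytes"/"receive bytes", so different transports can sit
// underneath the same call layer unchanged.
#include <cstddef>
#include <cstdint>
#include <unistd.h>

// Minimal byte transport the RPC layer is written against.
class ByteChannel {
public:
  virtual ~ByteChannel() = default;
  virtual bool send(const uint8_t *Data, size_t Size) = 0;
  virtual bool receive(uint8_t *Data, size_t Size) = 0;
};

// One possible transport: a connected socket or pipe file descriptor.
class FDChannel : public ByteChannel {
public:
  explicit FDChannel(int FD) : FD(FD) {}
  bool send(const uint8_t *Data, size_t Size) override {
    while (Size > 0) {
      ssize_t N = ::write(FD, Data, Size);
      if (N <= 0)
        return false;
      Data += N;
      Size -= static_cast<size_t>(N);
    }
    return true;
  }
  bool receive(uint8_t *Data, size_t Size) override {
    while (Size > 0) {
      ssize_t N = ::read(FD, Data, Size);
      if (N <= 0)
        return false;
      Data += N;
      Size -= static_cast<size_t>(N);
    }
    return true;
  }

private:
  int FD;
};

// An XPC-backed channel would implement the same interface, and nothing in
// the call layer above would need to change.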

-Chris



_______________________________________________
LLVM Developers mailing list
llvm-dev at lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev

