[clangd-dev] Transport layer abstraction

Sam McCall via clangd-dev clangd-dev at lists.llvm.org
Mon Oct 15 08:15:51 PDT 2018


Ouch, sorry to hear that! Hope it clears up soon!

I took a look at your patch and I don't think there's any divergence here.
D53286 only covers a subset of the work, and the interface changes look
much the same in both patches.

The bit that's left out is moving the remaining JSONRPCDispatcher logic
into ClangdLSPServer or similar - your patch is more complete in this
respect. But my hope is to try to land this in smaller pieces, so I might
try to get reviews for D53286, and then come back for the rest.
By the time you're back, we should be a lot closer to what you need than we
are today. Sound OK?

Get well soon!

On Mon, Oct 15, 2018 at 4:47 PM Jan Korous <jkorous at apple.com> wrote:

> Hi Sam,
>
> My work is unfortunately stuck at the moment - I have been enjoying a
> sinus infection for almost two weeks now, expecting to go back to work in
> another week.
>
> I put together a crude implementation (lots of TODOs everywhere and
> definitely not finished) using your old proposal. If that’s of any interest,
> you can find it here:
> https://reviews.llvm.org/D53290
>
> Since this is very important for us, I am definitely going to work on the
> transport layer when I am back at work. I will take a proper look and try
> to “merge” our approaches.
>
> Cheers,
>
> Jan
>
> On Oct 15, 2018, at 1:54 PM, Sam McCall <sammccall at google.com> wrote:
>
> Hi Jan,
>
> Wondering how your work here is going :-)
> I revived the Transport idea and put together
> https://reviews.llvm.org/D53286 where everything seems to work.
> (There's some cruft left in JSONRPCDispatcher that can be removed
> afterwards).
>
> I think this is worthwhile independent of XPC (e.g. for testing
> ClangdLSPServer) but of course it has to be compatible with what you need.
> Let me know if you see any blockers here or if something has changed!
>
> Cheers, Sam
>
> On Mon, Sep 17, 2018 at 3:46 PM Jan Korous <jkorous at apple.com> wrote:
>
>>
>> On 13 Sep 2018, at 15:15, Sam McCall <sammccall at google.com> wrote:
>>
>>
>>
>> On Thu, Sep 13, 2018 at 3:21 PM Jan Korous <jkorous at apple.com> wrote:
>>
>>> Hi Sam,
>>>
>>> Thanks for the response.
>>>
>>> I think I understand your intentions better now. The Transport
>>> abstraction looks good enough to use, so let me just explain what I think
>>> its imperfections are (no big deals, though). If these are OK with you,
>>> I’d go ahead and work on it. Another thing is that you guys already have
>>> another transport layer implementation, so if this abstraction fits 2 out
>>> of 3 implementations nicely, with XPC being a little less aligned, I’d
>>> still consider it pragmatic to use it.
>>>
>> So Transport isn't implemented or even really fully fleshed out. We
>> should design it (or something else) to bridge the JSON/XPC divide well,
>> though perfect is always hard.
>>
>> Our "other transport layer" is really a different protocol; it doesn't
>> use ClangdLSPServer but wraps ClangdServer instead. The protocol happens to
>> follow LSP fairly closely (at least in spirit), but doesn't try to reuse
>> any of our LSP code.
>>
>> Not sure whether this was the right call, but they're not likely to
>> switch to any Transport abstraction we add.
>>
>>
>> Got it.
>>
>> This is certainly an option for Xcode too, but then the most obvious path
>> is to write a wrapper around ClangdServer and ignore
>> ClangdLSPServer/ClangdMain.
>>
>>
>> True. We should be good with just a custom transport layer
>> implementation, though.
>>
>>
>> Transport::MessageHandler
>>>
>>> I just wonder if this isn't actually the “reverse” of the desired
>>> interface. It seems to me that MessageHandler abstracts clangd
>>> implementation details when used in a specific Transport implementation.
>>> For example, it would help if we ever decided to swap clangd for another
>>> LSP implementation or clangd-ng. But it doesn’t abstract the transport
>>> layer itself. What benefit does it grant over calling, for example,
>>> ClangdLSPServer::notify() directly?
>>>
>> Yeah, this is mostly just ad-hoc decoupling, not real abstraction. It
>> buys us some things:
>>  - transports can be tested without involving the behavior of all of
>> clangd
>>
>>
>> Oh, I totally missed that. I am sold now!
>>
>>  - ClangdLSPServer's notify() can be private (using interface
>> inheritance, or a nested class, or something)
>>  - transports don't have to depend on ClangdLSPServer, if we decide to
>> start caring more about layering (interesting for out-of-tree transports!)
>>
>>
>> Also a fair point.
>>
>>
>>
>>> ClangdLSPServer::run()
>>>
>>> IIUIC, the purpose of this method is to provide an abstraction over
>>> different Transport implementations with common strategies for the
>>> program return value (ShutdownRequestReceived) and error handling (the
>>> return value of Transport.loop()).
>>
>> Well, the purpose of the *current* ClangdLSPServer::run() is to set up
>> the dispatcher correctly, and to put the error-handling code somewhere.
>> Maybe it's totally redundant and main should just call Transport.loop().
>>
>> (FWIW I think the error handling on exit isn't an important part of the
>> protocol and we could take some shortcuts, especially in an XPC context).
>>
>>
>>> Yet since we can’t do this for XPC (as xpc_main() doesn’t return), we’d
>>> have to duplicate the code (~10 LOC, no big deal), but it would mean that
>>> it wouldn’t always be possible to just blindly swap implementations of
>>> this interface.
>>> In other words, it’s a violation of the Liskov substitution principle,
>>> which in itself isn’t necessarily a big deal - I am just wondering if it
>>> makes sense to have such an abstraction.
>>>
>> So I'm not sure how big a deal this is. It would indeed be cleaner not to
>> have a loop() that sometimes returns and sometimes doesn't. Pragmatically,
>> I think we can arrange to have it call exit() instead.
>>
>> How do you want clangd shutdown to work in an XPC context? Reading the
>> XPC docs suggests services just run until they get SIGKILL. In which case a
>> Transport::loop() that never returns seems completely appropriate to me:
>> the transport dictates that the stream of events never ends.
>>
>>
>> Check whether the client request is the exit notification in the XPC
>> message-handling C block and conditionally call exit(). We would just need
>> access to ClangdLSPServer::ShutdownRequestReceived from there.
>>
>>
>>> This is basically what I meant: it’s difficult to have a common
>>> abstraction over the “input direction” of the transport layer (since
>>> Transport::MessageHandler looks more like an abstraction over
>>> ClangdLSPServer’s input handling), and because of that I’d consider
>>> different dispatchers instead of Transport::loop().
>>>
>> I'm not sure what distinction is being drawn here; I think we'll end up
>> with some function like loop() that returns with JSON and not with XPC.
>> Where is my logic confused?
>>  - JSONRPCDispatcher does some JSON stuff and some generic "message
>> handling" stuff
>>  - we want the "message handling" stuff to be common
>>  - the JSON stuff needs to be split from the message handling stuff, so
>> it can be replaced by XPC (in the sketch I called this part "Transport")
>>  - we want a common abstraction (e.g. interface) over the XPC and
>> JSON-specific code
>>  - XPC needs somewhere to call xpc_main(), which doesn't return
>>  - JSON needs every function to return, because it supports clean exit
>>  - therefore some XPC function doesn't return, when the corresponding
>> JSON one does
>> There are ways around this, I guess (call xpc_main on a thread? have the
>> JSON code clean up and call exit()?), but they seem worse than just
>> having a transport that never stops looping…
>>
>>
>> My point of view was that, rather than having a part of the abstraction
>> that works for only 1 of the 2 implementations it is supposed to abstract,
>> I would consider removing it from the interface (less code, less
>> complexity) and keeping it as part of the JSON-RPC-specific code. But I see
>> the benefit of not duplicating the dispatcher registration callbacks, so I
>> am happy to go with Transport::loop().
>>
>> Cheers,
>>
>> Jan
>>
>>
>>
>>>
>>> WDYT?
>>>
>>> Cheers,
>>>
>>> Jan
>>>
>>>
>>>
>>> P.S. For the record (in case it matters in the future), the XPC
>>> interface is callback-based:
>>>
>>> void XPCEntrypoint(xpc_connection_t connection) {
>>>   xpc_connection_set_event_handler(connection,
>>>     ^(xpc_object_t message) {/* C block handling the message */ }
>>>   );
>>>   xpc_connection_resume(connection);
>>> }
>>>
>>> int main() {
>>>   xpc_main(XPCEntrypoint);
>>> }
>>>
>>>
>>>
>>> > On Sep 12, 2018, at 11:17 PM, Sam McCall <sammccall at google.com> wrote:
>>> >
>>> > On Wed, Sep 12, 2018 at 6:20 PM Jan Korous via clangd-dev <
>>> clangd-dev at lists.llvm.org> wrote:
>>> > Hi all,
>>> >
>>> > We had an internal discussion about XPC use with clangd and the result
>>> is that an XPC-based transport layer in clangd is a hard requirement. This
>>> effectively means that the adapter approach is not feasible, and I am
>>> looking into the transport layer abstraction Sam proposed earlier (
>>> https://reviews.llvm.org/D49389). Sorry for the trouble!
>>> > No problem! (And sorry if I've been a bit unresponsive!)
>>> >
>>> > There is a difficulty with designing an abstraction over the transport
>>> layer due to the XPC API requiring client code to hand over control. It's
>>> necessary for an XPC server to call xpc_main() with a message-handling
>>> callback:
>>> > xpc_main(connection_handler);
>>> > But XPC runs its own message loop inside xpc_main(), which is
>>> supposed to never return (in case of failure exit() is called), which
>>> makes it difficult to have a common interface for JSON-RPC and XPC.
>>> > Unless I am mistaken, that means that the message loop
>>> (Transport::loop(), ClangdLSPServer::run()) can't be part of generic code.
>>> >
>>> > I am curious to hear other people's thoughts on this.
>>> > That makes sense to me, but I don't think a common interface is
>>> impossible.
>>> > I noticed this in your patch, and in D49389 the idea (which I didn't
>>> spell out) was that the XPC transport's loop() would call xpc_main and
>>> never return.
>>> > Transport::loop() is specific to a transport, it doesn't need to be
>>> part of generic code. ClangdLSPServer::run() ends up calling
>>> Transport::loop(), so it could be common code.
>>> >
>>> > It'd look something like:
>>> >
>>> > static Transport::MessageHandler *IncomingXPCMessages; // XPC -> clangd
>>> > static xpc_connection_t *XPCConnection; // clangd -> XPC
>>> >
>>> > XPCTransport::loop(MessageHandler &M) {
>>> >   IncomingXPCMessages = &M;
>>> >   xpc_main(&XPCEntrypoint);
>>> > }
>>> >
>>> > static void XPCEntrypoint(xpc_connection_t Conn) {
>>> >   // I don't remember how an XPC server looks
>>> >   // I guess you're reading editor->clangd messages in a loop?
>>> >   for (;;) {
>>> >     // let's say you parsed a notification
>>> >     std::string name = ...;
>>> >     json::Value args = xpcToJSON(...);
>>> >     IncomingXPCMessages->notify(name, args); // send the notification
>>> to clangd
>>> >   }
>>> > }
>>> >
>>> > XPCTransport::notify(StringRef name, json::Value args) {
>>> >   xpc_object_t XPCNotification = encodeNotificationForXPCSomehow(name,
>>> jsonToXPC(args))
>>> >   xpc_connection_send_message(*XPCConnection, XPCNotification);
>>> > }
>>> >
>>> > WDYT?
>>> >
>>> > I tried to come up with some reasonable abstraction but I am probably
>>> already anchored by my previous work (https://reviews.llvm.org/D48559)
>>> - all my design ideas tend to go in the direction of abstraction over
>>> JsonOutput and custom dispatchers (StdIoDispatcher/XPCDispatcher).
>>> >
>>> > Cheers,
>>> >
>>> > Jan
>>> > _______________________________________________
>>> > clangd-dev mailing list
>>> > clangd-dev at lists.llvm.org
>>> > http://lists.llvm.org/cgi-bin/mailman/listinfo/clangd-dev
>>>
>>>
>>
>