[Lldb-commits] [PATCH] Refactor Socket into a first class Host-object

Todd Fiala tfiala at google.com
Mon Jul 28 12:34:42 PDT 2014


Hey Zachary,

If you look at lldb-gdbserver.cpp in the patch I sent (it includes updates bringing it up to date with top of tree, which changed this file on Saturday), there is this section of code:

```C++
                // FIXME use new generic named pipe support.
                int fd = ::open(named_pipe_path, O_WRONLY);
                if (fd > -1)
                {
                    Socket &socket = static_cast<Socket&> (s_listen_connection_up->GetReadObject ());
                    const uint16_t bound_port = socket.GetBoundPort (10);
```
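For context, the lines that follow that snippet report the chosen port back to whoever launched llgs by writing it into that named pipe. The shape is roughly the following (a hedged sketch, not the patch text; report_port_to_named_pipe is just an illustrative name):

```C++
#include <cstdint>
#include <cstdio>
#include <unistd.h>

// Hand the dynamically assigned port back to the launching process through
// the named pipe whose write end we just opened as pipe_fd.
static void
report_port_to_named_pipe (int pipe_fd, uint16_t bound_port)
{
    char port_str[16];
    const int len = ::snprintf (port_str, sizeof port_str, "%u", bound_port);
    if (len > 0)
        ::write (pipe_fd, port_str, len + 1);   // include the trailing NUL
    ::close (pipe_fd);
}
```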

The socket in that snippet is a listener socket, and the operation being performed is "ask the listener what port it is listening on."  We told the listening socket to bind to a port chosen by the OS (i.e. one that isn't already in use), and this is where we find out which port was actually selected.  I can't entirely tell what the code under Socket is trying to do here, but what it is supposed to do is ask the listening socket, which is *not yet* connected to a client, which port it is using.
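For reference, the underlying mechanism I have in mind is the standard getsockname() pattern on a listener bound to port 0. A minimal sketch with plain BSD sockets (not the lldb Socket API, which presumably wraps something like this):

```C++
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <cstdio>
#include <cstring>
#include <unistd.h>

int main ()
{
    int listen_fd = ::socket (AF_INET, SOCK_STREAM, 0);
    if (listen_fd < 0)
        return 1;

    sockaddr_in addr;
    ::memset (&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl (INADDR_ANY);
    addr.sin_port = htons (0);          // port 0: let the OS pick an unused port

    if (::bind (listen_fd, (sockaddr *) &addr, sizeof addr) != 0 ||
        ::listen (listen_fd, 5) != 0)
    {
        ::close (listen_fd);
        return 1;
    }

    // Ask the (not yet connected) listener which port it was actually given.
    sockaddr_in bound_addr;
    socklen_t addr_len = sizeof bound_addr;
    if (::getsockname (listen_fd, (sockaddr *) &bound_addr, &addr_len) == 0)
        ::printf ("listening on port %u\n", (unsigned) ntohs (bound_addr.sin_port));

    ::close (listen_fd);
    return 0;
}
```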

This is probably what is breaking the Mac side too, since I think the Mac uses this paradigm even more than we do on Linux at the moment, given the way it launches debugserver for all local debugging.  (You'll notice that only the Linux llgs-related tests failed, but if we were using llgs everywhere, everything would have failed, much like on the Mac.)

With the new code, how would you expect me to ask a listener socket what port it is bound to?  Most likely we need to adjust the code in these spots to do whatever that is.

http://reviews.llvm.org/D4641
