[Lldb-commits] [PATCH] D62931: [lldb-server] Add setting to force 'g' packet use

Greg Clayton via Phabricator via lldb-commits lldb-commits at lists.llvm.org
Thu Jun 6 07:49:10 PDT 2019


clayborg added a comment.

In D62931#1531965 <https://reviews.llvm.org/D62931#1531965>, @labath wrote:

> Personally, I'm wondering whether we even need a setting for this. Running the speed test command locally (i.e. the lowest latency we can possibly get), I get results like:
>
>   (lldb) process plugin packet speed-test --count 100000 --max-receive 2048
>   Testing sending 100000 packets of various sizes:
>   ...
>   qSpeedTest(send=      4, recv=      0) in 0.788656771 s for 126797.88 packets/s (0.007886 ms per packet) with standard deviation of 0.001604 ms
>   qSpeedTest(send=      4, recv=      4) in 0.770005465 s for 129869.21 packets/s (0.007700 ms per packet) with standard deviation of 0.002120 ms
>   qSpeedTest(send=      4, recv=      8) in 0.895460367 s for 111674.40 packets/s (0.008954 ms per packet) with standard deviation of 0.002048 ms
>   qSpeedTest(send=      4, recv=     16) in 0.910367966 s for 109845.70 packets/s (0.009103 ms per packet) with standard deviation of 0.001886 ms
>   qSpeedTest(send=      4, recv=     32) in 0.889442623 s for 112429.96 packets/s (0.008894 ms per packet) with standard deviation of 0.001705 ms
>   qSpeedTest(send=      4, recv=     64) in 0.945124865 s for 105806.12 packets/s (0.009451 ms per packet) with standard deviation of 0.002349 ms
>   qSpeedTest(send=      4, recv=    128) in 0.759995580 s for 131579.72 packets/s (0.007599 ms per packet) with standard deviation of 0.002971 ms
>   qSpeedTest(send=      4, recv=    256) in 0.847535312 s for 117989.19 packets/s (0.008475 ms per packet) with standard deviation of 0.002629 ms
>   qSpeedTest(send=      4, recv=    512) in 1.022377729 s for  97811.21 packets/s (0.010223 ms per packet) with standard deviation of 0.003248 ms
>   qSpeedTest(send=      4, recv=   1024) in 1.436389208 s for  69619.02 packets/s (0.014363 ms per packet) with standard deviation of 0.000688 ms
>   qSpeedTest(send=      4, recv=   2048) in 1.910557270 s for  52340.75 packets/s (0.019105 ms per packet) with standard deviation of 0.001194 ms
>   ...
>
>
> Now, if we assume that the `p` response is about 16 bytes long, and the `g` response is 2048, we find that the `g` packet is worth approximately two `p` packets. And this is a local link, where it pretty much doesn't matter what we do, as we're pumping 50k packets per second even in the slowest case. On links with non-negligible latency, the difference between the two packets should be even smaller (though it might be nice to verify that).
>
> Combining this with the fact that we normally expedite all general purpose registers (and so there won't be *any* packets if one is only reading those), I would say that enabling this unconditionally is a pretty safe thing to do.
>
> Greg, Jason, what do you think?
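The cost comparison above can be sketched with the per-packet times from the qSpeedTest table (a rough model for illustration only; recv=16 stands in for a `p` response and recv=2048 for a `g` response, and the latency model is an assumption, not a measurement):

```python
# Per-packet round-trip times (ms) from the local qSpeedTest run above.
P_PACKET_MS = 0.009103   # qSpeedTest(send=4, recv=16)   ~ 'p' response
G_PACKET_MS = 0.019105   # qSpeedTest(send=4, recv=2048) ~ 'g' response

# On a zero-latency local link, one 'g' round-trip costs about as much
# as this many 'p' round-trips:
ratio = G_PACKET_MS / P_PACKET_MS
print(f"one 'g' packet costs about {ratio:.1f} 'p' packets locally")

def total_ms(num_regs: int, latency_ms: float) -> tuple[float, float]:
    """Rough cost of reading num_regs registers individually ('p') versus
    one 'g' packet, each round-trip paying latency_ms of link latency."""
    p_cost = num_regs * (P_PACKET_MS + latency_ms)
    g_cost = G_PACKET_MS + latency_ms
    return p_cost, g_cost

# Example: with 1 ms of link latency, reading even 3 registers via 'p'
# is already slower than a single 'g' packet.
p_cost, g_cost = total_ms(3, latency_ms=1.0)
print(f"3 x 'p': {p_cost:.3f} ms   1 x 'g': {g_cost:.3f} ms")
```

This is just the break-even arithmetic: locally a `g` packet is worth about two `p` packets, and any real link latency shifts the balance further toward `g`, since the latency is paid once instead of once per register.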


There are so many different GDB remote servers that it is hard to say what will work for most people. It might be worth testing this with the setting set to true and false while single stepping a complex program on macOS, Linux, and Android, and verifying that stepping speed doesn't regress at all. Android tends to have hundreds of threads, which usually makes it the slowest of all when stepping. If we don't see regressions, then I am fine with no setting. Every scenario I know of (macOS, watchOS, Linux, Android) benefits from fewer packets with larger content, so I am fine just enabling it. It might also be worth checking what GDB does by default; if they ran into issues with 'g' packets, their comments might tell us why they avoid them.

So I am OK with no setting as long as performance doesn't regress on complex steps and as long as GDB uses 'g' packets by default.



================
Comment at: lldb/source/Plugins/Process/gdb-remote/ProcessGDBRemote.cpp:124
+     "The file that provides the description for remote target registers."},
+    {"try-to-use-g-packet", OptionValue::eTypeBoolean, true, 0, nullptr, {},
+     "Specify if the server should try to use 'g' packets."}};
----------------
I would change the setting to just be "use-g-packet".


Repository:
  rLLDB LLDB

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D62931/new/

https://reviews.llvm.org/D62931
