[Lldb-commits] [PATCH] D100500: Add setting to artificially delay communication with the debug server, in order to simulate slow network conditions
Jason Molenda via Phabricator via lldb-commits
lldb-commits at lists.llvm.org
Wed Apr 14 20:12:54 PDT 2021
Hi Augusto, this is an interesting idea. This would simulate a slow serial connection well, but over a modern USB or Wifi or Bluetooth connection, we find the packet size is nearly irrelevant to performance -- but the number of packets is absolutely critical. If we need to read 100 kbytes, sending ten 10kb read packets is *much* slower than a single 100kb packet. For fun, I built a native debugserver that claims it supported a maximum 1k packet size, and then rebuilt it claiming it supports a maximum 512k packet size. It turns out lldb won't actually use more than a 128kbyte packet size (v. ProcessGDBRemote::GetMaxMemorySize) but anyway,
cat a.c
#include <stdlib.h>
int main() {
  const int bufsize = 768 * 1024;
  char *buf = (char *)malloc(bufsize);
  return buf[0];
}
and ran to the `return` and read the buffer,
(lldb) mem rea -b -c `bufsize` --force -o /tmp/f buf
786432 bytes written to '/tmp/f'
When debugserver said it only supports 1024-byte packets,
1618455297.760255098 < 45> send packet: $qSupported:xmlRegisters=i386,arm,mips,arc#74
1618455297.760361910 < 46> read packet: $qXfer:features:read+;PacketSize=400;qEcho+#00
it took 2092 memory read packets and 0.537 seconds to complete. When I advertised 512-kbyte packets (which lldb capped at 128 kbytes), it took 24 packets and 0.340 seconds to complete.
The same amount of data was transferred (give or take the packet header/footer, which is negligible), but the small-packet run was much slower. This was on a native system; as soon as USB, Wi-Fi, or Bluetooth is in the middle, the effect is amplified.
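To make the arithmetic concrete, here's a back-of-the-envelope sketch of my own (not part of the patch): count one reply per max-packet-size chunk of the buffer. It ignores the hex encoding of the read replies and all the other protocol traffic, which is why the real packet counts above (2092 and 24) are higher than this lower bound.

#include <stdio.h>

/* Lower bound on the number of memory-read packets needed to pull the
 * 768-kbyte buffer from a.c, for two advertised maximum packet sizes.
 * Ignores reply hex encoding and every other packet lldb sends, so the
 * real counts (2092 and 24 above) are higher. */
int main(void) {
  const long bufsize = 768 * 1024;                 /* same buffer as a.c */
  const long max_packet[] = { 1024, 128 * 1024 };  /* 1 kbyte vs lldb's 128-kbyte cap */
  for (int i = 0; i < 2; i++) {
    long n = (bufsize + max_packet[i] - 1) / max_packet[i];  /* round up */
    printf("max packet %7ld bytes -> at least %4ld read packets\n",
           max_packet[i], n);
  }
  return 0;
}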
If we were debugging over a serial cable, slowing the connection down per character would be exactly right. But debugging to an Apple Watch over Bluetooth (at least with the original watch), for instance, could be as slow as ~40 packets per second. It basically didn't matter how big those packets were; that's all we could get out of the link. (I haven't benchmarked this in YEARS, and I'm sure it's faster now, but the general character of the connection is still true.)
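To put a toy number on that (my own illustration, with made-up figures in the ballpark of the ~40 packets/second above): on a latency-bound link the transfer time is roughly the packet count divided by the packet rate, so payload size barely moves the needle.

#include <stdio.h>

/* Toy model of a latency-bound link (illustrative numbers, not
 * measurements): total time ~= packets / packets-per-second, so moving
 * 100 kbytes in 1-kbyte packets is ~100x slower than moving it in one
 * large packet. */
int main(void) {
  const double packets_per_second = 40.0;        /* ballpark original-Watch Bluetooth */
  const double total_bytes = 100.0 * 1024;
  const double packet_bytes[] = { 1024, 100 * 1024 };
  for (int i = 0; i < 2; i++) {
    double packets = total_bytes / packet_bytes[i];
    if (packets < 1.0) packets = 1.0;
    printf("%6.0f-byte packets: %4.0f packets -> ~%.2f seconds\n",
           packet_bytes[i], packets, packets / packets_per_second);
  }
  return 0;
}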
It's an interesting idea to embed a testing facility like this in lldb, but this isn't (IMO) the behavior we want to model.
FWIW, when I've needed to simulate a slow connection over the internet in the past, I've used sloxy: https://github.com/jakob/sloxy . Usually when I'm doing performance work, I enable the gdb-remote packet logging, count the packets, and look at the timestamps. I've never reduced the number of packets needed to accomplish a task and later found that to be a mistake. :) Testing with a real device is always best if you need to be really sure (or want to measure the actual perf improvement of a change), but I don't see a *ton* of benefit in my work to introducing an artificially slow connection like this. (The bug report I used sloxy on was about a tricky race condition in lldb that I've put off figuring out how to fix, and it wasn't performance-related. ;)
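For reference, that packet log is the one the timestamped lines above came from; from memory the invocation is roughly the following (double-check the -T timestamp and -f output-file options against `help log enable`):
(lldb) log enable -T -f /tmp/packets.log gdb-remote packets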
Repository:
rG LLVM Github Monorepo
CHANGES SINCE LAST ACTION
https://reviews.llvm.org/D100500/new/
https://reviews.llvm.org/D100500