[Lldb-commits] [PATCH] D123020: increase timeouts if running on valgrind
Jonas Devlieghere via Phabricator via lldb-commits
lldb-commits at lists.llvm.org
Thu Apr 7 13:52:47 PDT 2022
JDevlieghere added a comment.
In D123020#3437246 <https://reviews.llvm.org/D123020#3437246>, @llunak wrote:
> In D123020#3426867 <https://reviews.llvm.org/D123020#3426867>, @JDevlieghere wrote:
>
>> FWIW the official policy is outlined here: https://llvm.org/docs/CodeReview.html
>
> I'm aware of it, but as far as I can tell I was following it. Even re-reading it now, I see nothing that I would read as mandating review for everything.
>
>> The GDB remote packet timeout is customizable:
>> ...
>> So maybe we don't need this patch?
>
> The timeout in GDBRemoteCommunicationClient::QueryNoAckModeSupported() is hardcoded to 6 seconds, so I doubt the setting matters there.
This is incorrect. It's computed as `std::max(GetPacketTimeout(), seconds(6))`, where `GetPacketTimeout()` is the value from the setting, so the 6 seconds are only a lower bound.
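Concretely, the logic there is roughly the following (a sketch, not the verbatim source; `ScopedTimeout` temporarily raises the packet timeout for the duration of the query):

  // If the packet-timeout setting exceeds 6 seconds, the setting wins;
  // seconds(6) only acts as a floor for this particular query.
  using std::chrono::seconds;
  ScopedTimeout timeout(*this, std::max(GetPacketTimeout(), seconds(6)));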
> And even if it did, I'd find it somewhat silly to have to hunt for a please-unbreak-valgrind setting, especially given that LLVM already has support for detecting Valgrind.
The suggestion was to increase the timeout globally (through the setting) when running under Valgrind.
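For reference, that would just be (60 being an arbitrary example value):

  (lldb) settings set plugin.process.gdb-remote.packet-timeout 60

If automatic detection is preferred instead, LLVM already exposes the check. A minimal sketch, assuming a hypothetical helper and a guessed 10x slowdown factor (neither exists in the tree):

  #include "llvm/Support/Valgrind.h"
  #include <chrono>

  // Hypothetical helper, not existing LLDB API: scale a base timeout
  // when the process is running under Valgrind.
  static std::chrono::seconds AdjustForValgrind(std::chrono::seconds base) {
    return llvm::sys::RunningOnValgrind() ? base * 10 // assumed factor
                                          : base;
  }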
> But all of this is too much for something so simple.
Repository:
rG LLVM Github Monorepo
CHANGES SINCE LAST ACTION
https://reviews.llvm.org/D123020/new/
https://reviews.llvm.org/D123020