[Lldb-commits] [lldb] [lldb] Support non-blocking reads in JSONRPCTransport (PR #144610)
John Harrison via lldb-commits
lldb-commits at lists.llvm.org
Tue Jun 17 15:55:18 PDT 2025
================
@@ -67,19 +67,22 @@ ReadFull(IOObject &descriptor, size_t length,
   return data.substr(0, length);
 }
 
-static Expected<std::string>
-ReadUntil(IOObject &descriptor, StringRef delimiter,
-          std::optional<std::chrono::microseconds> timeout = std::nullopt) {
-  std::string buffer;
-  buffer.reserve(delimiter.size() + 1);
-  while (!llvm::StringRef(buffer).ends_with(delimiter)) {
+Expected<std::string>
+JSONTransport::ReadUntil(IOObject &descriptor, StringRef delimiter,
+                         std::optional<std::chrono::microseconds> timeout) {
+  if (!timeout || *timeout != std::chrono::microseconds::zero()) {
+    m_buffer.clear();
+    m_buffer.reserve(delimiter.size() + 1);
+  }
+
+  while (!llvm::StringRef(m_buffer).ends_with(delimiter)) {
     Expected<std::string> next =
-        ReadFull(descriptor, buffer.empty() ? delimiter.size() : 1, timeout);
+        ReadFull(descriptor, m_buffer.empty() ? delimiter.size() : 1, timeout);
----------------
ashgti wrote:
If we have a buffer, we could adjust our approach to read in chunks larger than one byte when reading up to a delimiter: read chunks of, say, 1024 bytes and split the buffer on the delimiter until we run out of data, then issue a new read for the next chunk.
I don't know whether this approach would have issues on Windows, though, so someone with more platform-specific knowledge may know how it handles blocking reads if `_read` is called when no data is available.
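A minimal standalone sketch of that chunked approach (the `readSome` callback and the 1024-byte chunk size are illustrative stand-ins for the PR's `ReadFull`/`IOObject` machinery, not its actual API):

```cpp
#include <cstddef>
#include <functional>
#include <optional>
#include <string>
#include <string_view>

// Hypothetical stand-in for a ReadFull-style call: reads up to `max` bytes,
// returning std::nullopt on EOF or error.
using ReadFn = std::function<std::optional<std::string>(std::size_t max)>;

// Chunked variant of ReadUntil: read up to kChunkSize bytes at a time, scan
// the accumulated buffer for the delimiter, and keep any bytes that arrive
// after the delimiter in `buffer` for the next call.
std::optional<std::string> readUntilChunked(std::string &buffer,
                                            std::string_view delimiter,
                                            const ReadFn &readSome) {
  constexpr std::size_t kChunkSize = 1024; // arbitrary chunk size
  while (true) {
    // The delimiter may already be in data buffered by a previous oversized
    // read, so check before reading again.
    if (std::size_t pos = buffer.find(delimiter); pos != std::string::npos) {
      std::string message = buffer.substr(0, pos);
      buffer.erase(0, pos + delimiter.size()); // retain the tail for next time
      return message;
    }
    // Not found yet: pull in the next chunk. For simplicity this rescans the
    // whole buffer each iteration; a real implementation could resume the
    // search just before the previous end.
    std::optional<std::string> chunk = readSome(kChunkSize);
    if (!chunk || chunk->empty())
      return std::nullopt; // EOF or error before seeing the delimiter
    buffer += *chunk;
  }
}
```

The piece that makes this work across calls is that a large read can pull in bytes past the delimiter, so the leftover tail has to survive until the next message; that is exactly what moving the local `buffer` into the member `m_buffer` enables.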
https://github.com/llvm/llvm-project/pull/144610