<table border="1" cellspacing="0" cellpadding="8">
<tr>
<th>Issue</th>
<td>
<a href="https://github.com/llvm/llvm-project/issues/66827">66827</a>
</td>
</tr>
<tr>
<th>Summary</th>
<td>
Concurrent fetches for debuginfod client
</td>
</tr>
<tr>
<th>Labels</th>
<td>
new issue
</td>
</tr>
<tr>
<th>Assignees</th>
<td>
</td>
</tr>
<tr>
<th>Reporter</th>
<td>
mysterymath
</td>
</tr>
</table>
<pre>
One debuginfod server may be slower than others, or the debug info being fetched may be small relative to the amount of time it takes to set up an HTTP request. Accordingly, it can often help performance to perform a small number of HTTP requests in parallel; this typically causes a slight constant overhead for long-running transfers, but can dramatically reduce the overhead of a large number of small transfers (e.g., llvm-cov).
We should support concurrent fetches when appropriate, probably via an extension of the API. Part of this work should be to determine a plausible default for the parallelism cap.
</pre>
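<p>
As a rough illustration of the shape such an API extension might take, here is a minimal sketch of a batch fetch with a bounded number of requests in flight. All names here (<code>fetchAll</code>, <code>FetchOne</code>, <code>MaxConcurrency</code>) are hypothetical and not part of the existing debuginfod client API; the real implementation would issue HTTP requests where this sketch takes a caller-supplied fetch callback.
</p>

```cpp
#include <algorithm>
#include <atomic>
#include <functional>
#include <string>
#include <thread>
#include <vector>

// Hypothetical sketch: fetch a batch of debuginfo artifacts with at most
// MaxConcurrency requests in flight. In a real client, FetchOne would
// perform the HTTP request for one build ID / artifact key.
std::vector<std::string>
fetchAll(const std::vector<std::string> &Keys,
         const std::function<std::string(const std::string &)> &FetchOne,
         size_t MaxConcurrency) {
  std::vector<std::string> Results(Keys.size());
  std::atomic<size_t> Next{0};
  // The parallelism cap: never spawn more workers than keys or than allowed.
  size_t NumWorkers = std::min(MaxConcurrency, Keys.size());
  std::vector<std::thread> Workers;
  for (size_t I = 0; I < NumWorkers; ++I)
    Workers.emplace_back([&] {
      // Each worker claims the next unfetched key until none remain,
      // so at most NumWorkers fetches are ever in flight.
      for (size_t J = Next.fetch_add(1); J < Keys.size();
           J = Next.fetch_add(1))
        Results[J] = FetchOne(Keys[J]);
    });
  for (auto &W : Workers)
    W.join();
  return Results;
}
```

<p>
A worker-pool design like this keeps the constant overhead for long-running transfers small (idle workers cost little) while letting many small transfers overlap their connection-setup latency, which is the win the issue describes.
</p>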