[PATCH] D27159: [ThreadPool] Simplify the interface

Davide Italiano via llvm-commits <llvm-commits at lists.llvm.org>
Mon Nov 28 22:42:25 PST 2016


On Mon, Nov 28, 2016 at 10:40 PM, Davide Italiano <dccitaliano at gmail.com> wrote:
> On Mon, Nov 28, 2016 at 10:29 PM, Mehdi Amini <mehdi.amini at apple.com> wrote:
>>
>> On Nov 28, 2016, at 6:04 PM, Michael Spencer <bigcheesegs at gmail.com> wrote:
>>
>> On Tue, Nov 29, 2016 at 10:18 AM, Mehdi AMINI via Phabricator
>> <reviews at reviews.llvm.org> wrote:
>>>
>>> mehdi_amini added a comment.
>>>
>>> What is the motivation? This is removing a feature. Even though it is not
>>> used at this time, it is tested and could find a use in the future.
>>>
>>>
>>> https://reviews.llvm.org/D27159
>>>
>>>
>>>
>>
>> shared_future has a large performance overhead compared to just handing off
>> a function pointer to another thread to run.
>>
>>
>> That’s a good point, but “large overhead” is quite subjective. Note also
>> that the LLVM ThreadPool has a global lock, so it is not intended for a lot
>> of very small tasks (<100ms).
>> My first prototype was way more complex: it wrapped libdispatch when
>> available and fell back to C++11 constructs otherwise. Much better for a
>> lot of small tasks! However, the complexity was not worth it for my use
>> case at the time: ThinLTO tasks range between 100ms and a few seconds, so
>> a few milliseconds of overhead don't matter.
>>
>> To focus on the submission part, I queued an empty task multiple times in
>> a thread pool with 0 threads, never deleted/synchronized, and measured
>> just the queuing time.
>>
>> For 1,000,000 queuings, the total time went from 180ms to 500ms, which
>> corresponds on average to a per-task queuing time going from 180ns to
>> 500ns, so the overhead of shared_future is about 320ns per task.
>>
>> As a point of comparison, to evaluate the overhead of the rest of the
>> thread pool infrastructure, I reran the same experiment, this time with
>> one thread in the pool processing these empty tasks. It took over 7s (so
>> over 20 times the std::future overhead). Also, adding threads increases
>> the contention and the performance drops (8.5s with 2 threads, and 11s
>> with 4 threads).
>>
>> Mehdi
>>
>
> Independently of the overhead, which can be subjective, this library
> returns a value that nobody uses. Even worse, there's no immediate
> plan for using it. That is a good metric of its usefulness for the
> time being. If you plan to use it anytime soon, fine; otherwise,
> I'm not excited about keeping dead code in tree "just in case" we need
> it.
>

In particular, this feature comes with a non-trivial amount of
compatibility cruft to work around an MSVC deficiency.

--
Davide

