[PATCH] D78856: [Support] Simplify and optimize ThreadPool

Mehdi AMINI via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Sat Apr 25 16:28:28 PDT 2020


mehdi_amini added a comment.

In D78856#2003762 <https://reviews.llvm.org/D78856#2003762>, @MaskRay wrote:

> In D78856#2003725 <https://reviews.llvm.org/D78856#2003725>, @mehdi_amini wrote:
>
> > > No. Only the thread(s) calling ThreadPool::Wait() waits on CompletionCondition. When it gets stuck, it will not wake spuriously. Note that before, we took locks 3 times in the loop body; now it is 2...
> >
> > I wasn't worried about contention with the waiting threads, but about contention between worker threads: they now take the QueueLock more often (it was taken once or twice per iteration of the loop body before, and is now taken an extra time).
> >  I am not sure we'd be able to measure any difference here; I'm just curious how you thought about this tradeoff!
>
>
> There are two stages, before and after the execution of a task. The original code sequence used different locks. I suppose this is what you meant by less contention: a worker in the first stage (holding QueueLock while not holding CompletionLock) cannot contend with a worker in the second stage (holding CompletionLock).


Yes, this is what I meant: the "second stage" lock now contends with the first-stage queue manipulation, while it didn't before. It also contends with any thread trying to enqueue.
Overall, for a given execution, there will be quite a lot more (almost double?) locking/unlocking of the QueueLock, right?
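
For the archive, here is roughly the shape of the worker loop before and after, as I understand it. This is a simplified sketch, not the patch itself; the member names (QueueLock, CompletionLock, QueueCondition, CompletionCondition, Tasks, ActiveThreads, Stop) mirror the real ThreadPool, but the details are illustrative:

  // Before: two mutexes. Completion bookkeeping uses CompletionLock,
  // so it cannot contend with queue manipulation under QueueLock.
  while (true) {
    std::function<void()> Task;
    {
      std::unique_lock<std::mutex> Lock(QueueLock);        // lock #1
      QueueCondition.wait(Lock, [&] { return Stop || !Tasks.empty(); });
      if (Stop && Tasks.empty())
        return;
      {
        std::lock_guard<std::mutex> CLock(CompletionLock); // lock #2
        ++ActiveThreads;
      }
      Task = std::move(Tasks.front());
      Tasks.pop();
    }
    Task();
    {
      std::lock_guard<std::mutex> CLock(CompletionLock);   // lock #3
      --ActiveThreads;
    }
    CompletionCondition.notify_all();
  }

  // After: one mutex. Three acquisitions per task become two, but both
  // are on QueueLock, the same lock that other workers and any
  // enqueueing thread need.
  while (true) {
    std::function<void()> Task;
    {
      std::unique_lock<std::mutex> Lock(QueueLock);        // lock #1
      QueueCondition.wait(Lock, [&] { return Stop || !Tasks.empty(); });
      if (Stop && Tasks.empty())
        return;
      ++ActiveThreads; // now guarded by QueueLock
      Task = std::move(Tasks.front());
      Tasks.pop();
    }
    Task();
    {
      std::lock_guard<std::mutex> Lock(QueueLock);         // lock #2
      --ActiveThreads;
      if (!ActiveThreads && Tasks.empty())
        CompletionCondition.notify_all();
    }
  }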

> - the spurious signalling of a condition variable (CompletionCondition) can cause contention because a condition variable has an internal lock.

We're getting into implementation- and platform-specific territory, but I thought signaling was mostly going to map to one or two futex syscalls?
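
(On Linux/glibc, one way to check is to trace the futex calls directly; this little program and the strace invocation are just my suggestion, not something from the review:)

  // notify.cpp: build with `g++ -O2 -pthread notify.cpp`, then run
  // under `strace -f -e trace=futex ./a.out` to see what notify_one()
  // costs with no waiters; the answer is implementation-specific and
  // may be no syscall at all.
  #include <condition_variable>

  int main() {
    std::condition_variable CV;
    for (int I = 0; I != 1000; ++I)
      CV.notify_one(); // no thread is waiting on CV
    return 0;
  }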

> The suggested code sequence has the problem of lost signals: a signaling thread can change the observed state in the window between the waiter's test and its block.
>  ...

Thanks! This is likely the sequence that led me to add this lock at the time... IIRC you can avoid this with low-level use of futex, but there isn't a portable solution.
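
For readers of the archive, the race and the standard portable fix look roughly like this (a minimal sketch with made-up names, not the ThreadPool code itself):

  #include <condition_variable>
  #include <mutex>

  std::mutex M;
  std::condition_variable CV;
  bool Done = false;

  // The lost-signal race: if the waiter tests the flag without holding
  // the mutex the signaler uses, the notify can land in the window
  // between the waiter's test and its block:
  //
  //   waiter:   reads Done == false
  //   signaler: Done = true; CV.notify_all(); // nobody is waiting yet
  //   waiter:   CV.wait(Lock);                // blocks forever
  //
  // Portable fix: test and block under the same mutex. wait() releases
  // M and blocks atomically, so the window above cannot open.
  void waitForDone() {
    std::unique_lock<std::mutex> Lock(M);
    CV.wait(Lock, [] { return Done; });
  }

  // The signaler flips the flag under M; notifying after dropping the
  // lock is still safe, because the waiter is either already blocked
  // in wait() or will re-test Done under M and see true.
  void signalDone() {
    {
      std::lock_guard<std::mutex> Lock(M);
      Done = true;
    }
    CV.notify_all();
  }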

Anyway, I'm fine with this change overall; I don't think this thread pool implementation is optimized for very small tasks in any case.


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D78856/new/

https://reviews.llvm.org/D78856