[PATCH] D133715: [ADT] Add HashMappedTrie

Steven Wu via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Fri Jan 6 09:03:22 PST 2023


steven_wu added a comment.

In D133715#4031396 <https://reviews.llvm.org/D133715#4031396>, @avl wrote:

>> But having a fast concurrent BumpPtrAllocator would be independently useful, and I'd suggest optimizing the allocator before bloating the default trie size.
>
> +1 for a fast concurrent ThreadSafeBumpPtrAllocator.
>
> What do you think about the following alternative implementation?
>
>   // Forward declaration; returns the current thread's index within the pool.
>   size_t getThreadIdx();
>
>   class ThreadSafeBumpPtrAllocator {
>   public:
>     // One allocator per thread, sized for the same strategy the pool was created with.
>     ThreadSafeBumpPtrAllocator(ThreadPoolStrategy S) {
>       size_t ThreadsNum = S.compute_thread_count();
>       allocators.resize(ThreadsNum);
>     }
>
>     void *Allocate(size_t Num) {
>       // Each thread only ever touches its own allocator, so no locking is needed.
>       size_t AllocatorIdx = getThreadIdx();
>       return allocators[AllocatorIdx].Allocate(Num, alignof(std::max_align_t));
>     }
>
>   private:
>     std::vector<BumpPtrAllocator> allocators;
>   };
>
>   // Set once per worker thread by ThreadPoolExecutor when the thread starts.
>   static thread_local size_t ThreadIdx;
>
>   size_t getThreadIdx() {
>     return ThreadIdx;
>   }
>
> This implementation uses the fact that ThreadPoolExecutor creates a fixed number
> of threads (ThreadPoolStrategy.compute_thread_count()) and keeps them alive until it is
> destructed. ThreadPoolExecutor can initialize the thread-local field ThreadIdx to the
> proper thread index, and getThreadIdx() then returns the index of the current thread
> inside ThreadPoolExecutor.Threads. ThreadSafeBumpPtrAllocator keeps a separate allocator
> for each thread, so each thread always uses its own allocator. No locks or CAS operations
> are necessary, and there are no races.
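
To make the quoted proposal a bit more concrete, here is a minimal standalone sketch of the
per-thread-index idea in plain C++. DummyAllocator, PerThreadAllocator, and the std::thread
loop are placeholders for illustration only (not BumpPtrAllocator or ThreadPoolExecutor);
the point is just that each worker sets a thread_local index once and then allocates
lock-free from its own slot:

  #include <cstddef>
  #include <cstdio>
  #include <new>
  #include <thread>
  #include <vector>

  // Stand-in for BumpPtrAllocator, only to keep the sketch self-contained.
  // (Allocations are deliberately leaked.)
  struct DummyAllocator {
    void *Allocate(std::size_t Num) { return ::operator new(Num); }
  };

  // Set once per worker thread before it performs any allocation.
  thread_local std::size_t ThreadIdx = 0;

  class PerThreadAllocator {
  public:
    explicit PerThreadAllocator(std::size_t NumThreads) : Allocators(NumThreads) {}

    // Each thread indexes its own allocator, so no locking is required.
    void *Allocate(std::size_t Num) { return Allocators[ThreadIdx].Allocate(Num); }

  private:
    std::vector<DummyAllocator> Allocators;
  };

  int main() {
    const std::size_t NumThreads = 4;
    PerThreadAllocator Alloc(NumThreads);

    std::vector<std::thread> Workers;
    for (std::size_t I = 0; I < NumThreads; ++I)
      Workers.emplace_back([&Alloc, I] {
        ThreadIdx = I; // what the executor would do when the worker starts
        void *P = Alloc.Allocate(64);
        std::printf("thread %zu got %p\n", ThreadIdx, P);
      });
    for (std::thread &W : Workers)
      W.join();
    return 0;
  }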

Let's move the discussion of ThreadSafeAllocator to https://reviews.llvm.org/D133713 since this patch just uses it and the implementation is over there.

The background of this data structure is that it is meant to be used by a CAS, so ideally the CAS should not be tied to the number of threads that will be spawned, nor rely on the thread id.
You can still have a thread-local allocator that allocates the value to be stored; you just need to do the allocation in `insertLazy` with your own allocator and manage its lifetime.
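
For illustration, a minimal sketch of the allocation side of that suggestion (the trie call
itself is elided, since the exact `insertLazy` callback shape is defined in this patch;
`allocateValueData` is just a hypothetical helper name):

  #include "llvm/Support/Allocator.h"
  #include <cstddef>
  #include <cstring>

  // Each thread owns its own allocator; the caller is responsible for keeping
  // it alive for as long as the trie may reference the allocated bytes.
  static thread_local llvm::BumpPtrAllocator LocalAlloc;

  // Copy the value's bytes into thread-local storage; the returned pointer is
  // what would then be handed to the trie from inside insertLazy.
  const char *allocateValueData(const char *Data, std::size_t Size) {
    char *Storage =
        static_cast<char *>(LocalAlloc.Allocate(Size, /*Alignment=*/1));
    std::memcpy(Storage, Data, Size);
    return Storage;
  }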


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D133715/new/

https://reviews.llvm.org/D133715


