[PATCH] D133715: [ADT] Add HashMappedTrie

Alexey Lapshin via Phabricator via llvm-commits <llvm-commits at lists.llvm.org>
Fri Jan 6 06:28:19 PST 2023


avl added a comment.

> But having a fast concurrent BumpPtrAllocator would be independently useful, and I'd suggest optimizing the allocator before bloating the default trie size.

+1 for a fast, concurrent ThreadSafeBumpPtrAllocator.

What do you think about the following alternative implementation?

  class ThreadSafeBumpPtrAllocator {
  public:
    ThreadSafeBumpPtrAllocator(const ThreadPoolStrategy &S) {
      size_t ThreadsNum = S.compute_thread_count();
      Allocators.resize(ThreadsNum);
    }

    void *Allocate(size_t Num) {
      size_t AllocatorIdx = getThreadIdx();
      return Allocators[AllocatorIdx].Allocate(Num);
    }

  private:
    std::vector<BumpPtrAllocator> Allocators;
  };

  // Initialised by ThreadPoolExecutor when each worker thread starts.
  static thread_local size_t ThreadIdx;

  size_t getThreadIdx() {
    return ThreadIdx;
  }

This implementation relies on the fact that ThreadPoolExecutor creates a fixed number
of threads (ThreadPoolStrategy.compute_thread_count()) and keeps them alive until it is
destructed. ThreadPoolExecutor can initialise the thread-local field ThreadIdx to the
proper thread index, so getThreadIdx() returns the index of the current thread inside
ThreadPoolExecutor.Threads. ThreadSafeBumpPtrAllocator keeps a separate allocator for
each thread, so each thread always uses its own allocator. No locks, no CAS operations,
and no races are necessary.


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D133715/new/

https://reviews.llvm.org/D133715
