[RFC] Using large pages for large hash tables
Michael Kruse via llvm-commits
llvm-commits at lists.llvm.org
Mon Oct 17 08:49:29 PDT 2016
2016-10-17 17:38 GMT+02:00 Rafael Espíndola via llvm-commits
<llvm-commits at lists.llvm.org>:
> I did a quick and dirty experiment to use large pages when a hash
> table gets big. The results for lld are pretty impressive (see
> attached file; basically a 1.04X faster link of files with debug info).
> I tested disabling madvise and the performance went back to what it
> was, so it is really the large pages that improve the performance.
> The main question is then what the interface should look like. On
> Linux the abstraction could be
> std::pair<void *, size_t> mallocLarge(size_t Size);
> which returns the allocated memory and how much was actually allocated.
> The pointer can be passed to free once it is no longer needed.
> The fallback implementation just calls malloc and returns Size unmodified.
> On Linux x86_64, if the size is larger than 2 MiB, we use posix_memalign and madvise.
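A minimal sketch of what the quoted proposal could look like on Linux. Only the name mallocLarge and the posix_memalign/madvise approach come from the message above; the rounding logic and the 2 MiB threshold constant are illustrative assumptions, and the madvise call is best-effort transparent-huge-page advice, not a guarantee of large pages:

```cpp
#include <cstdlib>
#include <utility>
#if defined(__linux__)
#include <sys/mman.h>
#endif

// Hypothetical implementation of the proposed interface. Small requests
// fall through to plain malloc; large ones are rounded up to a 2 MiB
// multiple, aligned via posix_memalign, and advised as huge pages.
std::pair<void *, size_t> mallocLarge(size_t Size) {
  const size_t LargePageSize = 2 * 1024 * 1024; // assumed x86_64 huge page
#if defined(__linux__) && defined(MADV_HUGEPAGE)
  if (Size >= LargePageSize) {
    // Round the request up to a multiple of the huge-page size.
    size_t Rounded = (Size + LargePageSize - 1) & ~(LargePageSize - 1);
    void *Ptr = nullptr;
    if (posix_memalign(&Ptr, LargePageSize, Rounded) == 0) {
      // Ask the kernel to back this range with transparent huge pages.
      // The advice may be ignored, so the failure case is not an error.
      madvise(Ptr, Rounded, MADV_HUGEPAGE);
      return {Ptr, Rounded};
    }
  }
#endif
  // Fallback: plain malloc, Size returned unmodified.
  return {std::malloc(Size), Size};
}
```

As in the proposal, the returned pointer can be handed to free(), since posix_memalign memory is free()-compatible.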
> Would the same interface work on windows?
Yes, using VirtualAlloc with the MEM_LARGE_PAGES flag.
The size must be GetLargePageMinimum(), or a multiple of it.
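A hedged sketch of the equivalent on Windows, assuming the VirtualAlloc/GetLargePageMinimum approach described above. Note two caveats the sketch encodes as a fallback: MEM_LARGE_PAGES requires the caller to hold SeLockMemoryPrivilege, and memory from VirtualAlloc must be released with VirtualFree rather than free, so the interface would need a matching deallocation hook on this platform:

```cpp
#if defined(_WIN32)
#include <windows.h>
#endif
#include <cstdlib>
#include <utility>

// Illustrative Windows counterpart of the proposed mallocLarge.
std::pair<void *, size_t> mallocLarge(size_t Size) {
#if defined(_WIN32)
  size_t LargePage = GetLargePageMinimum(); // 0 if large pages unsupported
  if (LargePage != 0 && Size >= LargePage) {
    // VirtualAlloc with MEM_LARGE_PAGES requires the size to be a
    // multiple of the large-page minimum.
    size_t Rounded = (Size + LargePage - 1) & ~(LargePage - 1);
    void *Ptr = VirtualAlloc(nullptr, Rounded,
                             MEM_RESERVE | MEM_COMMIT | MEM_LARGE_PAGES,
                             PAGE_READWRITE);
    if (Ptr)
      return {Ptr, Rounded}; // release with VirtualFree(Ptr, 0, MEM_RELEASE)
    // Fall through on failure (e.g. missing SeLockMemoryPrivilege).
  }
#endif
  // Fallback on other platforms or on failure: plain malloc.
  return {std::malloc(Size), Size};
}
```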
Do common implementations of malloc allocate (large) pages
automatically when the allocated size is large enough?