[llvm-commits] Speeding up RegAllocLinearScan on big test-cases

Chris Lattner clattner at apple.com
Sun May 18 07:45:57 PDT 2008


On May 16, 2008, at 8:20 AM, Roman Levenstein wrote:

>
> So we can see that, performance-wise, the difference is not that huge.
> But if we look at the number of new/delete calls, it is quite
> different:
> 1) Without the standard STL allocator: a total of only 847(!!!) mallocs
> for all of the allocators together, while adding 1000000 nodes to each
> of them.
> 2) With the standard STL allocator: a total of 2000628 mallocs for all
> of the allocators together, while adding 1000000 nodes to each of them.
>
> So the standard STL allocator produces a huge number of new/delete
> calls, and the other allocators reduce it by more than three orders of
> magnitude. But, as mentioned before, this DOES NOT result in a big
> performance difference on my Ubuntu/Linux/x86 machine, which indicates
> that malloc is very efficient here. But for you it seems to be very
> different...
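
[Editor's note: the kind of counting Roman describes can be reproduced
with a small allocator adapter that tallies heap calls. The sketch below
is illustrative only; CountingAllocator, the std::set benchmark, and the
C++11-style minimal allocator interface are assumptions, not Roman's
actual test harness.]

// Count how many times a container hits the heap while inserting
// 1,000,000 nodes. With the default allocator this is roughly one
// malloc per node; a pooled allocator collapses it to a handful of
// large block allocations.
#include <cstddef>
#include <cstdio>
#include <memory>
#include <set>

static unsigned long long NumAllocs = 0;

template <typename T>
struct CountingAllocator {
  typedef T value_type;

  CountingAllocator() {}
  template <typename U> CountingAllocator(const CountingAllocator<U> &) {}

  T *allocate(std::size_t N) {
    ++NumAllocs;                        // one heap call per node in std::set
    return std::allocator<T>().allocate(N);
  }
  void deallocate(T *P, std::size_t N) {
    std::allocator<T>().deallocate(P, N);
  }
};

template <typename T, typename U>
bool operator==(const CountingAllocator<T> &, const CountingAllocator<U> &) { return true; }
template <typename T, typename U>
bool operator!=(const CountingAllocator<T> &, const CountingAllocator<U> &) { return false; }

int main() {
  std::set<int, std::less<int>, CountingAllocator<int> > S;
  for (int i = 0; i != 1000000; ++i)
    S.insert(i);
  std::printf("allocations: %llu\n", NumAllocs);
  return 0;
}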

Hey Roman, it would be interesting to measure the locality effect of
using new/delete.  Also, I don't know how your setup is arranged.  In
the case of LLVM, the heap is already significantly fragmented by the
time a pass starts beating on (e.g.) a std::map.  When the nodes come
from malloc, the nodes for one map end up scattered throughout the
heap.  My guess is that this makes traversals far more expensive than
when a pool allocator keeps them contiguous.
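
[Editor's note: to make the pool idea concrete, here is a rough sketch.
It is illustrative only; Arena, ArenaAllocator, and the 64K slab size
are assumptions, not LLVM's actual BumpPtrAllocator or the allocator
used by the patch. Nodes are carved out of large contiguous slabs, so
one map's nodes sit next to each other rather than being interleaved
with whatever else malloc handed out.]

#include <cstddef>
#include <cstdint>
#include <cstdlib>
#include <map>
#include <new>
#include <vector>

class Arena {
  std::vector<char *> Slabs;
  char *Cur;
  char *End;
  enum { SlabSize = 1 << 16 };          // 64K slabs; size is a guess

public:
  Arena() : Cur(0), End(0) {}
  ~Arena() {                            // everything is freed at once
    for (std::size_t i = 0; i != Slabs.size(); ++i)
      std::free(Slabs[i]);
  }

  void *Allocate(std::size_t Size, std::size_t Align) {
    char *P = (char *)(((std::uintptr_t)Cur + Align - 1) & ~(Align - 1));
    if (Cur == 0 || P + Size > End) {   // start a new slab when full
      std::size_t Bytes = Size > SlabSize ? Size : (std::size_t)SlabSize;
      char *Slab = (char *)std::malloc(Bytes);
      if (!Slab) throw std::bad_alloc();
      Slabs.push_back(Slab);
      Cur = Slab; End = Slab + Bytes;
      P = Cur;
    }
    Cur = P + Size;
    return P;
  }
};

// C++11-style minimal allocator so std::map nodes come from the arena.
template <typename T>
struct ArenaAllocator {
  typedef T value_type;
  Arena *A;

  explicit ArenaAllocator(Arena &Ar) : A(&Ar) {}
  template <typename U> ArenaAllocator(const ArenaAllocator<U> &O) : A(O.A) {}

  T *allocate(std::size_t N) {
    return (T *)A->Allocate(N * sizeof(T), alignof(T));
  }
  void deallocate(T *, std::size_t) {}  // freed en masse by ~Arena
};

template <typename T, typename U>
bool operator==(const ArenaAllocator<T> &X, const ArenaAllocator<U> &Y) { return X.A == Y.A; }
template <typename T, typename U>
bool operator!=(const ArenaAllocator<T> &X, const ArenaAllocator<U> &Y) { return X.A != Y.A; }

int main() {
  Arena A;
  ArenaAllocator<std::pair<const int, int> > Alloc(A);
  std::map<int, int, std::less<int>,
           ArenaAllocator<std::pair<const int, int> > > M(std::less<int>(), Alloc);
  for (int i = 0; i != 1000000; ++i)
    M[i] = i;                           // nodes land back-to-back in slabs
  return 0;
}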

Relevant reading :)
http://llvm.org/pubs/2005-05-21-PLDI-PoolAlloc.html

-Chris