[llvm-dev] RFC: Should SmallVectors be smaller?

Reid Kleckner via llvm-dev llvm-dev at lists.llvm.org
Fri Jun 22 15:18:25 PDT 2018


On Thu, Jun 21, 2018 at 9:16 PM Duncan P. N. Exon Smith via llvm-dev <
llvm-dev at lists.llvm.org> wrote:

>
>
> On Jun 21, 2018, at 18:38, Chris Lattner <clattner at nondot.org> wrote:
>
>
>
> On Jun 21, 2018, at 9:52 AM, Duncan P. N. Exon Smith via llvm-dev <
> llvm-dev at lists.llvm.org> wrote:
>
> I've been curious for a while whether SmallVectors have the right
> speed/memory tradeoff.  It would be straightforward to shave off a couple
> of pointers (1 pointer/4B on 32-bit; 2 pointers/16B on 64-bit) if users
> could afford to test for small-mode vs. large-mode.
>
>
> Something like this could definitely work, but most smallvectors are on
> the stack.  They are intentionally used when sizeof(smallvector) isn’t
> important, so I don’t think this optimization will pay off.
>
>
> For better or worse (mostly worse), there are a ton of SmallVector fields
> in data structures, including some even nested inside other SmallVectors
> (e.g., see the cleanup in r235229).  Often these data structures are
> heap-allocated.
>

Yes, this is a huge problem. We seriously overuse SmallVector. I think in
CodeViewDebug.cpp we had a DenseMap whose values were structs holding a
SmallVector of structs that themselves contained SmallVectors. It was silly.
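
To make the cost concrete, here's a minimal sketch with made-up types (not the
actual CodeViewDebug ones) showing how nested inline storage inflates every
map slot:

  #include <cstdio>
  #include <utility>
  #include "llvm/ADT/SmallVector.h"

  // Hypothetical stand-ins: each level of SmallVector nesting adds its
  // header plus N copies of the element type to the footprint.
  struct Inner {
    llvm::SmallVector<int, 8> Ints;     // header + 8 ints of inline storage
  };
  struct Outer {
    llvm::SmallVector<Inner, 4> Inners; // header + 4 whole Inners inline
  };

  int main() {
    std::printf("sizeof(Inner) = %zu\n", sizeof(Inner));
    std::printf("sizeof(Outer) = %zu\n", sizeof(Outer));
    // A DenseMap<unsigned, Outer> pays roughly
    // sizeof(std::pair<unsigned, Outer>) per bucket, empty buckets included.
    std::printf("per-bucket    = %zu\n", sizeof(std::pair<unsigned, Outer>));
  }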

> Out of curiosity, what brings this up?
>
>
> I've noticed that Clang is using more stack recently (we're seeing more
> crashes from template recursion; it seems the template recursion limit
> needs to shrink), and somehow that train of thought led to this.
>
> I share your skepticism that it will help stack usage much, but
> SmallVector/SmallVectorImpl is so ubiquitous, it could help the heap a
> bit.  And if it doesn’t hurt runtime performance in practice, there’s no
> reason to fork the data structure.
>
> If no one has measured before I might try it some time.
>

I think it's important to keep begin(), end(), and indexing operations
branchless, so I'm not sure this pointer union is the best idea. I haven't
profiled, but that's my intuition. If you wanted to limit all our vectors
to 4 billion elements to save a pointer, I'd probably be fine with that.
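
For reference, a rough sketch of the layouts being weighed here (just the
shapes, not LLVM's actual classes):

  #include <cstdint>

  // Today's SmallVector header: three pointers (24 bytes on 64-bit hosts).
  struct ThreePointerHeader {
    void *BeginX, *EndX, *CapacityX;
  };

  // The "4 billion elements" variant: one pointer plus 32-bit size and
  // capacity (16 bytes on 64-bit). begin(), end(), and operator[] stay
  // branchless; only the maximum element count changes.
  struct PointerPlusU32Header {
    void *BeginX;
    uint32_t Size, Capacity;
  };

  // The small-mode/large-mode pointer-union idea would shrink the header
  // further, but begin() would then need a branch (or masking) to choose
  // between inline storage and the heap buffer.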

I think we might be better off just reducing the pre-allocation size of
most of our SmallVectors across LLVM and Clang. They're all wild guesses,
never profiled. Especially for vectors of relatively "large" elements, the
pre-allocation optimization just doesn't make that much sense. I'd go so far
as to suggest providing a default value for SmallVector's N of something like
`sizeof(void*) * 3 / sizeof(T)`, so that, by default, every SmallVector is at
most 6 pointers big.
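
Concretely, a size-derived default could look something like this (the
DefaultInlineElements name is only for illustration, not an existing LLVM
helper):

  // Pick N so the inline storage is about three pointers' worth of bytes.
  template <typename T>
  struct DefaultInlineElements {
    static constexpr unsigned value = sizeof(void *) * 3 / sizeof(T);
  };

  // SmallVector<char>     -> 24 inline elements on a 64-bit host
  // SmallVector<void *>   -> 3 inline elements
  // SmallVector<HugeType> -> 0 inline elements, i.e. effectively std::vector
  template <typename T, unsigned N = DefaultInlineElements<T>::value>
  class SmallVector;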

---

Relatedly, there's a lot of work that can be done to tune DenseMap. When the
key-value pair is relatively large, we waste a lot of space on empty table
slots.
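
For example (illustrative types only, not a concrete proposal): with a large
value type every empty bucket carries the full payload, and indirection is
the usual workaround, at the cost of an allocation and a pointer chase per
entry:

  #include <memory>
  #include "llvm/ADT/DenseMap.h"

  struct Payload { char Big[256]; };   // stand-in for a large value type

  int main() {
    // Every bucket, used or empty, occupies roughly
    // sizeof(std::pair<unsigned, Payload>), and DenseMap grows before the
    // table is full, so most of that space is empty slots.
    llvm::DenseMap<unsigned, Payload> Direct;
    Direct[1] = Payload();

    // Storing the payload out-of-line shrinks each bucket to a key plus a
    // pointer, trading table space for an allocation and an indirection.
    llvm::DenseMap<unsigned, std::unique_ptr<Payload>> Indirect;
    Indirect[1] = std::unique_ptr<Payload>(new Payload());
  }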