<div dir="ltr">IIRC when I talked with Chandler (who has a lot of background in hashing), the Bernstein hash function actually ends up being pretty good (as a hash function, not necessarily performance) for C/C++ identifier-like strings (despite not being that good for other types of strings), so we'll want to verify that we don't regress hash quality (which affects efficiency of the hash tables). In particular, IIRC this is the function used for Clang's identifier maps (via StringMap) and so I'd like to see some measurements that ensure that these performance improvements translate over to Clang (or at least don't regress).<div><br></div><div>If Clang doesn't regress and xxHash is measurably better for other HashString workloads, then I don't see a reason not to switch to it.<div><br></div><div>-- Sean Silva</div></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Apr 24, 2017 at 5:37 PM, Scott Smith via llvm-dev <span dir="ltr"><<a href="mailto:llvm-dev@lists.llvm.org" target="_blank">llvm-dev@lists.llvm.org</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><div><div><div><div><div><div>I've been working on improving the startup performance of lldb, and ran into an issue with llvm::HashString. It works a character at a time, which creates a long dependency chain in the processor. On the other hand, the function is very short, which probably works well for short identifiers.<br><br></div>I don't know how the mix of identifier length seen by lldb compares with that seen by llvm/clang; I imagine they're pretty similar.<br><br></div>I have to different proposals, and wanted to gauge which would be preferred:<br><br></div><div>1. Use xxhash instead.<br></div><div><br></div>2. Use the Intel native crc32q instruction to process 8 bytes at a time, then fall back to byte at a time. Non sse 4.2 capable processors (either early or non Intel/AMD x86) would use the existing algorithm, or possibly #1 above.<br><br></div>For my test, both result in approximately the same # of cycles (within 0.5%).<br><br>#1 uses 3+% more instructions.<br></div>#2 requires (realistically) runtime detection of cpu capabilities, because distributions would want generic x86/x86_64 compatible binaries, not separate binaries per cpu feature.<br><br></div>I'm leaning toward #1 despite the instruction increase. My only worry is the effect on compile times for code with lots of short identifiers. I haven't tested that though (and I don't have a suitable benchmark suite for that) so for all I know I'm worrying about nothing.<br><br></div>FYI the improvement is approximately 11.5% reduction in cycles for my lldb test (b main, run, quit), so IMO it's pretty significant.<br><br></div>