[llvm-dev] Is there a way to know the target's L1 data cache line size?
Bruce Hoult via llvm-dev
llvm-dev at lists.llvm.org
Sat Mar 11 04:16:37 PST 2017
There's no way to know until you run on real hardware. It could be
different every time the binary is run. You have to ask the OS or hardware,
and that's system-dependent.
The cache line size can even change in the middle of the program running,
for example if your program is moved between a "big" and "LITTLE" core on
ARM. In that case the OS is supposed to lie to you and report the smallest
of the cache line sizes (but that can only work if cache line operations
are non-destructive! No "zero cache line" or "throw away local changes in
cache line" like on PowerPC). It also means that you might not place things
far enough apart to be on different cache lines on the bigger core, and so
not achieve the optimal result you wanted. It's a mess!
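
For illustration only, here is a minimal sketch of asking the OS from Rust
on Linux/glibc, assuming the `libc` crate exposes _SC_LEVEL1_DCACHE_LINESIZE
on your target (other systems need different calls, e.g. sysctl on the BSDs
and macOS, or GetLogicalProcessorInformation on Windows):

// Cargo.toml: libc = "0.2"  (assumed dependency)
extern crate libc;

/// Ask the OS for the L1 data cache line size, if it reports one.
fn l1_dcache_line_size() -> Option<usize> {
    // sysconf returns -1 (or 0 on some kernels) when the value is unknown.
    let n = unsafe { libc::sysconf(libc::_SC_LEVEL1_DCACHE_LINESIZE) };
    if n > 0 { Some(n as usize) } else { None }
}

fn main() {
    match l1_dcache_line_size() {
        Some(size) => println!("L1 data cache line size: {} bytes", size),
        None => println!("the OS does not report an L1 line size here"),
    }
}

And per the above, whatever number you get back is only valid for the core
and OS you happen to be running on at that moment.
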
On Sat, Mar 11, 2017 at 2:13 PM, Hadrien G. via llvm-dev <
llvm-dev at lists.llvm.org> wrote:
> Hi everyone,
>
> I'm hailing from the Rust community, where there is a discussion about
> adding facilities for aligning data on an L1 cache line boundary. One
> example of a situation where this is useful is building thread
> synchronization primitives, where avoiding false sharing can be a
> critical concern.
>
> Now, when it comes to implementation, I have this gut feeling that we
> probably do not need to hardcode every target's cache line size in rustc
> ourselves, because there is probably a way to get this information directly
> from the LLVM toolchain that we are using. Is my gut right on this one? And
> can you provide us with some details on how it should be done?
>
> Thanks in advance,
> Hadrien
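
(For the false-sharing case in the question: on the Rust side, the usage
might look something like the hypothetical sketch below. The
#[repr(align(64))] syntax and the 64-byte figure are both assumptions on my
part; as discussed above, the real line size varies by target and can even
vary at run time.)

use std::sync::atomic::{AtomicUsize, Ordering};

// Give each counter its own (assumed) 64-byte cache line so that two
// threads updating neighbouring counters don't contend on one line.
#[repr(align(64))]
struct PaddedCounter {
    value: AtomicUsize,
}

fn main() {
    let counters: [PaddedCounter; 2] = [
        PaddedCounter { value: AtomicUsize::new(0) },
        PaddedCounter { value: AtomicUsize::new(0) },
    ];
    counters[0].value.fetch_add(1, Ordering::Relaxed);
    counters[1].value.fetch_add(1, Ordering::Relaxed);
    // With the 64-byte alignment, adjacent elements are at least a
    // (guessed) cache line apart.
    println!("element stride = {} bytes",
             std::mem::size_of::<PaddedCounter>());
}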