<div dir="ltr">PowerPC G5 (970) and all recent IBM Power have 128 byte cache lines. I believe Itanium is also 128.<div><br></div><div>Intel has stuck with 64 recently with x86, at least at L1. I believe multiple adjacent lines may be combined into a "block" (with a single tag) at L2 or higher in some of them.</div><div><br></div><div>ARM can be 32 or 64.</div><div><br></div><div><br></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Sat, Mar 11, 2017 at 3:24 PM, Hadrien G. <span dir="ltr"><<a href="mailto:knights_of_ni@gmx.com" target="_blank">knights_of_ni@gmx.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div bgcolor="#FFFFFF" text="#000000">
<div class="m_7674352269886622525moz-cite-prefix">I guess that in this case, what I would
like to know is a reasonable upper bound of the cache line size on
the target architecture. Something that I can align my data
structures on at compile time so as to minimize the odds of false
sharing. Think std::hardware_destructive_<wbr>interference_size in
C++17.<div><div class="h5"><br>
<br>
Le 11/03/2017 à 13:16, Bruce Hoult a écrit :<br>
</div></div></div><div><div class="h5">
<blockquote type="cite">
<div dir="ltr">There's no way to know, until you run on real
hardware. It could be different every time the binary is run.
You have to ask the OS or hardware, and that's system dependent.
<div><br>
</div>
<div>The cache line size can even change in the middle of the
program running, for example if your program is moved between
a "big" and "LITTLE" core on ARM. In this case the OS is
supposed to lie to you and tell you the smallest of the cache
line sizes (but that can only work if cache line operations
are non-estructive! No "zero cache line" or "throw away local
changes in cache line" like on PowerPC). It also means that
you might not places things far enough apart to be on
different cache lines on the bigger core, and so not acheive
the optimal result you wanted. It's a mess!</div>
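>>
>> For what it's worth, a minimal sketch of "asking the OS" on Linux,
>> assuming the libc crate; _SC_LEVEL1_DCACHE_LINESIZE is a glibc
>> extension, so other systems need their own mechanism (sysctl on
>> macOS, the CTR_EL0 register on bare ARM, and so on):
>>
>>     // Returns None when the OS does not report a line size.
>>     fn l1_dcache_line_size() -> Option<usize> {
>>         let n = unsafe { libc::sysconf(libc::_SC_LEVEL1_DCACHE_LINESIZE) };
>>         if n > 0 { Some(n as usize) } else { None }
>>     }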
<div><br>
</div>
</div>
<div class="gmail_extra"><br>
<div class="gmail_quote">On Sat, Mar 11, 2017 at 2:13 PM,
Hadrien G. via llvm-dev <span dir="ltr"><<a href="mailto:llvm-dev@lists.llvm.org" target="_blank">llvm-dev@lists.llvm.org</a>></span>
wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi
everyone,<br>
<br>
I'm hailing from the Rust community, where there is a
discussion about adding facilities for aligning data on an
L1 cache line boundary. One example of situation where this
is useful is when building thread synchronization
primitives, where avoiding false sharing can be a critical
concern.<br>
<br>
Now, when it comes to implementation, I have this gut
feeling that we probably do not need to hardcode every
target's cache line size in rustc ourselves, because there
is probably a way to get this information directly from the
LLVM toolchain that we are using. Is my gut right on this
one? And can you provide us with some details on how it
should be done?<br>
<br>
Thanks in advance,<br>
Hadrien<br>
</div>
<br>
</div>
</blockquote>
<p><br>
</p>
<br></div></div><span class="HOEnZb"><font color="#888888">--
<br>This message has been scanned for viruses and
<br>dangerous content by
<a href="http://www.mailscanner.info/" target="_blank"><b>MailScanner</b></a>, and is
<br>believed to be clean.
</font></span></div>
</blockquote></div><br></div>