<html>
<head>
<meta content="text/html; charset=utf-8" http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
<div class="moz-cite-prefix">Well, thank you all for this
information!<br>
<br>
Hadrien<br>
<br>
On 11/03/2017 at 14:53, Hal Finkel wrote:<br>
</div>
<blockquote cite="mid:4ce687f5-99b6-a6df-8724-8a7ee72cd966@anl.gov"
type="cite">
<p><br>
</p>
<div class="moz-cite-prefix">On 03/11/2017 06:41 AM, Hadrien G.
via llvm-dev wrote:<br>
</div>
<blockquote
cite="mid:33d2e242-9966-888b-ad90-03a890b61dfe@gmx.com"
type="cite">
<div class="moz-cite-prefix">Thank you! Is this information
available programmatically through some LLVM API, so that next
time some hardware manufacturer does some crazy experiment, my
code can be automatically compatible with it as soon as LLVM
is?<br>
</div>
</blockquote>
<br>
Yes, using TargetTransformInfo, you can call
TTI->getCacheLineSize(). Not all targets provide this
information, however, and as Bruce pointed out, there are
environments where this does not make sense (caveat
emptor).<br>
<br>
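For reference, a rough sketch of what that query can look like from a frontend, assuming you already have a TargetMachine and a Function in hand. This is an untested fragment against the LLVM C++ API, not standalone code:

```cpp
// Sketch only: assumes an LLVM build environment, with an existing
// llvm::TargetMachine *TM and llvm::Function &F in scope.
#include "llvm/Analysis/TargetTransformInfo.h"
#include "llvm/Target/TargetMachine.h"

unsigned targetCacheLineSize(llvm::TargetMachine *TM, llvm::Function &F) {
  llvm::TargetTransformInfo TTI = TM->getTargetTransformInfo(F);
  // getCacheLineSize() returns 0 for targets that do not report one,
  // so callers need their own fallback for that case.
  return TTI.getCacheLineSize();
}
```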
-Hal<br>
<br>
<blockquote
cite="mid:33d2e242-9966-888b-ad90-03a890b61dfe@gmx.com"
type="cite">
<div class="moz-cite-prefix"> <br>
On 11/03/2017 at 13:38, Bruce Hoult wrote:<br>
</div>
<blockquote
cite="mid:CAMU+Ekw=yW5XG9GVWQ+N1TnLyUG_hXOztxW5NsmbVoBx3qXX6g@mail.gmail.com"
type="cite">
<div dir="ltr">PowerPC G5 (970) and all recent IBM Power have
128 byte cache lines. I believe Itanium is also 128.
<div><br>
</div>
<div>Intel has stuck with 64 bytes on recent x86
parts, at least at L1. I believe multiple adjacent
lines may be combined into a "block" (with a single
tag) at L2 or higher in some of them.</div>
<div><br>
</div>
<div>ARM can be 32 or 64.</div>
<div><br>
</div>
<div><br>
</div>
</div>
<div class="gmail_extra"><br>
<div class="gmail_quote">On Sat, Mar 11, 2017 at 3:24 PM,
Hadrien G. <span dir="ltr"><<a moz-do-not-send="true"
href="mailto:knights_of_ni@gmx.com" target="_blank">knights_of_ni@gmx.com</a>></span>
wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex">
<div bgcolor="#FFFFFF" text="#000000">
<div class="m_7674352269886622525moz-cite-prefix">I
guess that in this case, what I would like to know
is a reasonable upper bound of the cache line size
on the target architecture. Something that I can
align my data structures on at compile time so as to
minimize the odds of false sharing. Think
std::hardware_destructive_interference_size in
C++17.
<div>
<div class="h5"><br>
<br>
On 11/03/2017 at 13:16, Bruce Hoult wrote:<br>
</div>
</div>
</div>
<div>
<div class="h5">
<blockquote type="cite">
<div dir="ltr">There's no way to know until you
run on real hardware. It could be different
every time the binary is run. You have to ask
the OS or hardware, and that's system
dependent.
<div><br>
</div>
<div>The cache line size can even change in
the middle of the program running, for
example if your program is moved between a
"big" and "LITTLE" core on ARM. In this case
the OS is supposed to lie to you and tell
you the smallest of the cache line sizes
(but that can only work if cache line
operations are non-destructive! No "zero
cache line" or "throw away local changes in
cache line" like on PowerPC). It also means
that you might not place things far enough
apart to be on different cache lines on the
bigger core, and so not achieve the optimal
result you wanted. It's a mess!</div>
<div><br>
</div>
</div>
<div class="gmail_extra"><br>
<div class="gmail_quote">On Sat, Mar 11, 2017
at 2:13 PM, Hadrien G. via llvm-dev <span
dir="ltr"><<a moz-do-not-send="true"
href="mailto:llvm-dev@lists.llvm.org"
target="_blank">llvm-dev@lists.llvm.org</a>></span>
wrote:<br>
<blockquote class="gmail_quote"
style="margin:0 0 0 .8ex;border-left:1px
#ccc solid;padding-left:1ex">Hi everyone,<br>
<br>
I'm hailing from the Rust community, where
there is a discussion about adding
facilities for aligning data on an L1
cache line boundary. One example of a
situation where this is useful is when
building thread synchronization
primitives, where avoiding false sharing
can be a critical concern.<br>
<br>
Now, when it comes to implementation, I
have this gut feeling that we probably do
not need to hardcode every target's cache
line size in rustc ourselves, because
there is probably a way to get this
information directly from the LLVM
toolchain that we are using. Is my gut
right on this one? And can you provide us
with some details on how it should be
done?<br>
<br>
Thanks in advance,<br>
Hadrien<br>
_______________________________________________<br>
LLVM Developers mailing list<br>
<a moz-do-not-send="true"
href="mailto:llvm-dev@lists.llvm.org"
target="_blank">llvm-dev@lists.llvm.org</a><br>
<a moz-do-not-send="true"
href="http://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev"
rel="noreferrer" target="_blank">http://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev</a><br>
</blockquote>
</div>
<br>
</div>
</blockquote>
<p><br>
</p>
<br>
</div>
</div>
</div>
</blockquote>
</div>
<br>
</div>
</blockquote>
<p><br>
</p>
<br>
<fieldset class="mimeAttachmentHeader"></fieldset>
<br>
<pre wrap="">_______________________________________________
LLVM Developers mailing list
<a moz-do-not-send="true" class="moz-txt-link-abbreviated" href="mailto:llvm-dev@lists.llvm.org">llvm-dev@lists.llvm.org</a>
<a moz-do-not-send="true" class="moz-txt-link-freetext" href="http://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev">http://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev</a>
</pre>
</blockquote>
<br>
<pre class="moz-signature" cols="72">--
Hal Finkel
Lead, Compiler Technology and Programming Languages
Leadership Computing Facility
Argonne National Laboratory</pre>
</blockquote>
<p><br>
</p>
</body>
</html>