<html>
<head>
<meta content="text/html; charset=ISO-8859-1"
http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
<div class="moz-cite-prefix">On 12/3/13 9:30 PM, Chris Lattner
wrote:<br>
</div>
<blockquote
cite="mid:FFBA55ED-A5CF-49EC-BAEC-6EDCA315E50C@apple.com"
type="cite">
This looks really awesome! Great idea starting this, and thank
you for pushing it forward. Some thoughts:
<div><br>
</div>
<div>- In "local variables", it would be great to talk about how
the "alloca trick" avoids forcing your frontend to build SSA.
You could even include an example.</div>
</blockquote>
    It would be worth mentioning that this is the trick Clang itself
    uses. There can be some tradeoffs, though - particularly if your
    language involves anything that looks to LLVM like an assignment
    but isn't one semantically (for example, object relocation,
    metadata updates, lazy computations, etc.). Long term, you
    probably want to teach LLVM about your semantics anyway, but as a
    short-term measure, doing the SSA conversion yourself can be
    useful for improving out-of-the-gate performance. <br>
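    For concreteness, a minimal sketch of what the alloca form might
    look like for a mutable local (hypothetical function; the mem2reg
    or SROA pass later rewrites the loads and stores into SSA form): <br>
    <pre>
; Every mutable local becomes a stack slot with plain loads/stores;
; -mem2reg later promotes it to SSA registers and inserts the phis.
define i32 @select_max(i32 %a, i32 %b) {
entry:
  %result = alloca i32                ; stack slot for the local 'result'
  store i32 %a, i32* %result
  %cmp = icmp slt i32 %a, %b
  br i1 %cmp, label %then, label %done

then:
  store i32 %b, i32* %result          ; plain re-assignment, no phi needed
  br label %done

done:
  %ret = load i32* %result            ; becomes a phi after mem2reg
  ret i32 %ret
}
    </pre>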
<blockquote
cite="mid:FFBA55ED-A5CF-49EC-BAEC-6EDCA315E50C@apple.com"
type="cite">
<div><br>
</div>
<div>- In the "constants" section, it is probably best to say that
"constants that allocate memory" are just global variables in
LLVM IR, marked with the 'constant' keyword. It would also be
great to mention constant exprs here, since they are a point of
confusion (and you introduce them in sizeof).</div>
<div><br>
</div>
<div>- Having something that talks about lowering C-style unions
to llvm IR would be great :-)</div>
</blockquote>
    Agreed. Adding a description of the variant type (a type-safe
    union) would also be useful. <br>
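    For illustration, a rough sketch of both (type and function names
    are hypothetical): a C-style union can be lowered as a struct
    providing storage for its largest member, accessed through
    bitcasts, while a variant adds an explicit tag field: <br>
    <pre>
; C union { i32 i; float f; double d; } lowered as storage for the
; largest member; other members are read through a bitcast pointer.
%union.number = type { double }

; Variant (tagged union): an explicit discriminant plus the storage.
%variant.number = type { i8, double }

define float @read_as_float(%union.number* %u) {
entry:
  %as.float = bitcast %union.number* %u to float*
  %val = load float* %as.float
  ret float %val
}
    </pre>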
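    And on the earlier constants point, a small sketch (hypothetical
    globals): a constant that allocates memory is just a global marked
    'constant', and a constant expression folds an address computation
    into an initializer: <br>
    <pre>
; A "constant that allocates memory" is simply a global variable
; marked with the 'constant' keyword.
@.str = private unnamed_addr constant [6 x i8] c"hello\00"

; A constant expression: the getelementptr is evaluated at compile
; time as part of the initializer, not emitted as an instruction.
@str_tail = global i8* getelementptr inbounds ([6 x i8]* @.str, i32 0, i32 2)
    </pre>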
<blockquote
cite="mid:FFBA55ED-A5CF-49EC-BAEC-6EDCA315E50C@apple.com"
type="cite">
<div><br>
</div>
<div>- A nice new top-level section would be "interoperating with
a runtime library", pointing out that not everything needs to be
emitted as inline LLVM IR: a frontend can either just call into
a runtime library, or it can even emit a call to a runtime
library (whose body is also available as IR) and then have the
optimizer inline it if run.</div>
</blockquote>
    A couple of things to add here: <br>
    1) Your runtime function might also be represented as a custom LLVM
    intrinsic. This allows selective lowering with custom passes. (By
    default, you'd lower the intrinsic into a runtime call.) <br>
    2) Getting function attributes right on your runtime calls can be
    key for performance (e.g. if a call is pure, CSE should be able to
    exploit that). Custom calling conventions (e.g. fewer caller-saved
    registers) can also be useful. <br>
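    A rough sketch of both points, with hypothetical runtime functions
    (an actual custom intrinsic would be declared in your LLVM tree, so
    only the attribute and calling-convention side is shown here): <br>
    <pre>
; A side-effect-free runtime helper: readonly + nounwind lets the
; optimizer CSE and hoist calls to it like any other pure operation.
declare i32 @rt_hash_string(i8*) readonly nounwind

define i32 @use(i8* %s) {
entry:
  %h1 = call i32 @rt_hash_string(i8* %s)
  %h2 = call i32 @rt_hash_string(i8* %s)   ; CSE can fold this into %h1
  %sum = add i32 %h1, %h2
  ret i32 %sum
}

; A rarely-taken slow path: a cold calling convention reduces the
; register-save burden the call imposes on the surrounding fast path.
declare coldcc void @rt_slowpath_barrier(i8*) nounwind
    </pre>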
<br>
Philip<br>
</body>
</html>