[LLVMdev] Proposal for safe-to-execute meta-data for heap accesses

Chandler Carruth chandlerc at google.com
Sat Nov 16 16:51:18 PST 2013


On Fri, Nov 8, 2013 at 9:50 PM, Filip Pizlo <fpizlo at apple.com> wrote:

>
> On Nov 8, 2013, at 9:36 PM, Chandler Carruth <chandlerc at google.com> wrote:
>
>
> On Fri, Nov 8, 2013 at 8:44 AM, Filip Pizlo <fpizlo at apple.com> wrote:
>
>> Is the expectation that to utilize this metadata an optimization pass
>> would have to inspect the body of @f and reason about its behavior given
>> <args>?
>>
>>
>> Yes.
>>
>>
>> If so, then I think this is pretty bad. If we ever want to parallelize
>> function passes, then they can't inspect the innards of other functions.
>>
>>
>> I must be missing something. Can't you do some simple locking?  Lock a
>> function if it's being transformed, or if you want to inspect it...
>>
>
> I really, *really* don't like this.
>
> I do *not* want parallelizing LLVM
>
>
> So, I'm relatively new to LLVM, but I'm not new to parallelizing a
> compiler - I've done it before.  And when I did it, it (a) did use locking
> in a bunch of places, (b) wasn't a big deal, and (c) reliably scaled to 8
> cores (the max number of cores I had at the time - I was a grad student and
> it was, like, the last decade).
>

Because it would merely trade a data race for non-determinism. It is also
really, really slow.
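
To make that concrete, here is a minimal sketch (hypothetical types, not
LLVM API) of the per-function locking approach under discussion: the lock
removes the data race, but whether the inspecting thread sees the callee
before or after another thread has transformed it still depends on
scheduling, so the optimizer's output becomes nondeterministic.

#include <mutex>
#include <string>
#include <thread>

// Hypothetical sketch, not LLVM API: every function carries its own lock,
// and a pass that wants to look inside another function takes that lock.
struct Function {
  std::string Name;
  bool Simplified = false; // set once some pass has rewritten this function
  std::mutex Lock;         // guards both inspection and transformation
};

// A pass running on some caller that wants to reason about Callee's body,
// e.g. to decide whether a guarded load is safe to speculate.
bool isSafeToSpeculate(Function &Callee) {
  std::lock_guard<std::mutex> Guard(Callee.Lock);
  // No data race -- but the answer depends on whether another thread has
  // already simplified Callee, so the output varies with scheduling.
  return Callee.Simplified;
}

void simplify(Function &F) {
  std::lock_guard<std::mutex> Guard(F.Lock);
  F.Simplified = true;
}

int main() {
  Function Callee;
  Callee.Name = "callee";
  bool Safe = false;
  std::thread T1([&] { simplify(Callee); });
  std::thread T2([&] { Safe = isSafeToSpeculate(Callee); });
  T1.join();
  T2.join();
  return Safe ? 0 : 1; // exit status differs from run to run
}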


> Is there any documented proposal that lays out the philosophy?  I'd like
> to understand why locks are such a party pooper.
>

No concrete proposal.

The fundamental idea is that we'd like passes to operate on relatively
constrained domains, so that synchronization is very rare and the majority
of the work in the optimizer can proceed in parallel. This applies mostly
to function passes and CGSCC passes, which account for most of the time
spent in LLVM (codegen is a function pass).
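
A minimal sketch of what that looks like (a hypothetical scheduler, not
anything that exists in LLVM today): each worker owns the function it is
visiting and never reaches into another function, so the parallel phase
needs no locks at all; any cross-function facts are computed up front,
before the parallel phase. Metadata whose meaning requires inspecting
another function's body mid-flight cuts directly against that partitioning.

#include <functional>
#include <thread>
#include <vector>

// Hypothetical sketch, not existing LLVM API: parallelism comes from the
// partitioning, not from locks. Each worker owns exactly one function and
// never touches any other, so the parallel phase needs no synchronization
// beyond joining the workers. (A real scheduler would use a thread pool
// rather than one thread per function.)
struct Function { /* IR for a single function */ };

struct Module {
  std::vector<Function> Functions;
};

// Whole-program facts are computed sequentially, up front; the per-function
// work may read this summary but never mutates it.
struct ModuleSummary { /* call graph, global facts, ... */ };

void runFunctionPassesInParallel(
    Module &M, const ModuleSummary &Summary,
    const std::function<void(Function &, const ModuleSummary &)> &Pass) {
  std::vector<std::thread> Workers;
  Workers.reserve(M.Functions.size());
  for (Function &F : M.Functions)
    Workers.emplace_back([&Pass, &F, &Summary] { Pass(F, Summary); });
  for (std::thread &T : Workers)
    T.join();
}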


> Hence you can imagine freezing a copy of those functions that are used in
>> this meta-data.
>>
>
> At this point, you are essentially proposing that these functions are
> written in a similar but not quite identical IR... They will have the same
> concepts but subtly different constraints or "expectations".
>
>
> Sort of.  I'm only proposing that they get treated differently from the
> standpoint of the compilation pipeline.  But, to clarify, the IR inside
> them still has the same semantics as LLVM IR.
>
> It's interesting that this is the second time that the thought of
> "special" functions has arisen in my LLVM JIT adventures.  The other time
> was when I wanted to create a module that contained one function that I
> wanted to compile (i.e. it was a function that carried the IR that I
> actually wanted to JIT) but I wanted to pre-load that module with runtime
> functions that were inline candidates. I did not want the JIT to compile
> those functions except if they were inlined.
>
> I bring this up not because I have any timetable for implementing this
> other concept, but because I find it interesting that LLVM's "every
> function in a module is a thing that will get compiled and be part of the
> resulting object file" rule is a tad constraining for a bunch of things I
> want to do that don't involve a C-like language.
>
>
> I'm not yet sure how I feel about this. It could work really well, or it
> could end up looking a lot like ConstantExpr and being a pain for us going
> forward. I'm going to keep thinking about this though and see if I can
> contribute a more positive comment. =]
>
>
> Fair enough!  I look forward to hearing more feedback.
>

Sorry, I still haven't really had time to think more about this. I have
mentioned this thread to Nick, who should also chime in when he has some time.
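
For what it's worth, a rough approximation of the "inline candidates only"
idea mentioned above already exists: functions given available_externally
linkage can be used by the inliner but are dropped rather than emitted into
the resulting object. Here is a minimal sketch using the LLVM C++ API (the
predicate and the "runtime_" naming convention are made up for illustration):

#include "llvm/IR/Function.h"
#include "llvm/IR/GlobalValue.h"
#include "llvm/IR/Module.h"

// "isRuntimeHelper" is a made-up predicate standing in for however the JIT
// tells the one function it actually wants to compile apart from the
// pre-loaded inline candidates.
static bool isRuntimeHelper(const llvm::Function &F) {
  return F.getName().startswith("runtime_");
}

// Mark the pre-loaded helpers available_externally: the inliner may use
// their bodies, but codegen discards them instead of emitting them.
void markInlineOnlyHelpers(llvm::Module &M) {
  for (llvm::Function &F : M) {
    if (F.isDeclaration() || !isRuntimeHelper(F))
      continue;
    F.setLinkage(llvm::GlobalValue::AvailableExternallyLinkage);
  }
}

The caveat is that available_externally carries a semantic promise -- the
body must be equivalent to some definition that exists outside the module --
so this is only an approximation of the idea rather than exactly what is
described above.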