[llvm-dev] Memory scope proposal
Tom Stellard via llvm-dev
llvm-dev at lists.llvm.org
Thu Sep 1 08:52:05 PDT 2016
On Wed, Aug 31, 2016 at 12:23:34PM -0700, Justin Lebar via llvm-dev wrote:
> > Some optimizations that are related to a single thread could be done without needing to know the actual memory scope.
>
> Right, it's clear to me that there exist optimizations that you cannot
> do if we model these ops as target-specific intrinsics.
>
> But what I think Mehdi and I were trying to get at is: How much of a
> problem is this in practice? Are there real-world programs that
> suffer because we miss these optimizations? If so, how much?
>
> The reason I'm asking this question is, there's a real cost to adding
> complexity in LLVM. Everyone in the project is going to pay that
> cost, forever (or at least, until we remove the feature :). So I want
> to try to evaluate whether paying that cost is actually worthwhile,
> as compared to the simple alternative (i.e., intrinsics). Given the
> tepid response to this proposal, I'm sort of thinking that now may not
> be the time to start paying this cost. (We can always revisit this in
> the future.) But I remain open to being convinced.
>
I think the cost of adding this information to the IR is really low.
There is already a synchronization scope field present on LLVM atomic
instructions, and it is already encoded as 32 bits, so the additional
scopes can be represented with the existing bitcode format.
Optimization passes are already aware of this synchronization scope
field, so they know how to preserve it when transforming the IR.
The primary goal here is to pass synchronization scope information from
the frontend to the backend. We already have a mechanism for doing
this, so why not use it? That seems like the lowest-cost option to me.
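
For reference, here is roughly what the existing field looks like in
the IR. The only named scope today is singlethread; the
syncscope("agent") form below is only a sketch of how an additional,
target-defined scope might be spelled, not existing syntax:

  ; Existing syntax: the scope is either the default (visible to all
  ; threads) or the single-thread scope.
  %old = atomicrmw add i32* %ptr, i32 1 singlethread seq_cst
  fence singlethread acquire

  ; Hypothetical spelling of a target-defined scope reusing the same
  ; 32-bit field:
  %old2 = atomicrmw add i32* %ptr, i32 1 syncscope("agent") seq_cst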
-Tom
> As a point of comparison, we have a rule of thumb that we'll add an
> optimization that increases compilation time by x% if we have a
> benchmark that is sped up by at least x%. Similarly here, I'd want to
> weigh the added complexity against the improvements to user code.
>
> -Justin
>
> On Tue, Aug 23, 2016 at 2:28 PM, Tye, Tony via llvm-dev
> <llvm-dev at lists.llvm.org> wrote:
> >> Since the scope is “opaque” and target specific, can you elaborate what
> >> kind of generic optimization can be performed?
> >
> > Some optimizations that are related to a single thread could be done without
> > needing to know the actual memory scope. For example, an atomic acquire
> > prevents memory operations after it from being reordered before it, but
> > allows memory operations before it (except another atomic acquire) to be
> > reordered after it, regardless of the memory scope.
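
To illustrate with a minimal IR sketch (the singlethread scope here is
just a stand-in for whichever scope the target would actually use):

  %a = load i32, i32* %p, align 4   ; ordinary load: may be sunk below
                                    ; the acquire
  %f = load atomic i32, i32* %flag singlethread acquire, align 4
  %b = load i32, i32* %q, align 4   ; must not be hoisted above the
                                    ; acquire

  ; Both constraints hold no matter which scope the acquire carries, so
  ; a pass can apply them without interpreting the scope value.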
> >
> > Thanks,
> >
> > -Tony
> >
> >