[PATCH] D20116: Add speculatable function attribute

Mehdi AMINI via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Mon Mar 27 08:22:22 PDT 2017


mehdi_amini added a comment.

In https://reviews.llvm.org/D20116#711097, @nlopes wrote:

> In https://reviews.llvm.org/D20116#710923, @mehdi_amini wrote:
>
> > In https://reviews.llvm.org/D20116#710866, @nlopes wrote:
> >
> > > Then we have an orthogonal concern, which is what precondition is sufficient to justify an optimization. For example, whether a function call can be executed speculatively is one such precondition, and it can be derived by looking at the function attributes: for speculative execution we probably need to know that the function doesn't write to memory and that it terminates.
> >
> >
> > It is not enough (for example division by zero).
>
>
> Sure; I didn't mean to enumerate a complete list.


Still, my point was that you *can't* provide a complete list: we're not capturing everything with the existing attributes.
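
To make that concrete, here is a minimal sketch (hypothetical IR, not code from this patch): the function below could legitimately carry readnone and nounwind and it obviously terminates, yet no combination of those attributes makes it safe to speculate a call to it, because sdiv has undefined behavior for a zero divisor (or INT_MIN / -1).

  ; Hypothetical example: pure and terminating, so the existing attributes
  ; all apply, but it still has undefined behavior for some argument values.
  define i32 @div_by_arg(i32 %n, i32 %d) readnone nounwind {
  entry:
    %q = sdiv i32 %n, %d    ; UB when %d == 0, or %n == INT_MIN and %d == -1
    ret i32 %q
  }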

>>> So I feel that this speculatable attribute is not the right answer. It should be a helper function that derives its result from a set of function attributes, but shouldn't be an attribute on its own.
>> 
>> Feel free to propose an alternative solution; right now there is no combination of attributes that is enough.
> 
> My point is that attributes should be about the function behavior: they should be about the *how*, and not about what kind of transformations you can do with the respective call sites.
>  In the same way, today we don't have attributes like "call can be removed if result unused", "call can be removed if multiple calls with same argument", "call can be sunk into a loop", etc.
>  Instead we have things like "doesn't write to memory", "only writes to memory that is not observable by the caller", etc.
>  From our set of attributes we can infer whether a transformation is valid or not. This approach scales much better: it's one attribute per behavior we want to capture instead of one attribute per transformation we want to do, and the number of transformations is surely much higher.

I get that, but I'm still not sure what you're suggesting *in this case*.
Is it just the name that bothers you?
Looking at the description of the attribute, it does fit your definition to some extent: "This function attribute indicates that the function does not have undefined behavior for any possible combination of arguments or global memory state."
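
As a minimal sketch of what that description means in practice (hypothetical IR, using the syntax this patch proposes and a made-up function name): the callee below has no undefined behavior for any argument value, so it could be annotated, and a pass that sees the attribute at a call site may hoist the call above a branch without inspecting the body.

  ; Hypothetical function with no UB for any input, so it would qualify:
  define i32 @clamp_to_byte(i32 %x) readnone nounwind speculatable {
  entry:
    %lo  = icmp slt i32 %x, 0
    %pos = select i1 %lo, i32 0, i32 %x
    %hi  = icmp sgt i32 %pos, 255
    %r   = select i1 %hi, i32 255, i32 %pos
    ret i32 %r
  }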

> The other point is that this approach separates concerns: we have an analysis pass that decorates functions with attributes about how they behave, and then we have transformations that consume this information in some way.  To me it seems useful to have this separation: it makes the analysis part much easier to reason about and to implement correctly.

I don't see how this is related to this attribute in any way.

> My biggest concern with speculatable is that there's no intuitive semantics for it.  If people already have different opinions about what "readonly" means (and it was supposed to have trivial semantics), then something as complex as speculatable seems like a can of worms.
>  And a month from now people will want more and more speculatable attributes, for example "can be speculated across stores", "can be speculated across stores and function calls", etc.  This doesn't seem to scale.

I don't share that impression. If you feel some parts of the definition need to be specified more carefully, that's fine. But the way you're putting it forward above seems like FUD to me.

>>> The only reason I could see to have it as an attribute would be to carry cost information.
>> 
>> I'm not convinced that attributes should carry "cost" information; is there a precedent for this?
> 
> No, and I didn't propose we do that.  But at some point, for applications like ThinLTO and PGO, it seems that an inter-procedural cost/information framework will be needed: ThinLTO needs to summarize what functions do.  Imagine extending ThinLTO summaries to include information like "simplifies a lot if the 2nd argument is false", "returns a number in range [0, 4]", etc.  So it feels that eventually we will need an additional set of information attached to functions, which we probably cannot cover with the attribute framework we have today.

The way this is structured in ThinLTO is with in-memory analyses that don't attach their results to the IR; the results are then part of the summaries directly. The summaries are just a serialization of the in-memory analyses, so they don't need any IR construct such as an attribute or metadata.


https://reviews.llvm.org/D20116




