[PATCH] D76602: [MLIR] Introduce std.alloca op

Mehdi AMINI via Phabricator via llvm-commits <llvm-commits at lists.llvm.org>
Wed Apr 8 21:02:57 PDT 2020


mehdi_amini added inline comments.
Herald added a subscriber: frgossen.


================
Comment at: mlir/include/mlir/Dialect/StandardOps/IR/Ops.td:329
+    The "alloca" operation allocates memory on the stack, to be automatically
+    released when the stack frame is discarded. The amount of memory allocated
+    is specified by its memref and additional operands. For example:
----------------
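(Editor's note: for reference, a minimal sketch of the op under discussion; the syntax approximates the patch's proposal, and the value names and shapes are illustrative.)

```
// Stack allocation with a static shape: no dealloc is expected; the
// memory is released automatically when the stack frame is discarded.
%static = alloca() : memref<8x64xf32>

// The allocated amount may also depend on operands: %d supplies the
// dynamic dimension of the memref.
%dynamic = alloca(%d) : memref<8x?xf32>
```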
bondhugula wrote:
> mehdi_amini wrote:
> > ftynse wrote:
> > > mehdi_amini wrote:
> > > > ftynse wrote:
> > > > > bondhugula wrote:
> > > > > > ftynse wrote:
> > > > > > > bondhugula wrote:
> > > > > > > > mehdi_amini wrote:
> > > > > > > > > ftynse wrote:
> > > > > > > > > > bondhugula wrote:
> > > > > > > > > > > ftynse wrote:
> > > > > > > > > > > > bondhugula wrote:
> > > > > > > > > > > > > ftynse wrote:
> > > > > > > > > > > > > > Could you please elaborate what is a stack frame in MLIR? We don't seem to have this concept defined anywhere. In particular, is it only related to `std.func`, or can one register other ops that create stack frames?
> > > > > > > > > > > > > A stack frame here refers to the standard stack frame concept in CS that we know of! It's up to the conversion out of MLIR to realize this correctly. 
> > > > > > > > > > > > MLIR makes us rethink a lot of things. Like MLIR does not have functions as a first-class concept :)  I could obviously guess your intention with stack frames, but I would still insist you think it through in context of MLIR. Similarly to functions, I don't think we have a built-in memory model that has a stack... Or that `std.func` semantics says something about stack frames. IIRC, this was one of the conceptual problems of having `alloca` in the first place.
> > > > > > > > > > > > 
> > > > > > > > > > > > What should happen if I do the following?
> > > > > > > > > > > > 
> > > > > > > > > > > > ```
> > > > > > > > > > > > my.func @foo() {
> > > > > > > > > > > >   alloca ...
> > > > > > > > > > > > }
> > > > > > > > > > > > ```
> > > > > > > > > > > > 
> > > > > > > > > > > > Or
> > > > > > > > > > > > 
> > > > > > > > > > > > ```
> > > > > > > > > > > > std.func @foo() {
> > > > > > > > > > > >   %0 = my.func_that_may_or_may_not_be_a_lambda @inner() {
> > > > > > > > > > > >     alloca ...
> > > > > > > > > > > >   }
> > > > > > > > > > > > }
> > > > > > > > > > > > ```
> > > > > > > > > > > > 
> > > > > > > > > > > >  "Conversion out of MLIR" does not mean anything to me either. Do you mean the translation of LLVM dialect to LLVM IR? There are other passes that may be using the standard dialect that are completely unaware of that.
> > > > > > > > > > > Irrespective of which case it is, it depends on the ops around it, and it's up to the folks defining those ops to realize it. What you need to keep in mind is that an alloca's memory is automatically freed (i.e., you won't find a dealloc), and it disappears when the stack frame goes away, wherever such a concept exists. Now, one is of course free to transform it and implement the automatic freeing in another way within MLIR itself. Heap allocations, e.g., get promoted to stack ones (e.g., in LLVM's passes), and for whatever reason one could decide to switch a stack allocation to a heap one; that wouldn't be an incorrect transform. So it ultimately may not even be realized on the stack (let alone your question of when it should be freed). The intent of the op, when you actually see it in the IR, is an allocation that is freed automatically when the stack frame goes away.
> > > > > > > > > > I don't disagree with anything you say. I am merely pointing out that it is absolutely unclear from the op definition _when_ the allocated memory will be automatically freed. And "when the stack frame goes away" is not a satisfactory definition, because it is meaningless under the current MLIR semantics as it is written. I could literally add the `stack_frame.go_away()` operation tomorrow and expect it to free the allocations... If you could instead tie it to something like "std.func" returning or the region of an operation with FunctionLike trait transferring control flow back to its enclosing op, it would be more MLIR-compatible. 
> > > > > > > > > I agree with Alex that this deserves a more careful description.
> > > > > > > > > In particular the "stack" isn't very important here, we really need a scope. Could we use a trait like we have for "IsolatedFromAbove" to control this scoping?
> > > > > > > > > Unless we just consider every region as a new scope for these allocations?
> > > > > > > > > 
> > > > > > > > Looks like these messages were in flight while I updated and committed. The rephrasing can be addressed in a future revision while we continue discussion here. 
> > > > > > > > 
> > > > > > > > >the current MLIR semantics as it is written. I could literally add the 
> > > > > > > > >stack_frame.go_away() operation tomorrow and expect it to
> > > > > > > > > free the allocations. 
> > > > > > > > 
> > > > > > > > The 'a' suffix in alloca is for automatic freeing - it can't be explicitly freed using another op. If you are imagining some low-level op that manipulates the stack frame, freeing the allocation that way would be a pathological case that circumvents the semantics.
> > > > > > > > 
> > > > > > > > >In particular the "stack" isn't very important here, we really need 
> > > > > > > > >a scope. Could we use a trait like we have for "IsolatedFromAbove" to
> > > > > > > > 
> > > > > > > > Tying it to the scope of the closest surrounding op with a FunctionLike trait (or IsolatedFromAbove) is almost fine, but ultimately not perfect, since imperative function-like ops such as std.execute_region don't carry the FunctionLike trait, and we may want to imply freeing at that scope. 
> > > > > > > > 
> > > > > > > > The 'a' suffix in alloca is for automatic freeing - it can't be explicitly freed using another op.
> > > > > > > 
> > > > > > > Why not? I would not constrain the behavior of every possible op based on a one-letter suffix of another op... "Automatic" does not mean it cannot be connected to another op. In fact, in MLIR, it will likely be connected to another op because functions are ops.
> > > > > > > 
> > > > > > > > Tying it to a scope like that of a closest surrounding op with a function like trait (or isolatedFromAbove) is almost fine, but ultimately not perfect since imperative function like ops like std.execute_region don't have function like traits, and we may want to imply freeing at that scope.
> > > > > > > 
> > > > > > > IIUC, the idea is to introduce a new trait `AutomaticAllocationScope` that is orthogonal to `FunctionLike`. We can then make, e.g., `std.func`, `llvm.func` and `std.execute_region` have this trait, and let other ops opt-in to the automatic allocation/deallocation behavior. This would be my ideal solution, but I hesitated to push it since it may involve larger changes to this patch, trying to find a simpler solution like attaching to the function trait first. Now that the patch has landed, I think introducing a separate trait is the right thing to do.
> > > > > > >mean it cannot be connected to another op. In fact, in MLIR, it will 
> > > > > > >likely be connected to another op because functions are ops.
> > > > > > 
> > > > > > "Automatically" there implies it has to be freed automatically, not by another op. The moment you want to free it explicitly, you'll have to rewrite it to something else - e.g., an alloc/dealloc pair.
> > > > > > 
> > > > > > >like attaching to the function trait first. Now that the patch has 
> > > > > > >landed, I think introducing a separate trait is the right thing to do.
> > > > > > 
> > > > > > Strictly speaking, I also feel adding another trait is the right option. But for now and to keep it lightweight until we have such use cases, 'freed when the closest surrounding op with FunctionLike trait returns' is a good approximation for 'stack frame being discarded'.
> > > > > > 
> > > > > > 
> > > > > > Strictly speaking, I also feel adding another trait is the right option. But for now and to keep it lightweight until we have such use cases, 'freed when the closest surrounding op with FunctionLike trait returns' is a good approximation for 'stack frame being discarded'.
> > > > > 
> > > > > Let's have it as "freed when the closest surrounding op with the FunctionLike trait has control transferred back from its body"; this avoids the potential interpretation that a FunctionLike op must also terminate with "std.return", as well as the weird-sounding "op <..> returns". We may also want to add a verifier check that such an op exists. 
> > > > I am not convinced that the "FunctionLike" trait is the right one here; an op like `gpu.launch`, for example, would create a new scope in my expectation. What do you think about the trait Uday proposed above, `AutomaticAllocationScope`?
> > > > 
> > > I proposed it :) So I am obviously in favor of it.
> > > 
> > > I think we can start by associating the deallocation with `FunctionLike`, it's a simple documentation+verifier change that we can land fast, avoiding both contentious code upstream and a later rollback. Then we can implement `AutomaticAllocationScope` and let ops opt in (e.g., I'd expect @herhut to decide whether we want `alloca`s in GPU at all).
> > > I think we can start by associating the deallocation with FunctionLike, 
> > 
> > I'm afraid this will lead to wrong assumptions; I'd rather have code written against the right trait from the beginning.
> It is clear that using a new trait for scoped allocation is the accurate approach here; FunctionLike will always imply a new scope for stack allocation. Introducing the trait should be straightforward - it's okay to update the description when we introduce the trait. 
Are you adding the trait? I think this should be done now: I wouldn't want code that starts checking for "FunctionLike" where it should check for the "AutomaticAllocationScope" trait.
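(Editor's note: to make the proposed trait concrete, a hypothetical sketch in the spirit of the thread's examples; the ops shown and the `...` elisions are illustrative, not code from the patch. If `gpu.launch` opted in to an `AutomaticAllocationScope` trait, the alloca below would be freed when the launch region transfers control back, rather than when the enclosing function returns.)

```
std.func @caller() {
  gpu.launch ... {
    // Freed at the end of the launch region if gpu.launch carries
    // AutomaticAllocationScope; otherwise when @caller returns.
    %buf = alloca() : memref<16xf32>
    ...
  }
  return
}
```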


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D76602/new/

https://reviews.llvm.org/D76602




