[PATCH] D70897: [Matrix] Add forward shape propagation and first shape aware lowerings.

Florian Hahn via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Sun Dec 8 07:01:24 PST 2019


fhahn marked an inline comment as done.
fhahn added inline comments.


================
Comment at: llvm/lib/Transforms/Scalar/LowerMatrixIntrinsics.cpp:365
+        if (OpShape != ShapeMap.end())
+          setShapeInfo(Inst, OpShape->second);
+        continue;
----------------
LuoYuanke wrote:
> fhahn wrote:
> > LuoYuanke wrote:
> > > It seems only the store instruction is propagated with the shape information. Why? Take the pseudo code below for example. Are v2 and v3 propagated with the shape information?
> > > ```
> > > v1 = matrix_columnwise_load(..., m, n)
> > > v2 = max(v1, 0)
> > > v3 = v1 / v2
> > > store v3, ptr
> > > ```
> > This patch mostly adds the infrastructure for the propagation and only uses it for store instructions. So in the example, the shape is not propagated by this patch.
> > 
> > Additional support for loads (D70900), binary operators (D70898) and back-propagation (D70899) is added in follow-up commits, to make reviewing them more manageable. The whole patch series is linked in Phabricator (see the 'stack' section).
> > 
> > Please note that we could propagate shape information to more instructions, e.g. phis or selects. That can be added as a follow-up as well; it is just a matter of priorities (we found loads/stores/binary operators to be by far the most common operations in matrix expressions). Any help with covering more cases would be very welcome :)
> Thank you for the reply. Do you propagate the shape information recursively? If the matrix is stored to memory, does the propagation break? Can we get the shape information when reloading the matrix from memory?
> 
> ```
> v1 = matrix_multiply(..., m, n, k)
> store v1, ptr
> v2 = load ptr
> ```
> How do we pass the shape information across functions and return a shape from a function? Do you plan to have matrix as a first-class type?
> 
> ```
> v1 = matrix_multiply(..., m, n, k)
> v2 = call foo(v1)
> ```
> Thank you for the reply. Do you propagate the shape information recursively?

It is propagated iteratively: once we have propagated shape information to an instruction, we add its users to the worklist. A later patch (D70899) adds back-propagation as well, and D70901 implements iteration until no new shape information can be discovered.
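
To make that concrete, here is a minimal self-contained sketch of the worklist scheme. The types and the function name are made-up stand-ins for illustration (the real pass works on llvm::Instruction and keeps the shapes in a ShapeMap, as in the hunk quoted above):

```
#include <deque>
#include <unordered_map>
#include <vector>

// Hypothetical stand-ins for LLVM's instruction/value types.
struct Inst {
  std::vector<Inst *> Operands;
  std::vector<Inst *> Users;
  bool SupportsShape = false; // only stores in this patch; loads/binops later
};

struct ShapeInfo {
  unsigned Rows = 0, Columns = 0;
};

// Forward-propagate shapes: the map is seeded at the matrix intrinsics, and
// shape information flows from each instruction to its users via a worklist.
void propagateShapeForward(std::deque<Inst *> Worklist,
                           std::unordered_map<Inst *, ShapeInfo> &ShapeMap) {
  while (!Worklist.empty()) {
    Inst *I = Worklist.front();
    Worklist.pop_front();
    if (!I->SupportsShape)
      continue;
    for (Inst *Op : I->Operands) {
      auto OpShape = ShapeMap.find(Op);
      if (OpShape == ShapeMap.end())
        continue;
      // First time we learn I's shape: record it and revisit I's users, so
      // the information keeps flowing until nothing new is discovered.
      if (ShapeMap.insert({I, OpShape->second}).second)
        for (Inst *User : I->Users)
          Worklist.push_back(User);
      break;
    }
  }
}
```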

> If the matrix is stored to memory, does the propagation break? Can we get the shape information when reloading the matrix from memory?

Currently yes, we do not propagate through memory instructions. Simple cases like the one above should not really show up in practice, as such loads should be promoted to use the stored value directly. We could handle more involved cases by using MemorySSA and/or additional alias analysis. That is currently not a high priority for us, but we would be happy to collaborate on it as well.
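
For the more involved cases, the reasoning would look roughly like the following. This is purely illustrative and not part of the patch series; it reuses the hypothetical Inst/ShapeInfo stand-ins from the sketch above with a made-up MemInst, and a real implementation would have to prove via MemorySSA/alias analysis that no other write clobbers the location in between:

```
// Illustrative store-to-load shape forwarding over a straight-line block,
// reusing the Inst/ShapeInfo stand-ins from the previous sketch.
struct MemInst : Inst {
  Inst *Ptr = nullptr;       // address operand
  Inst *StoredVal = nullptr; // non-null for stores, null for loads
};

void forwardShapeThroughMemory(
    const std::vector<MemInst *> &Block,
    std::unordered_map<Inst *, ShapeInfo> &ShapeMap) {
  // Track the last value stored to each pointer. NOTE: this assumes no
  // aliasing writes in between; real code would consult MemorySSA.
  std::unordered_map<Inst *, Inst *> LastStore;
  for (MemInst *I : Block) {
    if (I->StoredVal) {
      LastStore[I->Ptr] = I->StoredVal;
      continue;
    }
    auto St = LastStore.find(I->Ptr);
    if (St == LastStore.end())
      continue;
    auto Shape = ShapeMap.find(St->second);
    if (Shape != ShapeMap.end())
      ShapeMap[I] = Shape->second; // the reload has the stored value's shape
  }
}
```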

> How do we pass the shape information across functions and return a shape from a function? Do you plan to have matrix as a first-class type?

Currently we do not propagate shape information across function boundaries, and we do not plan on proposing a dedicated matrix type. The original proposal was centered around a dedicated type, but it was decided to go with a more lightweight solution and to potentially revisit the matrix type once there is a strong need.

For propagating across function boundaries, one way to go about it would be to turn the lowering into a module pass.
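
As a very rough idea of what that could look like with the new pass manager (the pass name and the commented steps are hypothetical, not something this series implements):

```
#include "llvm/IR/Module.h"
#include "llvm/IR/PassManager.h"

using namespace llvm;

// Hypothetical module-pass variant: seeing the whole module at once would
// allow shape information discovered in one function to be matched up with
// its call sites in other functions.
struct LowerMatrixIntrinsicsModulePass
    : PassInfoMixin<LowerMatrixIntrinsicsModulePass> {
  PreservedAnalyses run(Module &M, ModuleAnalysisManager &AM) {
    bool Changed = false;
    for (Function &F : M) {
      if (F.isDeclaration())
        continue;
      // 1. Propagate shapes within F, as the function pass does today.
      // 2. Record shapes of matrix-typed arguments and return values.
      // 3. Re-run on callers/callees until no new shapes are discovered.
    }
    return Changed ? PreservedAnalyses::none() : PreservedAnalyses::all();
  }
};
```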


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D70897/new/

https://reviews.llvm.org/D70897
