[Mlir-commits] [mlir] [mlir][sparse] Populate lvlToDim (PR #68937)
Yinying Li
llvmlistbot at llvm.org
Mon Oct 16 14:06:57 PDT 2023
================
@@ -749,6 +748,71 @@ mlir::sparse_tensor::getSparseTensorEncoding(Type type) {
   return nullptr;
 }
+AffineMap mlir::sparse_tensor::inferLvlToDim(AffineMap dimToLvl,
+                                             MLIRContext *context) {
+  auto map = static_cast<AffineMap>(dimToLvl);
+  AffineMap lvlToDim;
+  // TODO: support ELL instead of returning an empty lvlToDim.
+  if (!map || map.getNumSymbols() != 0) {
+    lvlToDim = AffineMap();
+  } else if (map.isPermutation()) {
+    lvlToDim = inversePermutation(map);
+  } else {
+    lvlToDim = inverseBlockSparsity(map, context);
+  }
+  return lvlToDim;
+}
+
+AffineMap mlir::sparse_tensor::inverseBlockSparsity(AffineMap dimToLvl,
+                                                    MLIRContext *context) {
+  SmallVector<AffineExpr> lvlExprs;
+  auto numLvls = dimToLvl.getNumResults();
+  lvlExprs.reserve(numLvls);
+  // lvlExprComponents stores information about the floordiv and mod operations
+  // applied to the same dimension, so as to build the lvlToDim map.
+  // Map key is the position of the dimension in dimToLvl.
+  // Map value is a SmallVector that contains lvl var for floordiv, multiplier,
+  // lvl var for mod in dimToLvl.
+  // For example, for il = i floordiv 2 and ii = i mod 2, the SmallVector
+  // would be [il, 2, ii]. It could be used to build the AffineExpr
+  // i = il * 2 + ii in lvlToDim.
+  std::map<unsigned, SmallVector<AffineExpr, 3>> lvlExprComponents;
----------------
yinying-lisa-li wrote:
I tried the vector way, but it's less convenient than a map for building the lvlExprs in the for loop. With a map, a dimension that isn't split simply never gets an entry, so the loop just skips it. With a vector, we would have to size it up front, so the loop can't be skipped, and the resizing gets awkward for cases like the one below, where the needed size is 1 but matches neither the number of dimensions nor the number of results (a rough sketch of what I mean follows the example):
```
#NV_24 = #sparse_tensor.encoding<{
  map = ( i, j ) ->
  ( i : dense,
    j floordiv 4 : dense,
    j mod 4 : block2_4
  )
}>
```
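Roughly, what I mean is something like this (a simplified sketch, not the exact code in this PR; the helper name `collectComponents` is made up):
```cpp
#include <map>
#include "mlir/IR/AffineExpr.h"
#include "mlir/IR/AffineMap.h"
#include "llvm/ADT/SmallVector.h"

using namespace mlir;

// Collect, per split dimension, [lvl var for floordiv, multiplier, lvl var
// for mod]. A dimension that is only permuted never creates a map entry,
// so a later reconstruction loop over the map skips it automatically.
// Assumes block-sparsity-style maps where the LHS of floordiv/mod is a
// plain dimension.
static std::map<unsigned, SmallVector<AffineExpr, 3>>
collectComponents(AffineMap dimToLvl, MLIRContext *ctx) {
  std::map<unsigned, SmallVector<AffineExpr, 3>> comps;
  for (unsigned l = 0, e = dimToLvl.getNumResults(); l < e; ++l) {
    AffineExpr expr = dimToLvl.getResult(l);
    auto binOp = expr.dyn_cast<AffineBinaryOpExpr>();
    if (!binOp)
      continue; // plain dim: covered by the permutation part, no entry.
    unsigned d = binOp.getLHS().cast<AffineDimExpr>().getPosition();
    auto &vec = comps[d];
    if (vec.size() != 3)
      vec.resize(3); // slots: floordiv lvl var, multiplier, mod lvl var.
    if (expr.getKind() == AffineExprKind::FloorDiv) {
      vec[0] = getAffineDimExpr(l, ctx);
      vec[1] = binOp.getRHS();
    } else if (expr.getKind() == AffineExprKind::Mod) {
      vec[2] = getAffineDimExpr(l, ctx);
    }
  }
  return comps;
}
```
With a vector instead of a map, the `#NV_24` example above would conceptually need just one entry (only `j` is split), a size that is neither the number of dimensions nor the number of results.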
The same goes for tuple vs. SmallVector: with SmallVector the assertion is simply `components.second.size() == 3`, whereas with a tuple we would have to assert that each of the three elements is non-null. I also personally dislike C++'s cumbersome way of updating a tuple: `std::get<1>(components) = binOp.getRHS()`. Since, based on our previous conversation, you don't have a strong preference, I'll keep SmallVector. ;)
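And just to illustrate the ergonomics point with hypothetical helpers (not code from this PR):
```cpp
#include <cassert>
#include <tuple>
#include "mlir/IR/AffineExpr.h"
#include "llvm/ADT/SmallVector.h"

using namespace mlir;

// SmallVector flavor: slots are [lvl var for floordiv, multiplier, lvl var
// for mod]; one size assertion and plain indexing.
static AffineExpr rebuildFromVector(const SmallVector<AffineExpr, 3> &c) {
  assert(c.size() == 3 && "expected floordiv var, multiplier, mod var");
  return c[0] * c[1] + c[2]; // i = il * multiplier + ii
}

// Tuple flavor: every access goes through std::get, and completeness means
// asserting each slot individually.
static AffineExpr
rebuildFromTuple(const std::tuple<AffineExpr, AffineExpr, AffineExpr> &c) {
  assert(std::get<0>(c) && std::get<1>(c) && std::get<2>(c));
  return std::get<0>(c) * std::get<1>(c) + std::get<2>(c);
}
```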
https://github.com/llvm/llvm-project/pull/68937