[Mlir-commits] [mlir] [mlir][sparse] complete migration to dim2lvl/lvl2dim in library (PR #69268)

Yinying Li llvmlistbot at llvm.org
Tue Oct 17 09:17:20 PDT 2023


================
@@ -265,39 +240,27 @@ class SparseTensorStorage final : public SparseTensorStorageBase {
                       const DimLevelType *lvlTypes, const uint64_t *dim2lvl,
                       const uint64_t *lvl2dim, const intptr_t *lvlBufs);
 
-  /// Allocates a new empty sparse tensor. The preconditions/assertions
-  /// are as per the `SparseTensorStorageBase` ctor; which is to say,
-  /// the `dimSizes` and `lvlSizes` must both be "sizes" not "shapes",
-  /// since there's nowhere to reconstruct dynamic sizes from.
+  /// Allocates a new empty sparse tensor.
   static SparseTensorStorage<P, C, V> *
   newEmpty(uint64_t dimRank, const uint64_t *dimSizes, uint64_t lvlRank,
            const uint64_t *lvlSizes, const DimLevelType *lvlTypes,
            const uint64_t *dim2lvl, const uint64_t *lvl2dim, bool forwarding);
 
   /// Allocates a new sparse tensor and initializes it from the given COO.
-  /// The preconditions are as per the `SparseTensorStorageBase` ctor
-  /// (where we define `lvlSizes = lvlCOO.getDimSizes().data()`), but
-  /// using the following assertions in lieu of the base ctor's assertions:
-  //
-  // TODO: The ability to reconstruct dynamic dimensions-sizes does not
-  // easily generalize to arbitrary `lvl2dim` mappings.  When compiling
-  // MLIR programs to use this library, we should be able to generate
-  // code for effectively computing the reconstruction, but it's not clear
-  // that there's a feasible way to do so from within the library itself.
-  // Therefore, when we functionalize the `lvl2dim` mapping we'll have
-  // to update the type/preconditions of this factory too.
   static SparseTensorStorage<P, C, V> *
-  newFromCOO(uint64_t dimRank, const uint64_t *dimShape, uint64_t lvlRank,
-             const DimLevelType *lvlTypes, const uint64_t *dim2lvl,
-             const uint64_t *lvl2dim, SparseTensorCOO<V> &lvlCOO);
+  newFromCOO(uint64_t dimRank, const uint64_t *dimSizes, uint64_t lvlRank,
+             const uint64_t *lvlSizes, const DimLevelType *lvlTypes,
+             const uint64_t *dim2lvl, const uint64_t *lvl2dim,
+             SparseTensorCOO<V> &lvlCOO);
 
   /// Allocates a new sparse tensor and initializes it with the data stored in
   /// the level buffers directly.
-  static SparseTensorStorage<P, C, V> *packFromLvlBuffers(
-      uint64_t dimRank, const uint64_t *dimShape, uint64_t lvlRank,
-      const uint64_t *lvlSizes, const DimLevelType *lvlTypes,
-      const uint64_t *src2lvl, // FIXME: dim2lvl
-      const uint64_t *lvl2dim, uint64_t srcRank, const intptr_t *buffers);
+  static SparseTensorStorage<P, C, V> *
+  packFromLvlBuffers(uint64_t dimRank, const uint64_t *dimSizes,
----------------
yinying-lisa-li wrote:

Are we switching from pack to assemble in all cases?

https://github.com/llvm/llvm-project/pull/69268
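
As a point of reference for the new signature quoted above, here is a
minimal sketch of calling newFromCOO for a 3x4 CSR matrix whose levels
coincide with its dimensions, so both mappings are the identity
permutation. The newFromCOO signature follows the diff; the DimLevelType
spellings and the SparseTensorCOO<V> interface used below are assumptions
about the surrounding library, not part of this patch.

    // Sizes in dimension space and in level space; identical here
    // because the dim-to-lvl mapping is the identity.
    const uint64_t dimSizes[] = {3, 4};
    const uint64_t lvlSizes[] = {3, 4};
    // CSR: dense outer level, compressed inner level (spellings assumed).
    const DimLevelType lvlTypes[] = {DimLevelType::Dense,
                                     DimLevelType::Compressed};
    const uint64_t dim2lvl[] = {0, 1};  // identity mapping
    const uint64_t lvl2dim[] = {0, 1};  // inverse mapping, also identity
    // COO with coordinates already in level order (ctor/add assumed).
    SparseTensorCOO<double> lvlCOO(std::vector<uint64_t>{3, 4});
    lvlCOO.add({0, 2}, 7.0);            // one nonzero at level coords (0, 2)
    auto *tensor =
        SparseTensorStorage<uint64_t, uint64_t, double>::newFromCOO(
            /*dimRank=*/2, dimSizes, /*lvlRank=*/2, lvlSizes, lvlTypes,
            dim2lvl, lvl2dim, lvlCOO);

A non-identity layout (e.g. a CSC-style ordering) would use the same call
with permuted lvlSizes and mapping arrays, which is exactly the information
the migrated factories now take as explicit dim2lvl/lvl2dim arguments
rather than reconstructing it internally.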


More information about the Mlir-commits mailing list