[all-commits] [llvm/llvm-project] 70633a: [mlir][sparse] first general insertion implementat...

Aart Bik via All-commits all-commits at lists.llvm.org
Tue Nov 8 13:10:21 PST 2022


  Branch: refs/heads/main
  Home:   https://github.com/llvm/llvm-project
  Commit: 70633a8d55a543eff892cc3316eaa3605d084637
      https://github.com/llvm/llvm-project/commit/70633a8d55a543eff892cc3316eaa3605d084637
  Author: Aart Bik <ajcbik at google.com>
  Date:   2022-11-08 (Tue, 08 Nov 2022)

  Changed paths:
    M mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorCodegen.cpp
    M mlir/test/Dialect/SparseTensor/codegen.mlir
    M mlir/test/Dialect/SparseTensor/scf_1_N_conversion.mlir
    M mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_insert_1d.mlir
    A mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_insert_2d.mlir

  Log Message:
  -----------
  [mlir][sparse] first general insertion implementation with pure codegen

This revision generalizes lowering the sparse_tensor.insert op into actual code that directly operates on the memrefs of a sparse storage scheme. The current insertion strategy no longer relies on a cursor, which introduces some testing overhead for each insertion (but the cost is still proportional to the rank, as before). Over time, we can optimize the generated code, but this version enables us to finish the effort to migrate from the support library to actual codegen.
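For context, a minimal sketch of the op being lowered, written against the dialect syntax roughly as of this commit (the encoding attribute, the coordinate names, and the hasInserts finalization shown here are illustrative assumptions and may differ in other versions):

  #SparseVector = #sparse_tensor.encoding<{ dimLevelType = [ "compressed" ] }>

  // Insert a value at coordinate %i; with this revision the op is rewritten
  // into loads/stores on the memrefs of the sparse storage scheme instead of
  // a call into the runtime support library.
  %1 = sparse_tensor.insert %f0 into %0[%i] : tensor<1024xf32, #SparseVector>

  // Finalize the pending inserts before the tensor is used elsewhere.
  %2 = sparse_tensor.load %1 hasInserts : tensor<1024xf32, #SparseVector>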

Things to do:
(1) carefully deal with the (un)ordered and (non)unique dimension-level properties
(2) omit the overhead when it is not needed
(3) optimize and specialize the generated code
(4) try to avoid the pointer "cleanup" (at HasInserts), and make sure the storage scheme is consistent at every insertion point (so that the tensor can "escape" without concerns).

Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D137457



