[Mlir-commits] [mlir] 110295e - [mlir][sparse] Moving lexOrder from SparseTensorCOO to Element
wren romano
llvmlistbot at llvm.org
Tue Mar 22 13:07:12 PDT 2022
Author: wren romano
Date: 2022-03-22T13:07:05-07:00
New Revision: 110295ebb76150887a5a83733d7ddcf8506da9ad
URL: https://github.com/llvm/llvm-project/commit/110295ebb76150887a5a83733d7ddcf8506da9ad
DIFF: https://github.com/llvm/llvm-project/commit/110295ebb76150887a5a83733d7ddcf8506da9ad.diff
LOG: [mlir][sparse] Moving lexOrder from SparseTensorCOO to Element
This is the more logical place for the function to live. If/when we factor out a separate class for just the `Coordinates` themselves, then the definition should be moved to `Coordinates::lexOrder` (and `Element::lexOrder` would become a thin wrapper delegating to that function).
This is (tangentially) work towards fixing: https://github.com/llvm/llvm-project/issues/51652
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D122057
Added:
Modified:
mlir/lib/ExecutionEngine/SparseTensorUtils.cpp
Removed:
################################################################################
diff --git a/mlir/lib/ExecutionEngine/SparseTensorUtils.cpp b/mlir/lib/ExecutionEngine/SparseTensorUtils.cpp
index b307104908646..ab145fbebf476 100644
--- a/mlir/lib/ExecutionEngine/SparseTensorUtils.cpp
+++ b/mlir/lib/ExecutionEngine/SparseTensorUtils.cpp
@@ -82,6 +82,17 @@ struct Element {
Element(const std::vector<uint64_t> &ind, V val) : indices(ind), value(val){};
std::vector<uint64_t> indices;
V value;
+ /// Returns true if indices of e1 < indices of e2.
+ static bool lexOrder(const Element<V> &e1, const Element<V> &e2) {
+ uint64_t rank = e1.indices.size();
+ assert(rank == e2.indices.size());
+ for (uint64_t r = 0; r < rank; r++) {
+ if (e1.indices[r] == e2.indices[r])
+ continue;
+ return e1.indices[r] < e2.indices[r];
+ }
+ return false;
+ }
};
/// A memory-resident sparse tensor in coordinate scheme (collection of
@@ -111,7 +122,7 @@ struct SparseTensorCOO {
assert(!iteratorLocked && "Attempt to sort() after startIterator()");
// TODO: we may want to cache an `isSorted` bit, to avoid
// unnecessary/redundant sorting.
- std::sort(elements.begin(), elements.end(), lexOrder);
+ std::sort(elements.begin(), elements.end(), Element<V>::lexOrder);
}
/// Returns rank.
uint64_t getRank() const { return sizes.size(); }
@@ -149,17 +160,6 @@ struct SparseTensorCOO {
}
private:
- /// Returns true if indices of e1 < indices of e2.
- static bool lexOrder(const Element<V> &e1, const Element<V> &e2) {
- uint64_t rank = e1.indices.size();
- assert(rank == e2.indices.size());
- for (uint64_t r = 0; r < rank; r++) {
- if (e1.indices[r] == e2.indices[r])
- continue;
- return e1.indices[r] < e2.indices[r];
- }
- return false;
- }
const std::vector<uint64_t> sizes; // per-dimension sizes
std::vector<Element<V>> elements;
bool iteratorLocked;