[Mlir-commits] [mlir] ab47418 - [mlir][bufferize] Merge tensor-constant-bufferize into arith-bufferize

Matthias Springer llvmlistbot at llvm.org
Sun Jan 30 04:38:21 PST 2022


Author: Matthias Springer
Date: 2022-01-30T21:37:48+09:00
New Revision: ab47418df67010c6cf97b5f9797bf65e855cef3f

URL: https://github.com/llvm/llvm-project/commit/ab47418df67010c6cf97b5f9797bf65e855cef3f
DIFF: https://github.com/llvm/llvm-project/commit/ab47418df67010c6cf97b5f9797bf65e855cef3f.diff

LOG: [mlir][bufferize] Merge tensor-constant-bufferize into arith-bufferize

The bufferization of arith.constant ops is also switched over to BufferizableOpInterface-based bufferization, and the old implementation is deleted. Both implementations relied on the GlobalCreator helper, which is now reduced to a free function named `getGlobalFor`.

GlobalCreator no longer maintains a map of previously created globals to avoid duplicate globals for the same constant. Instead, `getGlobalFor` scans the module for an existing global with the same constant value (and alignment) and reuses it.

For compatibility reasons, it is still possible to create a pass that bufferizes only `arith.constant` ops (`createConstantBufferizePass`). This pass can be deleted once all users have switched over to One-Shot bufferization.

Differential Revision: https://reviews.llvm.org/D118483
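
The effect of the merged pass on a tensor constant can be sketched as follows (this mirrors the `@basic` case in the bufferize.mlir test added below; the exact global name is chosen by the pass):

```mlir
// Before -arith-bufferize:
func @basic() -> tensor<3x4xf32> {
  %0 = arith.constant dense<7.0> : tensor<3x4xf32>
  return %0 : tensor<3x4xf32>
}

// After: a module-level global is created (reused for duplicate constants)
// and the tensor is rematerialized from it.
memref.global "private" constant @__constant_3x4xf32 : memref<3x4xf32> = dense<7.000000e+00>
func @basic() -> tensor<3x4xf32> {
  %0 = memref.get_global @__constant_3x4xf32 : memref<3x4xf32>
  %1 = bufferization.to_tensor %0 : memref<3x4xf32>
  return %1 : tensor<3x4xf32>
}
```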

Added: 
    

Modified: 
    mlir/docs/Bufferization.md
    mlir/include/mlir/Dialect/Arithmetic/Transforms/Passes.h
    mlir/include/mlir/Dialect/Arithmetic/Transforms/Passes.td
    mlir/include/mlir/Dialect/Bufferization/IR/BufferizableOpInterface.h
    mlir/include/mlir/Dialect/Bufferization/Transforms/BufferUtils.h
    mlir/include/mlir/Dialect/StandardOps/Transforms/Passes.h
    mlir/include/mlir/Dialect/StandardOps/Transforms/Passes.td
    mlir/lib/Dialect/Arithmetic/Transforms/BufferizableOpInterfaceImpl.cpp
    mlir/lib/Dialect/Arithmetic/Transforms/Bufferize.cpp
    mlir/lib/Dialect/Bufferization/IR/BufferizableOpInterface.cpp
    mlir/lib/Dialect/Bufferization/Transforms/BufferUtils.cpp
    mlir/lib/Dialect/Linalg/Transforms/ComprehensiveBufferizePass.cpp
    mlir/lib/Dialect/SparseTensor/Pipelines/CMakeLists.txt
    mlir/lib/Dialect/SparseTensor/Pipelines/SparseTensorPipelines.cpp
    mlir/lib/Dialect/StandardOps/Transforms/CMakeLists.txt
    mlir/test/Dialect/Arithmetic/bufferize.mlir
    mlir/test/Dialect/SparseTensor/sparse_lower.mlir
    mlir/test/Dialect/SparseTensor/sparse_lower_col.mlir
    mlir/test/Dialect/SparseTensor/sparse_lower_inplace.mlir
    mlir/test/Integration/Dialect/Linalg/CPU/test-collapse-tensor.mlir
    mlir/test/Integration/Dialect/Linalg/CPU/test-elementwise.mlir
    mlir/test/Integration/Dialect/Linalg/CPU/test-expand-tensor.mlir
    mlir/test/Integration/Dialect/Linalg/CPU/test-padtensor.mlir
    mlir/test/Integration/Dialect/Linalg/CPU/test-subtensor-insert-multiple-uses.mlir
    mlir/test/Integration/Dialect/Linalg/CPU/test-subtensor-insert.mlir
    mlir/test/Integration/Dialect/Linalg/CPU/test-tensor-e2e.mlir
    mlir/test/Integration/Dialect/Linalg/CPU/test-tensor-matmul.mlir
    mlir/test/Integration/Dialect/SparseTensor/python/test_SDDMM.py
    mlir/test/Integration/Dialect/SparseTensor/python/test_SpMM.py
    mlir/test/Integration/Dialect/SparseTensor/python/test_elementwise_add_sparse_output.py
    mlir/test/Integration/Dialect/SparseTensor/python/test_output.py
    mlir/test/Integration/Dialect/SparseTensor/python/test_stress.py
    mlir/test/Integration/Dialect/SparseTensor/taco/tools/mlir_pytaco.py
    mlir/test/lib/Dialect/Linalg/TestComprehensiveBufferize.cpp

Removed: 
    mlir/lib/Dialect/StandardOps/Transforms/TensorConstantBufferize.cpp
    mlir/test/Dialect/Standard/tensor-constant-bufferize.mlir


################################################################################
diff --git a/mlir/docs/Bufferization.md b/mlir/docs/Bufferization.md
index f1ccff783e606..b7e9665acd7d5 100644
--- a/mlir/docs/Bufferization.md
+++ b/mlir/docs/Bufferization.md
@@ -97,10 +97,9 @@ The code, slightly simplified and annotated, is reproduced here:
 
 Looking first at the partial bufferization passes, we see that there are a
 sequence of `FuncOp` passes (which run in parallel on functions). These function
-passes are bracketed by `tensor-constant-bufferize` and `func-bufferize`, which
-are module passes (and thus serialize the parallel compilation process). These
-two passes must be module passes because they make changes to the top-level
-module.
+passes are bracketed by `arith-bufferize` and `func-bufferize`, which are module
+passes (and thus serialize the parallel compilation process). These two passes
+must be module passes because they make changes to the top-level module.
 
 The bulk of the bufferization work is done by the function passes. Most of these
 passes are provided as part of the upstream MLIR distribution and bufferize
@@ -235,7 +234,7 @@ which helps with this in general.
     -   This is an example of a pass that is not split along dialect
         subdivisions.
 
--   `tensor-constant-bufferize`
+-   `arith-bufferize`
     ([code](https://github.com/llvm/llvm-project/blob/bc8acf2ce8ad6e8c9b1d97b2e02d3f4ad26e1d9d/mlir/lib/Dialect/StandardOps/Transforms/TensorConstantBufferize.cpp#L1),
     [test](https://github.com/llvm/llvm-project/blob/bc8acf2ce8ad6e8c9b1d97b2e02d3f4ad26e1d9d/mlir/test/Dialect/Standard/tensor-constant-bufferize.mlir#L1))
 

diff --git a/mlir/include/mlir/Dialect/Arithmetic/Transforms/Passes.h b/mlir/include/mlir/Dialect/Arithmetic/Transforms/Passes.h
index 5569a8347dad2..1acea57102dbd 100644
--- a/mlir/include/mlir/Dialect/Arithmetic/Transforms/Passes.h
+++ b/mlir/include/mlir/Dialect/Arithmetic/Transforms/Passes.h
@@ -17,6 +17,9 @@ namespace arith {
 /// Create a pass to bufferize Arithmetic ops.
 std::unique_ptr<Pass> createArithmeticBufferizePass();
 
+/// Create a pass to bufferize arith.constant ops.
+std::unique_ptr<Pass> createConstantBufferizePass(uint64_t alignment = 0);
+
 /// Add patterns to expand Arithmetic ops for LLVM lowering.
 void populateArithmeticExpandOpsPatterns(RewritePatternSet &patterns);
 

diff --git a/mlir/include/mlir/Dialect/Arithmetic/Transforms/Passes.td b/mlir/include/mlir/Dialect/Arithmetic/Transforms/Passes.td
index bb30949208ab1..af46bfd14105a 100644
--- a/mlir/include/mlir/Dialect/Arithmetic/Transforms/Passes.td
+++ b/mlir/include/mlir/Dialect/Arithmetic/Transforms/Passes.td
@@ -11,9 +11,21 @@
 
 include "mlir/Pass/PassBase.td"
 
-def ArithmeticBufferize : Pass<"arith-bufferize", "FuncOp"> {
+def ArithmeticBufferize : Pass<"arith-bufferize", "ModuleOp"> {
   let summary = "Bufferize Arithmetic dialect ops.";
+  let description = [{
+    This pass bufferizes arith dialect ops.
+
+    This pass needs to be a module pass because it inserts memref.global
+    ops into the module, which cannot be done safely from a function pass due to
+    multi-threading. Most other bufferization passes can run in parallel at
+    function granularity.
+  }];
   let constructor = "mlir::arith::createArithmeticBufferizePass()";
+  let options = [
+    Option<"alignment", "alignment", "unsigned", /*default=*/"0",
+           "Create global memrefs with a specified alignment">,
+  ];
 }
 
 def ArithmeticExpandOps : Pass<"arith-expand", "FuncOp"> {

diff --git a/mlir/include/mlir/Dialect/Bufferization/IR/BufferizableOpInterface.h b/mlir/include/mlir/Dialect/Bufferization/IR/BufferizableOpInterface.h
index 534f664483be2..3176d6fa337bc 100644
--- a/mlir/include/mlir/Dialect/Bufferization/IR/BufferizableOpInterface.h
+++ b/mlir/include/mlir/Dialect/Bufferization/IR/BufferizableOpInterface.h
@@ -27,19 +27,21 @@ class FuncOp;
 
 namespace bufferization {
 
-// TODO: from some HW description.
-static constexpr int64_t kBufferAlignments = 128;
-
 class BufferizableOpInterface;
 struct BufferizationOptions;
 class BufferizationState;
 
 /// Options for ComprehensiveBufferize.
 struct BufferizationOptions {
-  using AllocationFn = std::function<FailureOr<Value>(OpBuilder &, Location,
-                                                      MemRefType, ValueRange)>;
+  /// Allocator function: Generate a memref allocation with the given type,
+  /// dynamic extents and alignment.
+  using AllocationFn = std::function<FailureOr<Value>(
+      OpBuilder &, Location, MemRefType, ValueRange, unsigned int)>;
+  /// Deallocator function: Deallocate a buffer that was allocated with
+  /// AllocatorFn.
   using DeallocationFn =
       std::function<LogicalResult(OpBuilder &, Location, Value)>;
+  /// Memcpy function: Generate a memcpy between two buffers.
   using MemCpyFn =
       std::function<LogicalResult(OpBuilder &, Location, Value, Value)>;
 
@@ -50,14 +52,13 @@ struct BufferizationOptions {
 
   /// Return `true` if the op is allowed to be bufferized.
   bool isOpAllowed(Operation *op) const {
-    if (!dialectFilter.hasValue())
+    if (!hasFilter)
       return true;
-    return dialectFilter->contains(op->getDialect()->getNamespace());
+    return dialectFilter.contains(op->getDialect()->getNamespace()) ||
+           operationFilter.contains(op->getName().getStringRef());
   }
 
-  /// Allow-list the given dialects in the dialect filter. Only ops from
-  /// allow-listed dialects will be bufferized. If no dialect is added, ops from
-  /// any dialect will be bufferized.
+  /// Allow the given dialects and activate the filter (`hasFilter`).
   template <typename... DialectTs>
   void addToDialectFilter() {
     // The following expands a call to addToDialectFilterImpl for each dialect
@@ -68,6 +69,13 @@ struct BufferizationOptions {
         0, (addToDialectFilterImpl<DialectTs>(), 0)...};
   }
 
+  /// Allow the given ops and activate the filter (`hasFilter`).
+  template <typename... OpTys> void addToOperationFilter() {
+    // FIXME: In c++17 this can be simplified by using 'fold expressions'.
+    (void)std::initializer_list<int>{0,
+                                     (addToOperationFilterImpl<OpTys>(), 0)...};
+  }
+
   /// Try to cast the given op to BufferizableOpInterface if the op is allow
   /// listed.
   BufferizableOpInterface dynCastBufferizableOp(Operation *op) const;
@@ -110,23 +118,36 @@ struct BufferizationOptions {
   /// For debugging only. Should be used together with `testAnalysisOnly`.
   bool printConflicts = false;
 
-  /// Only bufferize ops from dialects that are allowed-listed by the filter.
-  /// All other ops are ignored. This option controls the scope of partial
-  /// bufferization.
+  /// Buffer alignment for new memory allocations.
+  unsigned int bufferAlignment = 128;
+
+  /// If set to `true`, only ops that belong to a filtered dialect
+  /// (`dialectFilter`) and filtered ops (`operationFilter`) are processed. All
+  /// other ops are ignored. If set to `false`, all ops are bufferized (as long
+  /// as they implement BufferizableOpInterface).
   ///
-  /// Note: If no filter is specified, all ops are bufferized (as long as they
-  /// implement BufferizableOpInterface). If a filter is specified,
-  /// `allowUnknownOps` should be enabled. Otherwise, bufferization would fail
-  /// when encountering an op that is forbidden by the filter.
-  Optional<DenseSet<StringRef>> dialectFilter;
+  /// If a filter is specified, `allowUnknownOps` should be enabled. Otherwise,
+  /// bufferization would fail when encountering a non-filtered op.
+  bool hasFilter = false;
+
+  /// A set of allowed dialects.
+  DenseSet<StringRef> dialectFilter;
+
+  /// A set of allowed ops.
+  DenseSet<StringRef> operationFilter;
 
 private:
-  /// Allow-list a dialect in the dialect filter.
+  /// Allow a dialect.
   template <typename DialectT>
   void addToDialectFilterImpl() {
-    if (!dialectFilter.hasValue())
-      dialectFilter.emplace();
-    dialectFilter->insert(DialectT::getDialectNamespace());
+    hasFilter = true;
+    dialectFilter.insert(DialectT::getDialectNamespace());
+  }
+
+  /// Allow an op.
+  template <typename OpTy> void addToOperationFilterImpl() {
+    hasFilter = true;
+    operationFilter.insert(OpTy::getOperationName());
   }
 };
 

diff --git a/mlir/include/mlir/Dialect/Bufferization/Transforms/BufferUtils.h b/mlir/include/mlir/Dialect/Bufferization/Transforms/BufferUtils.h
index 681b94953f20f..3b6d57e682cd7 100644
--- a/mlir/include/mlir/Dialect/Bufferization/Transforms/BufferUtils.h
+++ b/mlir/include/mlir/Dialect/Bufferization/Transforms/BufferUtils.h
@@ -121,22 +121,12 @@ class BufferPlacementTransformationBase {
   Liveness liveness;
 };
 
-// Support class to create global ops for tensor-valued constants in the
-// program. Globals are created lazily at the top of the `moduleOp` with pretty
+// Create a global op for the given tensor-valued constant in the program.
+// Globals are created lazily at the top of the enclosing ModuleOp with pretty
 // names. Duplicates are avoided.
-class GlobalCreator {
-public:
-  GlobalCreator(ModuleOp module, unsigned alignment = 0)
-      : moduleOp(module), alignment(alignment) {}
-  memref::GlobalOp getGlobalFor(arith::ConstantOp constantOp);
+FailureOr<memref::GlobalOp> getGlobalFor(arith::ConstantOp constantOp,
+                                         uint64_t alignment);
 
-private:
-  ModuleOp moduleOp;
-  unsigned alignment;
-  // This could use memref::GlobalOp key but we avoid introducing a new
-  // dependence to the memref dialect for this.
-  DenseMap<Attribute, Operation *> globals;
-};
 } // namespace bufferization
 } // namespace mlir
 

diff --git a/mlir/include/mlir/Dialect/StandardOps/Transforms/Passes.h b/mlir/include/mlir/Dialect/StandardOps/Transforms/Passes.h
index dea605f1ae1a7..52bbea000d1f3 100644
--- a/mlir/include/mlir/Dialect/StandardOps/Transforms/Passes.h
+++ b/mlir/include/mlir/Dialect/StandardOps/Transforms/Passes.h
@@ -19,7 +19,6 @@
 namespace mlir {
 namespace bufferization {
 class BufferizeTypeConverter;
-class GlobalCreator;
 } // namespace bufferization
 
 class RewritePatternSet;
@@ -34,16 +33,6 @@ std::unique_ptr<Pass> createStdBufferizePass();
 /// Creates an instance of func bufferization pass.
 std::unique_ptr<Pass> createFuncBufferizePass();
 
-/// Add patterns to bufferize tensor constants into global memrefs to the given
-/// pattern list.
-void populateTensorConstantBufferizePatterns(
-    bufferization::GlobalCreator &globalCreator,
-    bufferization::BufferizeTypeConverter &typeConverter,
-    RewritePatternSet &patterns);
-
-/// Creates an instance of tensor constant bufferization pass.
-std::unique_ptr<Pass> createTensorConstantBufferizePass(unsigned alignment = 0);
-
 //===----------------------------------------------------------------------===//
 // Registration
 //===----------------------------------------------------------------------===//

diff --git a/mlir/include/mlir/Dialect/StandardOps/Transforms/Passes.td b/mlir/include/mlir/Dialect/StandardOps/Transforms/Passes.td
index 339c1b1194cce..6bd83938346e4 100644
--- a/mlir/include/mlir/Dialect/StandardOps/Transforms/Passes.td
+++ b/mlir/include/mlir/Dialect/StandardOps/Transforms/Passes.td
@@ -47,23 +47,4 @@ def FuncBufferize : Pass<"func-bufferize", "ModuleOp"> {
                            "memref::MemRefDialect"];
 }
 
-def TensorConstantBufferize : Pass<"tensor-constant-bufferize", "ModuleOp"> {
-  let summary = "Bufferize tensor constants.";
-  let description = [{
-    This pass bufferizes tensor constants.
-
-    This pass needs to be a module pass because it inserts memref.global
-    ops into the module, which cannot be done safely from a function pass due to
-    multi-threading. Most other bufferization passes can run in parallel at
-    function granularity.
-  }];
-  let constructor = "mlir::createTensorConstantBufferizePass()";
-  let dependentDialects = ["bufferization::BufferizationDialect",
-                           "memref::MemRefDialect"];
-  let options = [
-    Option<"alignment", "alignment", "unsigned", /*default=*/"0",
-           "Create global memrefs with a specified alignment">,
-  ];
-}
-
 #endif // MLIR_DIALECT_STANDARD_TRANSFORMS_PASSES

diff --git a/mlir/lib/Dialect/Arithmetic/Transforms/BufferizableOpInterfaceImpl.cpp b/mlir/lib/Dialect/Arithmetic/Transforms/BufferizableOpInterfaceImpl.cpp
index 00cc640327aad..6073ad9c7b361 100644
--- a/mlir/lib/Dialect/Arithmetic/Transforms/BufferizableOpInterfaceImpl.cpp
+++ b/mlir/lib/Dialect/Arithmetic/Transforms/BufferizableOpInterfaceImpl.cpp
@@ -39,8 +39,11 @@ struct ConstantOpInterface
 
     // Create global memory segment and replace tensor with memref pointing to
     // that memory segment.
-    GlobalCreator globalCreator(moduleOp);
-    auto globalMemref = globalCreator.getGlobalFor(constantOp);
+    FailureOr<memref::GlobalOp> globalOp =
+        getGlobalFor(constantOp, state.getOptions().bufferAlignment);
+    if (failed(globalOp))
+      return failure();
+    memref::GlobalOp globalMemref = globalOp.getValue();
     replaceOpWithNewBufferizedOp<memref::GetGlobalOp>(
         rewriter, op, globalMemref.type(), globalMemref.getName());
 

diff --git a/mlir/lib/Dialect/Arithmetic/Transforms/Bufferize.cpp b/mlir/lib/Dialect/Arithmetic/Transforms/Bufferize.cpp
index 2c8d2cb553ccb..1fafd255d60d3 100644
--- a/mlir/lib/Dialect/Arithmetic/Transforms/Bufferize.cpp
+++ b/mlir/lib/Dialect/Arithmetic/Transforms/Bufferize.cpp
@@ -22,10 +22,21 @@ namespace {
 /// Pass to bufferize Arithmetic ops.
 struct ArithmeticBufferizePass
     : public ArithmeticBufferizeBase<ArithmeticBufferizePass> {
+  ArithmeticBufferizePass(uint64_t alignment = 0, bool constantOpOnly = false)
+      : ArithmeticBufferizeBase<ArithmeticBufferizePass>(),
+        constantOpOnly(constantOpOnly) {
+    this->alignment = alignment;
+  }
+
   void runOnOperation() override {
     std::unique_ptr<BufferizationOptions> options =
         getPartialBufferizationOptions();
-    options->addToDialectFilter<arith::ArithmeticDialect>();
+    if (constantOpOnly) {
+      options->addToOperationFilter<arith::ConstantOp>();
+    } else {
+      options->addToDialectFilter<arith::ArithmeticDialect>();
+    }
+    options->bufferAlignment = alignment;
 
     if (failed(bufferizeOp(getOperation(), *options)))
       signalPassFailure();
@@ -36,9 +47,18 @@ struct ArithmeticBufferizePass
                     arith::ArithmeticDialect>();
     arith::registerBufferizableOpInterfaceExternalModels(registry);
   }
+
+private:
+  bool constantOpOnly;
 };
 } // namespace
 
 std::unique_ptr<Pass> mlir::arith::createArithmeticBufferizePass() {
   return std::make_unique<ArithmeticBufferizePass>();
 }
+
+std::unique_ptr<Pass>
+mlir::arith::createConstantBufferizePass(uint64_t alignment) {
+  return std::make_unique<ArithmeticBufferizePass>(alignment,
+                                                   /*constantOpOnly=*/true);
+}

diff --git a/mlir/lib/Dialect/Bufferization/IR/BufferizableOpInterface.cpp b/mlir/lib/Dialect/Bufferization/IR/BufferizableOpInterface.cpp
index 3af1c37594f96..869037598b310 100644
--- a/mlir/lib/Dialect/Bufferization/IR/BufferizableOpInterface.cpp
+++ b/mlir/lib/Dialect/Bufferization/IR/BufferizableOpInterface.cpp
@@ -464,11 +464,12 @@ bufferization::createAlloc(OpBuilder &b, Location loc, MemRefType type,
                            ValueRange dynShape,
                            const BufferizationOptions &options) {
   if (options.allocationFn)
-    return (*options.allocationFn)(b, loc, type, dynShape);
+    return (*options.allocationFn)(b, loc, type, dynShape,
+                                   options.bufferAlignment);
 
   // Default bufferallocation via AllocOp.
   Value allocated = b.create<memref::AllocOp>(
-      loc, type, dynShape, b.getI64IntegerAttr(kBufferAlignments));
+      loc, type, dynShape, b.getI64IntegerAttr(options.bufferAlignment));
   return allocated;
 }
 

diff --git a/mlir/lib/Dialect/Bufferization/Transforms/BufferUtils.cpp b/mlir/lib/Dialect/Bufferization/Transforms/BufferUtils.cpp
index a373a8dbe86b4..9e0c310d0d286 100644
--- a/mlir/lib/Dialect/Bufferization/Transforms/BufferUtils.cpp
+++ b/mlir/lib/Dialect/Bufferization/Transforms/BufferUtils.cpp
@@ -144,16 +144,27 @@ bool BufferPlacementTransformationBase::isLoop(Operation *op) {
 // BufferPlacementTransformationBase
 //===----------------------------------------------------------------------===//
 
-memref::GlobalOp GlobalCreator::getGlobalFor(arith::ConstantOp constantOp) {
+FailureOr<memref::GlobalOp>
+bufferization::getGlobalFor(arith::ConstantOp constantOp, uint64_t alignment) {
   auto type = constantOp.getType().cast<RankedTensorType>();
-
-  BufferizeTypeConverter typeConverter;
+  auto moduleOp = constantOp->getParentOfType<ModuleOp>();
+  if (!moduleOp)
+    return failure();
 
   // If we already have a global for this constant value, no need to do
   // anything else.
-  auto it = globals.find(constantOp.getValue());
-  if (it != globals.end())
-    return cast<memref::GlobalOp>(it->second);
+  for (Operation &op : moduleOp.getRegion().getOps()) {
+    auto globalOp = dyn_cast<memref::GlobalOp>(&op);
+    if (!globalOp)
+      continue;
+    if (!globalOp.initial_value().hasValue())
+      continue;
+    uint64_t opAlignment =
+        globalOp.alignment().hasValue() ? globalOp.alignment().getValue() : 0;
+    Attribute initialValue = globalOp.initial_value().getValue();
+    if (opAlignment == alignment && initialValue == constantOp.getValue())
+      return globalOp;
+  }
 
   // Create a builder without an insertion point. We will insert using the
   // symbol table to guarantee unique names.
@@ -171,6 +182,7 @@ memref::GlobalOp GlobalCreator::getGlobalFor(arith::ConstantOp constantOp) {
       alignment > 0 ? IntegerAttr::get(globalBuilder.getI64Type(), alignment)
                     : IntegerAttr();
 
+  BufferizeTypeConverter typeConverter;
   auto global = globalBuilder.create<memref::GlobalOp>(
       constantOp.getLoc(), (Twine("__constant_") + os.str()).str(),
       /*sym_visibility=*/globalBuilder.getStringAttr("private"),
@@ -182,6 +194,5 @@ memref::GlobalOp GlobalCreator::getGlobalFor(arith::ConstantOp constantOp) {
   // The symbol table inserts at the end of the module, but globals are a bit
   // nicer if they are at the beginning.
   global->moveBefore(&moduleOp.front());
-  globals[constantOp.getValue()] = global;
   return global;
 }

diff --git a/mlir/lib/Dialect/Linalg/Transforms/ComprehensiveBufferizePass.cpp b/mlir/lib/Dialect/Linalg/Transforms/ComprehensiveBufferizePass.cpp
index 151ef39d685dc..314daed4f4cef 100644
--- a/mlir/lib/Dialect/Linalg/Transforms/ComprehensiveBufferizePass.cpp
+++ b/mlir/lib/Dialect/Linalg/Transforms/ComprehensiveBufferizePass.cpp
@@ -71,9 +71,10 @@ static void applyEnablingTransformations(ModuleOp moduleOp) {
 
 static FailureOr<Value> allocationFnUsingAlloca(OpBuilder &b, Location loc,
                                                 MemRefType type,
-                                                ValueRange dynShape) {
+                                                ValueRange dynShape,
+                                                unsigned int bufferAlignment) {
   Value allocated = b.create<memref::AllocaOp>(
-      loc, type, dynShape, b.getI64IntegerAttr(kBufferAlignments));
+      loc, type, dynShape, b.getI64IntegerAttr(bufferAlignment));
   return allocated;
 }
 

diff --git a/mlir/lib/Dialect/SparseTensor/Pipelines/CMakeLists.txt b/mlir/lib/Dialect/SparseTensor/Pipelines/CMakeLists.txt
index 7fa0586a7204a..909f3bc0d97e5 100644
--- a/mlir/lib/Dialect/SparseTensor/Pipelines/CMakeLists.txt
+++ b/mlir/lib/Dialect/SparseTensor/Pipelines/CMakeLists.txt
@@ -5,6 +5,7 @@ add_mlir_dialect_library(MLIRSparseTensorPipelines
   ${MLIR_MAIN_INCLUDE_DIR}/mlir/Dialect/SparseTensor
 
   LINK_LIBS PUBLIC
+  MLIRArithmeticTransforms
   MLIRAffineToStandard
   MLIRBufferizationTransforms
   MLIRLinalgTransforms

diff --git a/mlir/lib/Dialect/SparseTensor/Pipelines/SparseTensorPipelines.cpp b/mlir/lib/Dialect/SparseTensor/Pipelines/SparseTensorPipelines.cpp
index ff4f7daa35699..32cfeb16a73fa 100644
--- a/mlir/lib/Dialect/SparseTensor/Pipelines/SparseTensorPipelines.cpp
+++ b/mlir/lib/Dialect/SparseTensor/Pipelines/SparseTensorPipelines.cpp
@@ -9,6 +9,7 @@
 #include "mlir/Dialect/SparseTensor/Pipelines/Passes.h"
 
 #include "mlir/Conversion/Passes.h"
+#include "mlir/Dialect/Arithmetic/Transforms/Passes.h"
 #include "mlir/Dialect/Bufferization/Transforms/Passes.h"
 #include "mlir/Dialect/Linalg/Passes.h"
 #include "mlir/Dialect/SparseTensor/IR/SparseTensor.h"
@@ -33,7 +34,7 @@ void mlir::sparse_tensor::buildSparseCompiler(
   pm.addPass(createConvertVectorToSCFPass());
   pm.addPass(createLowerToCFGPass()); // --convert-scf-to-std
   pm.addPass(createFuncBufferizePass());
-  pm.addPass(createTensorConstantBufferizePass());
+  pm.addPass(arith::createConstantBufferizePass());
   pm.addPass(createTensorBufferizePass());
   pm.addPass(createStdBufferizePass());
   pm.addPass(mlir::bufferization::createFinalizingBufferizePass());

diff --git a/mlir/lib/Dialect/StandardOps/Transforms/CMakeLists.txt b/mlir/lib/Dialect/StandardOps/Transforms/CMakeLists.txt
index f8082601b48b3..d5869ce207cf8 100644
--- a/mlir/lib/Dialect/StandardOps/Transforms/CMakeLists.txt
+++ b/mlir/lib/Dialect/StandardOps/Transforms/CMakeLists.txt
@@ -3,7 +3,6 @@ add_mlir_dialect_library(MLIRStandardOpsTransforms
   DecomposeCallGraphTypes.cpp
   FuncBufferize.cpp
   FuncConversions.cpp
-  TensorConstantBufferize.cpp
 
   ADDITIONAL_HEADER_DIRS
   ${MLIR_MAIN_INCLUDE_DIR}/mlir/Dialect/StandardOps/Transforms

diff --git a/mlir/lib/Dialect/StandardOps/Transforms/TensorConstantBufferize.cpp b/mlir/lib/Dialect/StandardOps/Transforms/TensorConstantBufferize.cpp
deleted file mode 100644
index 5bae6f3f6154f..0000000000000
--- a/mlir/lib/Dialect/StandardOps/Transforms/TensorConstantBufferize.cpp
+++ /dev/null
@@ -1,92 +0,0 @@
-//===- Bufferize.cpp - Bufferization for std ops --------------------------===//
-//
-// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
-// See https://llvm.org/LICENSE.txt for license information.
-// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
-//
-//===----------------------------------------------------------------------===//
-//
-// This file implements bufferization of tensor-valued arith.constant ops.
-//
-//===----------------------------------------------------------------------===//
-
-#include "PassDetail.h"
-#include "mlir/Dialect/Bufferization/IR/Bufferization.h"
-#include "mlir/Dialect/Bufferization/Transforms/BufferUtils.h"
-#include "mlir/Dialect/Bufferization/Transforms/Bufferize.h"
-#include "mlir/Dialect/MemRef/IR/MemRef.h"
-#include "mlir/Dialect/StandardOps/IR/Ops.h"
-#include "mlir/Dialect/StandardOps/Transforms/Passes.h"
-#include "mlir/IR/BlockAndValueMapping.h"
-#include "mlir/Transforms/DialectConversion.h"
-
-using namespace mlir;
-using namespace mlir::bufferization;
-
-namespace {
-class BufferizeTensorConstantOp
-    : public OpConversionPattern<arith::ConstantOp> {
-public:
-  BufferizeTensorConstantOp(GlobalCreator &globals,
-                            TypeConverter &typeConverter, MLIRContext *context)
-      : OpConversionPattern<arith::ConstantOp>(typeConverter, context,
-                                               /*benefit=*/1),
-        globals(globals) {}
-
-  LogicalResult
-  matchAndRewrite(arith::ConstantOp op, OpAdaptor adaptor,
-                  ConversionPatternRewriter &rewriter) const override {
-    auto type = op.getType().dyn_cast<RankedTensorType>();
-    if (!type)
-      return failure();
-
-    auto globalMemref = globals.getGlobalFor(op);
-    rewriter.replaceOpWithNewOp<memref::GetGlobalOp>(op, globalMemref.type(),
-                                                     globalMemref.getName());
-    return success();
-  }
-  GlobalCreator &globals;
-};
-} // namespace
-
-void mlir::populateTensorConstantBufferizePatterns(
-    GlobalCreator &globalCreator,
-    bufferization::BufferizeTypeConverter &typeConverter,
-    RewritePatternSet &patterns) {
-  patterns.add<BufferizeTensorConstantOp>(globalCreator, typeConverter,
-                                          patterns.getContext());
-}
-
-namespace {
-class TensorConstantBufferizePass
-    : public TensorConstantBufferizeBase<TensorConstantBufferizePass> {
-public:
-  explicit TensorConstantBufferizePass(unsigned alignment) {
-    if (alignment)
-      this->alignment = alignment;
-  }
-
-  void runOnOperation() override {
-    auto module = getOperation();
-    GlobalCreator globals(module, alignment);
-
-    auto *context = &getContext();
-    bufferization::BufferizeTypeConverter typeConverter;
-    RewritePatternSet patterns(context);
-    ConversionTarget target(*context);
-
-    target.addLegalDialect<memref::MemRefDialect>();
-    populateTensorConstantBufferizePatterns(globals, typeConverter, patterns);
-    target.addDynamicallyLegalOp<arith::ConstantOp>([&](arith::ConstantOp op) {
-      return typeConverter.isLegal(op.getType());
-    });
-    if (failed(applyPartialConversion(module, target, std::move(patterns))))
-      signalPassFailure();
-  }
-};
-} // namespace
-
-std::unique_ptr<Pass>
-mlir::createTensorConstantBufferizePass(unsigned alignment) {
-  return std::make_unique<TensorConstantBufferizePass>(alignment);
-}

diff --git a/mlir/test/Dialect/Arithmetic/bufferize.mlir b/mlir/test/Dialect/Arithmetic/bufferize.mlir
index 6038b3e47699a..f39d8a46a0934 100644
--- a/mlir/test/Dialect/Arithmetic/bufferize.mlir
+++ b/mlir/test/Dialect/Arithmetic/bufferize.mlir
@@ -1,4 +1,5 @@
-// RUN: mlir-opt %s -arith-bufferize | FileCheck %s
+// RUN: mlir-opt %s -arith-bufferize -split-input-file | FileCheck %s
+// RUN: mlir-opt %s -arith-bufferize=alignment=64 -split-input-file | FileCheck --check-prefix=ALIGNED %s
 
 // CHECK-LABEL:   func @index_cast(
 // CHECK-SAME:  %[[TENSOR:.*]]: tensor<i32>, %[[SCALAR:.*]]: i32
@@ -12,3 +13,70 @@ func @index_cast(%tensor: tensor<i32>, %scalar: i32) -> (tensor<index>, index) {
 // CHECK-SAME:   memref<i32> to memref<index>
 // CHECK-NEXT: %[[INDEX_TENSOR:.*]] = bufferization.to_tensor %[[INDEX_MEMREF]]
 // CHECK: return %[[INDEX_TENSOR]]
+
+// -----
+
+// CHECK-LABEL: module {
+
+// We check the debug name too since we put some effort into making that readable.
+// The name isn't load-bearing though.
+
+// CHECK: memref.global "private" constant @__constant_3x4xf32 : memref<3x4xf32> = dense<7.000000e+00>
+// CHECK-NOT: alignment
+
+// ALIGNED: memref.global "private" constant @__constant_3x4xf32 : memref<3x4xf32> = dense<7.000000e+00>
+// ALIGNED-SAME: {alignment = 64 : i64}
+
+// CHECK: @basic
+func @basic() -> tensor<3x4xf32> {
+  // CHECK: %[[MEMREF:.*]] = memref.get_global @__constant_3x4xf32 : memref<3x4xf32>
+  // CHECK: %[[TENSOR:.*]] = bufferization.to_tensor %[[MEMREF]]
+  %0 = arith.constant dense<7.0> : tensor<3x4xf32>
+  // CHECK: return %[[TENSOR]]
+  return %0 : tensor<3x4xf32>
+}
+
+// CHECK: }
+
+// -----
+
+// CHECK-LABEL: module {
+
+// Only one global is created.
+// CHECK: memref.global
+// CHECK-NOT: memref.global
+func @duplicate_constants() -> (tensor<3x4xf32>, tensor<3x4xf32>) {
+  %0 = arith.constant dense<7.0> : tensor<3x4xf32>
+  %1 = arith.constant dense<7.0> : tensor<3x4xf32>
+  return %0, %1 : tensor<3x4xf32>, tensor<3x4xf32>
+}
+
+// CHECK: }
+
+// -----
+
+// CHECK-LABEL: module {
+
+// Two globals are created.
+// CHECK: memref.global
+// CHECK: memref.global
+// CHECK-NOT: memref.global
+func @multiple_constants() -> (tensor<3x4xf32>, tensor<3x4xf32>) {
+  %0 = arith.constant dense<7.0> : tensor<3x4xf32>
+  %1 = arith.constant dense<8.0> : tensor<3x4xf32>
+  return %0, %1 : tensor<3x4xf32>, tensor<3x4xf32>
+}
+
+// CHECK: }
+
+// -----
+
+// CHECK-LABEL: module {
+// We don't convert non-tensor globals.
+// CHECK-NOT: memref.global
+func @non_tensor() {
+    %0 = arith.constant 7 : i32
+    return
+}
+
+// CHECK: }

diff --git a/mlir/test/Dialect/SparseTensor/sparse_lower.mlir b/mlir/test/Dialect/SparseTensor/sparse_lower.mlir
index 45c8a36ef4379..22a8e3a2c9b53 100644
--- a/mlir/test/Dialect/SparseTensor/sparse_lower.mlir
+++ b/mlir/test/Dialect/SparseTensor/sparse_lower.mlir
@@ -4,7 +4,7 @@
 // RUN: FileCheck %s --check-prefix=CHECK-MIR
 //
 // RUN: mlir-opt %s -sparsification --sparse-tensor-conversion \
-// RUN: --func-bufferize --tensor-constant-bufferize           \
+// RUN: --func-bufferize --arith-bufferize           \
 // RUN: --tensor-bufferize --finalizing-bufferize |            \
 // RUN: FileCheck %s --check-prefix=CHECK-LIR
 

diff  --git a/mlir/test/Dialect/SparseTensor/sparse_lower_col.mlir b/mlir/test/Dialect/SparseTensor/sparse_lower_col.mlir
index fc0051f6f77b9..d06231bed7c24 100644
--- a/mlir/test/Dialect/SparseTensor/sparse_lower_col.mlir
+++ b/mlir/test/Dialect/SparseTensor/sparse_lower_col.mlir
@@ -4,7 +4,7 @@
 // RUN: FileCheck %s --check-prefix=CHECK-MIR
 //
 // RUN: mlir-opt %s -sparsification --sparse-tensor-conversion \
-// RUN: --func-bufferize --tensor-constant-bufferize           \
+// RUN: --func-bufferize --arith-bufferize           \
 // RUN: --tensor-bufferize --finalizing-bufferize |            \
 // RUN: FileCheck %s --check-prefix=CHECK-LIR
 

diff  --git a/mlir/test/Dialect/SparseTensor/sparse_lower_inplace.mlir b/mlir/test/Dialect/SparseTensor/sparse_lower_inplace.mlir
index 10f2083f0d398..e611b0a0684c8 100644
--- a/mlir/test/Dialect/SparseTensor/sparse_lower_inplace.mlir
+++ b/mlir/test/Dialect/SparseTensor/sparse_lower_inplace.mlir
@@ -4,7 +4,7 @@
 // RUN: FileCheck %s --check-prefix=CHECK-MIR
 //
 // RUN: mlir-opt %s -sparsification --sparse-tensor-conversion \
-// RUN: --func-bufferize --tensor-constant-bufferize           \
+// RUN: --func-bufferize --arith-bufferize           \
 // RUN: --tensor-bufferize --finalizing-bufferize |            \
 // RUN: FileCheck %s --check-prefix=CHECK-LIR
 

diff  --git a/mlir/test/Dialect/Standard/tensor-constant-bufferize.mlir b/mlir/test/Dialect/Standard/tensor-constant-bufferize.mlir
deleted file mode 100644
index d4a0c3fe39613..0000000000000
--- a/mlir/test/Dialect/Standard/tensor-constant-bufferize.mlir
+++ /dev/null
@@ -1,67 +0,0 @@
-// RUN: mlir-opt %s -tensor-constant-bufferize -split-input-file | FileCheck %s
-// RUN: mlir-opt %s -tensor-constant-bufferize=alignment=64 -split-input-file | FileCheck --check-prefix=ALIGNED %s
-
-// CHECK-LABEL: module {
-
-// We check the debug name too since we put some effort into making that readable.
-// The name isn't load-bearing though.
-
-// CHECK: memref.global "private" constant @__constant_3x4xf32 : memref<3x4xf32> = dense<7.000000e+00>
-// CHECK-NOT: alignment
-
-// ALIGNED: memref.global "private" constant @__constant_3x4xf32 : memref<3x4xf32> = dense<7.000000e+00>
-// ALIGNED-SAME: {alignment = 64 : i64}
-
-// CHECK: @basic
-func @basic() -> tensor<3x4xf32> {
-  // CHECK: %[[MEMREF:.*]] = memref.get_global @__constant_3x4xf32 : memref<3x4xf32>
-  // CHECK: %[[TENSOR:.*]] = bufferization.to_tensor %[[MEMREF]]
-  %0 = arith.constant dense<7.0> : tensor<3x4xf32>
-  // CHECK: return %[[TENSOR]]
-  return %0 : tensor<3x4xf32>
-}
-
-// CHECK: }
-
-// -----
-
-// CHECK-LABEL: module {
-
-// Only one global is created.
-// CHECK: memref.global
-// CHECK-NOT: memref.global
-func @duplicate_constants() -> (tensor<3x4xf32>, tensor<3x4xf32>) {
-  %0 = arith.constant dense<7.0> : tensor<3x4xf32>
-  %1 = arith.constant dense<7.0> : tensor<3x4xf32>
-  return %0, %1 : tensor<3x4xf32>, tensor<3x4xf32>
-}
-
-// CHECK: }
-
-// -----
-
-// CHECK-LABEL: module {
-
-// Two globals are created.
-// CHECK: memref.global
-// CHECK: memref.global
-// CHECK-NOT: memref.global
-func @multiple_constants() -> (tensor<3x4xf32>, tensor<3x4xf32>) {
-  %0 = arith.constant dense<7.0> : tensor<3x4xf32>
-  %1 = arith.constant dense<8.0> : tensor<3x4xf32>
-  return %0, %1 : tensor<3x4xf32>, tensor<3x4xf32>
-}
-
-// CHECK: }
-
-// -----
-
-// CHECK-LABEL: module {
-// We don't convert non-tensor globals.
-// CHECK-NOT: memref.global
-func @non_tensor() {
-    %0 = arith.constant 7 : i32
-    return
-}
-
-// CHECK: }

diff  --git a/mlir/test/Integration/Dialect/Linalg/CPU/test-collapse-tensor.mlir b/mlir/test/Integration/Dialect/Linalg/CPU/test-collapse-tensor.mlir
index 1003a46ac7c16..8edeed690af48 100644
--- a/mlir/test/Integration/Dialect/Linalg/CPU/test-collapse-tensor.mlir
+++ b/mlir/test/Integration/Dialect/Linalg/CPU/test-collapse-tensor.mlir
@@ -1,5 +1,5 @@
 // RUN: mlir-opt %s -linalg-bufferize -std-bufferize \
-// RUN: -tensor-constant-bufferize -tensor-bufferize -func-bufferize \
+// RUN: -arith-bufferize -tensor-bufferize -func-bufferize \
 // RUN: -finalizing-bufferize -buffer-deallocation -convert-linalg-to-llvm \
 // RUN: -convert-memref-to-llvm -convert-std-to-llvm -reconcile-unrealized-casts | \
 // RUN: mlir-cpu-runner -e main -entry-point-result=void \

diff  --git a/mlir/test/Integration/Dialect/Linalg/CPU/test-elementwise.mlir b/mlir/test/Integration/Dialect/Linalg/CPU/test-elementwise.mlir
index b9c1f603106e7..1041497ce0ab7 100644
--- a/mlir/test/Integration/Dialect/Linalg/CPU/test-elementwise.mlir
+++ b/mlir/test/Integration/Dialect/Linalg/CPU/test-elementwise.mlir
@@ -1,5 +1,5 @@
 // RUN: mlir-opt %s -convert-elementwise-to-linalg -std-bufferize \
-// RUN: -tensor-constant-bufferize -linalg-bufferize -tensor-bufferize \
+// RUN: -arith-bufferize -linalg-bufferize -tensor-bufferize \
 // RUN: -func-bufferize -buffer-deallocation -convert-linalg-to-loops \
 // RUN: -convert-linalg-to-llvm --convert-memref-to-llvm -convert-std-to-llvm \
 // RUN: -reconcile-unrealized-casts | \

diff  --git a/mlir/test/Integration/Dialect/Linalg/CPU/test-expand-tensor.mlir b/mlir/test/Integration/Dialect/Linalg/CPU/test-expand-tensor.mlir
index 49c56d7ae268d..a81e7f5a9368e 100644
--- a/mlir/test/Integration/Dialect/Linalg/CPU/test-expand-tensor.mlir
+++ b/mlir/test/Integration/Dialect/Linalg/CPU/test-expand-tensor.mlir
@@ -1,5 +1,5 @@
 // RUN: mlir-opt %s -linalg-bufferize -std-bufferize \
-// RUN: -tensor-constant-bufferize -tensor-bufferize -func-bufferize \
+// RUN: -arith-bufferize -tensor-bufferize -func-bufferize \
 // RUN: -finalizing-bufferize -buffer-deallocation -convert-linalg-to-llvm \
 // RUN: -convert-memref-to-llvm -convert-std-to-llvm -reconcile-unrealized-casts | \
 // RUN: mlir-cpu-runner -e main -entry-point-result=void \

diff  --git a/mlir/test/Integration/Dialect/Linalg/CPU/test-padtensor.mlir b/mlir/test/Integration/Dialect/Linalg/CPU/test-padtensor.mlir
index ced7a49073b37..bfbd3e150d380 100644
--- a/mlir/test/Integration/Dialect/Linalg/CPU/test-padtensor.mlir
+++ b/mlir/test/Integration/Dialect/Linalg/CPU/test-padtensor.mlir
@@ -1,5 +1,5 @@
 // RUN: mlir-opt %s -linalg-bufferize -std-bufferize \
-// RUN: -tensor-constant-bufferize -tensor-bufferize -func-bufferize \
+// RUN: -arith-bufferize -tensor-bufferize -func-bufferize \
 // RUN: -finalizing-bufferize -buffer-deallocation \
 // RUN: -convert-linalg-to-loops -convert-scf-to-std -convert-linalg-to-llvm -convert-memref-to-llvm -convert-std-to-llvm -reconcile-unrealized-casts | \
 // RUN: mlir-cpu-runner -e main -entry-point-result=void \

diff  --git a/mlir/test/Integration/Dialect/Linalg/CPU/test-subtensor-insert-multiple-uses.mlir b/mlir/test/Integration/Dialect/Linalg/CPU/test-subtensor-insert-multiple-uses.mlir
index bfd1de608ebeb..3a321e27bf35c 100644
--- a/mlir/test/Integration/Dialect/Linalg/CPU/test-subtensor-insert-multiple-uses.mlir
+++ b/mlir/test/Integration/Dialect/Linalg/CPU/test-subtensor-insert-multiple-uses.mlir
@@ -1,5 +1,5 @@
 // RUN: mlir-opt %s -linalg-bufferize -std-bufferize \
-// RUN: -tensor-constant-bufferize -tensor-bufferize -func-bufferize \
+// RUN: -arith-bufferize -tensor-bufferize -func-bufferize \
 // RUN: -finalizing-bufferize -buffer-deallocation \
 // RUN: -convert-linalg-to-loops -convert-scf-to-std -convert-linalg-to-llvm --convert-memref-to-llvm -convert-std-to-llvm -reconcile-unrealized-casts | \
 // RUN: mlir-cpu-runner -e main -entry-point-result=void \

diff  --git a/mlir/test/Integration/Dialect/Linalg/CPU/test-subtensor-insert.mlir b/mlir/test/Integration/Dialect/Linalg/CPU/test-subtensor-insert.mlir
index 3a584b8a3b13e..c22cbc5f6ab19 100644
--- a/mlir/test/Integration/Dialect/Linalg/CPU/test-subtensor-insert.mlir
+++ b/mlir/test/Integration/Dialect/Linalg/CPU/test-subtensor-insert.mlir
@@ -1,5 +1,5 @@
 // RUN: mlir-opt %s -linalg-bufferize -std-bufferize \
-// RUN: -tensor-constant-bufferize -tensor-bufferize -func-bufferize \
+// RUN: -arith-bufferize -tensor-bufferize -func-bufferize \
 // RUN: -finalizing-bufferize -buffer-deallocation \
 // RUN: -convert-linalg-to-loops -convert-scf-to-std -convert-linalg-to-llvm --convert-memref-to-llvm -convert-std-to-llvm -reconcile-unrealized-casts | \
 // RUN: mlir-cpu-runner -e main -entry-point-result=void \

diff  --git a/mlir/test/Integration/Dialect/Linalg/CPU/test-tensor-e2e.mlir b/mlir/test/Integration/Dialect/Linalg/CPU/test-tensor-e2e.mlir
index d8e4ea6721013..360717f75223b 100644
--- a/mlir/test/Integration/Dialect/Linalg/CPU/test-tensor-e2e.mlir
+++ b/mlir/test/Integration/Dialect/Linalg/CPU/test-tensor-e2e.mlir
@@ -1,4 +1,4 @@
-// RUN: mlir-opt %s -tensor-constant-bufferize -std-bufferize -linalg-bufferize \
+// RUN: mlir-opt %s -arith-bufferize -std-bufferize -linalg-bufferize \
 // RUN: -tensor-bufferize -func-bufferize -finalizing-bufferize -buffer-deallocation -convert-linalg-to-loops \
 // RUN: -convert-linalg-to-llvm --convert-memref-to-llvm -convert-std-to-llvm -reconcile-unrealized-casts | \
 // RUN: mlir-cpu-runner -e main -entry-point-result=void \

diff  --git a/mlir/test/Integration/Dialect/Linalg/CPU/test-tensor-matmul.mlir b/mlir/test/Integration/Dialect/Linalg/CPU/test-tensor-matmul.mlir
index dc25f9583cbdd..e98a5a6c4eaa5 100644
--- a/mlir/test/Integration/Dialect/Linalg/CPU/test-tensor-matmul.mlir
+++ b/mlir/test/Integration/Dialect/Linalg/CPU/test-tensor-matmul.mlir
@@ -1,5 +1,5 @@
 // UNSUPPORTED: asan
-// RUN: mlir-opt %s -linalg-bufferize -std-bufferize -tensor-constant-bufferize \
+// RUN: mlir-opt %s -linalg-bufferize -std-bufferize -arith-bufferize \
 // RUN: -tensor-bufferize -func-bufferize -finalizing-bufferize -buffer-deallocation -convert-linalg-to-loops -convert-scf-to-std \
 // RUN: -convert-linalg-to-llvm -lower-affine -convert-scf-to-std --convert-memref-to-llvm -convert-std-to-llvm -reconcile-unrealized-casts | \
 // RUN: mlir-cpu-runner -e main -entry-point-result=void \
@@ -7,7 +7,7 @@
 // RUN: | FileCheck %s
 
 // RUN: mlir-opt %s  -linalg-tile="tile-sizes=1,2,3" -linalg-bufferize \
-// RUN: -scf-bufferize -std-bufferize -tensor-constant-bufferize -tensor-bufferize \
+// RUN: -scf-bufferize -std-bufferize -arith-bufferize -tensor-bufferize \
 // RUN: -func-bufferize \
 // RUN: -finalizing-bufferize -convert-linalg-to-loops -convert-scf-to-std -convert-scf-to-std \
 // RUN: -convert-linalg-to-llvm -lower-affine -convert-scf-to-std --convert-memref-to-llvm -convert-std-to-llvm -reconcile-unrealized-casts | \

diff  --git a/mlir/test/Integration/Dialect/SparseTensor/python/test_SDDMM.py b/mlir/test/Integration/Dialect/SparseTensor/python/test_SDDMM.py
index feeedcc9bc97f..ccc87af7287cb 100644
--- a/mlir/test/Integration/Dialect/SparseTensor/python/test_SDDMM.py
+++ b/mlir/test/Integration/Dialect/SparseTensor/python/test_SDDMM.py
@@ -129,7 +129,7 @@ def __init__(self, options: str):
         f'builtin.func(linalg-bufferize,convert-linalg-to-loops,convert-vector-to-scf),'
         f'convert-scf-to-std,'
         f'func-bufferize,'
-        f'tensor-constant-bufferize,'
+        f'arith-bufferize,'
         f'builtin.func(tensor-bufferize,std-bufferize,finalizing-bufferize),'
         f'convert-vector-to-llvm{{reassociate-fp-reductions=1 enable-index-optimizations=1}},'
         f'lower-affine,'

diff  --git a/mlir/test/Integration/Dialect/SparseTensor/python/test_SpMM.py b/mlir/test/Integration/Dialect/SparseTensor/python/test_SpMM.py
index 76ff846aeea6b..3c94581c30c41 100644
--- a/mlir/test/Integration/Dialect/SparseTensor/python/test_SpMM.py
+++ b/mlir/test/Integration/Dialect/SparseTensor/python/test_SpMM.py
@@ -119,7 +119,7 @@ def __init__(self, options: str):
         f'builtin.func(linalg-bufferize,convert-linalg-to-loops,convert-vector-to-scf),'
         f'convert-scf-to-std,'
         f'func-bufferize,'
-        f'tensor-constant-bufferize,'
+        f'arith-bufferize,'
         f'builtin.func(tensor-bufferize,std-bufferize,finalizing-bufferize),'
         f'convert-vector-to-llvm{{reassociate-fp-reductions=1 enable-index-optimizations=1}},'
         f'lower-affine,'

diff  --git a/mlir/test/Integration/Dialect/SparseTensor/python/test_elementwise_add_sparse_output.py b/mlir/test/Integration/Dialect/SparseTensor/python/test_elementwise_add_sparse_output.py
index 99ebe11ff5279..ae12394207133 100644
--- a/mlir/test/Integration/Dialect/SparseTensor/python/test_elementwise_add_sparse_output.py
+++ b/mlir/test/Integration/Dialect/SparseTensor/python/test_elementwise_add_sparse_output.py
@@ -71,7 +71,7 @@ def __init__(self):
         f'builtin.func(linalg-bufferize,convert-linalg-to-loops,convert-vector-to-scf),'
         f'convert-scf-to-std,'
         f'func-bufferize,'
-        f'tensor-constant-bufferize,'
+        f'arith-bufferize,'
         f'builtin.func(tensor-bufferize,std-bufferize,finalizing-bufferize),'
         f'convert-vector-to-llvm{{reassociate-fp-reductions=1 enable-index-optimizations=1}},'
         f'lower-affine,'

diff  --git a/mlir/test/Integration/Dialect/SparseTensor/python/test_output.py b/mlir/test/Integration/Dialect/SparseTensor/python/test_output.py
index eaf39db09a2b0..c2635c58b7313 100644
--- a/mlir/test/Integration/Dialect/SparseTensor/python/test_output.py
+++ b/mlir/test/Integration/Dialect/SparseTensor/python/test_output.py
@@ -79,7 +79,7 @@ def __init__(self):
         f'builtin.func(linalg-bufferize,convert-linalg-to-loops,convert-vector-to-scf),'
         f'convert-scf-to-std,'
         f'func-bufferize,'
-        f'tensor-constant-bufferize,'
+        f'arith-bufferize,'
         f'builtin.func(tensor-bufferize,std-bufferize,finalizing-bufferize),'
         f'convert-vector-to-llvm{{reassociate-fp-reductions=1 enable-index-optimizations=1}},'
         f'lower-affine,'

diff  --git a/mlir/test/Integration/Dialect/SparseTensor/python/test_stress.py b/mlir/test/Integration/Dialect/SparseTensor/python/test_stress.py
index 55e64668d4955..761620c5b0715 100644
--- a/mlir/test/Integration/Dialect/SparseTensor/python/test_stress.py
+++ b/mlir/test/Integration/Dialect/SparseTensor/python/test_stress.py
@@ -177,7 +177,7 @@ def __init__(self, sparsification_options: str, support_lib: str):
         f'builtin.func(linalg-bufferize,convert-linalg-to-loops,convert-vector-to-scf),'
         f'convert-scf-to-std,'
         f'func-bufferize,'
-        f'tensor-constant-bufferize,'
+        f'arith-bufferize,'
         f'builtin.func(tensor-bufferize,std-bufferize,finalizing-bufferize),'
         f'convert-vector-to-llvm{{reassociate-fp-reductions=1 enable-index-optimizations=1}},'
         f'lower-affine,'

diff  --git a/mlir/test/Integration/Dialect/SparseTensor/taco/tools/mlir_pytaco.py b/mlir/test/Integration/Dialect/SparseTensor/taco/tools/mlir_pytaco.py
index 657185cd0350a..f3e865ba860a3 100644
--- a/mlir/test/Integration/Dialect/SparseTensor/taco/tools/mlir_pytaco.py
+++ b/mlir/test/Integration/Dialect/SparseTensor/taco/tools/mlir_pytaco.py
@@ -136,7 +136,7 @@ def _mlir_type_from_taco_type(dtype: DType) -> ir.Type:
 
 def _compile_mlir(module: ir.Module) -> ir.Module:
   """Compiles an MLIR module and returns the compiled module."""
-  # TODO: Replace this with a pipeline implemented for 
+  # TODO: Replace this with a pipeline implemented for
   #   https://github.com/llvm/llvm-project/issues/51751.
   pipeline = (
       f"sparsification,"
@@ -144,7 +144,7 @@ def _compile_mlir(module: ir.Module) -> ir.Module:
       f"builtin.func(linalg-bufferize,convert-linalg-to-loops,convert-vector-to-scf),"
       f"convert-scf-to-std,"
       f"func-bufferize,"
-      f"tensor-constant-bufferize,"
+      f"arith-bufferize,"
       f"builtin.func(tensor-bufferize,std-bufferize,finalizing-bufferize),"
       f"convert-vector-to-llvm{{reassociate-fp-reductions=1 enable-index-optimizations=1}},"
       f"lower-affine,"

diff  --git a/mlir/test/lib/Dialect/Linalg/TestComprehensiveBufferize.cpp b/mlir/test/lib/Dialect/Linalg/TestComprehensiveBufferize.cpp
index d4371b4b22329..fe2698c63b40d 100644
--- a/mlir/test/lib/Dialect/Linalg/TestComprehensiveBufferize.cpp
+++ b/mlir/test/lib/Dialect/Linalg/TestComprehensiveBufferize.cpp
@@ -116,9 +116,9 @@ void TestComprehensiveFunctionBufferize::runOnOperation() {
   options->createDeallocs = createDeallocs;
 
   if (dialectFilter.hasValue()) {
-    options->dialectFilter.emplace();
+    options->hasFilter = true;
     for (const std::string &dialectNamespace : dialectFilter)
-      options->dialectFilter->insert(dialectNamespace);
+      options->dialectFilter.insert(dialectNamespace);
   }
 
   Operation *op = getOperation();

