[Mlir-commits] [mlir] [MLIR][Bufferization] Retire `enforce-aliasing-invariants` (PR #130929)
lorenzo chelini
llvmlistbot at llvm.org
Wed Mar 12 02:10:18 PDT 2025
https://github.com/chelini created https://github.com/llvm/llvm-project/pull/130929
Why? This option can lead to incorrect IR when used in isolation. For example, consider the IR below:
```mlir
func.func @loop_with_aliasing(%arg0: tensor<5xf32>, %arg1: index, %arg2: index) -> tensor<5xf32> {
%c1 = arith.constant 1 : index
%cst = arith.constant 1.000000e+00 : f32
%0 = tensor.empty() : tensor<5xf32>
%1 = linalg.fill ins(%cst : f32) outs(%0 : tensor<5xf32>) -> tensor<5xf32>
// The BufferizableOpInterface says that %2 must either alias with %arg0 or
// be a newly allocated buffer.
%2 = scf.for %arg3 = %arg1 to %arg2 step %c1 iter_args(%arg4 = %arg0) -> (tensor<5xf32>) {
scf.yield %1 : tensor<5xf32>
}
%cst_0 = arith.constant 1.000000e+00 : f32
%inserted = tensor.insert %cst_0 into %1[%c1] : tensor<5xf32>
return %2 : tensor<5xf32>
}
```
If we bufferize with `enforce-aliasing-invariants=false`, we get:
```mlir
func.func @loop_with_aliasing(%arg0: memref<5xf32, strided<[?], offset: ?>>, %arg1: index, %arg2: index) -> memref<5xf32, strided<[?], offset: ?>> {
%c1 = arith.constant 1 : index
%cst = arith.constant 1.000000e+00 : f32
%alloc = memref.alloc() {alignment = 64 : i64} : memref<5xf32>
linalg.fill ins(%cst : f32) outs(%alloc : memref<5xf32>)
%0 = scf.for %arg3 = %arg1 to %arg2 step %c1 iter_args(%arg4 = %arg0) -> (memref<5xf32, strided<[?], offset: ?>>) {
%cast = memref.cast %alloc : memref<5xf32> to memref<5xf32, strided<[?], offset: ?>>
scf.yield %cast : memref<5xf32, strided<[?], offset: ?>>
}
%cst_0 = arith.constant 1.000000e+00 : f32
memref.store %cst_0, %alloc[%c1] : memref<5xf32>
return %0 : memref<5xf32, strided<[?], offset: ?>>
}
```
This is not correct IR: the loop yields %alloc, which is neither the iter_arg buffer nor a newly allocated buffer, so the result of the loop aliases %alloc and is clobbered by the subsequent memref.store (the bufferized tensor.insert).
I am using this option. What do I need to do now?
If you are using this option in isolation, you are possibly generating incorrect IR, so you need to revisit your bufferization strategy. If you are using it together with `copyBeforeWrite`, you can simply stop setting the `enforceAliasingInvariants` option, as sketched below.
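For illustration, a minimal sketch of the migration for a pipeline that configures bufferization programmatically in C++. The `configureBufferization` helper is hypothetical; only the two option fields (`copyBeforeWrite` and the removed `enforceAliasingInvariants`) come from this patch:

```cpp
#include "mlir/Dialect/Bufferization/Transforms/OneShotAnalysis.h"

using namespace mlir;

// Hypothetical pipeline setup, for illustration only.
void configureBufferization(bufferization::OneShotBufferizationOptions &options) {
  // Copy every written-to tensor operand before the write.
  options.copyBeforeWrite = true;
  // Before this patch, such pipelines typically also set
  //   options.enforceAliasingInvariants = false;
  // That line can now simply be deleted: with copyBeforeWrite set, the
  // scf.for/scf.while interface implementations already skip the
  // aliasing-invariant copies (see the diff below).
}
```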
From 2fb2f70f35704e9ed83eb712b8127aedef0564f3 Mon Sep 17 00:00:00 2001
From: lorenzo chelini <lchelini at nvidia.com>
Date: Wed, 12 Mar 2025 09:54:55 +0100
Subject: [PATCH] [MLIR][Bufferization] Retire `enforce-aliasing-invariants`
Why? This option can lead to incorrect IR when used in isolation. For
example, consider the IR below:
```mlir
func.func @loop_with_aliasing(%arg0: tensor<5xf32>, %arg1: index, %arg2: index) -> tensor<5xf32> {
%c1 = arith.constant 1 : index
%cst = arith.constant 1.000000e+00 : f32
%0 = tensor.empty() : tensor<5xf32>
%1 = linalg.fill ins(%cst : f32) outs(%0 : tensor<5xf32>) -> tensor<5xf32>
// The BufferizableOpInterface says that %2 must either alias with %arg0 or
// be a newly allocated buffer.
%2 = scf.for %arg3 = %arg1 to %arg2 step %c1 iter_args(%arg4 = %arg0) -> (tensor<5xf32>) {
scf.yield %1 : tensor<5xf32>
}
%cst_0 = arith.constant 1.000000e+00 : f32
%inserted = tensor.insert %cst_0 into %1[%c1] : tensor<5xf32>
return %2 : tensor<5xf32>
}
```
If we bufferize with `enforce-aliasing-invariants=false`, we get:
```mlir
func.func @loop_with_aliasing(%arg0: memref<5xf32, strided<[?], offset: ?>>, %arg1: index, %arg2: index) -> memref<5xf32, strided<[?], offset: ?>> {
%c1 = arith.constant 1 : index
%cst = arith.constant 1.000000e+00 : f32
%alloc = memref.alloc() {alignment = 64 : i64} : memref<5xf32>
linalg.fill ins(%cst : f32) outs(%alloc : memref<5xf32>)
%0 = scf.for %arg3 = %arg1 to %arg2 step %c1 iter_args(%arg4 = %arg0) -> (memref<5xf32, strided<[?], offset: ?>>) {
%cast = memref.cast %alloc : memref<5xf32> to memref<5xf32, strided<[?], offset: ?>>
scf.yield %cast : memref<5xf32, strided<[?], offset: ?>>
}
%cst_0 = arith.constant 1.000000e+00 : f32
memref.store %cst_0, %alloc[%c1] : memref<5xf32>
return %0 : memref<5xf32, strided<[?], offset: ?>>
}
```
This is not correct IR: the loop yields %alloc, which is neither the iter_arg buffer nor a newly allocated buffer, so the result of the loop aliases %alloc and is clobbered by the subsequent memref.store (the bufferized tensor.insert).
I am using this option. What do I need to do now?
If you are using this option in isolation, you are possibly generating incorrect
IR, so you need to revisit your bufferization strategy. If you are using it
together with `copyBeforeWrite`, you can simply stop setting the
`enforceAliasingInvariants` option.
Co-authored-by: Matthias Springer <mspringer at nvidia.com>
---
.../Dialect/Bufferization/IR/BufferizableOpInterface.h | 10 ----------
.../SCF/Transforms/BufferizableOpInterfaceImpl.cpp | 6 ++----
2 files changed, 2 insertions(+), 14 deletions(-)
diff --git a/mlir/include/mlir/Dialect/Bufferization/IR/BufferizableOpInterface.h b/mlir/include/mlir/Dialect/Bufferization/IR/BufferizableOpInterface.h
index ea7af3018bda8..ada9539e87121 100644
--- a/mlir/include/mlir/Dialect/Bufferization/IR/BufferizableOpInterface.h
+++ b/mlir/include/mlir/Dialect/Bufferization/IR/BufferizableOpInterface.h
@@ -315,16 +315,6 @@ struct BufferizationOptions {
// outside of the parallel region will be given a new buffer.
bool checkParallelRegions = true;
- /// Certain ops have aliasing OpOperand/OpResult invariants (e.g., scf.for).
- /// If this flag is set to `false`, those invariants are no longer enforced
- /// with buffer copies.
- ///
- /// Note: Deactivating this flag can lead to incorrect bufferization results
- /// when used incorrectly. This flag is useful with
- /// `AlwaysCopyAnalysisState` which bufferizes all writing tensor
- /// OpOperands out-of-place.
- bool enforceAliasingInvariants = true;
-
/// This function controls buffer types on function signatures. Sets
/// `functionArgTypeConverterFn` and `inferFunctionResultLayout` accordingly.
///
diff --git a/mlir/lib/Dialect/SCF/Transforms/BufferizableOpInterfaceImpl.cpp b/mlir/lib/Dialect/SCF/Transforms/BufferizableOpInterfaceImpl.cpp
index e9d7dc1b847c6..74e36796d498e 100644
--- a/mlir/lib/Dialect/SCF/Transforms/BufferizableOpInterfaceImpl.cpp
+++ b/mlir/lib/Dialect/SCF/Transforms/BufferizableOpInterfaceImpl.cpp
@@ -652,8 +652,7 @@ struct ForOpInterface
if (failed(bufferizableOp.resolveTensorOpOperandConflicts(rewriter, state)))
return failure();
- if (!state.getOptions().enforceAliasingInvariants ||
- state.getOptions().copyBeforeWrite)
+ if (state.getOptions().copyBeforeWrite)
return success();
// According to the `getAliasing...` implementations, a bufferized OpResult
@@ -894,8 +893,7 @@ struct WhileOpInterface
if (failed(bufferizableOp.resolveTensorOpOperandConflicts(rewriter, state)))
return failure();
- if (!state.getOptions().enforceAliasingInvariants ||
- state.getOptions().copyBeforeWrite)
+ if (state.getOptions().copyBeforeWrite)
return success();
// According to the `getAliasing...` implementations, a bufferized OpResult